diff --git a/abs_9K/test_abstract_short_2405.02178v1.json b/abs_9K/test_abstract_short_2405.02178v1.json new file mode 100644 index 0000000000000000000000000000000000000000..6105bbbccb4230bba7f7a467a731f82fc12bd4b1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02178v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.02178v1", + "title": "Assessing and Verifying Task Utility in LLM-Powered Applications", + "abstract": "The rapid development of Large Language Models (LLMs) has led to a surge in\napplications that facilitate collaboration among multiple agents, assisting\nhumans in their daily tasks. However, a significant gap remains in assessing to\nwhat extent LLM-powered applications genuinely enhance user experience and task\nexecution efficiency. This highlights the need to verify utility of LLM-powered\napplications, particularly by ensuring alignment between the application's\nfunctionality and end-user needs. We introduce AgentEval, a novel framework\ndesigned to simplify the utility verification process by automatically\nproposing a set of criteria tailored to the unique purpose of any given\napplication. This allows for a comprehensive assessment, quantifying the\nutility of an application against the suggested criteria. We present a\ncomprehensive analysis of the effectiveness and robustness of AgentEval for two\nopen source datasets including Math Problem solving and ALFWorld House-hold\nrelated tasks. For reproducibility purposes, we make the data, code and all the\nlogs publicly available at https://bit.ly/3w3yKcS .", + "authors": "Negar Arabzadeh, Siqing Huo, Nikhil Mehta, Qingyun Wu, Chi Wang, Ahmed Awadallah, Charles L. A. 
Clarke, Julia Kiseleva", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "The rapid development of Large Language Models (LLMs) has led to a surge in\napplications that facilitate collaboration among multiple agents, assisting\nhumans in their daily tasks. However, a significant gap remains in assessing to\nwhat extent LLM-powered applications genuinely enhance user experience and task\nexecution efficiency. This highlights the need to verify utility of LLM-powered\napplications, particularly by ensuring alignment between the application's\nfunctionality and end-user needs. We introduce AgentEval, a novel framework\ndesigned to simplify the utility verification process by automatically\nproposing a set of criteria tailored to the unique purpose of any given\napplication. This allows for a comprehensive assessment, quantifying the\nutility of an application against the suggested criteria. We present a\ncomprehensive analysis of the effectiveness and robustness of AgentEval for two\nopen source datasets including Math Problem solving and ALFWorld House-hold\nrelated tasks. For reproducibility purposes, we make the data, code and all the\nlogs publicly available at https://bit.ly/3w3yKcS .", + "main_content": "Introduction One of the long-lasting goals for intelligent agents (Winograd, 1972) is for them to seamlessly interact with humans in natural language and help their end-users with their tasks, such as completing household tasks, math tutoring, and so on. The rapid development of open-source libraries (Wu et al., 2023; Li et al., 2023a) helps that goal by simplifying the development of LLM-powered agentic applications for various user-centered tasks (Liang et al., 2023b; Hong et al., 2023; Talebirad and Nadiri, 2023). 
To ensure that the application's behavior meets the requirements of the application developers, it is also crucial to assess its potential utility to end users (Dibia et al., 2023), as this can significantly impact its improvement journey. (\u2217Work done during an internship at Microsoft Research.) Figure 1: An overview of the AgentEval framework: CriticAgent creates a set of criteria and suggested values; QuantifierAgent quantifies the criteria for a considered application; and VerifierAgent verifies the criteria based on their robustness. The output of the QuantifierAgent is a multi-dimensional assessment of the utility of the application based on a suggested list of criteria and their evaluations. Taking into account the range of possible applications, it is unrealistic to assume a benchmark exists for every domain, including but not limited to code generation (Liu et al., 2024), health care (Andrew, 2024), and the many others whose development we witness every day (Wu et al., 2023). Moreover, directly evaluating agentic applications poses challenges, as current approaches predominantly rely on end-to-end success metrics, i.e., whether the application accomplishes its tasks (Shridhar et al., 2020b, 2019; Myers et al., 2023). However, understanding a user's interactions with an application involves much more than success alone (Kiseleva et al., 2022a,b; Zhang et al., 2023). Consider math problem solving: although it is important that the application solves the problem correctly, its ability to present and explain solutions according to criteria such as completeness, conciseness, and clarity is crucial. 
Furthermore, success is not always clearly defined for a task. Recognizing such criteria and being able to quantify them is essential to assess whether developer requirements are being satisfied and whether the application brings utility to end-users. Given the objective of assessing arbitrary applications, relying solely on end-to-end success metrics is untenable, due to the expansive range of tasks requiring automation. The question is how to design a flexible methodology to assess the task utility of a diverse set of applications. To bridge this gap, we introduce AgentEval, a framework to gauge the utility of LLM-powered applications. Its goal is to assess utility by providing application developers with insights into how the current flow can be characterized. AgentEval builds on recent work showing that LLMs can be a scalable and cost-effective alternative to human evaluation for open-ended tasks (Li et al., 2023b). AgentEval, as illustrated in Fig. 1, consists of the following three agents, formally defined in Sec. 3: (1) CriticAgent suggests a list of criteria based on the task description and pairs of solutions in which one is preferred over the other (e.g., successful and failed examples). For instance, for math problems, the criteria could be the Efficiency and Clarity of the proposed solution; (2) QuantifierAgent quantifies how the solution performs on each criterion and returns the utility function, e.g., for math problems, whether the Clarity is 'not clear', 'moderately clear', or 'very clear'; (3) VerifierAgent verifies the quality of the assessment of the suggested criteria, making sure the criteria are essential, robust, informative, and have high discriminative power. 
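The three-agent flow described above can be sketched as follows. This is a minimal stub, not the actual AutoGen-based implementation: in AgentEval proper, critic_agent and quantifier_agent are LLM-driven, and the criteria, labels, and function names here are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    description: str
    accepted_values: list  # ordered from worst to best

def critic_agent(task_description):
    # An LLM would propose these from the task description and example runs.
    return [
        Criterion("clarity", "ease of understanding the solution",
                  ["not clear", "moderately clear", "very clear"]),
        Criterion("efficiency", "use of optimal methods",
                  ["inefficient", "moderately efficient", "efficient"]),
    ]

def quantifier_agent(solution, criterion):
    # An LLM would grade the solution; stubbed as the middle accepted value.
    return criterion.accepted_values[len(criterion.accepted_values) // 2]

criteria = critic_agent("solve high-school competition math problems")
utility = {c.name: quantifier_agent("sample solution text", c) for c in criteria}
print(utility)  # {'clarity': 'moderately clear', 'efficiency': 'moderately efficient'}
```

A VerifierAgent step would then prune this criteria list before the utility dictionary is reported.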
In summary, our main contributions are: C1 Introducing AgentEval, a novel framework that leverages LLM-powered agents as a scalable and cost-effective alternative to human evaluation, to produce task utility through the collaboration of three agents: CriticAgent, QuantifierAgent, and VerifierAgent; and C2 An in-depth analysis of AgentEval's robustness for two applications across different solutions, which can be replicated on an unseen domain. 2 Related Work 2.1 Evaluation of LLMs Prior work (Guo et al., 2023; Ziyu et al., 2023; Chang et al., 2023; Liang et al., 2023a) has extensively studied the evaluation of LLMs on various fronts: how ethically sound they are (Stahl and Eke, 2024), how they align with human preferences (Hendrycks et al., 2021a; K\u00f6pf et al., 2024), their robustness (Wang et al., 2023b), and the knowledge and reasoning capabilities they possess (Bian et al., 2023). Recent work evaluates LLMs on more specialized tasks, such as the medical domain (Jin et al., 2019), multi-modal tasks (Mialon et al., 2023; Bang et al., 2023), or as agents in interactive environments (Liu et al., 2023). 2.2 User satisfaction prediction Studies suggest that users interacting with various systems operate with specific utility functions in mind (Li et al., 2020; Azzopardi et al., 2018; Ahmadvand et al., 2022). Traditionally, metrics defining user satisfaction were designed using large-scale collected behavioral signals (Kiseleva et al., 2014), and were tailored to specific applications, such as intelligent assistants (Kiseleva et al., 2016a,b), web search engines (Williams et al., 2016a,b; Williams and Zitouni, 2017), dialogue systems (See et al., 2019), multi-turn conversations (Li et al., 2021), and general-purpose personal assistants (Kiseleva and de Rijke, 2017). It was demonstrated that assessing users' satisfaction requires going beyond a single metric. 
As such, here, we propose a flexible framework to assess user and developer requirements, which can eventually be used to improve the application flow. 2.3 Using LLMs as evaluators More recently, there has been a growing trend in utilizing LLMs as evaluators (Chiang and Lee, 2023; Fu et al., 2023), such as for qualitative research (Bano et al., 2023) or summarization. Specifically, Jain et al. (2023) studied the efficacy of few-shot prompted LLM evaluators in evaluating summaries that were written by other LLMs. Similarly, Wang et al. (2023a) explore whether ChatGPT itself can be used as an evaluator, by prompting it to score texts. Other works (Tjuatja et al., 2023; Liu and Sun, 2023; Chiang and Lee, 2023) look at how LLMs can be used as proxies for human behavior, or work with humans, such as CoEval (Li et al., 2023b), which showed how LLMs can make human evaluation easier. Pan et al. (2024) also show how LLM evaluators can help build models that increase performance on downstream tasks. Building on the above, a different line of work identifies weaknesses in single LLMs as direct evaluators (Huang et al., 2023) and proposes to improve them, such as with a multi-step calibration framework (Wang et al., 2023c). Given these drawbacks, recent work has looked at how multiple LLM agents can be used as evaluators. Chan et al. (2023) proposed ChatEval, a multi-agent team that discusses and evaluates responses from agents on generation tasks (debate-style), leading to text that aligns better with human preferences. Similarly, Chern et al. (2024) proposed a multi-agent-debate-assisted meta-evaluation framework. Building on these works, we propose an automatic multi-agent assessment of utility for arbitrary LLM-powered applications, to provide deep insights for developers. Our framework can uncover current flaws in these applications and may lead to improvements in them, particularly when it is re-applied after the application flow changes. 
3 Task Utility Fig. 2 outlines a taxonomy of target tasks for LLM-powered applications in terms of success metrics. At a high level, these tasks can be categorized into: 1) Success is not clearly defined \u2014 users interact with the system in an assistive manner, seeking suggestions from it rather than expecting it to solve the task. For example, a user can ask the system to generate an email; the user typically treats the system's response as a template, which can later be edited. Directly evaluating assistive tasks like these is hard, particularly for online evaluation or when dealing with less well-defined tasks. One potential approach is to directly ask users how useful the help was, but such feedback is not well-calibrated (Borisov et al., 2018), hard to quantify (Sepliarskaia et al., 2018), and expensive to collect. 2) Success is clearly defined \u2014 it is clear whether the system solved the task or not, for example, assisting with household tasks, where success is clear and measurable. This category can be further divided into two subcategories: \u2022 An optimal solution exists \u2014 only one successful outcome is possible. For example, when asking an assistant to turn on a light, success is clearly defined, as there is only one way to do it. \u2022 Multiple solutions exist \u2014 increasingly, we observe situations where multiple trajectories of agent behavior can lead to success. For example, when asking an agent to suggest a food recipe, success could mean any of several cuisines that taste good, though perhaps the recipe should also not be expensive. 
Figure 2: The taxonomy of task assessment. AgentEval is currently focused on tasks where success is clearly defined and multiple successful solutions may exist. Previous research on assistive agents suggests human pairwise preference as one of the most optimal assessments, i.e., the annotator is presented with two agents side by side and asked for their preference (Kiseleva et al., 2022b). In this setup of side-by-side pairwise comparison, humans tend to suggest a list of criteria explaining why they prefer one agent over the other, for instance, 'the first agent was faster' or 'the second agent converses more naturally'. This comparative setup can guide humans to come up with a list of criteria that helps to infer the utility of the task. With this in mind, we designed AgentEval (Fig. 1) by employing LLMs to help us understand, verify, and assess task utility, namely: \u2022 CriticAgent: The goal of this agent is to suggest a set of criteria that can be used to assess task utility. The CriticAgent is given a task description, as well as, optionally, several pairs of solutions, where preferably some are preferred over the others, for instance, successful and failed examples. CriticAgent returns a set of criteria C = {c1, . . . , cn}, where each criterion ci is accompanied by a set of m accepted values {\u03c91, . . . , \u03c9m}. For example, for solving math problems, the CriticAgent generated criteria and accepted values such as clarity, efficiency, and more; see Tab. 1. 
\u2022 QuantifierAgent: The goal of QuantifierAgent is to quantify each of the suggested criteria, to assess the task utility Ut of the system for the end user. We define the utility for task t as Ut(s) = {Qi(s|ci)} for i = 1, . . . , n, where s represents the task sample and Qi(s|ci) is the quantifier output for sample s based on criterion ci. For example, for math problem solving, given the generated criteria shown in Tab. 1, the solution's Accuracy could be quantified as 'incorrect', 'partially correct', or 'correct'. Eligible values for the quantification process are shown in the 'Accepted Values' column of Tab. 1. \u2022 VerifierAgent: There might be cases where not all the criteria suggested by CriticAgent help assess utility. Some criteria might be redundant, while others may not aid in distinguishing performance. VerifierAgent validates the quality of the criteria in terms of robustness and their ability to distinguish noisy samples. Essentially, it checks (1) whether the criteria can be quantified robustly over repeated samples, and (2) whether QuantifierAgent can identify adversarially attacked samples from the original ones. If these sanity checks do not pass, VerifierAgent updates the list of criteria, so as to end up with a set of robust, stable, informative, and distinguishable criteria for assessment. Finally, we note that AgentEval allows for incorporating a human in the loop in the role of a domain expert. For instance, CriticAgent could be replaced by a human expert who either comes up with the relevant criteria or helps VerifierAgent verify the useful criteria and filter out the unessential ones. 4 Datasets and Solutions This section provides an overview of the datasets utilized in our study, i.e., Math problem solving and the ALFWorld household task. The math dataset is chosen for its widespread usage and complex problem-solving scenarios, which are fundamental in evaluating the effectiveness of solutions. 
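The utility Ut(s) defined above is a vector of per-criterion quantifier outputs. A minimal sketch, assuming the ordinal 0-2 scale of Tab. 1 (criterion names and labels here are illustrative):

```python
# Map each criterion's quantified label to its ordinal score Qi(s|ci),
# yielding the multi-dimensional utility Ut(s) for one task sample s.
ACCEPTED = {
    "clarity": ["not clear", "moderately clear", "very clear"],
    "efficiency": ["inefficient", "moderately efficient", "efficient"],
}

def utility(quantified):
    """quantified: {criterion: label} for one sample; returns ordinal scores."""
    return {c: ACCEPTED[c].index(label) for c, label in quantified.items()}

u = utility({"clarity": "very clear", "efficiency": "moderately efficient"})
print(u)  # {'clarity': 2, 'efficiency': 1}
```

Keeping the output as a vector, rather than collapsing it to one scalar, is what lets developers see on which dimensions a solution under- or over-performs.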
The ALFWorld dataset offers a scenario involving multi-turn interactions within a moderately approximated multi-modal environment. Each dataset plays a critical role in evaluating different aspects of AgentEval's capabilities, from handling complex theoretical problems to navigating real-world scenarios. In both tasks, although success is clearly defined, multiple solutions exist for accomplishing the objectives. Examples of a Math problem-solving task and an ALFWorld task are shown in Appendix A.1. Due to space constraints, we report all experiments on Math problem solving in the main paper and keep all experiments related to the ALFWorld dataset in Appendix A.3. 4.1 MATH Problem Solving Dataset: The MATH dataset is a substantial collection of 12,500 challenging mathematics problems from high school competitions (Hendrycks et al., 2021b). Each problem comes with a step-by-step solution and is tagged by difficulty level. Similar to the math problem experimental setup in Wu et al. (2023), we carry out evaluations on 120 level-5 problems using three different solutions. Due to limited space, for more details about this dataset we refer readers to Appendix A.2. Solutions: In establishing the solutions to assess for this task, we draw inspiration from the experiments showcased in Wu et al. (2023). We evaluate the methodology proposed by AutoGen (Wu et al., 2023), as well as Langchain ReAct (Yao et al., 2022) and a Vanilla solver that employs GPT-4 to tackle the task. These solutions have previously demonstrated promising and competitive performance (Wu et al., 2023). In Sec. 5.2, we explore how the performance measured with AgentEval correlates with the ground truth. 4.2 ALFWorld Household Task Dataset: ALFWorld presents a set of language-based interactive decision-making tasks within simulated household environments (Shridhar et al., 2020b). 
ALFWorld is the first interactive parallel environment that aligns text descriptions and commands with a physically embodied robotic simulation. Finally, the dataset's range of tasks, from household chores to more intricate problem-solving scenarios, provides a comprehensive testbed for evaluating the adaptability of multi-agent systems. For more information about the dataset and examples of the test cases, we refer the readers to Appendix A.3.1. Solutions: As for the solutions to assess for the ALFWorld household tasks, similar to Wu et al. (2023), we consider ReAct (Yao et al., 2022) as well as AutoGen with two agents and AutoGen with three agents (Wu et al., 2023). In Appendix A.3.2, we discuss the solutions under assessment in more detail. We assess and compare the performance of these three solutions using AgentEval. 5 Experiments 5.1 Implementation Details For all experiments, we use GPT-4 version 0613, accessed through Azure OpenAI services, as the LLM, with a temperature of 0. AgentEval utilizes AutoGen (Wu et al., 2023) for its implementation, since it provides a versatile environment where agents can be finely tuned and customized based on specific application needs. Figure 3: AgentEval assessment of three solutions on math problems categorized by success and failed cases. This is crucial for maintaining the flexibility to handle a wide range of applications. We avoided heavy prompt engineering and kept each agent's instructions as if we were instructing human annotators. Moreover, another advantage of using AutoGen for the implementation of AgentEval is its flexibility to involve a human in the loop: each agent could be replaced by a human annotator. We further provide all the prompts used in our experiments in our Git repository. 
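The setup of Sec. 5.1 might look as follows in an AutoGen-style configuration. This is an assumed sketch: the field names follow pyautogen conventions but vary across versions and should be checked against the installed release, and the endpoint and key are placeholders.

```python
# Assumed AutoGen-style LLM configuration mirroring Sec. 5.1:
# GPT-4 (version 0613) via Azure OpenAI, temperature 0 for determinism.
config_list = [{
    "model": "gpt-4",
    "api_type": "azure",                                  # Azure OpenAI services
    "base_url": "https://<your-endpoint>.openai.azure.com",  # placeholder
    "api_version": "2023-07-01-preview",                  # illustrative value
    "api_key": "<AZURE_OPENAI_KEY>",                      # placeholder
}]
llm_config = {"config_list": config_list, "temperature": 0}
```

This dictionary would then be passed to each agent (CriticAgent, QuantifierAgent, VerifierAgent) at construction time.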
5.2 AgentEval for Math Problems When executing the CriticAgent for Math problem solving, we first obtain a set of criteria, as presented in Tab. 1. Then, the QuantifierAgent is tasked with quantifying each criterion based on the accepted values. We present the outcome of QuantifierAgent measuring the performance of three solutions on this task in Fig. 3. Notably, we see that AgentEval does not quantify the three solutions as if they perform equally well across the different criteria. For instance, while all three solutions leverage GPT-4 as the underlying language model, AutoGen outperforms ReAct and the Vanilla Solver in terms of accuracy. This observation, while confirmed by previous studies (Wu et al., 2023), extends to solution completeness and efficiency as well. As depicted in Fig. 3, the range of quantified values for error analysis differs from the other metrics. We scrutinize the results by categorizing them into successful and failed cases. The AutoGen, Vanilla Solver, and ReAct solutions are presented in orange, blue, and green respectively, where the darker bars represent the performance on successful cases and the lighter bars represent the failed cases. The difference between the dark and light bars of each color verifies AgentEval's performance, as we expect each positive criterion to be quantified higher for successful cases than for failed cases. Table 1: Verification criteria for Math problems. Criteria | Description | Accepted Values: Clarity | The ease of understanding the steps, explanations, and language used in the solution. | Not Clear (0), Moderately Clear (1), Very Clear (2). Efficiency | The use of optimal methods or approaches to solve the math problem. | Inefficient (0), Moderately Efficient (1), Efficient (2). Error Analysis | The identification and description of possible errors or misconceptions in the math problem-solving process. | Not Addressed (0), Partially Addressed (1), Well Addressed (2). Completeness | Quality of code in terms of efficiency and elegance | Incomplete (0), Mostly Complete (1), Complete (2). We observe that in most cases, the successful and failed cases are distinguished, even with 95% confidence intervals on all the successful and failed cases. When examining the differences between successful and failed cases among the three solutions, we note that not all successful cases are assessed identically, nor are all failed cases quantified with the same performance. This can be interpreted to mean that even though two solutions might both be successful, one might perform better or worse on certain criteria, such as clarity or efficiency. This observation provides valuable additional insights, especially for the developers of the proposed solutions, and goes beyond reporting the effectiveness of an application by a single scalar value, e.g., success rate. 6 Robustness Analysis and Verification In this section, we first analyze the robustness of AgentEval, then further investigate how VerifierAgent can increase the stability of our assessment. 6.1 Diversity of Criteria Here, our main goal is to study the diversity of the suggested criteria. We investigate the extent to which inputs to AgentEval (Fig. 1), such as the 'Task Description' and 'Successful/Failed Executions', contribute to CriticAgent creating a more diverse set of criteria. To do so, we use two distinct methods, with CriticAgent generating (1) 'task-based' criteria, solely from the task description, and (2) 'solution-based' criteria, derived from both the task and execution examples. For example, solving a mathematical problem might call for criteria such as 'Accuracy' and 'Clarity', independent of the particular solution. 
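The Fig.-3-style aggregation over the criteria of Tab. 1 can be sketched as follows; the per-sample scores below are hypothetical stand-ins for QuantifierAgent outputs, encoded on the table's 0-2 ordinal scale.

```python
from statistics import mean

# Tab. 1 criteria with their ordinal accepted values (index = score 0..2).
CRITERIA = {
    "clarity": ["Not Clear", "Moderately Clear", "Very Clear"],
    "efficiency": ["Inefficient", "Moderately Efficient", "Efficient"],
    "error_analysis": ["Not Addressed", "Partially Addressed", "Well Addressed"],
    "completeness": ["Incomplete", "Mostly Complete", "Complete"],
}

def average_scores(samples):
    """samples: list of {criterion: score} dicts for one solution/outcome bucket."""
    return {c: mean(s[c] for s in samples) for c in CRITERIA}

# Hypothetical quantified scores for two successful and two failed runs.
success = [{"clarity": 2, "efficiency": 2, "error_analysis": 1, "completeness": 2},
           {"clarity": 2, "efficiency": 1, "error_analysis": 0, "completeness": 2}]
failed  = [{"clarity": 1, "efficiency": 0, "error_analysis": 1, "completeness": 0},
           {"clarity": 0, "efficiency": 1, "error_analysis": 0, "completeness": 1}]
print(average_scores(success)["clarity"], average_scores(failed)["clarity"])  # 2 0.5
```

A dark/light bar pair in Fig. 3 corresponds to the success/failed averages of one criterion for one solution; a positive gap between them is the expected pattern.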
However, when additional tools such as coding are used to solve the problems, additional criteria like 'Code Efficiency' may be introduced to the set. This makes sense, since the application leveraged coding to solve math problems. Figure 4: Task-based vs. solution-based criteria for Math problems. Error bars show the 95% confidence interval. Fig. 4 displays the number of unique criteria extracted for mathematical problem solving in the task-based mode and the three different solution-based approaches. To balance computational cost against the robustness analysis, we conducted 50 runs of the CriticAgent with different seeds. Subsequently, for N = 50 iterations, we randomly select M \u2264 50 samples, as shown on the x-axis of Fig. 4, and present the average number of unique extracted criteria, along with its 95% confidence interval, after repeating this process 50 times. We note that because the total pool of criteria comes from 50 iterations in total, the confidence intervals become smaller as M gets closer to the maximum number of samples, i.e., 50. To gain deeper insights into the diversity of the criteria, we took a closer look at them to study whether they are truly unique or to what extent they are similar. This is important to determine whether CriticAgent, when continually generating criteria, will always produce new criteria, or whether it will eventually converge to a fixed set. We noted that some criteria are similar but worded differently, for example, 'Problem Complexity' vs. 'Problem Difficulty' or 'Time Taken' vs. 'Time to Completion'. Tab. 3 in the Appendix lists such instances. To consolidate the similar criteria and reduce noise and redundancy in the number of unique criteria, inspired by previous work (Liu et al., 2022; Vahtola et al., 2022; Reimers and Gurevych, 2019), we employ a pre-trained language model fine-tuned for paraphrasing (https://bit.ly/3UgsYOp) to measure the semantic similarity of criteria descriptions. Using a threshold \u03c4, we classify pairs with cosine similarity greater than \u03c4 as semi-identical and select one of them as the representative of the pair. Fig. 4 illustrates the impact of different \u03c4 values (0.7, 0.85, 1) on the diversity of criteria; a threshold of 1 means no filtering occurs. This analysis shows that the solution-based approach has the potential to produce more diverse criteria than the task-based approach, although this varies with the creativity of the model. For example, while the AutoGen solution demonstrates the highest diversity, task-based methods yield more unique criteria than ReAct and the Vanilla Solver. Another interesting observation is that repeating the CriticAgent eventually leads to convergence in the number of criteria. This suggests that the CriticAgent's ability to create new criteria will diminish, converging to an almost finite list, which also reduces cost. 6.2 Verification As outlined in Sec. 3 and illustrated in Fig. 1, the VerifierAgent's primary role is to ensure the selected criteria are effective toward evaluating utility for the end-user, while maintaining robustness and high discriminative power. To achieve this, the VerifierAgent undertakes two main actions: (1) Criteria Stability: The criteria should be essential and robust, meaning they should not be redundant and we should be able to quantify them stably, with no divergence, when quantification is repeated for an individual solution. As such, VerifierAgent enhances the criteria by iterating over the generation and quantification phases. It then consolidates these criteria by identifying and eliminating redundancies, followed by evaluating the dispersion of the distribution of the quantified criteria. This step modifies the criteria, ensuring that only the most robust ones are retained. (2) Discriminative Power: A reliable evaluation should detect and withstand noise. 
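The threshold-based consolidation can be sketched as a greedy filter. Note the assumptions: the paper computes cosine similarity between paraphrase-model embeddings with \u03c4 in {0.7, 0.85, 1}; to keep this sketch self-contained, a token-overlap (Jaccard) measure stands in for that similarity, so the illustrative \u03c4 below is much lower and not comparable to the embedding-space values.

```python
def dedupe_criteria(criteria, tau, similarity=None):
    """Keep one representative from each group of semi-identical criteria.

    `similarity` stands in for cosine similarity between embeddings of the
    criteria descriptions; a Jaccard word-overlap is used here only so the
    sketch runs without an embedding model.
    """
    if similarity is None:
        def similarity(a, b):
            ta, tb = set(a.lower().split()), set(b.lower().split())
            return len(ta & tb) / len(ta | tb)
    kept = []
    for desc in criteria:
        # A candidate is new only if it is not semi-identical (sim > tau)
        # to any representative already kept.
        if all(similarity(desc, k) <= tau for k in kept):
            kept.append(desc)
    return kept

names = ["Problem Complexity", "Problem Difficulty", "Time Taken", "Time to Completion"]
print(dedupe_criteria(names, tau=0.2))  # ['Problem Complexity', 'Time Taken']
```

With \u03c4 = 1 nothing is ever greater than the threshold, so no filtering occurs, matching the paper's no-filtering case.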
To test this, we propose to use adversarial examples and then assess the system's ability to differentiate between these compromised examples and standard cases. Should the system fail to distinguish them effectively, it indicates that the criteria are insufficient for reliable assessment under varied conditions. We note that both steps involve a tunable threshold that can be adapted based on application needs, ensuring flexible criteria validation. Figure 5: Distribution of QuantifierAgent output on AutoGen results on successful (dark blue) and failed (light blue) cases on different criteria. The proposed methodology for VerifierAgent is summarized in Algorithm 1 in the Appendix. 6.2.1 Criteria Stability Our goal here is to explore the stability of the criteria and the robustness of the quantifier, in order to obtain a more essential, robust, and stable set of criteria. We specifically evaluate the QuantifierAgent's robustness using the criteria for mathematical problems (Table 1), conducting 50 runs with different seeds on 120 problems (Section 4.1). The ideal outcome is consistent performance across all criteria over all repeats. Fig. 5 illustrates the distribution of quantifier values for both successful and failed cases across all criteria through box plots. The more robust a criterion, the narrower the range of quantified performance (narrower box plots). Also, the less overlap between the successful and failed boxes, the higher the distinguishability of the criterion. We observe that all criteria except 'Error Analysis' allow for easy differentiation between successful and failed cases. Additionally, some criteria prove to be more robust than others. We believe that such an analysis of the quantifier agent's performance will yield valuable insights for enhancing reliability, trustworthiness, and explainability in performance evaluation. 
A detailed examination of the stability of each criterion, especially how they differentiate between successful and failed cases, is provided in Appendix A.4.2. Further, to refine and expand the criteria set without redundancy, we run the CriticAgent multiple times, i.e., we execute CriticAgent 50 times with varied seeds. The criteria are then summarized into one list of useful criteria using the LLM. Figure 6: \u2206 of the sum of the mean coefficient of variation across all criteria with increasing number of seeds. Additionally, as explained in Section 6.1, we remove similar and redundant criteria using pre-trained language models, thus obtaining a comprehensive list of criteria. The refined criteria after 50 repeats are detailed in Tab. 4 in the Appendix. Now, we aim to determine the stability of these criteria through repeated quantifications. Our goal is to identify criteria that maintain consistent results, without significant divergence, even when quantified multiple times. Using this consolidated list, we have the QuantifierAgent quantify various test cases and measure the dispersion of its outputs with respect to each criterion using the coefficient of variation, a standardized metric that facilitates comparison across various test cases, and report the mean coefficient of variation across all samples. We run QuantifierAgent with 50 seeds and plot the change (\u2206) in the sum of the mean coefficient of variation across all criteria against the number of seeds in Figure 6. For each criterion, we compute the absolute difference from the mean coefficient of variation calculated when using n\u22121 seeds, summing these absolute differences across all criteria. 
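The stability test can be sketched directly from the definition of the coefficient of variation (standard deviation over mean); the scores and the 0.5 cutoff below are illustrative, the cutoff echoing the paper's observation that a mean CV around or below 0.5 indicates a robustly quantifiable criterion.

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """Dispersion of repeated quantifications of one criterion on one sample."""
    m = mean(values)
    return pstdev(values) / m if m else float("inf")

def stable_criteria(runs, threshold=0.5):
    """runs: {criterion: [score per seed]}; keep criteria with low mean CV."""
    return [c for c, vals in runs.items()
            if coefficient_of_variation(vals) <= threshold]

# Hypothetical per-seed scores: 'clarity' is stable, 'error_analysis' is not.
runs = {"clarity": [2, 2, 2, 1, 2], "error_analysis": [0, 2, 0, 1, 2]}
print(stable_criteria(runs))  # ['clarity']
```

In the full pipeline this filter would run per sample and the CVs would be averaged across samples before being compared with the threshold.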
According to the plot, after approximately 18 seeds, the magnitude of the change in the mean coefficient of variation stabilizes and becomes negligible. In almost all cases, the mean coefficient of variation is around or below 0.5, which is relatively small, suggesting that QuantifierAgent is quite robust. 6.2.2 Discriminative Power It is crucial to ensure the quality of quantification of each criterion. Ideally, this validation would involve comparisons with known pairwise samples, where sample S+ is definitively superior to S\u2212 for a given criterion. If the evaluator also confirms the superiority of S+ with respect to S\u2212, its quantification is robust. However, due to the rapid expansion of LLM-powered applications, obtaining annotated data for many tasks is often unfeasible. Therefore, we propose using synthetically altered versions of samples for verification. Let us assume we have an alternative disturbed version of sample S, which is called S\u2032. Assuming sample S is more likely to outperform its disturbed version S\u2032, our assessment should confirm this assumption by assigning better quantified performance to S than to S\u2032. In experiments with mathematical problems, we introduced random noise by removing portions of the solution sentences from the results of AutoGen, VanillaSolver, and ReAct respectively, expecting that criteria like \u2018Completeness\u2019 or \u2018Clarity\u2019 would be higher in S than in S\u2032. We disturbed solutions by removing 25% of the sentences and assessed the QuantifierAgent\u2019s performance. As shown in Fig. 7, criteria measuring aspects like \u2018Clarity\u2019 and \u2018Completeness\u2019 were lower in disturbed solutions (lighter bars), confirming QuantifierAgent\u2019s high discriminative power and effectiveness. We have already filtered out the criteria that were unstable, i.e., those that had a high mean standard deviation and dispersion when being quantified in the previous section.
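A minimal sketch of the sentence-removal disturbance described above, assuming a naive period-based sentence split; the actual disturbance procedure and the quantifier itself are the paper's, not reproduced here.

```python
import random

def disturb(solution: str, frac: float = 0.25, seed: int = 0) -> str:
    """Create a disturbed sample S' by dropping a fraction of sentences from S."""
    sentences = [s for s in solution.split(".") if s.strip()]
    rng = random.Random(seed)
    k = max(1, int(len(sentences) * frac))           # drop ~25% of the sentences
    drop = set(rng.sample(range(len(sentences)), k))
    kept = [s for i, s in enumerate(sentences) if i not in drop]
    return ".".join(kept) + "."

solution = "Let x be the unknown. Then 2x = 10. So x = 5. We check 2 * 5 = 10."
disturbed = disturb(solution)
```

A discriminative quantifier should then score `disturbed` lower than `solution` on criteria such as completeness.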
We report the results of the QuantifierAgent quantifying the differences between original and disturbed samples on the comprehensive set of criteria shown in the Appendix, as shown in Fig. 13 for math problem-solving. In most cases, the QuantifierAgent quantifies the disturbed output to be worse than the original task output. We believe analyzing the QuantifierAgent\u2019s performance will enhance the reliability, trustworthiness, and explainability of evaluations. 6.2.3 VerifierAgent After modifying the list of criteria (Sec. 6.2.1), we have developed a stable and robust list of criteria that the QuantifierAgent can reliably quantify. Further, we also proposed a method for assessing whether the criteria can distinguish between adversarially perturbed samples and the original ones. These two tests serve as input for the VerifierAgent (described in Algorithm 1), which can also have its threshold tuned for different applications. For instance, one might prioritize the stability of the criteria, while another may value the discriminative power of AgentEval for specific applications. As such, the VerifierAgent will modify and update the criteria based on the extent to which they pass the two tests, i.e., whether the mean coefficient of variation is below a specific threshold and the percentage of adversarial tests passed. [Figure 7: Assessment of original and disturbed solutions on Math dataset (discriminative power study).] The VerifierAgent will then update the criteria if necessary. We believe that having a VerifierAgent helps continuously update the criteria as needed because, as systems improve, we may require new criteria that were not previously necessary for utility assessment.
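The two-test gate described for the VerifierAgent can be sketched as a simple filter; the thresholds and report fields below are illustrative assumptions, not AgentEval's actual interface.

```python
from dataclasses import dataclass

@dataclass
class CriterionReport:
    name: str
    mean_cv: float        # stability: mean coefficient of variation across seeds
    adv_pass_rate: float  # discriminative power: fraction of adversarial tests passed

def verify(reports, max_cv=0.5, min_pass=0.8):
    """Keep only criteria that are both stable and discriminative."""
    return [r.name for r in reports
            if r.mean_cv <= max_cv and r.adv_pass_rate >= min_pass]

reports = [
    CriterionReport("clarity", mean_cv=0.2, adv_pass_rate=0.95),
    CriterionReport("completeness", mean_cv=0.3, adv_pass_rate=0.9),
    CriterionReport("error analysis", mean_cv=0.9, adv_pass_rate=0.4),
]
kept = verify(reports)  # 'error analysis' fails both tests and is dropped
```

Tightening or loosening `max_cv` and `min_pass` corresponds to the application-specific threshold tuning mentioned above.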
7" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02225v1.json b/abs_9K/test_abstract_short_2405.02225v1.json new file mode 100644 index 0000000000000000000000000000000000000000..6b81a9c51067c3d8e111ebd9dc073e61ff1218fa --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02225v1.json @@ -0,0 +1,20 @@ +{ + "url": "http://arxiv.org/abs/2405.02225v1", + "title": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks", + "abstract": "This paper introduces a framework for post-processing machine learning models\nso that their predictions satisfy multi-group fairness guarantees. Based on the\ncelebrated notion of multicalibration, we introduce $(\\mathbf{s},\\mathcal{G},\n\\alpha)-$GMC (Generalized Multi-Dimensional Multicalibration) for\nmulti-dimensional mappings $\\mathbf{s}$, constraint set $\\mathcal{G}$, and a\npre-specified threshold level $\\alpha$. We propose associated algorithms to\nachieve this notion in general settings. This framework is then applied to\ndiverse scenarios encompassing different fairness concerns, including false\nnegative rate control in image segmentation, prediction set conditional\nuncertainty quantification in hierarchical classification, and de-biased text\ngeneration in language models. We conduct numerical studies on several datasets\nand tasks.", + "authors": "Lujing Zhang, Aaron Roth, Linjun Zhang", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.CY", + "cs.LG", + "stat.ME" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "This paper introduces a framework for post-processing machine learning models\nso that their predictions satisfy multi-group fairness guarantees. 
Based on the\ncelebrated notion of multicalibration, we introduce $(\\mathbf{s},\\mathcal{G},\n\\alpha)-$GMC (Generalized Multi-Dimensional Multicalibration) for\nmulti-dimensional mappings $\\mathbf{s}$, constraint set $\\mathcal{G}$, and a\npre-specified threshold level $\\alpha$. We propose associated algorithms to\nachieve this notion in general settings. This framework is then applied to\ndiverse scenarios encompassing different fairness concerns, including false\nnegative rate control in image segmentation, prediction set conditional\nuncertainty quantification in hierarchical classification, and de-biased text\ngeneration in language models. We conduct numerical studies on several datasets\nand tasks.", + "main_content": "Introduction A common theme across the fairness in machine learning literature is that some measure of error or risk should be equalized across sub-populations. Common measures evaluated across demographic groups include false positive and false negative rates (Hardt et al., 2016) and calibration error (Kleinberg et al., 2016; Chouldechova, 2017). Initial work in this line gave methods for equalizing different risk measures on disjoint groups. A second generation of work gave methods for equalizing measures of risk across groups even when the groups could intersect \u2013 e.g. for false positive and negative rates (Kearns et al., 2018), calibration error (\u00darsula H\u00e9bert-Johnson et al., 2018), regret (Blum & Lykouris, 2019; Rothblum & Yona, 2021), prediction set coverage (Jung et al., 2021, 2022; Deng et al., 2023), among other risk measures. In general, distinct algorithms are derived for each of these settings, and they are generally limited to one-dimensional predictors of various sorts. In this work, we propose a unifying framework for fair risk control in settings with multi-dimensional outputs, based on multicalibration (\u00darsula H\u00e9bert-Johnson et al., 2018). 
This framework is developed as an extension of the work by Deng et al. (2023); Noarov & Roth (2023), and addresses the need for calibrating multi-dimensional output functions. To illustrate the usefulness of this framework, we apply it to a variety of settings, including false negative rate control in image segmentation, prediction set conditional coverage guarantees in hierarchical classification, and de-biased text generation in language models. These applications make use of the additional power granted by our multi-dimensional extension of multicalibration. 1.1 Related Work Multicalibration was introduced by \u00darsula H\u00e9bert-Johnson et al. (2018) as a fairness-motivated constraint that informally asks that a 1-dimensional predictor of a binary-valued outcome be unbiased, conditional on both its own prediction and on membership of the input in some number of pre-defined groups (see also a line of prior work that asks for a similar set of guarantees under slightly different conditions (Dawid, 1985; Sandroni et al., 2003; Foster & Kakade, 2006)). [Footnotes: 1. Work was done during Lujing Zhang\u2019s remote research internship at Rutgers and Penn. Email: misdrifter@stu.pku.edu.cn 2. University of Pennsylvania. Email: aaroth@cis.upenn.edu 3. Rutgers University. Email: linjun.zhang@rutgers.edu; Corresponding Author. arXiv:2405.02225v1 [stat.ML] 3 May 2024] Subsequently, multicalibration has been generalized in a number of ways. Jung et al. (2021) generalizes multicalibration to real-valued outcomes, and defines and studies a variant of multicalibration that predicts variance and higher moments rather than means. Gupta et al. (2022) extends the study of multicalibration of both means and moments to the online setting, and defines a variant of multicalibration for quantiles, with applications to uncertainty estimation. Bastani et al. (2022); Jung et al.
(2022) give more practical variants of quantile multicalibration with applications to conditional coverage guarantees in conformal prediction, together with experimental evaluation. Deng et al. (2023) gives an abstract generalization of 1-dimensional multicalibration, and shows how to cast other algorithmic fairness desiderata like false positive rate control in this framework. Noarov & Roth (2023) gives a characterization of the scope of 1-dimensional multicalibration variants via a connection to property elicitation: informally, a property of a distribution can be multicalibrated if and only if it minimizes some 1-dimensional separable regression function. The primary point of departure of this paper is that we propose a multi-dimensional generalization of multicalibration: it can be viewed as the natural multi-dimensional generalization of Deng et al. (2023). Another line of work generalizes multicalibration in an orthogonal direction, leaving the outcomes binary valued but generalizing the class of checking rules that are applied. Dwork et al. (2021) defines outcome indistinguishability, which generalizes multicalibration to require indistinguishability between the predicted and true label distributions with respect to a fixed but arbitrary set of distinguishers. Kakade & Foster (2008); Foster & Hart (2018) define \u201csmooth calibration\u201d, which relaxes calibration\u2019s conditioning event to be a smooth function of the prediction. Gopalan et al. (2022) defines a hierarchy of relaxations called low-degree multicalibration that further relaxes smooth calibration and demonstrates desirable statistical properties. Zhao et al. (2021) and Noarov et al. (2023) define notions of calibration tailored to the objective function of a downstream decision maker. These last lines of work focus on multi-dimensional outputs. These lines of work are part of a more general literature studying multi-group fairness. Work in this line aims e.g.
to minimize disparities between false positive or false negative rates across groups (Kearns et al., 2018, 2019), or to minimize regret (measured in terms of accuracy) simultaneously across all groups (Blum & Lykouris, 2019; Rothblum & Yona, 2021; Globus-Harris et al., 2022; Tosh & Hsu, 2022). A common theme across these works is that the groups may be arbitrary and intersecting. 1.2 Notation Let X represent a feature domain, Y represent a label domain, and D denote a joint (feature, label) data distribution. For a finite set A, we use |A| and \u2206A to denote the cardinality of A and the simplex over A respectively. Specifically, \u2206A = {(p1, p2, . . . , p|A|) : 0 \u2264 pi \u2264 1, \u2211|A|i=1 pi = 1}. Given a set F, we use ProjF to denote the \u21132-projection onto the set. We also introduce some shorthand notation. For two vectors a and b, \u27e8a, b\u27e9 represents their inner product. For a positive integer T, we define [T] = {1, 2, . . . , T}. For a function f(x) = (f1(x), f2(x), ..., fm(x)), we denote \u2225f\u2225\u221e = supx\u2208X, i\u2208[m] fi(x). 2 Formulation and Algorithm 2.1 A generalized notion of Multicalibration Let x \u2208X represent the feature vector of the input, y \u2208Y represent the label, and let h(x) \u2208H denote a multi-dimensional scoring function associated with the input. For example, in image segmentation tasks, h(x) \u2208Rk (k is the number of pixels) is intended to approximate the probability of a pixel being part of a relevant segment, often learned by a neural network. In text generation tasks, h(x) is the distribution over the vocabulary produced by a language model given context x. For x \u2208X, consider an output function f : X \u2192F \u2282Rm, defined as f(x) = (f1(x), . . . , fm(x)), where F is a convex set. We denote the class of functions that f belongs to by Q.
For example, in text generation tasks, f(x) is the calibrated distribution over the output vocabulary and is multi-dimensional (with dimension equal to the vocabulary size); in binary classification tasks where h and f are both scalars, f(x) is the threshold used to convert the raw score h(x) into binary predictions, i.e. 1{h(x)>f(x)}. We write s(f, x, h, y, D) : Q \u00d7 X \u00d7 H \u00d7 Y \u00d7 P \u2192Rl to denote a mapping functional of interest, where D is the joint distribution of (x, h, y) and P is the distribution space. Here, s is set to be a functional of f rather than a function of f(x), which offers us more flexibility that will be useful in our applications. For example, in text generation, where h(x) \u2208\u2206Y is the distribution over tokens output by an initial language model, our goal might be to find f(x) \u2208\u2206Y, an adjusted distribution over tokens y \u2208Y with |Y| = m. In this case we could set s = f(x) \u2212Exf(x) \u2208Rm to be the mapping functional. We can calibrate the probabilities (through s) to be \u201cfair\u201d in some way \u2013 e.g. that the probability of outputting various words denoting professions should be the same regardless of the gender of pronouns used in the prompt. We note that s need not depend on all of its inputs, and we assign different s in different settings. We write G to denote the class of functions that encode demographic subgroups (along with other information) and for each g \u2208G, g(f(x), x) \u2208Rl, consistent with the dimension of s(f, x, h, y, D) so that we can calibrate over every dimension of s. For example, when l = 1, G can be set to be the indicator functions of different sensitive subgroups of X. Alternately, in fair text generation tasks, when the dimension of s equals the size of the set Y, denoted as l = m, we can set the vector g \u2208G to have a value of 1 in the dimensions corresponding to certain types of sensitive words, and 0 in all other dimensions.
We now formally introduce the (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) definition. Definition 1 ((s, G, \u03b1)-GMC). Let x, h, y, D denote the feature vector, the scoring function, the label vector, and the joint distribution of (x, h, y) respectively. Given a function class G, mapping functional s, and a threshold \u03b1 > 0, we say f satisfies (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) if E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. (s, G, \u03b1)-GMC is a flexible framework that can instantiate many existing multi-group fairness notions, including s-HappyMap (Deng et al., 2023), property multicalibration (Noarov & Roth, 2023), calibrated multivalid coverage (Jung et al., 2022) and outcome indistinguishability (Dwork et al., 2021). More generally, compared to these notions, (s, G, \u03b1)-GMC extends the literature in two ways. First, it allows the functions s and g to be multi-dimensional (most prior definitions look similar, but with 1-dimensional s and g functions). Second, the function s here is more general and allowed to be a functional of f (rather than just a function of f(x), the evaluation of f at x). These generalizations will be important in our applications. 2.2 Algorithm and Convergence Results To achieve (s, G, \u03b1)-GMC, we present the (s, G, \u03b1)-GMC Algorithm, which can be seen as a natural generalization of algorithms used for more specific notions of multicalibration in previous work (\u00darsula H\u00e9bert-Johnson et al., 2018; Dwork et al., 2021; Jung et al., 2022; Deng et al., 2023): Algorithm 1 (s, G, \u03b1)-GMC Algorithm Input: step size \u03b7 > 0, initialization f (0) \u2208Q, max iteration T. Initialization: t = 0.
while t < T, \u2203g(t) \u2208G s.t.: E(x,h,y)\u223cD[\u27e8s(f (t), x, h, y, D), g(t)(f (t)(x), x)\u27e9] > \u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF(f (t)(x) \u2212\u03b7g(t)(f (t)(x), x)) t = t + 1 end while Output: f (t) It is worth noting that our goal involves functionals concerning our objective function f in order to capture its global properties. We aim to find a function f such that a functional associated with it (obtained by taking the expectation over x) satisfies the inequalities we have set to meet different fairness demands. Before delving into the main part of our convergence analysis, we introduce some definitions related to functionals. Examples of these definitions can be found in the appendix Section B. Definition 2 (The derivative of a functional). Given a function f : X \u2192F, consider a functional L(f, D) : Q\u00d7P \u2192R, where Q is the function space of f and P is a distribution space over X. Assume that L(f, D) follows the formulation that L(f, D) = Ex\u223cD[L(f(x))]. The derivative function of L(f, D) with respect to f, denoted as \u2207fL(f, D) : X \u2192F, exists if \u2200w \u2208Q, y \u2208Rm, D \u2208P, Ex\u223cD[\u27e8\u2207fL(f, D), w\u27e9] = (\u2202/\u2202\u03f5)L(f + \u03f5w, D)|\u03f5=0. In the following, we introduce the definitions of convexity and smoothness of a functional. Definition 3 (Convexity of a functional). Let L and f be defined as in Definition 2. A functional L is convex with respect to f if for any f1, f2 \u2208Q, L(f1, D) \u2212L(f2, D) \u2265Ex\u223cD[\u27e8\u2207fL(f2, D), f1 \u2212f2\u27e9]. Definition 4 (\u03baL-smoothness of a functional). Let L and f be defined as in Definition 2. A functional L is \u03baL-smooth if for any f1, f2 \u2208Q, L(f1, D) \u2212L(f2, D) \u2264Ex\u223cD[\u27e8\u2207fL(f2, D), f1 \u2212f2\u27e9] + Ex\u223cD[(\u03baL/2)\u2225f1 \u2212 f2\u22252].
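A minimal runnable sketch of Algorithm 1 on a toy finite-sample instance, assuming s(f, x, y) = f(x) \u2212 y, F = [0, 1], and G the signed indicator functions of two disjoint subgroups; the data and constants are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
group = rng.integers(0, 2, size=n)                 # two disjoint subgroups of X
y = rng.binomial(1, np.where(group == 0, 0.3, 0.8)).astype(float)

f = np.full(n, 0.5)                                # f^(0): initial values in F = [0, 1]
alpha, eta = 0.02, 0.01                            # threshold and step size

def violated(f):
    """Return some g in G with E[<s, g>] > alpha, where s = f - y and
    G = {+/- indicator of each subgroup}; None if the GMC condition holds."""
    for a in (0, 1):
        for sign in (1.0, -1.0):
            g = sign * (group == a)
            if np.mean((f - y) * g) > alpha:
                return g
    return None

for _ in range(2000):                              # Algorithm 1 main loop
    g = violated(f)
    if g is None:
        break
    f = np.clip(f - eta * g, 0.0, 1.0)             # Proj_F(f - eta * g)
```

Each update moves f against a violating direction g and projects back onto F, so the loop halts once no g in G witnesses a violation, i.e., the (s, G, alpha)-GMC condition holds on the sample.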
We will prove that this algorithm converges and outputs an f satisfying (s, G, \u03b1)-GMC whenever the following assumptions are satisfied. These are multi-dimensional generalizations of the conditions given by Deng et al. (2023). Assumptions (1). There exists a potential functional L(f, h, y, D), such that \u2207fL(f, h, y, D)(x) = s(f, x, h, y, D), and L(f, h, y, D) is \u03baL-smooth with respect to f for any x \u2208X. (2). Let f \u2217(x) \u225cProjFf(x) for all x \u2208X. For any f \u2208Q, L(f \u2217, h, y, D) \u2264L(f, h, y, D). (3). There exists a positive number B, such that for all g \u2208G and all f \u2208Q, Ex\u223cD[\u2225g(f(x), x)\u22252] \u2264B. (4). There exist two numbers Cl, Cu such that for all f \u2208Q, L(f, h, y, D) \u2265Cl, L(f (0), h, y, D) \u2264Cu. Assumption (1) says that a potential functional L exists and it satisfies a \u03baL-smoothness condition with respect to f. For example, when f is a predicted distribution, we often set s = f(x) \u2212Ex\u223cDf(x). In this situation, L = Ex\u223cD[(1/2)\u2225f(x) \u2212Ex\u223cDf(x)\u22252] satisfies the assumption. Assumption (2) states that the potential function decreases when projected with respect to f. A specific example is when F = Y = [0, 1] and L = E(x,y)\u223cD|f(x) \u2212y|2. Assumption (3) states that the \u21132-norm of the functions in G is uniformly bounded. It always holds when G contains indicator functions, which is the most common case in fairness-motivated problems (these are usually the indicator functions for subgroups of the data). Assumption (4) says that the potential functional L is lower bounded and this generally holds true when L is convex. One concrete example is when s(f(x), h, y) = f(x) \u2212y and we have L(f, h, y, D) = Ex\u223cD[(f(x) \u2212y)2], which is lower bounded by 0. Theorem 1.
Under Assumptions 1-4, the (s, G, \u03b1)-GMC Algorithm with a suitably chosen \u03b7 = O(\u03b1/(\u03baLB)) converges in T = O(2\u03baL(Cu \u2212 Cl)B/\u03b12) iterations and outputs a function f satisfying E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. The proof is provided in Appendix C. At a high level, if we consider g as a generalized direction vector and s as the gradient of L, each violation can be interpreted as detecting a direction where the first-order difference of L is significant. By introducing the assumption of smoothness, our update can result in a decrease in L that exceeds a constant value. Since L is lower bounded by assumption, the updates terminate as described. 2.3 Finite-Sample Results We have presented Algorithm 1 as if we have direct access to the true data distribution D. In practice, we only have a finite calibration set D, whose data is sampled i.i.d. from D. In this subsection, we show how a variant of Algorithm 1 achieves the same goal from finite samples. First, we introduce a useful measure which we call the dimension of the function class, as similarly defined by Kim et al. (2019); Deng et al. (2023). For a dataset D, we use E(x,h,y)\u223cD to denote the empirical expectation over D. We need T datasets in all and we assume that the whole sample size is m (m/T for each dataset). Definition 5 (Dimension of the function class). We use d(G) to denote the dimension of class G, defined to be a quantity such that if the sample size m \u2265 C1(d(G) + log(1/\u03b4))/\u03b12, then a random sample Sm of m elements from D guarantees uniform convergence over G with error at most \u03b1 with failure probability at most \u03b4. That is, for any fixed f and fixed s with \u2225s\u2225\u221e \u2264 C2 (C1, C2 > 0 are universal constants): sup g\u2208G |E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2212E(x,h,y)\u223cSm[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9]| \u2264\u03b1.
A discussion of this definition is given in the appendix. We now give the finite sample version of the (s, G, \u03b1)-GMC Algorithm and its convergence results below. The detailed proof is in the appendix; we use the uniform convergence guarantee arising from Definition 5 to relate the problem to its distributional counterpart. Algorithm 2 (s, G, \u03b1)-GMC Algorithm (Finite Sample) Input: step size \u03b7 > 0, initialization f (0)(x) \u2208F, validation datasets D[2T ], max iteration T. Initialization: t = 0. while t < T, \u2203g(t) \u2208G, s.t.: E(x,h,y)\u223cD2t\u22121[\u27e8s(f (t)(x), h, y, D2t), g(t)(f (t)(x), x)\u27e9] > (3/4)\u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF(f (t)(x) \u2212\u03b7g(t)(f (t)(x), x)) t = t + 1 end while Output: f (t) Theorem 2. Under Assumptions 1-4 given in Section 2.2, suppose we run Algorithm 2 with a suitably chosen \u03b7 = O(\u03b1/(\u03baLB)) and sample size m = O(T \u00b7 (d(G) + log(T/\u03b4))/\u03b12); then with probability at least 1 \u2212\u03b4, the algorithm converges in T = O((Cu \u2212 Cl)\u03baLB/\u03b12) steps and returns a function f satisfying: E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. 3 Applications In this section, we explore three applications of our framework: de-biased text generation in language modeling \u2013 where the output function is multi-dimensional and cannot be addressed in other frameworks, uncertainty quantification in hierarchical classification \u2014 in which we can offer prediction set conditional coverage guarantees, and group-wise false-negative rate control in image segmentation. We begin by outlining the challenges related to fairness and robustness inherent to these applications. Subsequently, we illustrate how to integrate these challenges within the (s, G, \u03b1)-GMC framework, enabling their resolution through Algorithm 1.
3.1 De-Biased Text Generation This section applies our framework to fair word prediction in language modeling. We think of a language model as a function that maps prompts to a distribution over the next word. More specifically, we write x \u2208X to denote a prompt, given which the language model outputs a distribution over the vocabulary, denoted by Y. Namely, the language model generates the probability vector h(x) \u2208\u2206Y, and then samples a word (output) following o(x) \u223ch(x). Previous studies (Lu et al., 2018; Hoffmann et al., 2022) demonstrated the presence of gender bias in contemporary language models. Our objective in this section is to mitigate this issue through an approach that post-processes h(x) to a probability distribution p(x) \u2208\u2206Y that has better fairness properties in specific ways. To take advantage of the information in the initial language model, p is initialized at h. At a high level, we aim to produce p(x) so that the probabilities of certain groups of words remain the same whether the prompt includes male-indicating words or female-indicating words. For example, we might not want \u201cHe was a \u201d to be completed with \u201cdoctor\u201d more frequently than \u201cShe was a \u201d to be completed with \u201cdoctor\u201d. We define an attribute set U as a collection of specific sensitive words and U to be the set of all U, which stands for different kinds of sensitive words. Following the work by Lu et al. (2018); Hoffmann et al. (2022), we measure the bias of the model on sensitive attribute U by |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)|, where the probability is taken over o(x) \u223cp(x), and x \u2208F and x \u2208M denote that x indicates female and male pronouns respectively.
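The bias measure just defined can be computed directly on a toy corpus; the prompts, vocabulary, and probabilities below are made-up illustrations, not data from the paper.

```python
# Illustrative next-word distributions p(x) over Y = (lawyer, doctor, dream, nurse)
# for a tiny corpus X = {he, his, she, her}; numbers are invented for the sketch.
p = {
    "he":  [0.4, 0.3, 0.2, 0.1],
    "his": [0.4, 0.3, 0.2, 0.1],
    "she": [0.2, 0.2, 0.2, 0.4],
    "her": [0.2, 0.2, 0.2, 0.4],
}
F, M = ["she", "her"], ["he", "his"]   # female- / male-indicating prompts
U = [0, 1]                             # attribute set: indices of 'lawyer', 'doctor'

def p_U_given(group):
    """P(o(x) in U | x in group), with x uniform over the group and o(x) ~ p(x)."""
    return sum(sum(p[x][i] for i in U) for x in group) / len(group)

bias = abs(p_U_given(F) - p_U_given(M))   # the bias measure on attribute U
```

Here the male-indicating prompts assign total mass 0.7 to the profession words versus 0.4 for the female-indicating prompts, giving a bias of 0.3 that the post-processing of p is meant to drive below alpha.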
Supposing that the marginal distribution over the prompt x (which is drawn uniformly from the given corpus) satisfies P(x \u2208F), P(x \u2208M) \u2265 \u03b3 for some positive constant \u03b3 > 0, we get: |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)| \u2264 (1/\u03b3)(|P(x \u2208F)(P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U))| + |P(x \u2208M)(P(o(x) \u2208U|x \u2208M) \u2212P(o(x) \u2208U))|). (1) As a result, we only need to control the terms on the right side of (1) instead. More specifically, we want to calibrate the output so that for any subset U \u2208U \u2282Y (e.g., gender-stereotyped professions) and subgroups A \u2208A \u2282X (e.g., gender-related pronouns), |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. To better understand this fairness notion, let us consider a toy example where X = {he, she, his, her}, A = {{he, his}, {she, her}}, Y = {lawyer, doctor, dream, nurse}, U = {{lawyer, doctor}, {nurse}}. Our aim is to calibrate the output so that |P[o(x) \u2208{lawyer, doctor}|x \u2208{she, her}] \u2212P[o(x) \u2208{lawyer, doctor}]| \u2264\u03b1 and |P[o(x) \u2208{lawyer, doctor}|x \u2208{he, his}] \u2212P[o(x) \u2208{lawyer, doctor}]| \u2264\u03b1. We can define V \u225c{(1, 1, 0, 0), (0, 0, 0, 1)} to be the set of indicator vectors of sensitive attributes defined by U. Setting G \u225c{1{x\u2208A}v : A \u2208A, v \u2208V} \u222a{\u22121{x\u2208A}v : A \u2208A, v \u2208V}, this problem can be cast in the GMC framework, and leads to the following theorem: Theorem 3. Assume that x is a prompt drawn uniformly from the given corpus, that h is given by any fixed language model, and that the size of the largest attribute set in U is upper bounded by B.
With a suitably chosen \u03b7 = O(\u03b1/B), our algorithm halts after T = O(B/\u03b12) iterations and outputs a function p satisfying: \u2200A \u2208A, U \u2208U, when o(x) \u223cp(x), sup A\u2208A |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. For the finite-sample counterpart, by applying Theorem 2, the sample complexity required in this setting is O((log(2|U||A|) + log(1/\u03b4))/\u03b12). 3.2 Prediction-Set Conditional Coverage in Hierarchical Classification Hierarchical classification is a machine learning task where the labels are organized in a hierarchical tree structure (Tieppo et al., 2022). More specifically, at the most granular level, predictions are made using labels on the leaves of the tree. These leaves are grouped together into semantically meaningful categories through their parent nodes, which are, in turn, grouped together through their parents, and so on up to the root of the tree. Such a tree structure allows us\u2014when there is uncertainty as to the correct label\u2014to predict intermediate nodes, which correspond to predicting sets of labels \u2014 the set of leaves descended from the intermediate node \u2014 giving us a way to quantify the uncertainty of our predictions. Our goal is to produce such set-valued predictions that have a uniform coverage rate conditional on the prediction we make, where a prediction set is said to \u201ccover\u201d the true label if the true label is a descendant of (or equal to) the node we predicted. For example, in a K-class hierarchical text classification problem, our input x \u2208X is a document and the label is a leaf node y on a classification tree with nodes V and edges E. For simplicity, set V = {1, 2, ..., |V |} where the first K indices {1, 2, ..., K} denote leaf nodes (so the ground-truth label y \u2208{1, ..., K}). The tree is of depth H.
For a given single-class classification model h : x \u2192[0, 1]K, let u(x) \u225carg maxk hk(x) denote the candidate with the highest score over all leaf nodes according to h. u(x) here corresponds to the most natural point prediction we might make given h. [Figure 1: A demo of hierarchical text classification using a subset of labels from the Web of Science dataset (Kowsari et al., 2017).] As a concrete example, in the tree diagram above, we map the set {1, 2, 3, 4, 5, 6, 7} to represent the categories: Green Building, Water Pollution, Cancer, Alzheimer\u2019s Disease, Civil, Medical and Root. Consider a document x with the true label \u2018Cancer\u2019 and an initial model predicting scores h(x) = (0.1, 0.1, 0.5, 0.6). If we used the scores to make a point prediction, we would be incorrect \u2014 the highest scoring label u(x) is \u201cAlzheimer\u2019s Disease\u201d, and is wrong: u(x) \u0338= y. If we output the parent node (\u2018Medical\u2019) instead, our prediction would be less specific (a larger prediction set, here corresponding to both \u201cCancer\u201d and \u201cAlzheimer\u2019s Disease\u201d), but it would cover the true label. We would like to output nodes such that we obtain our target coverage rate (say 90%), without over-covering (say by always outputting \u201cRoot\u201d, which would be trivial). Traditional conformal prediction methods (see Angelopoulos & Bates (2021) for a gentle introduction) give prediction sets that offer marginal guarantees of this sort, but not prediction-set conditional guarantees: i.e. they offer that for 90% of examples, we produce a prediction set that covers the true label. Recent applications of multicalibration-related techniques (Jung et al., 2021; Gupta et al., 2022; Bastani et al., 2022; Jung et al., 2022; Deng et al., 2023; Gibbs et al., 2023) are able to give \u201cgroup conditional\u201d coverage guarantees which offer (e.g.)
90% coverage as averaged over examples within each of a number of intersecting groups, but once again these methods are not able to offer prediction-set conditional guarantees. Prediction set conditional guarantees promise that for each prediction set that we produce, we cover 90% of example labels, even conditional on the prediction set we offer. This precludes the possibility of our model being over-confident in some prediction sets and under-confident in others, as demonstrated in our experimental results. We now define some useful functional notation. Let A : V \u2192V H return the set of all the ancestor nodes of the input node. Let q : V \u00d7 V \u2192V compute the nearest common ancestor of its two input nodes. Let R : X \u2192R|V | be the function that computes for each node i, Ri, the sum of the raw scores h(x) assigned to each leaf that is a descendant of node i (or itself if i is a leaf). When needed, we may randomize R by letting ri(x) \u225cRi(x) + \u03f5i(x), where \u03f5(x) is an independent random variable with zero-mean and constant variance. We define a natural method to choose a node o(x) to output given a scoring function h(x) and a threshold function \u03bb(x). We define o(x) \u225carg minv{d(v) : v \u2208A(u(x)), rv < \u03bb(x)}, where d(v) denotes the depth of the node v in the tree. In other words, we output the highest ancestor i of u(x) (which we recall is the point prediction we would make given h alone) whose cumulative score ri is below some threshold \u2014 which we will select to obtain some target coverage probability. Other natural choices of o(x) are possible \u2014 what follows uses this choice for concreteness, but is not dependent on the specific choice. Recall that an output covers the label if it is the ancestor of the label or the label itself.
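The cumulative scores Ri and the output rule o(x) can be sketched on the toy tree from the running example; the threshold value lambda below is an illustrative assumption.

```python
# Toy tree from the example: leaves 1-4, internal nodes 5 (Civil), 6 (Medical),
# root 7. The parent map encodes the hierarchy.
parent = {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7}
names = {1: "Green Building", 2: "Water Pollution", 3: "Cancer",
         4: "Alzheimer's Disease", 5: "Civil", 6: "Medical", 7: "Root"}
h = {1: 0.1, 2: 0.1, 3: 0.5, 4: 0.6}       # raw leaf scores h(x)

def ancestors(v):
    """Path from v up to the root, inclusive of v (bottom to top)."""
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def leaves_under(v):
    return [k for k in h if v in ancestors(k)]

R = {v: sum(h[k] for k in leaves_under(v)) for v in names}  # cumulative scores

u = max(h, key=h.get)        # point prediction: highest-scoring leaf (here node 4)
lam = 1.2                    # illustrative threshold lambda(x)
# o(x): highest (minimum-depth) ancestor of u with cumulative score below lambda.
candidates = [v for v in ancestors(u) if R[v] < lam]
o = candidates[-1]           # last node on the upward path among candidates

def covers(node, y):
    """An output covers the label if it is an ancestor of y or y itself."""
    return node in ancestors(y)
```

With these scores the rule outputs the 'Medical' node, which covers the true label 'Cancer' even though the point prediction u(x) would have been wrong.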
Our goal is to find a \u03bb(x) such that the rate at which the output covers the label is roughly equal to a given target \u03c3, not just overall, but conditional on the prediction set we output lying in various sets U \u2282 2^V: |E_{(x,h,y)\u223cD}[1{o(x) \u2208 U}(\u03c3 \u2212 1{o(x) covers y})]| \u2264 \u03b1, \u2200U \u2208 U. Back to our example, we can specify U in various ways. For example, we can set U = {{1, 2, 5}, {3, 4, 6}} to require equal coverage across the parent categories \u2018Civil\u2019 and \u2018Medical\u2019. Or, we can set U = {{1}, {2}, ..., {6}, {7}} to obtain our target coverage rate \u03c3 conditionally on the prediction set we output, for all possible prediction sets we might output. We set G \u225c {1{o(x) \u2208 U} : U \u2208 U} \u222a {\u22121{o(x) \u2208 U} : U \u2208 U}, fitting this problem into our GMC framework: |E_{(x,h,y)\u223cD}[g(o(x))(\u03c3 \u2212 1{o(x) covers y})]| \u2264 \u03b1, \u2200g \u2208 G. Using \u2211_{i=1}^{K} 1{r_{q(i,u(x))}(x) < \u03bb} 1{y = i} = 1{o(x) covers y} and applying Algorithm 1, we obtain the following theorem:
Theorem 4. Assume (1) \u2200u, \u2200i \u2208 V, f_{r_i|x}(u) \u2264 K_p, where f_{r_i|x}(u) denotes the density function of r_i conditioned on x; (2) there exists a real number M > 0 such that \u2200i \u2208 V, r_i \u2208 [\u2212M, M]. With a suitably chosen \u03b7 = O(\u03b1/K_p), our algorithm halts after T = O(K_p M/\u03b1\u00b2) iterations and outputs a function \u03bb satisfying that \u2200U \u2208 U, |E_{(x,h,y)\u223cD}[1{o(x) \u2208 U}(\u03c3 \u2212 1{o(x) covers y})]| \u2264 \u03b1.
Applying Theorem 2, we can see that the sample complexity for the finite-sample version of the algorithm is O((log(2|U|) + log(1/\u03b4))/\u03b1\u00b2).
3.3 Fair FNR Control in Image Segmentation
In image segmentation, the input is an image of m = w \u00d7 l (w for width and l for length) pixels and the task is to distinguish the pixels corresponding to certain components of the image, e.g., tumors in a medical image, eyes in the picture of a face, etc.
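The quantity bounded in the guarantee is just an empirical mean over samples, so it can be audited directly. A minimal sketch under an assumed sample encoding (each record pairs the emitted prediction set with a 0/1 coverage flag):

```python
def coverage_gap(records, U, sigma):
    """Empirical |E[1{o(x) in U}(sigma - 1{o(x) covers y})]|.

    records: list of (prediction_set, covered) pairs, where prediction_set is a
    frozenset of leaf labels and covered is a 0/1 flag.
    U: a collection of prediction sets (one element of the family U).
    """
    n = len(records)
    total = sum(sigma - covered for pred_set, covered in records if pred_set in U)
    return abs(total / n)

# Over-confident on {3, 4}: only half of those outputs cover, vs. target 0.9.
records = [(frozenset({3, 4}), 1), (frozenset({3, 4}), 0),
           (frozenset({1}), 1), (frozenset({2}), 1)]
gap = coverage_gap(records, {frozenset({3, 4})}, sigma=0.9)
```

A calibrated \u03bb should drive this gap below \u03b1 for every U in the family simultaneously, which is exactly what Theorem 4 promises.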
As pointed out by Lee et al. (2023), gender and racial biases have been observed when evaluating image segmentation models. Among the common evaluations of image segmentation, we consider the False Negative Rate (FNR), defined as False Negatives/(False Negatives + True Positives). In image segmentation, where O and O\u2032 denote the set of the actual selected segments and the predicted segments respectively, FNR = 1 \u2212 |O \u2229 O\u2032|/|O|. We write x \u2208 X to denote the input, which includes both image and demographic group information, and y \u2208 {0, 1}^m to denote the label, which is a binary vector denoting the true inclusion of each of the m pixels. To yield the prediction of y, namely \u02c6y \u2208 {0, 1}^m, a scoring function h(x) \u2208 R^m and a threshold function \u03bb(x) are needed, so that \u02c6y_i = 1{h_i(x) > \u03bb(x)} for i \u2208 [m]. As in Section 3.2, for technical reasons we may randomize h_i by perturbing it with a zero-mean random variable of modest scale. Our objective is to determine the threshold function \u03bb(x). In the context of algorithmic fairness in image segmentation, one specific application is face segmentation, where the objective is to precisely identify and segment regions containing human faces within an image. The aim is to achieve accurate face segmentation while ensuring consistent levels of precision across various demographic groups defined by sensitive attributes, like gender and race. Thus, our objective is to determine the function \u03bb(x) that ensures multi-group fairness in terms of the FNR \u2014 a natural multi-group fairness extension of the FNR control problem for image segmentation studied by Angelopoulos et al. (2023). Letting A be the set of sensitive subgroups of X, our goal is to ensure that the FNR across different groups is approximately (1 \u2212 \u03c3) for some prespecified \u03c3 > 0: |E_{(x,h,y)\u223cD}[1{x \u2208 A}(1 \u2212 |O \u2229 O\u2032|/|O| \u2212 \u03c3)]| \u2264 \u03b1, \u2200A \u2208 A.
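For concreteness, the per-image FNR under a threshold \u03bb can be sketched as follows (NumPy arrays stand in for pixel masks; the variable names are illustrative):

```python
import numpy as np

def fnr(y, h, lam):
    """FNR = 1 - |O intersect O'| / |O|, with O the true pixels and
    O' = {i : h_i > lam} the predicted pixels."""
    O = y.astype(bool)
    O_hat = h > lam
    return 1.0 - (O & O_hat).sum() / O.sum()

y = np.array([1, 1, 0, 1])          # 3 true pixels
h = np.array([0.9, 0.2, 0.8, 0.6])  # per-pixel scores h(x)
# at lam = 0.5 the second true pixel is missed, so FNR = 1/3
```

Lowering \u03bb selects more pixels and drives the FNR toward 0, which is the lever the threshold function \u03bb(x) controls per input.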
We can write |O \u2229 O\u2032| = \u2211_{i=1}^{m} y_i 1{h_i(x) > \u03bb(x)}, so the objective becomes sup_{A \u2208 A} |E_{(x,h,y)\u223cD}[1{x \u2208 A}(1 \u2212 (\u2211_{i=1}^{m} y_i 1{h_i(x) > \u03bb(x)})/(\u2211_{i=1}^{m} y_i) \u2212 \u03c3)]| \u2264 \u03b1. Let s(\u03bb, x, h, y) = 1 \u2212 (\u2211_{i=1}^{m} y_i 1{h_i(x) > \u03bb(x)})/(\u2211_{i=1}^{m} y_i) \u2212 \u03c3 and G \u225c {\u00b11{x \u2208 A} : A \u2208 A}. Rewriting the inequality we get: sup_{g \u2208 G} E_{(x,h,y)\u223cD}[g(\u03bb(x), x) s(\u03bb, x, h, y)] \u2264 \u03b1. Cast in the GMC framework, we obtain the following result:
Theorem 5. Assume (1) for all i \u2208 [m], |h_i| \u2264 M for some universal constant M > 0; (2) the density function of h_i conditioned on x is upper bounded by some universal constant K_p > 0. Let A be the set of sensitive subgroups of X. Then with a suitably chosen \u03b7 = O(\u03b1/K_p), the algorithm halts after T = O(2K_p M/\u03b1) iterations and outputs a function \u03bb satisfying: |E_{(x,h,y)\u223cD}[1{x \u2208 A}(1 \u2212 |O \u2229 O\u2032|/|O| \u2212 \u03c3)]| \u2264 \u03b1, \u2200A \u2208 A.
Similar to the previous two applications, by applying Theorem 2 for the finite-sample version of the algorithm, the sample complexity required is O((log(2|A|) + log(1/\u03b4))/\u03b1\u00b2). We note that equalizing false negative rates across groups can be achieved trivially by setting \u03bb small enough that every pixel is predicted positive, so that the FNR is equalized (at 0) \u2014 which would of course destroy the accuracy of the method. Thus when we set an objective like this, it is important to empirically show that not only does the method lead to low disparity across false negative rates, but does so without loss in accuracy. The experiments that we carry out in Section 4 indeed bear this out.
4 Experiments
In this section, we conduct numerical experiments and evaluate the performance of our algorithms within each application from both the fairness and accuracy perspectives. We compare the results with baseline methods to assess their effectiveness. The code can be found in the supplementary material.
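The group-conditional constraint can likewise be audited empirically. A sketch under an assumed sample encoding (each sample carries a group id, a score array, and a binary mask):

```python
import numpy as np

def group_fnr_gaps(samples, lam, sigma):
    """Empirical |E[1{x in A}(FNR(x) - sigma)]| for each group A.

    samples: list of (group, h, y) with per-pixel scores h and binary masks y.
    """
    n = len(samples)
    gaps = {}
    for g in {s[0] for s in samples}:
        total = sum((1 - (y.astype(bool) & (h > lam)).sum() / y.sum()) - sigma
                    for grp, h, y in samples if grp == g)
        gaps[g] = abs(total / n)
    return gaps

samples = [("A", np.array([0.9, 0.2]), np.array([1, 1])),   # one pixel missed
           ("B", np.array([0.9, 0.8]), np.array([1, 1]))]   # nothing missed
gaps = group_fnr_gaps(samples, lam=0.5, sigma=0.0)
```

The algorithm's promise is that every value in `gaps` falls below \u03b1 for the learned \u03bb(x), without resorting to the trivial select-everything solution.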
For more detailed experiment settings and additional results, please refer to Appendix D.
4.1 De-Biased Text Generation
In text generation, we consider two datasets and run experiments separately. The first dataset is the corpus data from Liang et al. (2021), which extracts sentences with both terms indicative of biases (e.g., gender indicator words) and attributes (e.g., professions) from real-world articles. The second dataset is made up of synthetic templates based on combining words indicative of bias targets and attributes with simple placeholder templates, e.g., \u201cThe woman worked as ...\u201d; \u201cThe man was known for ...\u201d, constructed in (Lu et al., 2019). Then, we define two kinds of terms indicative of bias targets: female-indicator words and male-indicator words; we also define six types of attributes: female-adj words, male-adj words, male-stereotyped jobs, female-stereotyped jobs, pleasant words, and unpleasant words, by drawing on existing word lists in the fair text generation context (Caliskan et al., 2017; Gonen & Goldberg, 2019). Each input x is a sentence where sensitive attributes are masked. We use the BERT model (Devlin et al., 2018) to generate the initial probability distribution over the entire vocabulary for the word at the masked position, denoted by h(x). We then use our algorithm to post-process h(x) and obtain the function p(x), which is the calibrated probability of the output. We define two sets of prompts: let A_female and A_male be the sets of prompts containing female-indicator and male-indicator words, respectively. We aim to control the gender disparity gap |P(x \u2208 A) \u00b7 [P(o(x) \u2208 U | x \u2208 A) \u2212 P(o(x) \u2208 U)]| for A \u2208 {A_female, A_male}. Figure 2 plots the disparity gap for A = A_male (the result for A = A_female is deferred to the appendix due to space constraints).
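Since P(x \u2208 A) \u00b7 [P(o(x) \u2208 U | x \u2208 A) \u2212 P(o(x) \u2208 U)] = P(x \u2208 A, o(x) \u2208 U) \u2212 P(x \u2208 A) P(o(x) \u2208 U), the disparity gap has a simple plug-in estimator. A sketch under an assumed sample encoding (each record is a pair of 0/1 indicators):

```python
def disparity_gap(records):
    """records: list of (in_A, in_U) 0/1 pairs, one per generated output.

    Returns |P(x in A, o in U) - P(x in A) * P(o in U)|, which equals
    |P(x in A) * (P(o in U | x in A) - P(o in U))|.
    """
    n = len(records)
    p_a = sum(a for a, u in records) / n
    p_u = sum(u for a, u in records) / n
    p_au = sum(a * u for a, u in records) / n
    return abs(p_au - p_a * p_u)

independent = [(1, 1), (1, 0), (0, 1), (0, 0)]  # group and output uncorrelated
correlated = [(1, 1), (1, 1), (0, 0), (0, 0)]   # biased terms track the group
```

The post-processing goal is to keep this quantity below the threshold \u03b1 for both gender groups.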
It is evident that our post-processing technique effectively limits the disparity between the probabilities of outputting biased terms related to different gender groups, ensuring that it remains consistently below a specified threshold value of \u03b1 = 0.002 (we further discuss the choice of \u03b1 in Appendix D). Additionally, we assess the cross-entropy loss between the calibrated output distribution and the corresponding labels. Unlike the calibration set, where sensitive words are deliberately masked, we randomly mask words during the cross-entropy test to evaluate the model\u2019s overall performance, extending beyond the prediction of sensitive words. The cross-entropy on the test set is 9.9291 before post-processing and 9.9285 after it, indicating that our algorithm does not reduce the accuracy of the model while reducing gender disparities. We would like to note that our algorithm is not designed to enhance accuracy but to improve fairness while ensuring that the cross-entropy performance does not deteriorate too much.
Figure 2: The bias on outputting different types of sensitive attributes measured on the corpus data. The results for the synthetic data are deferred to the appendix.
4.2 Prediction-Set Conditional Coverage in Hierarchical Classification
For hierarchical classification, we use the Web of Science dataset (Kowsari et al., 2017), which contains 46,985 documents with 134 categories, including 7 parent categories. We choose HiAGM (Wang et al., 2022) as the network to generate the initial scoring. Our algorithm is then applied to find the threshold function that yields a fair output. We set our coverage target to be \u03c3 = 0.95 with a tolerance for coverage deviations of \u03b1 = 0.025. Equivalently, for each of the predictions, we aim to cover the true label with probability 95 \u00b1 2.5%, even conditional on the prediction we make.
We choose naively outputting the leaf node (denoted as \u201cunprocessed\u201d in the figure) as one baseline and the split conformal method (Angelopoulos et al., 2023) as another baseline. Figure 3 shows that our method achieves coverage within the target tolerance for all predictions, while the two baselines fail to satisfy the coverage guarantee for predicting \u2018CS\u2019 and \u2018Medical\u2019.
Figure 3: The deviation of prediction-set conditional coverage from the target.
4.3 Fair FNR Control in Image Segmentation
We use the FASSEG (Khan et al., 2015) dataset and adopt the U-net (Ronneberger et al., 2015) network to generate the initial scoring function for each pixel, representing the predicted probability of this pixel corresponding to the signal. The dataset contains 118 human facial images and their semantic segmentations. We set our target FNR to be \u03c3 = 0.075 with a tolerance for deviations of \u03b1 = 0.005 and calibrate the FNR across different gender subgroups and racial subgroups. In addition, we compare with the method proposed by Angelopoulos et al. (2023) that controls on-average FNR in a finite-sample manner based on the split conformal prediction method. The results yielded by U-net and the split conformal method are plotted as baselines for comparison in Figure 4. Our algorithm demonstrates its effectiveness, as the deviations of the FNRs of GMC from the target \u03c3 across all subgroups are controlled below \u03b1, while the baselines are found to perform poorly on the male and white subgroups. Since equalizing FNR does not necessarily imply accuracy, we compute the accuracy of our model\u2019s output together with that of the baselines. The accuracy of our model, measured as the ratio of correctly predicted pixels to the total number of pixels, is 0.86. In comparison, the baseline models achieve accuracies of 0.84 and 0.92, respectively.
This result suggests that our algorithm empirically yields substantial gains in mitigating FNR disparities without a significant sacrifice in accuracy.
Figure 4: The deviation of the false negative rate from the target in image segmentation.
When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93% and the PP\nwas reduced to 0% in most cases. In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.", + "authors": "Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit Sheth, Srinivasan Parthasarathy, Manas Gaur", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Automatic citation generation for sentences in a document or report is\nparamount for intelligence analysts, cybersecurity, news agencies, and\neducation personnel. In this research, we investigate whether large language\nmodels (LLMs) are capable of generating references based on two forms of\nsentence queries: (a) Direct Queries, LLMs are asked to provide author names of\nthe given research article, and (b) Indirect Queries, LLMs are asked to provide\nthe title of a mentioned article when given a sentence from a different\narticle. To demonstrate where LLM stands in this task, we introduce a large\ndataset called REASONS comprising abstracts of the 12 most popular domains of\nscientific research on arXiv.
From around 20K research articles, we make the\nfollowing deductions on public and proprietary LLMs: (a) State-of-the-art,\noften called anthropomorphic GPT-4 and GPT-3.5, suffers from high pass\npercentage (PP) to minimize the hallucination rate (HR). When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93% and the PP\nwas reduced to 0% in most cases. In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.", + "main_content": "Introduction The development of LLMs marks a significant advancement in computational linguistics and artificial intelligence (AI) (Tamkin and Ganguli, 2021). LLMs, such as OpenAI\u2019s GPT series, have shown remarkable capabilities in text generation (Zhao et al., 2023), and question-answering systems (Rasool et al., 2023; Elgedawy et al., 2024). However, their limitations become apparent as they become more integrated into various domains, including defense (Schwinn et al., 2023), news media (Fang et al., 2023), and education (Yan et al., 2024; Hung et al., 2023; Augenstein et al., 2023). The critical issue is their propensity to generate hallucinated sentences and propagate factually inaccurate pieces of information without reference (Ji et al., 2023; Rawte et al., 2023).
These inaccuracies diminish the models\u2019 reliability and erode users\u2019 trust, a vital component in their widespread adoption. Commercial LLM-based search systems, including Bing Search-powered GPT-4 (Mehdi, 2024) and Perplexity.ai (Roose, 2024), are still not capable enough of resolving the issue of citation generation to confirm the scientific feasibility of either a generated sentence(s) or given sentence(s) from the scientific literature. For instance, Figure 1 shows how proprietary LLMs respond to the zero-shot indirect query. It is evident from the figure that while general-purpose LLMs like GPT-3.5 and GPT-4 \u2018pass\u2019 the query, the task-specific LLM Perplexity does generate relevant citations but still shows hallucination.
Figure 1: An illustration and motivating example for investigating LLMs for the automatic citation generation task. Perplexity.ai, which is an LLM-based search engine, yields a citation that doesn\u2019t exist [1], an incorrect one [3], and a correct citation [2]. Advance RAG (defined in this research) improved context understanding and citation generation quality. Time: Feb. 05, 2024.
Consider the following three real-world examples of this research: Citation Generation in Research Articles and News Reports: LLMs can generate highly persuasive and realistic content, especially in writing research articles or news reports, making it challenging for users to distinguish between genuine and fabricated information (Nakano et al., 2021; Menick et al., 2022; Kumarage and Liu, 2023). Citation Generation in Reports for Organizational Cybersecurity: In cybersecurity, where decisions often need to be made quickly and are based on the data provided, the accuracy and reliability of information are paramount (Divakaran and Peddinti, 2024). Inaccurate citations can lead to misinformation and potentially severe consequences in decision-making processes.
LLMs can automate the citation generation process but need to be carefully designed for organization-specific cybersecurity. Citation Generation in Legal Reports: In a significant event, an attorney tried employing ChatGPT for legal analysis during a trial (see subsection A.1) (Bohannon, 2023). While ChatGPT generated information, it failed to capture the nuanced complexities and critical legal precedents needed for the case. This underscores the importance of confirming and sourcing accurate legal citations and precedents relevant to the case. We contribute by addressing these challenges with the following: (A) Introduce REASONS, a dataset created by extracting related works from IEEE articles spanning 12 scientific domains from 2017 to 2023. (B) We employ a new RAG training regime to develop Advance RAG. Advance RAG and Na\u00efve RAG examine the factual integrity of the information retrieved by dense retrievers and its presentation as citations by LLMs. (C) We evaluate both proprietary and public LLMs and their RAG counterparts (10 models) to assess their contextual awareness using metrics like Pass Percentage (PP) and Hallucination Rate (HR). Additionally, we have measured the quality of citation generation using F-1 and BLEU scores. (D) We conduct an adversarial examination to provide a clear assessment of context awareness regarding citation generation in LLMs. Findings: (I) Perplexity faces a major challenge when dealing with indirect and direct queries on the REASONS dataset (Figure 2, Figure 5, and, in Appendix A, Table 6 and Table 9). (II) Citation generation is enhanced uniformly across public and proprietary LLMs when metadata like abstract and title are considered with indirect queries (Figure 3 and Figure 5, along with Table 7 and Table 9). (III) Advance RAG with the Mistral LLM outperforms other competitive proprietary and public LLMs.
This performance is realized by a reduction in the HR and gains in F-1 and BLEU scores (Figure 3 and Figure 5 (last two bars) and Table 7 and Table 9 (last two columns)). (IV) For domains such as Quantum Computing and Biomolecules that are heavy in mathematics and numerals, there was a substantial decline in citation generation quality and an increase in HR. Adversarial examination strengthens our understanding that, despite being exorbitantly large, LLMs lack context awareness (Table 2). (V) Advance RAG did provide convincing evidence of context understanding (Table 2). Further improvements in RAG-based LLMs are desirable, and utilizing the REASONS dataset can provide valuable insights into context understanding and provenance in tasks such as hypothesis generation.
2 Background
Early Techniques in Citation Recommendation: The practice of citing sources is a cornerstone of academic and professional writing, serving as the bedrock of reliability and truthfulness in scholarly work (Cronin, 1981). The evolution of citation recommendation systems mirrors the broader advancements in computational linguistics and natural language processing (NLP) (Bai et al., 2019; Ali et al., 2021). Initial methods in citation recommendation focused on basic techniques such as text feature-based systems (Strohman et al., 2007), simple keyword matching, and basic statistical methods (Bethard and Jurafsky, 2010). Context-aware citation recommendation systems supplemented these methods (He et al., 2010; Ebesu and Fang, 2017; Jeong et al., 2020a; Huang et al., 2021). However, their inability to grasp deeper textual contexts limited their effectiveness.
Machine Learning in Citation Recommendation: The incorporation of machine learning into citation recommendation systems represents an initial step toward automating the citation process, which is typically regarded as manual and labor-intensive (Agarwal et al., 2005; K\u00fc\u00e7\u00fcktun\u00e7 et al., 2012).
These systems began to exhibit an improved understanding of the text, although they still lacked a nuanced grasp of complex contexts (Tran et al., 2015). The application of neural networks revolutionized citation recommendation. NLP algorithms, capable of parsing complex sentence structures, started identifying relevant themes for contextually appropriate citation recommendations (Zarrinkalam and Kahani, 2013; Beel et al., 2016; Iqbal et al., 2020). Concurrently, graph-based models, visualizing literature as interconnected networks, enhanced citation recommendations by considering content similarity and citation patterns (Ali et al., 2020; Chakraborty et al., 2015). With deep learning, citation recommendation systems began incorporating semantic analysis, employing models like word embeddings and neural networks for a more nuanced understanding (Yang et al., 2018; Bhagavatula et al., 2018; Vajdecka et al., 2023). Adapted from commercial use, collaborative filtering also emerged, recommending citations based on similar citation behaviors (Wang et al., 2020). Large Language Models in Citation Generation: The advent of LLMs like GPT-3 and its successors has further transformed NLP. Initial language model systems such as those based on BERT have significantly improved citation recommendation by converting unstructured text into meaningful vectors (Jeong et al., 2020b; Devlin et al., 2018; Bhowmick et al., 2021). Recent studies have focused on evaluating the fidelity of generated text to its sources (Ji et al., 2023). Rashkin et al. (2023) introduced the \u201cattributable to identified sources\u201d (AIS) score, while Bohnet et al. (2022) and others (Honovich et al., 2022; Yue et al., 2023) have focused on automating AIS. Concurrent work by Liu et al. (2023) explored human evaluation of commercial generative search engines such as Bing Chat, NeevaAI, Perplexity.ai, and YouChat.
Despite these advancements, LLMs in citation recommendation still struggle with generating accurate information and providing references, as shown in studies (Ji et al., 2023; Zheng et al., 2023). We conduct empirical and investigative research on why public and proprietary LLMs, including the powerful GPT-4 (which has not been examined yet), are prone to incorrect citation generation. Further, we provide means for improving citation generation in public LLMs through a customized design using RAG. This limitation necessitates an approach closely aligning with RAG. RAG compels LLMs to provide citations alongside the generated text. The concept of retrieval-augmented LLMs has gained traction in recent years (Guu et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022; Khandelwal et al., 2019; Schick et al., 2023; Jiang et al., 2023b; Yao et al., 2022; Gao et al., 2023). We evaluate public and proprietary LLMs and their RAG counterparts on citation generation using REASONS, a meticulously curated dataset from arXiv spanning key domains in computer science and related fields. This allows us to assess the LLM\u2019s ability to identify a given sentence\u2019s source accurately.
Domain        Paper Count   IEEE Papers   Citation Count
CV            5488          1028          3437
Robotics      3656          292           776
Graphics      1796          384           1417
IR            1741          564           1654
AI            1697          531           2021
NLP           1526          293           1092
Cryptography  1084          371           1106
NNC           892           111           326
HCI           761           112           229
Databases     723           115           182
QC            421           126           456
Biomolecules  119           17            27
Total         19904         3944          12723
Table 1: Our benchmark dataset, REASONS, includes papers and sentences from 12 domains. It primarily features ten domains in computer science and two in biology. Full forms of domain acronyms are provided in subsection A.5.
3 Problem Setup
Scope of REASONS: The dataset comprises sentences gathered from the related work sections of articles in computer science and biology available on arXiv (arX). A summary is provided in Table 1.
It should be noted that GPT-3.5 or its successors may have processed all the papers published on arXiv from 2017 to 2021 while training. To ensure our dataset is unbiased, we include papers published in 2022 and 2023 that test the memory and understanding of LLMs. Exclusions were made for mathematics, statistics, and physics due to the abundance of equations in the related work section, for which the crawling method theoremKb lacked the required versatility. We chose to focus on IEEE papers as they are represented across all 12 domains we considered. Each sentence in the related work section encapsulates the author\u2019s thought process in citing related works: (A) Every sentence captures the author\u2019s interpretation and emphasis on original methodology, critique of prior work, corrections to previous research, or acknowledgment of pioneers. This encompasses summarizing these aspects briefly and concisely. (B) The cited work in the related work section is either incidental or important to the current work (Valenzuela et al., 2015). REASONS is inspired by the previously constructed s2ORC and UnarXive datasets containing academic papers (see Table 4 in Appendix A); however, we diverge on the following points: (A) We provide sentence-level annotation of citations on major computational domains on arXiv. (B) Each sentence is accompanied by its metadata, which includes the paper title, abstract, and author names of the paper it cites. It also contains the title of the paper from which it was taken. (C) The dataset structure allows for an easy examination of LLMs using indirect and direct queries. Crawling Process: The web crawler employs the Oxylabs SERP Scraper API as its methodology, enabling real-time data extraction from major search engines. This API offers a proxy chaining platform for efficient data extraction. The dataset is meticulously organized in JSON format with a detailed outline (see \u201cJSON Structure\u201d).
A complete GitHub repository is provided, containing the dataset and the code for reproducibility (see details in subsection A.3). We plan to keep updating the repository with more articles and metadata. The associated costs are provided in subsection A.2 (theoremKb: https://github.com/PierreSenellart/theoremkb; Oxylabs: https://oxylabs.io/).
JSON Structure
{\"Computer Vision\": {
  \"http://arXiv.org/abs/2012.05435v2\": {
    \"Paper Title\": \"Optimization-Inspired..\",
    \"Sentences\": [
      {\"Sentence ID\": 32,
       \"Sentence\": \"... For GM, ... \",
       \"Citation Text\": \"C. Ledig,...\",
       \"Citation\": {
         \"Citation Paper ID\": \"arXiv:1609.04802\",
         \"Citation Paper Title\": \"Title:Photo..\",
         \"Citation Paper Abstract\": \"Abstract:.\",
         \"Citation Paper Authors\": \"Authors:...\"
       }}]}}}
3.1 Problem Formulation
We define two tasks for LLMs over the REASONS dataset R: (a) Direct Querying and (b) Indirect Querying. For experimentation, we segment R into R_S and R_M. R_S represents sentences and paper titles for which references are to be generated, with or without support from the metadata R_M. Direct Querying Task: Given a title t_i \u2208 R_S, the LLM should generate the author list. For the task of direct querying with metadata, the LLM is given the following input: t_i \u2208 R_S; the Advance RAG model retrieves the top-40 chunks of information a_{i1}, ..., a_{i40} \u2208 R_M and generates the names. Indirect Querying Task: Given a sentence s_i \u2208 R_S, the LLM should generate a paper title in a zero-shot setting. For the task of indirect querying with metadata, called Sequential Indirect and Direct Prompting (SID Prompting), the LLM is given the following input: s_i \u2208 R_S and the ground-truth abstract abs_s \u2208 R_M as well as the authors au_s \u2208 R_M, and the model is asked to generate the citation paper title. Examples of direct and indirect queries are:
Direct Prompt
Prompt: Who were the authors of the research paper \"Research Paper Title\"?
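Given the JSON layout above, sentence-citation pairs for indirect querying can be collected with a few lines. The field names follow the excerpt; the in-memory dict stands in for the loaded file:

```python
def indirect_pairs(data):
    """Collect (sentence, cited paper title) pairs from the REASONS layout."""
    pairs = []
    for domain, papers in data.items():
        for url, paper in papers.items():
            for s in paper["Sentences"]:
                pairs.append((s["Sentence"], s["Citation"]["Citation Paper Title"]))
    return pairs

# Tiny sample mirroring the "JSON Structure" excerpt (values abbreviated).
sample = {"Computer Vision": {
    "http://arXiv.org/abs/2012.05435v2": {
        "Paper Title": "Optimization-Inspired..",
        "Sentences": [{
            "Sentence ID": 32,
            "Sentence": "... For GM, ...",
            "Citation Text": "C. Ledig,...",
            "Citation": {"Citation Paper ID": "arXiv:1609.04802",
                         "Citation Paper Title": "Title:Photo..",
                         "Citation Paper Abstract": "Abstract:.",
                         "Citation Paper Authors": "Authors:..."}}]}}}
pairs = indirect_pairs(sample)
```

In practice the dict would come from `json.load` on a file in the released repository.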
Instruction: List only author names, formatted as <firstname> <lastname>, separated by commas. Do not mention the paper in the title; also, if you don\u2019t know, write \u2018pass\u2019.
Response: Author Names.
Indirect Prompt
Prompt: I have taken a sentence from the research paper titled \u201cResearch Paper Title\u201d, give me the research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2018pass\u2019. Don\u2019t write anything else.
Instruction: Sentence \"uses fractional max-pooling to randomly specify non-integer ratios between the spatial dimension sizes of the input and the output to pooling layers.\"
Response: Citation Paper Title.
Implementation of Direct and Indirect Querying: Direct querying is executed using zero-shot prompting for scenarios without metadata and chain-of-thoughts prompting for metadata situations. We modify the chain-of-thoughts prompting with SID Prompting. It begins with an indirect query. Following an incorrect response or a \u2018pass\u2019, more details about the cited paper are given (i.e., a direct query), including its abstract and authors\u2019 names. This is an iterative approach to generate the correct citation. The following are two examples of these prompting strategies:
Direct Query with Metadata Prompting
Prompt: Who were the authors of the research paper \u201cResearch Paper Title\u201d? Let me give you some more context by providing the abstract of the research paper. Abstract: \u2018....\u2019.
Instruction: List only author names, formatted as <firstname> <lastname>, separated by commas. Do not mention the paper in the title. Also, if you don\u2019t know, write \u2018pass\u2019.
Response: Author Names.
SID Prompting
Prompt: I have taken a sentence from the research paper titled \"Research Paper Title\", give me the title of the possible research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2018pass\u2019. Don\u2019t write anything else.
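The templates above are plain strings, so a thin builder suffices for running them at scale. A sketch; the wording follows the Direct and Indirect Prompt examples, and the function names are illustrative:

```python
def direct_prompt(title):
    """Build a direct query asking for the author list of `title`."""
    return (f'Who were the authors of the research paper "{title}"? '
            "Instruction: List only author names, formatted as "
            "<firstname> <lastname>, separated by commas. Do not mention the "
            "paper in the title. Also, if you don't know, write 'pass'.")

def indirect_prompt(title, sentence):
    """Build an indirect query asking which paper `sentence` cites."""
    return (f'I have taken a sentence from the research paper titled "{title}", '
            "give me the research paper that this sentence is citing. If you "
            "cannot come up with the paper titles, write 'pass'. Don't write "
            f'anything else. Instruction: Sentence "{sentence}"')

p = direct_prompt("Research Paper Title")
```

The SID strategy then chains these: issue the indirect prompt first and, on an incorrect answer or a 'pass', re-query with the abstract and author metadata appended.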
Instruction: Sentence: \"......\". Let me give you some more context by providing the authors and the abstract of the paper the sentence is citing. Authors: \"......\", Abstract: \".......\"
Response: Citation Paper Title.

3.2 Models and Evaluation
Our research has focused on a diverse array of LLMs, carefully chosen to provide a broad perspective on the capabilities and limitations inherent in current language model technologies.
Proprietary Models: Our selection of proprietary models includes those from OpenAI and Perplexity.ai. While OpenAI is known for its cutting-edge NLP models, driving significant advancements in the field, Perplexity.ai focuses on models with unique functionalities, such as recommending citations and utilizing natural language prediction for innovative search experiences.
Public Models: We choose LLAMA 2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023a) as the two publicly available LLMs that have demonstrated competitive performance compared to proprietary LLMs. We evaluate their effectiveness on the REASONS dataset under both the standard and retrieval-augmented conditions. This analysis goes beyond simply comparing proprietary and public models, extending to evaluating models based on their size, particularly those with 7B parameters.

3.3 Evaluation Metrics
Our evaluation uses four key metrics: 1) The BLEU Score assesses structural alignment through clipped n-gram matching. 2) The F-1 Score evaluates the balance between precision and recall, reflecting the models' effectiveness in capturing key information. 3) Hallucination Rate (HR), which we estimate by averaging over incorrect and partially correct generated citations:

HR = (1/Q_D) Σ I[ĉ ≠ c] + (1/|U_w|) Σ_{w=1}^{|U_w|} I[ĉ_w ≠ c_w],

where Q_D is the number of queries within a domain, and |U_w| is the total number of unique words in the generated citation (ĉ) and the true citation (c).
4) Pass Percentage (PP) measures the tendency of an LLM to either respond or abstain from responding. It is calculated as follows:

PP = (1/Q_D) Σ I[ĉ = Pass].

It is crucial to emphasize that PP serves as a safeguard that prevents LLMs from generating hallucinatory responses, but it also reduces engagement. Additionally, even with a high PP, the HR can be high; this implies that the model struggles to discern whether it offers correct or incorrect citations in the remaining instances.

3.4 Retrieval Augmented Generation (RAG)
RAG combines a retriever and a generator to create better answers. Unlike methods that only feed the model prompts, RAG can access external knowledge. This lets it craft more accurate, relevant, and informative responses than models that rely solely on what they were pre-trained on. We investigate RAG's ability to improve LLMs' accuracy. Ideally, RAG would help LLMs abstain less often (low PP) while also making fewer things up (low HR). We also investigate whether RAG works consistently with direct and indirect questions across the 12 scientific domains. We experiment with two forms of RAG architecture: (a) Naïve RAG and (b) Advance RAG. Both architectures leverage the same bi-encoder-based retriever architecture (Karpukhin et al., 2020). Given a corpus of documents R_M and a sentence s ∈ R_S, the document encoder maps d ∈ R_M to an embedding E_θ(d), and the query encoder maps s to an embedding E_θ(s). The top-k relevant documents for s are retrieved based on the sentence-document embedding similarity, computed via the dot product: z(s, d) = exp(E_θ(s)^T E_θ(d)). We start with a bi-encoder retriever using an embedding model from OpenAI (subsection A.4). Other ways to set up a bi-encoder retriever, such as DRAGON+ (Lin et al., 2023), are possible; however, those are more useful when large-scale data augmentation is involved.
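As a concrete reference, the HR and PP metrics defined in subsection 3.3 can be sketched in plain Python. This is a minimal illustration of one plausible reading of the formulas, not the authors' evaluation code; the function names and the word-level tie handling are our assumptions.

```python
def hallucination_rate(generated, truth):
    """HR per Section 3.3: a citation-level error term plus a word-level
    error term over the unique words of the generated and true citations.
    (One plausible reading of the formula; names are illustrative.)"""
    n = len(generated)
    # I[c_hat != c], averaged over the Q_D queries of a domain
    cite_err = sum(g != t for g, t in zip(generated, truth)) / n
    # I[c_hat_w != c_w], averaged over the unique words U_w of each pair
    word_errs = []
    for g, t in zip(generated, truth):
        union = set(g.split()) | set(t.split())
        shared = set(g.split()) & set(t.split())
        word_errs.append(len(union - shared) / len(union))
    return cite_err + sum(word_errs) / n

def pass_percentage(generated):
    """PP per Section 3.3: percentage of queries answered with the literal 'pass'."""
    return 100.0 * sum(g.strip().lower() == "pass" for g in generated) / len(generated)
```

Under this reading, a completely wrong citation with no word overlap contributes the maximum to both HR terms, while an exact match contributes zero to each.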
The retrieved documents are ranked in two ways, which is what separates Naïve RAG from Advance RAG. Under Naïve RAG, we use BM25 relevance scoring to rank the documents, whereas in Advance RAG, we fine-tune a cross-encoder on the REASONS document index R_M to better align it with our task of citation generation with an LLM. For fine-tuning the cross-encoder, we use a localized contrastive loss (LCL) for two reasons: (a) in R_M, we do not have labeled positive and negative documents, and (b) for a sentence s, there may be more than one true positive document (Pradeep et al., 2022). LCL is formally defined as follows:

L_LCL(s) := -log [ exp(z(s, d+)) / Σ_{d ∈ G_s} exp(z(s, d)) ]
L_LCL := (1/|S|) Σ_{s ∈ R_S, G_s ⊆ R_M^s} L_LCL(s)

where G_s represents the set of candidate documents for a sentence s, consisting of the relevant documents ({d+}) and n-1 non-relevant documents ({d-}) sampled from R_M^s using the bi-encoder. The generator in Advance RAG is trained with the standard cross-entropy loss:

L_CE(ĉ | s, φ) = Σ_{i=1}^{b} I(ĉ_i^w = c_i^w) · log Pr(ĉ_i^w | φ)

where φ denotes the parameters of the generator LLM and b is the minibatch size used in Advance RAG fine-tuning. ĉ_i represents the i-th generated citation, and I(ĉ_i^w = c_i^w) represents a word-level comparison with the ground-truth citation (direct query: author names; indirect query: paper titles). For both Naïve and Advance RAG, we employ LLAMA-2 7B and Mistral 7B as competitive models against proprietary LLMs.

4 Results
We conducted experiments encompassing four distinct prompting styles applied to twelve scientific domains. This extensive analysis involved 12,723 sentences, resulting in a substantial dataset rigorously evaluated using ten different models. This equates to 508,920 instance assessments: 4 (prompting styles) × 12,723 (sentences across all domains) × 10 (models). The total duration required to execute all experiments on the GPU is 238 days, 6 hours, and 59 minutes.
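The localized contrastive loss in subsection 3.4 reduces to a softmax cross-entropy over the candidate group G_s, built on the bi-encoder similarity z(s, d) = exp(E(s)·E(d)). Below is a minimal sketch with toy embeddings standing in for the OpenAI bi-encoder outputs; all names and vectors are illustrative, not the authors' implementation.

```python
import math

def z(s_emb, d_emb):
    # z(s, d) = exp(E(s) . E(d)): exponentiated dot-product similarity
    return math.exp(sum(a * b for a, b in zip(s_emb, d_emb)))

def lcl_loss(s_emb, pos_emb, neg_embs):
    # L_LCL(s) = -log( z(s, d+) / sum_{d in G_s} z(s, d) ),
    # where G_s holds one positive and n-1 sampled negatives.
    z_pos = z(s_emb, pos_emb)
    z_all = z_pos + sum(z(s_emb, n) for n in neg_embs)
    return -math.log(z_pos / z_all)

# Toy 3-d embeddings (illustrative): the positive document is the
# one closest in direction to the sentence embedding.
s = [1.0, 0.0, 0.5]
d_pos = [0.9, 0.1, 0.4]
d_negs = [[-0.5, 0.8, 0.0], [0.0, -1.0, 0.2]]
loss = lcl_loss(s, d_pos, d_negs)  # small when the positive dominates the group
```

The loss shrinks as the positive's score takes a larger share of the group's total, which is exactly what pushes the cross-encoder to rank true citations above the bi-encoder-sampled negatives.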
For detailed information regarding the time spent on experiments across various domains, please refer to the appendix (see subsection A.6 and Table 5).
Zero-Shot Indirect Prompting: In Figure 4, a majority of the models exhibit high HR. As expected for a huge model, GPT-4-1106-preview (1 trillion parameters) shows a relatively lower HR of 67.73% and a higher PP of 89%, averaged across the 12 domains. Perplexity-7b-Chat showed an exceptionally high PP of 97.5%, which is surprising, as this LLM is designed specifically for citation generation. RAG Mistral was competitive with GPT-4, with a lower PP of 21% and an HR of 72.49% in comparison to other LLMs; our analysis shows RAG Mistral is competitive because of its high variance in HR compared to GPT-4-1106-preview. Generation quality, measured by F-1 and BLEU scores, was predominantly low across the board, with GPT-4 (G2, not the preview G1) achieving comparatively better scores; RAG Mistral and RAG LLAMA 2 rank second and third best, respectively.
SID Prompting: Figure 5 shows improvement across all the LLMs in citation generation over indirect queries. An average improvement of 21% was measured, with a reduction in variance. Even though some models, like Perplexity-7b-Chat and LLAMA 2, still had high HR, the PP dropped significantly, especially for GPT-4-1106-preview. The results of this experiment indicate that SID prompting can balance the trade-off between PP and HR, significantly enhancing generation quality with an 8% increase in BLEU and a 13% increase in F-1 (Appendix B provides examples for visual inspection).
Zero-Shot Direct Prompting presents a very idealistic scenario where the LLMs have access to context through the direct query. This leads to both lower PP and HR.
Figure 2: Averaged Zero-Shot Direct Prompting results of different LLMs across all 12 domains. G1 shows notably lower HR and higher F-1 and BLEU scores, indicating superior performance in generating citations. In contrast, model P exhibits the highest HR and the lowest F-1 and BLEU scores, suggesting challenges in generating accurate and contextually relevant citations. The RAG models (RM and RL) demonstrate varied results, with RM showing a better accuracy-coherence balance than RL. G1: gpt-4-1106-preview, G2: gpt-4, G3: gpt-3.5-turbo, P: pplx-7b-chat, RM: Naïve RAG mistral-7b-instruct, M: mistral-7b-instruct, RL: Naïve RAG llama-2-7b-chat, L: llama-2-7b-chat, AL: Advance RAG llama-2-7b-chat, AM: Advance RAG mistral-7b-instruct. For clarity and to save space, AL and AM are used in the figures to denote Advance RAG llama-2-7b-chat and Advance RAG mistral-7b-instruct, respectively; in the main text, these are referred to as AdvRAG(L) and AdvRAG(M).

Figure 3: Averaged Direct Prompting with Metadata results of different LLMs across all 12 domains. The plot indicates that models G1, G2, and G3 stand out with their low HR and impressive F-1 and BLEU scores, in contrast to other models that face challenges. All models except RM reach a 0% PP, suggesting that including metadata significantly enhances their contextual understanding.

Figure 4: Averaged Zero-Shot Indirect Prompting results across 12 domains. This prompting method led to elevated HR among the models. There was also a notable variance in PP, with models G3, P, and L exhibiting higher scores. Both conditions indicate challenges in understanding context and generating accurate citations when using indirect prompts.

Figure 5: Averaged SID Prompting results of different LLMs across all 12 domains. Models G1, G2, and G3 exhibit relatively better outcomes with lower HR and higher F-1 and BLEU scores, suggesting more contextual understanding. Other models demonstrated high HR, indicating difficulties in accurate citation generation with SID Prompting. Notably, while models G1 and G3 have high PPs, indicating some difficulties with SID, their overall performance still reflects a more advanced level of language processing and contextual comprehension compared to the other models.

The citation generation quality significantly improves from zero-shot indirect and SID promptings, achieving high F-1 and BLEU scores (see Figure 4). However, Perplexity-7b-Chat, oddly, had high PP and HR, suggesting a need for more research on such specialized LLM search engines. We observed that Perplexity-7b-Chat expands its search queries and adds references to the broader content it finds; the issue is that the expanded versions drift too far in meaning from the original. In Direct Prompting with Metadata, when metadata such as abstracts and titles were used with indirect questions, all the LLMs got better at generating citations and had low HR and PP.
This shows that having more information helps LLMs create more accurate and relevant citations, underscoring the importance of sufficient data for good language processing. Note that PP dropped to zero for almost all models when direct prompting includes metadata. All GPT LLMs achieved F-1 and BLEU scores close to 1.0 and showed more consistent results overall. Two main points emerge from this experiment: first, adding metadata is effective for all LLMs, especially RAG models that integrate this augmentation into their learning process; second, smaller models with Advance RAG (Mistral and LLAMA-2) adjust better to metadata than GPT-4-Preview/4/3.5 (see Figure 3).
Overall: Advance RAG Mistral 7B outperformed other competitive proprietary and public LLMs in all prompting styles. This superior performance was notably marked by reduced HR, suggesting this model is more adept at generating accurate and relevant responses when metadata is added. Furthermore, improvements in F-1 scores reinforce its reliability in retrieving information, and higher BLEU scores signify that the model's language output aligns closely with human-like text in terms of fluency and coherence.

5 Adversarial Examination
The analysis of LLMs using the REASONS dataset highlights significant variability in their performance across different domains. While they perform moderately better in areas like AI and CV, with lower HR and higher F-1/BLEU scores, they struggle in complex domains such as QC, Biomolecules, and Cryptography, likely due to limited training data and the complexity of these subjects. This variability indicates that LLMs have varying degrees of contextual understanding, with a tendency to perform better in domains with more extensive training data and less complex structures (e.g., mathematics and numerics).
Motivation and Setup: We conducted adversarial experiments across all models to better assess their contextual understanding.
The core concept behind these experiments was to provide the models with incorrect yet similar metadata about the sentences in the prompts. The aim was to discern whether the models generated citations based on a contextual grasp of the provided metadata or whether the metadata had minimal influence on the citation generation process. These adversarial experiments comprised two types: 1) providing inaccurate paper titles related to the sentences; 2) providing incorrect paper abstracts associated with the sentences. Both experiments were conducted using SID prompting. To facilitate these experiments, we curated a subsample of 200 sentences from the REASONS dataset spanning all the domains. We extracted each sentence's most similar paper title or abstract from this dataset and replaced the original metadata.

Table 2: Performance of various LLMs on the adversarial set, designed by swapping titles and abstracts. Models G1, G2, and G3, possibly exposed to similar data during training, struggled with the adversarial sets, resulting in high HR and PP. Conversely, models like AdvRAG(L) and AdvRAG(M) showed better performance, suggesting that these models attempt to understand the context before generating the citations.

Changing Paper Title
Group      PP(%)   BLEU     F1      HR
G1         96.23   0.6210   0.8470  17.99
G2         31.45   0.0524   0.2640  83.66
G3         68.55   0.0389   0.1828  87.35
RM          3.14   0.0796   0.1584  86.78
M           0.00   0.0003   0.0221  94.95
RL          5.03   0.0628   0.1448  87.56
L           0.00   0.0066   0.0254  98.30
AdvRAG(L)   0.00   0.1322   0.4763  85.72
AdvRAG(M)   0.00   0.1569   0.5839  75.41

Changing Paper Abstract
Group      PP(%)   BLEU     F1      HR
G1         95.60   0.4595   0.6451  38.49
G2         32.70   0.0396   0.2186  86.22
G3         76.10   0.0034   0.1013  91.64
RM          7.55   0.0520   0.1216  89.44
M           0.00   0.0074   0.0161  90.20
RL          2.52   0.0445   0.1112  90.16
L           0.00   0.0017   0.0146  99.01
AdvRAG(L)   0.00   0.4101   0.5780  39.67
AdvRAG(M)   0.00   0.4904   0.6954  39.57
For the similarity calculation, we use the Ratcliff-Obershelp metric, computed as twice the length of the longest common substring, plus, recursively, the number of matching characters in the non-matching regions on both sides of the longest common substring, normalized by the combined length of both strings (Tang et al., 2023). According to this metric, for the example title \"Diffusion models for counterfactual explanations,\" the best replacement is \"Octet: Object-aware models for counterfactual explanations\" (0.736), as opposed to \"Adversarial counterfactual visual explanations\" (0.638). We considered a threshold of 0.70 effective in preparing the adversarial set.
Findings: We found that incorrect paper titles and abstracts easily fool most LLMs when they are similar to the accurate information. In Table 2, G1's HR of 17.99%, paired with its high PP of 96.23%, indicates a defensive mechanism: the LLMs are not very good at understanding the true meaning of what they are given. On such a small adversarial set, we expected LLMs like GPT-4-1106-preview and GPT-4 to perform exceedingly well because of their extensive knowledge; however, we observed counterintuitive results in Table 2, where all models show the effect. We do see a promising direction with AdvRAG(M) and AdvRAG(L); however, further investigation is required into how rich graphical metadata (e.g., knowledge graphs) and graph-theoretic approaches to information retrieval can improve LLM effectiveness (He et al., 2024).
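Python's standard library offers a close stand-in for this metric: difflib.SequenceMatcher is based on the Ratcliff-Obershelp "gestalt pattern matching" algorithm. Below is a hedged sketch of the replacement-selection step, reusing the example titles from the text; the lowercasing and helper names are our choices, so the exact scores may differ slightly from the reported 0.736/0.638.

```python
from difflib import SequenceMatcher

def ro_similarity(a, b):
    # SequenceMatcher computes a Ratcliff-Obershelp-style ratio 2*M/T,
    # where M counts matched characters over all matching blocks.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_adversarial_swap(original, candidates, threshold=0.70):
    # Pick the most similar (but wrong) title, keeping it only if it
    # clears the paper's 0.70 similarity threshold.
    best = max(candidates, key=lambda c: ro_similarity(original, c))
    return best if ro_similarity(original, best) >= threshold else None

title = "Diffusion models for counterfactual explanations"
pool = [
    "Octet: Object-aware models for counterfactual explanations",
    "Adversarial counterfactual visual explanations",
]
swap = best_adversarial_swap(title, pool)  # the Octet title wins, as in the text
```

Candidates below the threshold return None, mirroring the paper's decision to only swap in metadata that is deceptively close to the original.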
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02228v2.json b/abs_9K/test_abstract_short_2405.02228v2.json new file mode 100644 index 0000000000000000000000000000000000000000..e207586459088539cfa0f3e9ddfc41f332e9506f --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02228v2.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.02228v2", + "title": "REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs", + "abstract": "Automatic citation generation for sentences in a document or report is\nparamount for intelligence analysts, cybersecurity, news agencies, and\neducation personnel. In this research, we investigate whether large language\nmodels (LLMs) are capable of generating references based on two forms of\nsentence queries: (a) Direct Queries, LLMs are asked to provide author names of\nthe given research article, and (b) Indirect Queries, LLMs are asked to provide\nthe title of a mentioned article when given a sentence from a different\narticle. To demonstrate where LLM stands in this task, we introduce a large\ndataset called REASONS comprising abstracts of the 12 most popular domains of\nscientific research on arXiv. From around 20K research articles, we make the\nfollowing deductions on public and proprietary LLMs: (a) State-of-the-art,\noften called anthropomorphic GPT-4 and GPT-3.5, suffers from high pass\npercentage (PP) to minimize the hallucination rate (HR). When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93%, and the PP\nwas reduced to 0% in most cases. 
In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.", + "authors": "Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit Sheth, Srinivasan Parthasarathy, Manas Gaur", + "published": "2024-05-03", + "updated": "2024-05-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Automatic citation generation for sentences in a document or report is\nparamount for intelligence analysts, cybersecurity, news agencies, and\neducation personnel. In this research, we investigate whether large language\nmodels (LLMs) are capable of generating references based on two forms of\nsentence queries: (a) Direct Queries, LLMs are asked to provide author names of\nthe given research article, and (b) Indirect Queries, LLMs are asked to provide\nthe title of a mentioned article when given a sentence from a different\narticle. To demonstrate where LLM stands in this task, we introduce a large\ndataset called REASONS comprising abstracts of the 12 most popular domains of\nscientific research on arXiv. From around 20K research articles, we make the\nfollowing deductions on public and proprietary LLMs: (a) State-of-the-art,\noften called anthropomorphic GPT-4 and GPT-3.5, suffers from high pass\npercentage (PP) to minimize the hallucination rate (HR). 
When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93%, and the PP\nwas reduced to 0% in most cases. In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.", + "main_content": "Introduction The development of LLMs marks a significant advancement in computational linguistics and artificial intelligence (AI) (Tamkin and Ganguli, 2021). LLMs, such as OpenAI\u2019s GPT series, have shown remarkable capabilities in text generation (Zhao et al., 2023), and question-answering systems (Rasool et al., 2023; Elgedawy et al., 2024). However, their limitations become apparent as they become more integrated into various domains, including defense (Schwinn et al., 2023), news media (Fang et al., 2023), and education (Yan et al., 2024; Hung et al., 2023; Augenstein et al., 2023). The critical issue is their propensity to generate hallucinated sentences and propagate factually inaccurate pieces of information without reference (Ji et al., 2023; Rawte et al., 2023). These inaccuracies diminish the models\u2019 reliability and erode users\u2019 trust, a vital component in their widespread adoption. 
Commercial LLM-based search systems, including Bing Search-powered GPT-4 (Mehdi, 2024) and Perplexity.ai (Roose, 2024), are still not capable of resolving the issue of citation generation to confirm the scientific feasibility of either generated sentence(s) or given sentence(s) from the scientific literature. For instance, Figure 1 shows how proprietary LLMs respond to a zero-shot indirect query. It is evident from the figure that while general-purpose LLMs like GPT-3.5 and GPT-4 'pass' the query, the task-specific LLM Perplexity does generate relevant citations but still shows hallucination.
arXiv:2405.02228v2 [cs.CL] 9 May 2024
Figure 1: An illustration and motivating example for investigating LLMs for the automatic citation generation task. Perplexity.ai, an LLM-based search engine, yields a citation that doesn't exist [1], an incorrect one [3], and a correct citation [2]. Advance RAG (defined in this research) improved context understanding and citation generation quality. Time: Feb. 05, 2024.
Consider the following three real-world examples of this research:
Citation Generation in Research Articles and News Reports: LLMs can generate highly persuasive and realistic content, especially in writing research articles or news reports, making it challenging for users to distinguish between genuine and fabricated information Nakano et al. (2021); Menick et al. (2022); Kumarage and Liu (2023).
Citation Generation in Reports for Organizational Cybersecurity: In cybersecurity, where decisions often need to be made quickly based on the data provided, the accuracy and reliability of information are paramount (Divakaran and Peddinti, 2024). Inaccurate citations can lead to misinformation and potentially severe consequences in decision-making processes. LLMs can automate the citation generation process but need to be carefully designed for organization-specific cybersecurity.
Citation Generation in Reports for Legal: In a significant event, an attorney tried employing ChatGPT for legal analysis during a trial (see subsection A.1) (Bohannon, 2023). While ChatGPT generated information, it failed to capture the nuanced complexities and critical legal precedents needed for the case. This underscores the importance of confirming and sourcing accurate legal citations and precedents relevant to the case.
We contribute by addressing these challenges with the following: (A) We introduce REASONS, a dataset created by extracting related works from IEEE articles spanning 12 scientific domains from 2017 to 2023. (B) We employ a new RAG training regime to develop Advance RAG. Advance RAG and Naïve RAG examine the factual integrity of the information retrieved by dense retrievers and its presentation as citations by LLMs. (C) We evaluate both proprietary and public LLMs and their RAG counterparts (10 models) to assess their contextual awareness using metrics like Pass Percentage (PP) and Hallucination Rate (HR). Additionally, we measure the quality of citation generation using F-1 and BLEU scores. (D) We conduct an adversarial examination to provide a clear assessment of context awareness regarding citation generation in LLMs.
Findings: (I) Perplexity faces a major challenge when dealing with indirect and direct queries on the REASONS dataset (Figure 2-Figure 5, and in Appendix A, Table 6-Table 9). (II) Citation generation is enhanced uniformly across public and proprietary LLMs when metadata like the abstract and title are considered with the indirect query (Figure 3 and Figure 5, along with Table 7 and Table 9). (III) Advance RAG with the Mistral LLM outperforms other competitive proprietary and public LLMs. This performance is realized through a reduction in HR and increases in F-1 and BLEU scores (Figure 3 and Figure 5, last two bars; Table 7 and Table 9, last two columns).
(IV) For domains such as Quantum Computing and Biomolecules, which are heavy in mathematics and numerals, there was a substantial decline in citation generation quality and an increase in HR. The adversarial examination strengthens our understanding that, despite being exorbitantly large, LLMs lack context awareness (Table 2). (V) Advance RAG did provide convincing evidence of context understanding (Table 2). Further improvements in RAG-based LLMs are desirable, and utilizing the REASONS dataset can provide valuable insights into context understanding and provenance in tasks such as hypothesis generation.

2 Background
Early Techniques in Citation Recommendation: The practice of citing sources is a cornerstone of academic and professional writing, serving as the bedrock for reliability and truthfulness in scholarly work (Cronin, 1981). The evolution of citation recommendation systems mirrors the broader advancements in computational linguistics and natural language processing (NLP) (Bai et al., 2019; Ali et al., 2021). Initial methods in citation recommendation focused on basic techniques such as text-feature-based systems (Strohman et al., 2007), simple keyword matching, and basic statistical methods (Bethard and Jurafsky, 2010). Context-aware citation recommendation systems supplemented these methods (He et al., 2010; Ebesu and Fang, 2017; Jeong et al., 2020a; Huang et al., 2021). However, their inability to grasp deeper textual contexts limited their effectiveness.
Machine Learning in Citation Recommendation: The incorporation of machine learning into citation recommendation systems represented an initial step toward automating the citation process, which is typically regarded as manual and labor-intensive (Agarwal et al., 2005; Küçüktunç et al., 2012). These systems began to exhibit an improved understanding of the text, although they still lacked a nuanced grasp of complex contexts (Tran et al., 2015).
The application of neural networks revolutionized citation recommendation. NLP algorithms capable of parsing complex sentence structures started identifying relevant themes for contextually appropriate citation recommendations (Zarrinkalam and Kahani, 2013; Beel et al., 2016; Iqbal et al., 2020). Concurrently, graph-based models, visualizing the literature as interconnected networks, enhanced citation recommendation by considering content similarity and citation patterns (Ali et al., 2020; Chakraborty et al., 2015). With deep learning, citation recommendation systems began incorporating semantic analysis, employing models like word embeddings and neural networks for a more nuanced understanding (Yang et al., 2018; Bhagavatula et al., 2018; Vajdecka et al., 2023). Collaborative filtering, adapted from commercial use, also emerged, recommending citations based on similar citation behaviors (Wang et al., 2020).
Large Language Models in Citation Generation: The advent of LLMs like GPT-3 and its successors has further transformed NLP. Initial language model systems, such as those based on BERT, significantly improved citation recommendation by converting unstructured text into meaningful vectors (Jeong et al., 2020b; Devlin et al., 2018; Bhowmick et al., 2021). Recent studies have focused on evaluating the fidelity of generated text to its sources (Ji et al., 2023). Rashkin et al. (2023) introduced the \"attributable to identified sources\" (AIS) score, while Bohnet et al. (2022) and others (Honovich et al., 2022; Yue et al., 2023) have focused on automating AIS. Concurrent work by Liu et al. (2023) explored human evaluation of commercial generative search engines such as Bing Chat, NeevaAI, Perplexity.ai, and YouChat. Despite these advancements, LLMs in citation recommendation still struggle with generating accurate information and providing references, as shown in studies by Ji et al. (2023) and Zheng et al. (2023).
We conduct empirical and investigative research on why public and proprietary LLMs, including the powerful GPT-4 (which had not yet been examined), are prone to incorrect citation generation. Further, we provide means for improving citation generation in public LLMs through a customized design using RAG. This limitation necessitates an approach closely aligned with RAG, which compels LLMs to provide citations alongside the generated text. The concept of retrieval-augmented LLMs has gained traction in recent years following (Guu et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022; Khandelwal et al., 2019; Schick et al., 2023; Jiang et al., 2023b; Yao et al., 2022; Gao et al., 2023). We evaluate public and proprietary LLMs and their RAG counterparts on citation generation using REASONS, a meticulously curated dataset from arXiv spanning key domains in computer science and related fields. This allows us to assess the LLMs' ability to accurately identify a given sentence's source.

Table 1: Our benchmark dataset, REASONS, includes papers and sentences from 12 domains. It primarily features ten domains in computer science and two in biology. Full forms of domain acronyms are provided in subsection A.5.

Domain        Paper Count  IEEE Papers  Citation Count
CV                   5488         1028            3437
Robotics             3656          292             776
Graphics             1796          384            1417
IR                   1741          564            1654
AI                   1697          531            2021
NLP                  1526          293            1092
Cryptography         1084          371            1106
NNC                   892          111             326
HCI                   761          112             229
Databases             723          115             182
QC                    421          126             456
Biomolecules          119           17              27
Total               19904         3944           12723

3 Problem Setup
Scope of REASONS: The dataset comprises sentences gathered from the related-work sections of articles in computer science and biology available on arXiv (arX). A summary is provided in Table 1. It should be noted that GPT-3.5 or its successors may have processed all the papers published on arXiv from 2017 to 2021 during training.
To ensure our dataset is unbiased, we include papers published in 2022 and 2023 that test the memory and understanding of LLMs. Exclusions were made for mathematics, statistics, and physics due to the abundance of equations in their related-work sections, for which the crawling method, theoremKb [1], lacked the required versatility. We chose to focus on IEEE papers as they are represented across all 12 domains we considered. Each sentence in the related-work section encapsulates the author's thought process in citing related works: (A) Every sentence captures the author's interpretation and emphasis on original methodology, critique of prior work, corrections to previous research, or acknowledgment of pioneers, summarizing these aspects briefly and concisely. (B) The cited work in the related-work section is either incidental or important to the current work (Valenzuela et al., 2015). REASONS is inspired by the previously constructed S2ORC and unarXive datasets containing academic papers (see Table 4 in Appendix A); however, we diverge on the following points: (A) We provide sentence-level annotation of citations for major computational domains on arXiv. (B) Each sentence is accompanied by its metadata, which includes the paper title, abstract, and author names of the paper it cites, as well as the title of the paper from which it was taken. (C) The dataset structure allows for easy examination of LLMs using indirect and direct queries.
Crawling Process: The web crawler employs the Oxylabs [2] SERP Scraper API, enabling real-time data extraction from major search engines. This API offers a proxy-chaining platform for efficient data extraction. The dataset is meticulously organized in JSON format with a detailed outline (see \"JSON Structure\"). A complete GitHub repository is provided, containing the dataset and the code for reproducibility (see details in subsection A.3).
We plan to keep updating the repository with more articles and metadata. The associated costs are provided in subsection A.2.

1 https://github.com/PierreSenellart/theoremkb
2 https://oxylabs.io/

JSON Structure

{\"Computer Vision\": {
  \"http://arXiv.org/abs/2012.05435v2\": {
    \"Paper Title\": \"Optimization-Inspired..\",
    \"Sentences\": [
      {\"Sentence ID\": 32,
       \"Sentence\": \"... For GM, ... \",
       \"Citation Text\": \"C. Ledig,...\",
       \"Citation\": {
         \"Citation Paper ID\": \"arXiv:1609.04802\",
         \"Citation Paper Title\": \"Title:Photo..\",
         \"Citation Paper Abstract\": \"Abstract:.\",
         \"Citation Paper Authors\": \"Authors:...\"
       }}]}}}

3.1 Problem Formulation

We define two tasks for LLMs over the REASONS dataset $R$: (a) Direct Querying and (b) Indirect Querying. For experimentation, we segment $R$ into $R_S$ and $R_M$. $R_S$ represents sentences and paper titles for which references are to be generated, with or without support from the metadata $R_M$.

Direct Querying Task: Given a title $t_i \in R_S$, the LLM should generate the author list. For the task of direct querying with metadata, the LLM is given the following input: $t_i \in R_S$; the Advance RAG model retrieves the top-40 chunks of information $a_{i1}, \ldots, a_{i40} \in R_M$, and the LLM generates the names.

Indirect Querying Task: Given a sentence $s_i \in R_S$, the LLM should generate a paper title in a zero-shot setting. For the task of indirect querying with metadata, called Sequential Indirect and Direct Prompting (SID Prompting), the LLM is given the following input: $s_i \in R_S$, the ground-truth abstract $abs_s \in R_M$, and the authors $au_s \in R_M$; the model is asked to generate the citation paper title. Examples of direct and indirect queries are:

Direct Prompt
Prompt: Who were the authors of the research paper \"Research Paper Title\"?
Instruction: List only author names, formatted as < firstname >< lastname >, separated by comma. Do not mention the paper in the title; also, if you don\u2019t know, write \u2019pass\u2019.
Response: Author Names.

Indirect Prompt
Prompt: I have taken a sentence from the research paper titled \u201cResearch Paper Title\u201d, give me the research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2018pass.\u2019 Don\u2019t write anything else.
Instruction: Sentence \"uses fractional max-pooling to randomly specify non-integer ratios between the spatial dimension sizes of the input and the output to pooling layers.\"
Response: Citation Paper Title.

Implementation of Direct and Indirect Querying: Direct querying is executed using zero-shot prompting for scenarios without metadata and chain-of-thought prompting for scenarios with metadata. We modify chain-of-thought prompting with SID Prompting. It begins with an indirect query. Following an incorrect response or a \u2018pass,\u2019 more details about the cited paper are given (i.e., a direct query), including its abstract and authors\u2019 names. This is an iterative approach to generate the correct citation. The following are two examples of these prompting strategies:

Direct Query with Metadata Prompting
Prompt: Who were the authors of the research paper \u201cResearch Paper Title\"? Let me give you some more context by providing the abstract of the research paper. Abstract:\u2019....\u2019.
Instruction: List only author names, formatted as , separated by comma. Do not mention the paper in the title. Also, if you don\u2019t know, write \u2018pass.\u2019
Response: Author Names.

SID Prompting
Prompt: I have taken a sentence from the research paper titled \"Research Paper Title.\" give me the title of the possible research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2019pass\u2019. Don\u2019t write anything else.
Instruction: Sentence:\"......\". Let me give you some more context by providing the authors and the abstract of the paper the sentence is citing.
Authors:\"......\", Abstract:\".......\"
Response: Citation Paper Title.

3.2 Models and Evaluation

Our research has focused on a diverse array of LLMs, carefully chosen to provide a broad perspective on the capabilities and limitations inherent in current language model technologies.

Proprietary Models: Our selection of proprietary models includes those from OpenAI and Perplexity.ai. While OpenAI is known for its cutting-edge NLP models, driving significant advancements in the field, Perplexity.ai focuses on models with unique functionalities, such as recommending citations and utilizing natural language prediction for innovative search experiences.

Public Models: We choose LLAMA 2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023a) as the two publicly available LLMs that have demonstrated competitive performance compared to proprietary LLMs. We evaluate their effectiveness on the REASONS dataset under both the standard and retrieval-augmented conditions. This analysis goes beyond simply comparing proprietary and public models, extending to evaluating models based on their size, particularly those with 7B parameters.

3.3 Evaluation Metrics

Our evaluation uses four key metrics: 1) The BLEU Score assesses structural alignment through clipped n-gram matching. 2) The F-1 Score evaluates the balance between precision and recall, reflecting the models\u2019 effectiveness in capturing key information. 3) The Hallucination Rate (HR), which we estimate by averaging over incorrect and partially correct generated citations: $HR = \frac{1}{Q_D}\sum I[\hat{c} \neq c] + \frac{1}{|U_w|}\sum_{w=1}^{|U_w|} I[\hat{c}_w \neq c_w]$, where $Q_D$ is the number of queries within a domain and $|U_w|$ is the total number of unique words in the generated citation ($\hat{c}$) and the true citation ($c$). 4) The Pass Percentage (PP) measures the tendency of an LLM to either respond or abstain from responding. It is calculated as $PP = \frac{1}{Q_D}\sum I[\hat{c} = \mathrm{Pass}]$.
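The two bespoke metrics above (HR and PP) can be sketched as follows; the function names, the word-level tokenization, and the "pass" convention are our assumptions for illustration, not the authors' released evaluation code.

```python
# Sketch of the HR and PP metrics over a domain's queries; tokenization
# and the "pass" convention are illustrative assumptions.

def pass_percentage(generated):
    """PP: fraction of queries answered with 'pass'."""
    return sum(1 for c in generated if c.strip().lower() == "pass") / len(generated)

def hallucination_rate(generated, truth):
    """HR: exact-match error rate plus average word-level mismatch rate."""
    exact_err = sum(1 for g, t in zip(generated, truth) if g != t) / len(generated)
    word_err, pairs = 0.0, 0
    for g, t in zip(generated, truth):
        unique_words = set(g.split()) | set(t.split())  # U_w
        mism = sum(1 for w in unique_words
                   if (w in g.split()) != (w in t.split()))
        word_err += mism / len(unique_words)
        pairs += 1
    return exact_err + word_err / pairs

gen = ["Attention Is All You Need", "pass"]
gt = ["Attention Is All You Need", "Deep Residual Learning"]
print(pass_percentage(gen))          # 0.5
print(hallucination_rate(gen, gt))
```

A citation that matches the ground truth exactly contributes zero to both terms, so identical lists give HR = 0.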
It is crucial to emphasize that PP serves as a safeguard preventing LLMs from generating hallucinatory responses, but it also reduces engagement. Additionally, even with a high PP, the HR can be high. This implies that the model struggles to discern whether it offers correct or incorrect citations in the remaining instances.

3.4 Retrieval Augmented Generation (RAG)

RAG combines a retriever and a generator to create better answers. Unlike methods that only feed the model prompts, RAG can access external knowledge. This lets it craft more accurate, relevant, and informative responses than models that rely solely on what they were pre-trained on. We investigate RAG\u2019s ability to improve LLMs\u2019 accuracy. Ideally, RAG would help LLMs avoid abstaining (low PP) and making things up (low HR). We also investigate whether RAG works consistently with direct and indirect questions across different scientific fields (12 domains). We experiment with two forms of RAG architecture: (a) Na\u00efve RAG and (b) Advance RAG. Both architectures leverage the same bi-encoder-based retriever architecture (Karpukhin et al., 2020). Given a corpus of documents $R_M$ and a sentence $s \in R_S$, the document encoder maps $d \in R_M$ to an embedding $E_\theta(d)$ and the query encoder maps $s$ to an embedding $E_\theta(s)$. The top-k relevant documents for $s$ are retrieved based on the sentence-document embedding similarity, often computed via the dot product: $z(s, d) = \exp(E_\theta(s)^\top E_\theta(d))$. We start with a bi-encoder retriever using an embedding model from OpenAI (subsection A.4). Other ways to set up a bi-encoder retriever, such as DRAGON+ (Lin et al., 2023), are possible; however, those are more useful when large-scale data augmentation is involved. The retrieved documents are ranked in two ways, which is what separates Na\u00efve RAG from Advance RAG.
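The bi-encoder retrieval step can be sketched as below; the toy embeddings stand in for the OpenAI embedding model, which is an assumption for illustration only.

```python
import math

# Sketch of bi-encoder retrieval: score each document embedding against the
# sentence embedding via z(s, d) = exp(E(s)^T E(d)) and keep the top-k.
# The embeddings are toy vectors, not real encoder outputs.

def score(sent_emb, doc_emb):
    """z(s, d) = exp(E(s)^T E(d))"""
    return math.exp(sum(a * b for a, b in zip(sent_emb, doc_emb)))

def retrieve_top_k(sent_emb, docs, k):
    """docs: list of (doc_id, embedding); returns the k best-scoring ids."""
    ranked = sorted(docs, key=lambda d: score(sent_emb, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

s = [1.0, 0.0]
docs = [("d1", [0.9, 0.1]), ("d2", [-0.5, 0.5]), ("d3", [0.2, 0.9])]
print(retrieve_top_k(s, docs, 2))  # ['d1', 'd3']
```

Since exp is monotone, ranking by the raw dot product gives the same order; the exponential matters only when the scores feed a softmax-style loss.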
Under Na\u00efve RAG, we use BM25 relevance scoring to rank the documents, whereas in Advance RAG we fine-tune a cross-encoder on the REASONS document index $R_M$ to better align it with our task of citation generation with LLMs. For the fine-tuning of the cross-encoder, we use the localized contrastive loss (LCL) for two reasons: (a) in $R_M$, we do not have labeled positive and negative documents, and (b) for a sentence $s$ there may be more than one true positive document (Pradeep et al., 2022). LCL is formally defined as follows:

$\mathcal{L}_{\mathrm{LCL}_s} := -\log \frac{\exp(z_{s,\{d^+\}})}{\sum_{d \in G_s} \exp(z_{s,d})}, \qquad \mathcal{L}_{\mathrm{LCL}} := \frac{1}{|S|} \sum_{s \in R_S,\, G_s \in R_M^s} \mathcal{L}_{\mathrm{LCL}_s},$

where $G_s$ represents a set of documents for a sentence $s$, consisting of a set of relevant documents ($\{d^+\}$) and $n-1$ non-relevant documents $\{d^-\}$ sampled from $R_M^s$ using the bi-encoder. The training of Advance RAG happens through the standard cross-entropy loss: $\mathcal{L}_{\mathrm{CE}}(\hat{c} \mid s, \phi) = \sum_{i=1}^{b} I(\hat{c}^w_i = c^w_i) \cdot \log \Pr(\hat{c}^w_i \mid \phi)$, where $\phi$ is the parameter of the generator LLM and $b$ is the minibatch size used for fine-tuning in Advance RAG. $\hat{c}_i$ represents the $i$-th generated citation, and $I(\hat{c}^w_i = c^w_i)$ represents a word-level comparison with the ground-truth citation (direct query: author names; indirect query: paper titles). For both Na\u00efve and Advance RAG, we employ LLAMA-2 7B and Mistral 7B as competitive models against proprietary LLMs.

4 Results

We conducted experiments encompassing four distinct prompting styles applied to twelve scientific domains. This extensive analysis involved 12,723 sentences, rigorously evaluated using ten different models. This equates to 508,920 instance assessments: 4 (prompting styles) \u00d7 12,723 (sentences for all domains) \u00d7 10 (models). The total GPU time required to execute all experiments is 238 days, 6 hours, and 59 minutes.
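Stepping back to the Advance RAG objective of Section 3.4: the localized contrastive loss is, in essence, a softmax cross-entropy in which the positive document competes against the rest of its sentence group. A minimal sketch (placeholder scores, not real cross-encoder outputs):

```python
import math

# Sketch of the localized contrastive loss (LCL): for each sentence, the
# positive document's score is normalized against all scores in its group
# G_s. The logit values below are illustrative placeholders.

def lcl_per_sentence(pos_score, group_scores):
    """-log( exp(z_pos) / sum_{d in G_s} exp(z_d) )"""
    denom = sum(math.exp(z) for z in group_scores)
    return -math.log(math.exp(pos_score) / denom)

def lcl(batch):
    """batch: list of (positive_score, group_scores) pairs, averaged."""
    return sum(lcl_per_sentence(p, g) for p, g in batch) / len(batch)

loss = lcl([(2.0, [2.0, 0.1, -1.0]), (1.5, [1.5, 1.4, 0.0])])
print(round(loss, 4))
```

When the positive is the only group member the loss is exactly zero, and it grows as negatives receive scores comparable to the positive's.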
For detailed information regarding the time spent on experiments across the various domains, please refer to the appendix (see subsection A.6 and Table 5).

Zero-Shot Indirect Prompting: In Figure 4, a majority of the models exhibited high HR. As expected for a huge model, GPT-4-1106-preview (1 Trillion Parameters) shows a relatively lower HR of 67.73% and a higher PP of 89%, averaged across the 12 domains. Perplexity-7b-Chat showed an exceptionally high PP of 97.5%, which is surprising, as this LLM is designed specifically for citation generation. RAG Mistral was competitive with GPT-4, with a lower PP of 21% and an HR of 72.49% in comparison to other LLMs. Analysis shows RAG Mistral is competitive because of the high variance in HR compared to GPT-4-1106-preview. Generation quality, measured by F-1 and BLEU scores, was predominantly low across the board, with GPT-4 (G2, not the G1 preview) achieving comparatively better scores. RAG Mistral and RAG LLAMA 2 rank second and third best, respectively.

SID Prompting: Figure 5 shows improvement across all the LLMs in citation generation over indirect queries. An average improvement of 21% was measured, with a reduction in variance. Even though some models like Perplexity-7b-Chat and LLAMA 2 still had high HR, the PP dropped significantly, especially for GPT-4-1106-preview. The results of this experiment indicate that SID prompting can balance the trade-off between PP and HR, significantly enhancing generation quality with an 8%\u2191 increase in BLEU and a 13%\u2191 increase in F-1 (Appendix B provides examples for visual inspection).

Zero-Shot Direct Prompting presents a very idealistic scenario in which the LLMs have access to context through the direct query. This leads to both lower PP and HR.
Figure 2: Averaged Zero-Shot Direct Prompting results of different LLMs across all 12 domains. G1 shows notably lower HR and higher F-1 and BLEU scores, indicating superior performance in generating citations. In contrast, model P exhibits the highest HR and the lowest scores in F-1 and BLEU, suggesting challenges in generating accurate and contextually relevant citations. The RAG models (RM and RL) demonstrate varied results, with RM showing a better accuracy and coherence balance than RL. G1: gpt-4-1106-preview, G2: gpt-4, G3: gpt-3.5-turbo, P: pplx-7b-chat, RM: Na\u00efve RAG mistral-7b-instruct, M: mistral-7b-instruct, RL: Na\u00efve RAG llama-2-7b-chat, L: llama-2-7b-chat, AL: Advance RAG llama-2-7b-chat, AM: Advance RAG mistral-7b-instruct. For the purposes of clarity and saving space, the terms AL and AM are used in the figures to denote Advance RAG llama-2-7b-chat and Advance RAG mistral-7b-instruct, respectively. In the main text of the paper, these are referred to as AdvRAG(L) and AdvRAG(M).

Figure 3: Averaged Direct Prompting with Metadata results of different LLMs across all 12 domains. The plot indicates that models G1, G2, and G3 stand out with their low HR and impressive F-1 and BLEU scores, in contrast to other models that face challenges. All models except RM reach a 0% PP, suggesting that including metadata significantly enhances their contextual understanding.

Figure 4: Averaged Zero-Shot Indirect Prompting across 12 domains. This prompting method led to elevated HR among the models. There was also a notable variance in PP, with models G3, P, and L exhibiting higher scores. Both conditions indicate challenges in understanding context and generating accurate citations when using indirect prompts.

Figure 5: Averaged SID Prompting results of different LLMs across all 12 domains. Models G1, G2, and G3 exhibit relatively better outcomes with lower HR and higher F-1 and BLEU scores, suggesting more contextual understanding. Other models demonstrated high HR, indicating difficulties in accurate citation generation with SID Prompting. Notably, while models G1 and G3 have high PPs, indicating some difficulties with SID, their overall performance still reflects a more advanced level of language processing and contextual comprehension compared to the other models.

The citation generation quality significantly improves from zero-shot indirect and SID promptings, achieving high F-1 and BLEU scores (see Figure 4). However, Perplexity-7b-Chat, oddly, had high PP and HR, suggesting a need for more research on such specialized LLM search engines. We observed that Perplexity-7b-Chat expands its search queries and adds references to the broader content it finds. The issue is that the expanded versions drift too far in meaning from the original. In Direct Prompting with Metadata, when metadata such as abstracts and titles were used with indirect questions, all the LLMs got better at generating citations and had low HR and PP.
This shows that having more information helps LLMs create more accurate and relevant citations, proving the importance of sufficient data for good language processing. Note that PP dropped to zero for almost all models when direct prompting includes metadata. All GPT LLMs achieved F-1 and BLEU scores close to 1.0 and showed more consistent results overall. Two main points from this experiment are: First, adding metadata is effective for all LLMs, especially RAG models that integrate this augmentation in their learning process. Second, smaller models with Advance RAG (Mistral and LLAMA-2) adjust better to metadata than GPT-4-Preview/4/3.5 (see Figure 3).

Overall: Advance RAG Mistral 7b outperformed other competitive proprietary and public LLMs in all prompting styles. This superior performance was notably marked by reduced HR, suggesting this model is more adept at generating accurate and relevant responses when metadata is added. Furthermore, improvements in F-1 scores reinforce its reliability in retrieving information. Higher BLEU scores were observed, signifying that the language output of the model aligns closely with human-like text in terms of fluency and coherence.

5 Adversarial Examination

The analysis of LLMs using the REASONS dataset highlights significant variability in their performance across different domains. While they perform moderately better in areas like AI and CV, with lower HR and higher F-1/BLEU scores, they struggle in complex domains such as QC, Biomolecules, and Cryptography, likely due to limited training data and the complexity of these subjects. This variability in performance indicates that LLMs have varying degrees of contextual understanding, with a tendency to perform better in domains with more extensive training data and less complex structures (e.g., maths and numerics).

Motivation and Setup: We conducted adversarial experiments across all models to better assess their contextual understanding.
Changing Paper Title
Group       PP(%)   BLEU     F1       HR
G1          96.23   0.6210   0.8470   17.99
G2          31.45   0.0524   0.2640   83.66
G3          68.55   0.0389   0.1828   87.35
RM          3.14    0.0796   0.1584   86.78
M           0.00    0.0003   0.0221   94.95
RL          5.03    0.0628   0.1448   87.56
L           0.00    0.0066   0.0254   98.30
AdvRAG(L)   0.00    0.1322   0.4763   85.72
AdvRAG(M)   0.00    0.1569   0.5839   75.41

Changing Paper Abstract
Group       PP(%)   BLEU     F1       HR
G1          95.60   0.4595   0.6451   38.49
G2          32.70   0.0396   0.2186   86.22
G3          76.10   0.0034   0.1013   91.64
RM          7.55    0.0520   0.1216   89.44
M           0.00    0.0074   0.0161   90.20
RL          2.52    0.0445   0.1112   90.16
L           0.00    0.0017   0.0146   99.01
AdvRAG(L)   0.00    0.4101   0.5780   39.67
AdvRAG(M)   0.00    0.4904   0.6954   39.57

Table 2: Performance of various LLMs on the adversarial set, designed by swapping titles and abstracts. Models G1, G2, and G3, possibly exposed to similar data during training, struggled with the adversarial sets, resulting in high HR and PP. Conversely, models like AdvRAG(L) and AdvRAG(M) showed better performance, suggesting that these models attempt to understand the context before generating the citations.

The core concept behind these experiments was to provide the models with incorrect yet similar metadata about the sentences in the prompts. The aim was to discern whether the models generated citations based on a contextual grasp of the provided metadata or whether the metadata had minimal influence on the citation generation process. These adversarial experiments comprised two types: 1) providing inaccurate paper titles related to the sentences; 2) providing incorrect paper abstracts associated with the sentences. Both experiments were conducted using SID prompting. To facilitate these experiments, we curated a subsample of 200 sentences from the REASONS dataset spanning all the domains. We extracted each sentence\u2019s most similar paper title or abstract from this dataset and replaced the original metadata.
For similarity calculation, we use the Ratcliff-Obershelp metric: twice the number of matching characters (the longest common substring, plus matches found recursively in the non-matching regions on either side of it), divided by the total number of characters in both strings (Tang et al., 2023). According to this metric, for the example title \u201cDiffusion models for counterfactual explanations,\u201d the best replacement is \u201cOctet: Object-aware models for counterfactual explanations (0.736)\u201d as opposed to \u201cAdversarial counterfactual visual explanations (0.638)\u201d. We considered a threshold of 0.70 effective in preparing the adversarial set.

Findings: We found that incorrect paper titles and abstracts easily fool most LLMs when they are similar to the accurate information. In Table 2, G1\u2019s HR of 17.99%, paired with a high PP of 96.23%, indicates a defensive mechanism. This means the LLMs are not very good at understanding the true meaning of what they are given. On such a small adversarial set, we expected LLMs like GPT-4-1106-preview and GPT-4 to perform exceedingly well because of their extensive knowledge; however, we observed counterintuitive results in Table 2: all models are affected. We do see a promising direction with AdvRAG(M) and AdvRAG(L); however, further investigation is required into how rich graphical metadata (e.g., knowledge graphs) and graph-theoretic approaches to information retrieval can improve LLM effectiveness (He et al., 2024).
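The similarity filtering used to build the adversarial set can be sketched with Python's difflib, whose SequenceMatcher implements Ratcliff-Obershelp ("gestalt") matching; the candidate titles below are illustrative, not drawn from REASONS, and the helper name is our own.

```python
from difflib import SequenceMatcher

# Sketch of the adversarial-set construction: for a given title, pick the
# most similar candidate by Ratcliff-Obershelp ratio and keep it only if
# it clears the 0.70 threshold used in the paper.

def best_adversarial_swap(title, candidates, threshold=0.70):
    scored = [(SequenceMatcher(None, title, c).ratio(), c) for c in candidates]
    score, best = max(scored)
    return best if score >= threshold else None

title = "Diffusion models for counterfactual explanations"
candidates = [
    "Object-aware models for counterfactual explanations",
    "Adversarial counterfactual visual explanations",
]
print(best_adversarial_swap(title, candidates))
```

SequenceMatcher.ratio() returns 2*M/T, where M is the number of matched characters and T the total length of both strings, matching the definition above.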
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02235v1.json b/abs_9K/test_abstract_short_2405.02235v1.json new file mode 100644 index 0000000000000000000000000000000000000000..698530fed2950ad1ecebabd2c5265677d6af1ab8 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02235v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.02235v1", + "title": "Learning Optimal Deterministic Policies with Stochastic Policy Gradients", + "abstract": "Policy gradient (PG) methods are successful approaches to deal with\ncontinuous reinforcement learning (RL) problems. They learn stochastic\nparametric (hyper)policies by either exploring in the space of actions or in\nthe space of parameters. Stochastic controllers, however, are often undesirable\nfrom a practical perspective because of their lack of robustness, safety, and\ntraceability. In common practice, stochastic (hyper)policies are learned only\nto deploy their deterministic version. In this paper, we make a step towards\nthe theoretical understanding of this practice. After introducing a novel\nframework for modeling this scenario, we study the global convergence to the\nbest deterministic policy, under (weak) gradient domination assumptions. Then,\nwe illustrate how to tune the exploration level used for learning to optimize\nthe trade-off between the sample complexity and the performance of the deployed\ndeterministic policy. Finally, we quantitatively compare action-based and\nparameter-based exploration, giving a formal guise to intuitive results.", + "authors": "Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Policy gradient (PG) methods are successful approaches to deal with\ncontinuous reinforcement learning (RL) problems. 
They learn stochastic\nparametric (hyper)policies by either exploring in the space of actions or in\nthe space of parameters. Stochastic controllers, however, are often undesirable\nfrom a practical perspective because of their lack of robustness, safety, and\ntraceability. In common practice, stochastic (hyper)policies are learned only\nto deploy their deterministic version. In this paper, we make a step towards\nthe theoretical understanding of this practice. After introducing a novel\nframework for modeling this scenario, we study the global convergence to the\nbest deterministic policy, under (weak) gradient domination assumptions. Then,\nwe illustrate how to tune the exploration level used for learning to optimize\nthe trade-off between the sample complexity and the performance of the deployed\ndeterministic policy. Finally, we quantitatively compare action-based and\nparameter-based exploration, giving a formal guise to intuitive results.", + "main_content": "Introduction Within reinforcement learning (RL, Sutton & Barto, 2018) approaches, policy gradients (PGs, Deisenroth et al., 2013) algorithms have proved very effective in dealing with realworld control problems. Their advantages include the applicability to continuous state and action spaces (Peters & Schaal, 2006), resilience to sensor and actuator noise (Gravell et al., 2020), robustness to partial observability (Azizzadenesheli et al., 2018), and the possibility of incorporating prior knowledge in the policy design phase (Ghavamzadeh & Engel, 2006), improving explainability (Likmeta et al., 2020). PG algorithms search directly in a space of parametric policies for the one that maximizes a performance 1Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133, Milan, Italy. Correspondence to: Alessandro Montenegro . Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). function. 
Nonetheless, as always in RL, the exploration problem has to be addressed, and practical methods involve injecting noise in the actions or in the parameters. This limits the application of PG methods in many real-world scenarios, such as autonomous driving, industrial plants, and robotic controllers. This is because stochastic policies typically do not meet the reliability, safety, and traceability standards of these kinds of applications. The problem of learning deterministic policies has been explicitly addressed in the PG literature by Silver et al. (2014) with their deterministic policy gradient, which spawned very successful deep RL algorithms (Lillicrap et al., 2016; Fujimoto et al., 2018). This approach, however, is affected by several drawbacks, mostly due to its inherent off-policy nature. First, this makes DPG hard to analyze from a theoretical perspective: local convergence guarantees have been established only recently, and only under assumptions that are very demanding for deterministic policies (Xiong et al., 2022). Furthermore, its practical versions are known to be very susceptible to hyperparameter tuning. We study here a simpler and fairly common approach: that of learning stochastic policies with PG algorithms, then deploying the corresponding deterministic version, \u201cswitching off\u201d the noise.1 Intuitively, the amount of exploration (e.g., the variance of a Gaussian policy) should be selected wisely. Indeed, the smaller the exploration level, the closer the optimized objective is to that of a deterministic policy. At the same time, with a small exploration level, learning can severely slow down and get stuck on bad local optima. Policy gradient methods can be partitioned based on the space in which the exploration is carried out, distinguishing between action-based (AB) and parameter-based (PB, Sehnke et al., 2010) exploration.
The first, of which REINFORCE (Williams, 1992) and GPOMDP (Baxter & Bartlett, 2001; Sutton et al., 1999) are the progenitor algorithms, performs exploration in the action space, with a stochastic (e.g., Gaussian) policy. On the other hand, PB exploration, introduced by Parameter-Exploring Policy Gradients (PGPE, Sehnke et al., 2010), implements the exploration at the level of policy parameters by means of a stochastic hyperpolicy. The latter performs perturbations of the parameters of a (typically deterministic) action policy. Of course, this dualism only considers the simplest form of noise-based, undirected exploration. Efficient exploration in large-scale MDPs is a very active area of research, with a large gap between theory and practice (Ghavamzadeh et al., 2020), placing the matter well beyond the scope of this paper. Also, we consider noise magnitudes that are fixed during the learning process, as the common practice of learning the exploration parameters themselves breaks all known sample complexity guarantees of vanilla PG (cf. Appendix C). To this day, a large effort has been put into providing convergence guarantees and sample complexity analyses for AB exploration algorithms (e.g., Papini et al., 2018; Yuan et al., 2022; Fatkhullin et al., 2023a), while the theoretical analysis of PB exploration has been taking a back seat since Zhao et al. (2011). We are not aware of any global convergence results for parameter-based PGs. Furthermore, even for AB exploration, current studies focus on the convergence to the best stochastic policy.

1 This can be observed in several libraries (e.g., Raffin et al., 2021b) and benchmarks (e.g., Duan et al., 2016).

Original Contributions.
In this paper, we make a step towards the theoretical understanding of the practice of deploying a deterministic policy learned with PG methods:
\u2022 We introduce a framework for modeling the practice of deploying a deterministic policy, by formalizing the notion of white noise-based exploration, allowing for a unified treatment of both AB and PB exploration.
\u2022 We study the convergence to the best deterministic policy for both AB and PB exploration. For this reason, we focus on global convergence, rather than on first-order stationary point (FOSP) convergence, and we leverage commonly used (weak) gradient domination assumptions.
\u2022 We quantitatively show how the exploration level (i.e., noise) generates a trade-off between the sample complexity and the performance of the deployed deterministic policy. Then, we illustrate how it can be tuned to optimize such a trade-off, delivering sample complexity guarantees.
In light of these results, we compare the advantages and disadvantages of AB and PB exploration in terms of sample complexity and required assumptions, giving a formal guise to intuitive results. We also elaborate on how the assumptions used in the convergence analysis can be reconnected to basic characteristics of the MDP and the policy classes. We conclude with a numerical validation to empirically illustrate the discussed trade-offs. The proofs of the results presented in the main paper are reported in Appendix D. Related works are discussed in Appendix B.

2. Preliminaries

Notation. For a measurable set $X$, we denote by $\Delta(X)$ the set of probability measures over $X$. For $P \in \Delta(X)$, we denote by $p$ its density function. With a little abuse of notation, we interchangeably use $x \sim P$ or $x \sim p$ to denote that the random variable $x$ is sampled from $P$. For $n \in \mathbb{N}$, we denote $\llbracket n \rrbracket := \{1, \ldots, n\}$.

Lipschitz Continuous and Smooth Functions.
A function $f: X \subseteq \mathbb{R}^d \to \mathbb{R}$ is $L$-Lipschitz continuous ($L$-LC) if $|f(x) - f(x')| \le L \|x - x'\|_2$ for every $x, x' \in X$. $f$ is $L_2$-Lipschitz smooth ($L_2$-LS) if it is continuously differentiable and its gradient $\nabla_x f$ is $L_2$-LC, i.e., $\|\nabla_x f(x) - \nabla_x f(x')\|_2 \le L_2 \|x - x'\|_2$ for every $x, x' \in X$.

Markov Decision Processes. A Markov Decision Process (MDP, Puterman, 1990) is represented by $\mathcal{M} := (\mathcal{S}, \mathcal{A}, p, r, \rho_0, \gamma)$, where $\mathcal{S} \subseteq \mathbb{R}^{d_{\mathcal{S}}}$ and $\mathcal{A} \subseteq \mathbb{R}^{d_{\mathcal{A}}}$ are the measurable state and action spaces, $p: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition model, where $p(s'|s,a)$ specifies the probability density of landing in state $s' \in \mathcal{S}$ by playing action $a \in \mathcal{A}$ in state $s \in \mathcal{S}$, $r: \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ is the reward function, where $r(s,a)$ specifies the reward the agent gets by playing action $a$ in state $s$, $\rho_0 \in \Delta(\mathcal{S})$ is the initial-state distribution, and $\gamma \in [0,1]$ is the discount factor. A trajectory $\tau = (s_{\tau,0}, a_{\tau,0}, \ldots, s_{\tau,T-1}, a_{\tau,T-1})$ of length $T \in \mathbb{N} \cup \{+\infty\}$ is a sequence of $T$ state-action pairs. The discounted return of a trajectory $\tau$ is $R(\tau) := \sum_{t=0}^{T-1} \gamma^t r(s_{\tau,t}, a_{\tau,t})$.

Deterministic Parametric Policies. We consider a parametric deterministic policy $\mu_\theta: \mathcal{S} \to \mathcal{A}$, where $\theta \in \Theta \subseteq \mathbb{R}^{d_\Theta}$ is the parameter vector belonging to the parameter space $\Theta$. The performance of $\mu_\theta$ is assessed via the expected return $J_D: \Theta \to \mathbb{R}$, defined as:

$J_D(\theta) := \mathbb{E}_{\tau \sim p_D(\cdot|\theta)}[R(\tau)], \qquad (1)$

where $p_D(\tau; \theta) := \rho_0(s_{\tau,0}) \prod_{t=0}^{T-1} p(s_{\tau,t+1} | s_{\tau,t}, \mu_\theta(s_{\tau,t}))$ is the density of trajectory $\tau$ induced by policy $\mu_\theta$.2 The agent\u2019s goal consists of finding an optimal parameter $\theta^\star_D \in \operatorname{argmax}_{\theta \in \Theta} J_D(\theta)$, and we denote $J^\star_D := J_D(\theta^\star_D)$.

Action-Based (AB) Exploration.
In AB exploration, we consider a parametric stochastic policy $\pi_\rho: \mathcal{S} \to \Delta(\mathcal{A})$, where $\rho \in \mathcal{P}$ is the parameter vector belonging to the parameter space $\mathcal{P} \subseteq \mathbb{R}^{d_{\mathcal{P}}}$. The policy is used to sample actions $a_t \sim \pi_\rho(\cdot|s_t)$ to be played in state $s_t$ at every step $t$ of interaction. The performance of $\pi_\rho$ is assessed via the expected return $J_A: \mathcal{P} \to \mathbb{R}$, defined as:

$J_A(\rho) := \mathbb{E}_{\tau \sim p_A(\cdot|\rho)}[R(\tau)], \qquad (2)$

where $p_A(\tau; \rho) := \rho_0(s_{\tau,0}) \prod_{t=0}^{T-1} \pi_\rho(a_{\tau,t}|s_{\tau,t})\, p(s_{\tau,t+1}|s_{\tau,t}, a_{\tau,t})$ is the density of trajectory $\tau$ induced by policy $\pi_\rho$.2 In AB exploration, we aim at learning $\rho^\star_A \in \operatorname{argmax}_{\rho \in \mathcal{P}} J_A(\rho)$, and we denote $J^\star_A := J_A(\rho^\star_A)$. If $J_A(\rho)$ is differentiable w.r.t. $\rho$, PG methods (Peters & Schaal, 2008) update the parameter $\rho$ via gradient ascent: $\rho_{t+1} \leftarrow \rho_t + \zeta_t \widehat{\nabla}_\rho J_A(\rho_t)$, where $\zeta_t > 0$ is the step size and $\widehat{\nabla}_\rho J_A(\rho)$ is an estimator of $\nabla_\rho J_A(\rho)$. In particular, the GPOMDP estimator is:3

$\widehat{\nabla}_\rho J_A(\rho) := \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T-1} \left( \sum_{k=0}^{t} \nabla_\rho \log \pi_\rho(a_{\tau_i,k}|s_{\tau_i,k}) \right) \gamma^t r(s_{\tau_i,t}, a_{\tau_i,t}),$

where $N$ is the number of independent trajectories $\{\tau_i\}_{i=1}^{N}$ collected with policy $\pi_\rho$ ($\tau_i \sim p_A(\cdot; \rho)$), called the batch size.

2 For both $J_D$ (resp. $J_A$, $J_P$) and $p_D$ (resp. $p_A$, $p_P$), we use the D (resp. A, P) subscript to denote that the dependence on $\theta$ (resp. $\rho$) is through a Deterministic policy (resp. Action-based exploration policy, Parameter-based exploration hyperpolicy).

Parameter-Based (PB) Exploration.
In PB exploration, we use a parametric stochastic hyperpolicy ν_ρ ∈ Δ(Θ), where ρ ∈ P ⊆ R^{d_P} is the parameter vector. The hyperpolicy is used to sample parameters θ ∼ ν_ρ to be plugged into the deterministic policy µ_θ at the beginning of every trajectory. The performance index of ν_ρ is J_P : P → R, that is, the expectation over θ of J_D(θ), defined as:² J_P(ρ) := E_{θ∼ν_ρ}[J_D(θ)]. PB exploration aims at learning ρ*_P ∈ argmax_{ρ∈P} J_P(ρ), and we denote J*_P := J_P(ρ*_P). If J_P(ρ) is differentiable w.r.t. ρ, PGPE (Sehnke et al., 2010) updates the hyperparameter ρ via gradient ascent: ρ_{t+1} ← ρ_t + ζ_t ∇̂_ρ J_P(ρ_t). In particular, PGPE uses an estimator of ∇_ρ J_P(ρ) defined as: ∇̂_ρ J_P(ρ) = (1/N) Σ_{i=1}^{N} ∇_ρ log ν_ρ(θ_i) R(τ_i), where N is the number of independent parameter–trajectory pairs {(θ_i, τ_i)}_{i=1}^{N} collected with hyperpolicy ν_ρ (θ_i ∼ ν_ρ and τ_i ∼ p_D(·; θ_i)), called the batch size. 3. White-Noise Exploration We formalize a class of stochastic (hyper)policies widely employed in the practice of AB and PB exploration, namely white noise-based (hyper)policies. These policies π_θ(·|s) (resp. hyperpolicies ν_θ) are obtained by adding a white noise ε to the deterministic action a = µ_θ(s) (resp. to the parameter θ), independent of the state s (resp. parameter θ). Definition 3.1 (White Noise). Let d ∈ N and σ > 0. A probability distribution Φ_d ∈ Δ(R^d) is a white noise if: E_{ε∼Φ_d}[ε] = 0_d and E_{ε∼Φ_d}[‖ε‖_2²] ≤ dσ².
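Both estimators above admit short NumPy sketches. The 1-D linear-mean Gaussian policy used for GPOMDP, the Gaussian hyperpolicy used for PGPE, and all names and array shapes are illustrative assumptions, not code from the paper:

```python
import numpy as np

def gpomdp_estimate(states, actions, rewards, rho, sigma_a, gamma):
    # GPOMDP: (1/N) Σ_i Σ_t (Σ_{k≤t} ∇_ρ log π_ρ(a_{i,k}|s_{i,k})) γ^t r_{i,t}.
    # All arrays have shape (N, T). Illustrative policy: a ~ N(rho*s, sigma_a²),
    # whose score is ∇_ρ log π_ρ(a|s) = (a − rho*s) * s / sigma_a².
    scores = (actions - rho * states) * states / sigma_a**2
    cum_scores = np.cumsum(scores, axis=1)            # causal sum over k ≤ t
    discounts = gamma ** np.arange(rewards.shape[1])  # γ^t
    return np.mean(np.sum(cum_scores * discounts * rewards, axis=1))

def pgpe_estimate(thetas, returns, rho, sigma_p):
    # PGPE: (1/N) Σ_i ∇_ρ log ν_ρ(θ_i) R(τ_i), with thetas of shape (N, d)
    # sampled from the Gaussian hyperpolicy ν_ρ = N(rho, sigma_p² I), whose
    # score is ∇_ρ log ν_ρ(θ) = (θ − rho) / sigma_p².
    scores = (thetas - rho) / sigma_p**2
    return np.mean(scores * returns[:, None], axis=0)
```

The causal cumulative sum is what distinguishes GPOMDP from REINFORCE: the reward at step t is weighted only by the scores of actions played up to step t. PGPE needs no per-step scores, since each trajectory is run with a fixed sampled θ_i and the deterministic policy µ_{θ_i}.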
(3) This definition complies with the zero-mean Gaussian distribution ε ∼ N(0_d, Σ), for which E_{ε∼N(0_d,Σ)}[‖ε‖_2²] = tr(Σ) ≤ d λ_max(Σ). In particular, for an isotropic Gaussian Σ = σ²I_d, we have that tr(Σ) = dσ². We now formalize the notion of a white noise-based (hyper)policy. Definition 3.2 (White noise-based policies). Let θ ∈ Θ and µ_θ : S → A be a parametric deterministic policy and let Φ_{d_A} be a white noise (Definition 3.1). A white noise-based policy π_θ : S → Δ(A) is such that, for every state s ∈ S, an action a ∼ π_θ(·|s) satisfies a = µ_θ(s) + ε, where ε ∼ Φ_{d_A} independently at every step. [Footnote 3: We limit our analysis to the GPOMDP estimator (Baxter & Bartlett, 2001), neglecting REINFORCE (Williams, 1992), since the latter is known to suffer from larger variance.] This definition considers stochastic policies π_θ(·|s) that are obtained by adding noise ε fulfilling Definition 3.1, sampled independently at every step, to the action µ_θ(s) prescribed by the deterministic policy (i.e., AB exploration), resulting in playing the action µ_θ(s) + ε. An analogous definition can be formulated for hyperpolicies. Definition 3.3 (White noise-based hyperpolicies). Let θ ∈ Θ and µ_θ : S → A be a parametric deterministic policy and let Φ_{d_Θ} be a white noise (Definition 3.1). A white noise-based hyperpolicy ν_θ ∈ Δ(Θ) is such that, for every parameter θ ∈ Θ, a parameter θ′ ∼ ν_θ satisfies θ′ = θ + ε, where ε ∼ Φ_{d_Θ} independently in every trajectory.
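Definitions 3.1–3.3 can be checked numerically: an isotropic Gaussian satisfies the white-noise moment conditions, and AB/PB exploration simply add such noise to the action or to the parameter. The linear policy below is a hypothetical example, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 4, 0.5

# Definition 3.1: zero mean and E[||ε||²] ≤ d σ² (equality for isotropic Gaussian).
eps = rng.normal(0.0, sigma, size=(200_000, d))
assert np.allclose(eps.mean(axis=0), 0.0, atol=1e-2)
assert abs((eps**2).sum(axis=1).mean() - d * sigma**2) < 1e-2

theta = np.array([1.0, -2.0, 0.5, 3.0])
mu = lambda th, s: th * s                  # hypothetical deterministic policy
s = np.array([0.2, 0.4, -1.0, 3.0])

# Definition 3.2 (AB): perturb the action, with fresh noise at every step.
a = mu(theta, s) + rng.normal(0.0, sigma, size=d)
# Definition 3.3 (PB): perturb the parameter once, then act deterministically.
a_pb = mu(theta + rng.normal(0.0, sigma, size=d), s)
```

Switching the noise off recovers the deterministic policy in both cases, which is what Section 5 exploits.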
This definition considers stochastic hyperpolicies ν_θ obtained by adding noise ε fulfilling Definition 3.1, sampled independently at the beginning of each trajectory, to the parameter θ defining the deterministic policy µ_θ, resulting in playing the deterministic policy µ_{θ+ε} (i.e., PB exploration). Definitions 3.2 and 3.3 allow us to represent a class of widely used (hyper)policies, like Gaussian hyperpolicies and Gaussian policies with state-independent variance. Furthermore, once the parameter θ is learned with either AB or PB exploration, deploying the corresponding deterministic policy (i.e., “switching off” the noise) is straightforward.⁴ 4. Fundamental Assumptions In this section, we present the fundamental assumptions on the MDP (p and r), the deterministic policy µ_θ, and the white noise Φ. For the sake of generality, we will consider abstract assumptions in the next sections and then show their relation to the fundamental ones (see Appendix A for details). Assumptions on the MDP. We start with the assumptions on the regularity of the MDP, i.e., on the transition model p and the reward function r, w.r.t. variations of the played action a. Assumption 4.1 (Lipschitz MDP (log p, r) w.r.t. actions). The log transition model log p(s′|s, ·) and the reward function r(s, ·) are L_p-LC and L_r-LC, respectively, w.r.t. the action for every s, s′ ∈ S, i.e., for every a, a′ ∈ A: |log p(s′|s, a) − log p(s′|s, a′)| ≤ L_p ‖a − a′‖_2, (4) |r(s, a) − r(s, a′)| ≤ L_r ‖a − a′‖_2. (5) Assumption 4.2 (Smooth MDP (log p, r) w.r.t. actions). The log transition model log p(s′|s, ·) and the reward function r(s, ·) are L_{2,p}-LS and L_{2,r}-LS, respectively, w.r.t. the [Footnote 4: For white noise-based (hyper)policies there exists a one-to-one mapping between the parameter space of (hyper)policies and that of deterministic policies (P = Θ). For simplicity, we assume Θ = R^{d_Θ} and A = R^{d_A} (see Appendix C).]
action for every s, s′ ∈ S, i.e., for every a, a′ ∈ A: ‖∇_a log p(s′|s, a) − ∇_a log p(s′|s, a′)‖_2 ≤ L_{2,p} ‖a − a′‖_2, ‖∇_a r(s, a) − ∇_a r(s, a′)‖_2 ≤ L_{2,r} ‖a − a′‖_2. Intuitively, these assumptions ensure that when we perform AB and/or PB exploration, altering the played action w.r.t. a deterministic policy, the effect on the environment dynamics and on the reward (and on their gradients) is controllable. Assumptions on the deterministic policy. We now move to the assumptions on the regularity of the deterministic policy µ_θ w.r.t. the parameter θ. Assumption 4.3 (Lipschitz deterministic policy µ_θ w.r.t. parameters θ). The deterministic policy µ_θ(s) is L_µ-LC w.r.t. the parameter for every s ∈ S, i.e., for every θ, θ′ ∈ Θ: ‖µ_θ(s) − µ_{θ′}(s)‖_2 ≤ L_µ ‖θ − θ′‖_2. (6) Assumption 4.4 (Smooth deterministic policy µ_θ w.r.t. parameters θ). The deterministic policy µ_θ(s) is L_{2,µ}-LS w.r.t. the parameter for every s ∈ S, i.e., for every θ, θ′ ∈ Θ: ‖∇_θ µ_θ(s) − ∇_θ µ_{θ′}(s)‖_2 ≤ L_{2,µ} ‖θ − θ′‖_2. (7) Similarly, these assumptions ensure that if we deploy an altered parameter θ, as in PB exploration, the effect on the played action (and on its gradient) is bounded. Assumptions 4.1 and 4.3 are standard in the DPG literature (Silver et al., 2014). Assumption 4.2, instead, can be interpreted as the counterpart of the Q-function smoothness used in the DPG analysis (Kumar et al., 2020; Xiong et al., 2022), while Assumption 4.4 has been used to study the convergence of DPG (Xiong et al., 2022). Similar conditions to our Assumption 4.1 were adopted by Pirotta et al.
(2015), but measuring the continuity of p in the Kantorovich metric, a weaker requirement that, unfortunately, does not come with a corresponding smoothness condition. Assumptions on the (hyper)policies. We introduce the assumptions on the score functions of the white noise Φ. Assumption 4.5 (Bounded Scores of Φ). Let Φ ∈ Δ(R^d) be a white noise with variance bound σ > 0 (Definition 3.1) and density φ. φ is differentiable in its argument and there exists a universal constant c > 0 s.t.: (i) E_{ε∼Φ}[‖∇_ε log φ(ε)‖_2²] ≤ c d σ^{−2}; (ii) E_{ε∼Φ}[‖∇²_ε log φ(ε)‖_2] ≤ c σ^{−2}. Intuitively, this assumption is equivalent to the more common ones requiring the boundedness of the expected norms of the score function (and of its gradient) (Papini et al., 2022; Yuan et al., 2022, cf. Appendix E). Note that a zero-mean Gaussian Φ = N(0_d, Σ) fulfills Assumption 4.5. Indeed, one has ∇_ε log φ(ε) = −Σ^{−1}ε and ∇²_ε log φ(ε) = −Σ^{−1}. Thus, E[‖∇_ε log φ(ε)‖_2²] = tr(Σ^{−1}) ≤ d λ_min(Σ)^{−1} and E[‖∇²_ε log φ(ε)‖_2] = λ_min(Σ)^{−1}. In particular, for an isotropic Gaussian Σ = σ²I, we have λ_min(Σ) = σ², fulfilling Assumption 4.5 with c = 1. 5. Deploying Deterministic Policies In this section, we study the performance J_D of the deterministic policy µ_θ when the parameter θ is learned via AB or PB white noise-based exploration (Section 3). We will refer to this scenario as deploying the parameters, which reflects the common practice of “switching off the noise” once the learning process is over. PB Exploration.
Let us start with PB exploration by observing that, for white noise-based hyperpolicies (Definition 3.3), we can express the expected return J_P as a function of J_D and of the noise ε for every θ ∈ Θ: J_P(θ) = E_{ε∼Φ_{d_Θ}}[J_D(θ + ε)]. (8) This illustrates that PB exploration can be obtained by perturbing the parameter θ of a deterministic policy µ_θ via the noise ε ∼ Φ_{d_Θ}. To achieve guarantees on the deterministic performance J_D of a parameter θ learned with PB exploration, we enforce the following regularity condition. Assumption 5.1 (Lipschitz J_D w.r.t. θ). J_D is L_J-LC in the parameter θ, i.e., for every θ, θ′ ∈ Θ: |J_D(θ) − J_D(θ′)| ≤ L_J ‖θ − θ′‖_2. (9) When the MDP and the deterministic policy are LC as in Assumptions 4.1 and 4.3, L_J is O((1−γ)^{−2}) (see Table 2 in Appendix A for the full expression). This way, we guarantee that a perturbation ε of the parameter θ determines a variation of the function J_D depending on the magnitude of ε, which allows obtaining the following result. Theorem 5.1 (Deterministic deployment of parameters learned with PB white-noise exploration). If the hyperpolicy complies with Definition 3.3, under Assumption 5.1: (i) (Uniform bound) for every θ ∈ Θ, it holds that |J_D(θ) − J_P(θ)| ≤ L_J √d_Θ σ_P; (ii) (J_D upper bound) Let θ*_P ∈ argmax_{θ∈Θ} J_P(θ); it holds that: J*_D − J_D(θ*_P) ≤ 2 L_J √d_Θ σ_P; (iii) (J_D lower bound) There exists an MDP, a deterministic policy class µ_θ fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J*_D − J_D(θ*_P) ≥ 0.28 L_J √d_Θ σ_P. Some observations are in order.
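Equation (8) can be probed on a toy objective. For J_D(θ) = −‖θ‖² and isotropic Gaussian noise, E_ε[J_D(θ + ε)] = J_D(θ) − d σ_P², so J_P sits a σ-dependent gap below J_D that vanishes as σ_P → 0, in the spirit of Theorem 5.1(i). The quadratic objective is an illustrative choice (it is not globally Lipschitz, so its exact gap takes this additive form rather than the L_J √d_Θ σ_P bound):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma_p = 3, 0.2
theta = np.array([0.5, -1.0, 2.0])

def J_D(th):                        # toy deterministic objective
    return -np.sum(th**2, axis=-1)

# Monte Carlo estimate of J_P(θ) = E_ε[J_D(θ + ε)]  (Equation 8).
eps = rng.normal(0.0, sigma_p, size=(200_000, d))
J_P = J_D(theta + eps).mean()

gap = J_D(theta) - J_P              # exact value for this quadratic: d σ_P²
assert abs(gap - d * sigma_p**2) < 1e-2
```

The gap d σ_P² grows with both the noise scale and the parameter dimension, matching the qualitative dependence of the bounds.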
(i) shows that the performance of the hyperpolicy J_P(θ) is representative of the deterministic performance J_D(θ) up to an additive term depending on L_J √d_Θ σ_P. As expected, this term grows with the Lipschitz constant L_J of the function J_D, with the standard deviation σ_P of the additive noise, and with the dimensionality d_Θ of the parameter space. In particular, this implies that lim_{σ_P→0⁺} J_P(θ) = J_D(θ). (ii) is a consequence of (i) and provides an upper bound on the gap between the optimal performance obtained if we were able to directly optimize the deterministic policy, max_{θ∈Θ} J_D(θ), and the performance of the parameter θ*_P learned by optimizing J_P(θ), i.e., via PB exploration, when deployed on the deterministic policy. Finally, (iii) provides a lower bound on the same quantity on a specific instance of MDP and hyperpolicy, proving that the dependence on L_J √d_Θ σ_P is tight up to constant terms. AB Exploration. Let us move to the AB exploration case, where understanding the effect of the noise is more complex, since it is applied to every action independently at every step. To this end, we introduce the notion of a non-stationary deterministic policy µ = (µ_t)_{t=0}^{T−1}, where at time step t the deterministic policy µ_t : S → A is played, and its expected return (with abuse of notation) is J_D(µ) = E_{τ∼p_D(·|µ)}[R(τ)], where p_D(τ|µ) := ρ_0(s_{τ,0}) Π_{t=0}^{T−1} p(s_{τ,t+1}|s_{τ,t}, µ_t(s_{τ,t})). Let ε = (ε_t)_{t=0}^{T−1} ∼ Φ^T_{d_A} be a sequence of noises sampled independently; we denote with µ_θ + ε = (µ_θ + ε_t)_{t=0}^{T−1} the non-stationary policy that, at time t, perturbs the action as µ_θ(s_t) + ε_t.
Since the noise is independent of the state, we can express J_A as a function of J_D for every θ ∈ Θ as follows: J_A(θ) = E_{ε∼Φ^T_{d_A}}[J_D(µ_θ + ε)]. (10) Thus, to ensure that the parameter learned by AB exploration achieves performance guarantees when evaluated as a deterministic policy, we need to enforce some regularity condition on J_D as a function of µ. Assumption 5.2 (Lipschitz J_D w.r.t. µ). J_D of the non-stationary deterministic policy µ is (L_t)_{t=0}^{T−1}-LC in the non-stationary policy, i.e., for every µ, µ′: |J_D(µ) − J_D(µ′)| ≤ Σ_{t=0}^{T−1} L_t sup_{s∈S} ‖µ_t(s) − µ′_t(s)‖_2. (11) Furthermore, we denote L := Σ_{t=0}^{T−1} L_t. When the MDP is LC as in Assumption 4.1, L is O((1−γ)^{−2}) (see Table 2 in Appendix A for the full expression). The assumption enforces that, when changing the deterministic policy at step t from µ_t to µ′_t, the variation of J_D is controlled by the action distance (in the worst state s) multiplied by a time-dependent Lipschitz constant. This form of condition allows us to show the following result. Theorem 5.2 (Deterministic deployment of parameters learned with AB white-noise exploration). If the policy complies with Definition 3.2 and under Assumption 5.2: (i) (Uniform bound) for every θ ∈ Θ, it holds that: |J_D(θ) − J_A(θ)| ≤ L √d_A σ_A; (ii) (J_D upper bound) Letting θ*_A ∈ argmax_{θ∈Θ} J_A(θ), it holds that J*_D − J_D(θ*_A) ≤ 2 L √d_A σ_A; (iii) (J_D lower bound) There exists an MDP, a deterministic policy class µ_θ fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J*_D − J_D(θ*_A) ≥ 0.28 L √d_A σ_A.
Similarly to Theorem 5.1, (i) and (ii) provide an upper bound on the difference between the stochastic policy performance J_A(θ) and that of the corresponding deterministic policy J_D(θ), and on the performance of θ*_A when deployed on a deterministic policy. Clearly, also in AB exploration, we have that lim_{σ_A→0⁺} J_A(θ) = J_D(θ). As in the PB case, (iii) shows that the upper bound (ii) is tight up to constant terms. Finally, let us note that our bounds for PB exploration depend on the dimension d_Θ of the parameter space, which is replaced by that of the action space, d_A, in AB exploration. 6. Global Convergence Analysis In this section, we present our main results about the convergence of AB and PB white noise-based exploration to the globally optimal parameter θ*_D of the performance of the deterministic policy J_D. Let K ∈ N be the number of iterations and N the batch size; given an accuracy threshold ϵ > 0, our goal is to bound the sample complexity NK needed to fulfill the following last-iterate global convergence condition: J*_D − E[J_D(θ_K)] ≤ ϵ, (12) where θ_K is the (hyper)parameter at the end of learning. 6.1. General Global Convergence Analysis In this section, we provide a global convergence analysis for a generic stochastic first-order algorithm optimizing a differentiable objective function J_† on the parameter space Θ ⊆ R^d, which can be instantiated for both AB (setting J_† = J_A) and PB (setting J_† = J_P) exploration, when optimizing the corresponding objective. At every iteration k ∈ ⟦K⟧, the algorithm performs the gradient ascent update: θ_{k+1} ← θ_k + ζ_k ∇̂_θ J_†(θ_k), (13) where ζ_k > 0 is the step size and ∇̂_θ J_†(θ_k) is an unbiased estimate of ∇_θ J_†(θ_k), and we denote J*_† := max_{θ∈Θ} J_†(θ). We enforce the following standard assumptions. Assumption 6.1 (Weak gradient domination for J_†).
There exist α > 0 and β ≥ 0 such that for every θ ∈ Θ it holds that J*_† − J_†(θ) ≤ α ‖∇_θ J_†(θ)‖_2 + β. Assumption 6.1 is the gold standard for the global convergence of stochastic optimization (Yuan et al., 2022; Masiha et al., 2022; Fatkhullin et al., 2023a). Note that, when β = 0, we recover the (strong) gradient domination (GD) property: J*_† − J_†(θ) ≤ α ‖∇_θ J_†(θ)‖_2 for all θ ∈ Θ. GD is stricter than WGD and requires that J_† has no local optima. Instead, WGD admits local maxima as long as their performance is β-close to the globally optimal one.⁵ Assumption 6.2 (Smooth J_† w.r.t. parameters θ). J_† is L_{2,†}-LS w.r.t. the parameters θ, i.e., for every θ, θ′ ∈ Θ: ‖∇_θ J_†(θ′) − ∇_θ J_†(θ)‖_2 ≤ L_{2,†} ‖θ′ − θ‖_2. (14) Assumption 6.2 is ubiquitous in the convergence analysis of policy gradient algorithms (Papini et al., 2018; Agarwal et al., 2021; Yuan et al., 2022; Bhandari & Russo, 2024), which is usually studied as an instance of (non-convex) smooth stochastic optimization. The smoothness of J_† ∈ {J_A, J_P} can be: (i) inherited from the deterministic objective J_D (originating, in turn, from the regularity of the MDP and of the deterministic policy µ_θ, Assumptions 4.1–4.4); or (ii) enforced through the properties of the white noise Φ (Assumption 4.5). The first result was observed in a similar form by Pirotta et al. (2015, Theorem 3), while a generalization of the second was established by Papini et al. (2022) and refined by Yuan et al. (2022). [Footnote 5: In this section, we will assume that J_† (i.e., either J_A or J_P) is already endowed with the WGD property. In Section 7, we illustrate how it can be obtained in several common scenarios.] Assumption 6.3 (Bounded estimator variance of ∇̂_θ J_†(θ)).
The estimator ∇̂_θ J_†(θ), computed with batch size N, has a bounded variance, i.e., there exists V_† ≥ 0 such that, for every θ ∈ Θ, we have: Var[∇̂_θ J_†(θ)] ≤ V_†/N. Assumption 6.3 guarantees that the gradient estimator is characterized by a bounded variance V_† which scales with the batch size N. Under Assumption 4.5 (and 4.4 for GPOMDP), the term V_† can be further characterized (see Table 2 in Appendix A). We are now ready to state the global convergence result. Theorem 6.1. Consider an algorithm running the update rule of Equation (13). Under Assumptions 6.1, 6.2, and 6.3, with a suitable constant step size, to guarantee J*_† − E[J_†(θ_K)] ≤ ϵ + β, the sample complexity is at most: NK = (16 α⁴ L_{2,†} V_† / ϵ³) log( max{0, J*_† − J_†(θ_0) − β} / ϵ ). (15) This result establishes convergence of order Õ(ϵ^{−3}) to the global optimum J*_† of the general objective J_†. Recalling that J_† ∈ {J_A, J_P}, Theorem 6.1 provides: (i) the first global convergence guarantee of PGPE for PB exploration (setting J_† = J_P); and (ii) a global convergence guarantee for PG (e.g., GPOMDP) for AB exploration of the same order (up to logarithmic terms in ϵ^{−1}) as the state-of-the-art one of Yuan et al. (2022) (setting J_† = J_A). Note that our guarantee is obtained with a constant step size and holds for the last parameter θ_K, delivering a last-iterate result rather than a best-iterate one as in (Yuan et al., 2022, Corollary 3.7). Clearly, this result is not yet our ultimate goal, since we still need to assess how far the performance of the learned parameter θ_K is from the optimal deterministic objective J*_D. 6.2. Global Convergence of PGPE and GPOMDP In this section, we provide results on the global convergence of PGPE and GPOMDP with white-noise exploration. The sample complexity bounds are summarized in Table 1 and presented extensively in Appendix D.
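A toy end-to-end run of the update rule (13) with a PGPE-style estimator on J_D(θ) = −‖θ − θ*‖² converges to the deterministic optimum, after which the noise can be switched off; the step size, batch size, and iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
d, sigma_p, zeta, N, K = 2, 0.1, 0.05, 64, 400
theta_star = np.array([1.0, -0.5])
J_D = lambda th: -np.sum((th - theta_star)**2, axis=-1)

rho = np.zeros(d)                                  # hyperpolicy mean
for _ in range(K):
    thetas = rho + rng.normal(0.0, sigma_p, size=(N, d))  # θ_i ~ ν_ρ (Def. 3.3)
    returns = J_D(thetas)                                 # R(τ_i); noiseless toy returns
    scores = (thetas - rho) / sigma_p**2                  # ∇_ρ log ν_ρ(θ_i)
    grad = np.mean(scores * returns[:, None], axis=0)     # PGPE gradient estimate
    rho = rho + zeta * grad                               # update rule (13)

# Deploy the deterministic policy ('switch off the noise'): near-optimal J_D.
assert J_D(rho) > -1e-2
```

Since the toy objective is strongly concave, the iterates contract toward θ* up to the estimator noise, and the residual gap between J_P(ρ_K) and J_D(ρ_K) is of order d σ_P².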
They all follow from our general Theorem 6.1 and from our results on the deployment of deterministic policies from Section 5. PGPE. We start by commenting on the sample complexity of PGPE for a constant, generic hyperpolicy variance σ_P, shown in the first column. First, the guarantee on J*_D − E[J_D(θ_K)] contains the additional variance-dependent term 3 L_P √d_Θ σ_P, originating from the deterministic deployment. Second, the sample complexity scales with Õ(ϵ^{−3}). Third, by enforcing the smoothness of the MDP and of the deterministic policy (Assumptions 4.2 and 4.4), we improve the dependence on d_Θ and on σ_P at the price of an additional (1−γ)^{−1} factor. A choice of σ_P which adapts to ϵ allows us to achieve global convergence on the deterministic objective J_D, up to ϵ + β only. Moving to the second column, we observe that the convergence rate becomes Õ(ϵ^{−7}), which reduces to Õ(ϵ^{−5}) with the additional smoothness assumptions, which also improve the dependence on both (1−γ)^{−1} and d_Θ. The slower rate, ϵ^{−5} or ϵ^{−7} compared to the ϵ^{−3} of the fixed-variance case, is easily explained by the more challenging requirement of converging to the optimal deterministic policy, rather than to the optimal stochastic hyperpolicy as for standard PGPE. Note that we have set the standard deviation equal to σ_P = ϵ/(6 L_P √d_Θ) = O(ϵ (1−γ)² d_Θ^{−1/2}), which, as expected, decreases with the desired accuracy ϵ.⁶ GPOMDP. We now consider the global convergence of GPOMDP, starting again with a generic policy variance σ_A (third column). The result is similar to that of PGPE, with three notable exceptions. First, an additional (1−γ)^{−1} factor appears in the sample complexity due to the variance bound of GPOMDP (Papini et al., 2022).
This suggests that GPOMDP struggles more than PGPE in long-horizon environments, as already observed by Zhao et al. (2011). Second, the dependence on the dimensionality d_Θ of the parameter space is replaced by the dimensionality d_A of the action space. This is expected and derives from the nature of the exploration, which is performed in the parameter space for PGPE and in the action space for GPOMDP. Finally, the smoothness of the deterministic policy (Assumption 4.4) is always needed. Adding also the smoothness of the MDP (Assumption 4.2), we can trade a d_A factor for a (1−γ)^{−1} one. Again, a careful ϵ-dependent choice of σ_A allows us to achieve global convergence on the deterministic objective J_D. In the last column, we can notice that the convergence rates display the same dependence on ϵ as in PGPE. [Footnote 6: These results should be interpreted as a demonstration that global convergence to deterministic policies is possible, rather than as a practical recipe to set the value of σ_P. We do hope that our theory can guide the design of practical solutions in future works.] [Table 1 caption: Sample complexity NK = Õ(·) of GPOMDP and PGPE to converge to a deterministic optimal policy, retaining only the dependencies on ϵ, (1−γ)^{−1}, σ_A, σ_P, d_Θ, d_A, and α. Task-dependent constants L_P and L_A are O((1−γ)^{−2}); see Table 2 in Appendix A.] However, the dependence on the effective horizon (1−γ)^{−1} is worse. In this case, the additional smoothness assumption improves the dependence on d_A and (1−γ)^{−1}. 7. About the Weak Gradient Domination So far, we have assumed WGD for the AB objective J_A and the PB objective J_P (Assumption 6.1). In this section, we discuss several scenarios in which such an assumption holds. 7.1.
Inherited Weak Gradient Domination We start by discussing the case in which the deterministic policy objective J_D already enjoys the (W)GD property. Assumption 7.1 (Weak gradient domination for J_D). There exist α_D > 0 and β_D ≥ 0 such that for every θ ∈ Θ it holds that J*_D − J_D(θ) ≤ α_D ‖∇_θ J_D(θ)‖_2 + β_D. Although the notion of WGD has mostly been applied to stochastic policies in the literature (Liu et al., 2020; Yuan et al., 2022), there is no reason why it should not be plausible for deterministic policies. Bhandari & Russo (2024) provide sufficient conditions for the performance function not to have any local optima, which is a stronger condition, without discriminating between deterministic and stochastic policies (cf. their Remark 1). Moreover, one of their examples is linear-quadratic regulators with deterministic linear policies. We show that, under Lipschitzianity and smoothness of the MDP and of the deterministic policy (Assumptions 4.1–4.4), this is sufficient to enforce the WGD property for both the PB objective J_P and the AB objective J_A. Let us start with J_P. Theorem 7.1 (Inherited weak gradient domination for J_P). Under Assumptions 7.1, 4.1, 4.3, 4.2, and 4.4, for every θ ∈ Θ: J*_P − J_P(θ) ≤ α_D ‖∇_θ J_P(θ)‖_2 + β_D + (α_D L_2 + L_P) σ_P √d_Θ, where L_2 = O((1−γ)^{−3}) (full expression in Lemma E.2). The result shows that the WGD property of J_D entails that of J_P with the same α_D coefficient, but with a different β = β_D + (α_D L_2 + L_P) σ_P √d_Θ that accounts for the gap between the two objectives encoded in σ_P. Note that even if J_D enjoys a (strong) GD (i.e., β_D = 0), in general, J_P inherits a WGD property.
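The WGD condition of Assumption 7.1 can be made concrete with a numerical probe on a toy deterministic objective: for J_D(θ) = −‖θ‖² restricted to the box ‖θ‖_∞ ≤ 1, we have J*_D − J_D(θ) = ‖θ‖² and ‖∇J_D(θ)‖_2 = 2‖θ‖, so the strong (β_D = 0) gradient-domination inequality holds with α_D = √d/2. This is an illustrative example, not one from the paper:

```python
import numpy as np

d = 2
alpha = np.sqrt(d) / 2                       # suffices on the box ||θ||_∞ ≤ 1
J = lambda th: -np.sum(th**2, axis=-1)
grad_norm = lambda th: 2.0 * np.linalg.norm(th, axis=-1)
J_star = 0.0                                 # attained at θ = 0

axes = [np.linspace(-1.0, 1.0, 41)] * d
grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, d)
gap = J_star - J(grid)
# Gradient domination with β = 0: J* − J(θ) ≤ α ||∇J(θ)||₂ on the whole grid.
assert np.all(gap <= alpha * grad_norm(grid) + 1e-12)
```

The inequality is tight at the box corners, where ‖θ‖ = √d; on an unbounded domain no constant α would suffice for this objective, which is why the domain restriction matters.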
In the setting of Theorem 7.1, convergence in the sense of J*_D − E[J_D(θ_K)] ≤ ϵ + β_D can be achieved with Õ(α_D⁶ ϵ^{−5} d_Θ² (1−γ)^{−11}) samples by carefully setting the hyperpolicy variance (see Theorem D.12 for details). An analogous result can be obtained for AB exploration. Theorem 7.2 (Inherited weak gradient domination for J_A). Under Assumptions 7.1, 4.1, 4.3, 4.2, and 4.4, for every θ ∈ Θ: J*_A − J_A(θ) ≤ α_D ‖∇_θ J_A(θ)‖_2 + β_D + (α_D ψ + L_A) σ_A √d_A, where ψ = O((1−γ)^{−4}) (full expression in the proof). The sample complexity, in this case, is Õ(α_D⁶ ϵ^{−5} d_A² (1−γ)^{−14}) (see Theorem D.13 for details). 7.2. Policy-induced Weak Gradient Domination When the objective function does not enjoy weak gradient domination in the space of deterministic policies, we can still have WGD with respect to stochastic policies if they satisfy a condition known as Fisher-non-degeneracy (Liu et al., 2020; Ding et al., 2022). As far as we know, WGD by Fisher-non-degeneracy is a peculiar property of AB exploration that has no equivalent in PB exploration. White-noise policies satisfying Assumption 4.5 are Fisher-non-degenerate under the following standard assumption (Liu et al., 2020): Assumption 7.2 (Explorability). There exists λ_E > 0 s.t. E_{π_θ}[∇_θ µ_θ(s) ∇_θ µ_θ(s)^⊤] ⪰ λ_E I for all θ ∈ Θ, where the expectation over states is induced by the stochastic policy. We can use this fact to prove WGD for white-noise policies: Theorem 7.3 (Policy-induced weak gradient domination).
Under Assumptions 4.5, 7.2, and D.1, we have: J*_A − J_A(θ) ≤ (C √d_A σ_A / λ_E) ‖∇_θ J_A(θ)‖_2 + √ϵ_bias/(1−γ), for some numerical constant C > 0; that is, Assumption 6.1 († = A) is satisfied with α = C √d_A σ_A / λ_E and β = √ϵ_bias/(1−γ). Here ϵ_bias is the compatible-critic error, which can be very small for rich policy classes (Ding et al., 2022). We can leverage this to prove the global convergence of GPOMDP as in Section 7.1, this time to J*_D − E[J_D(θ_K)] ≤ ϵ + √ϵ_bias/(1−γ). Tuning σ_A, we can achieve a sample complexity of Õ(ϵ^{−1} λ_E^{−4} d_A⁴ (1−γ)^{−10}) (see Theorem D.16 for details). This seems to violate the Ω(ϵ^{−2}) lower bound by Azar et al. (2013). However, the factor λ_E can depend on σ_A = O(ϵ) in highly non-trivial ways and, thus, can hide additional factors of ϵ. For this reason, the results granted by the Fisher-non-degeneracy of white-noise policies are not compared with the ones granted by inherited WGD from Section 7.1. Intuitively, λ_E encodes some difficulties of exploration that are absent in “nice” MDPs satisfying Assumption 7.1. See Appendix D.4 for further discussion and omitted proofs. 8. Numerical Validation In this section, we empirically validate some of the theoretical results presented in the paper. We conduct a study on the gap in performance between the deterministic objective J_D and the objectives of GPOMDP and PGPE (J_A and J_P, respectively) by varying the value of their exploration parameters (σ_A and σ_P, respectively). Details on the employed versions of PGPE and GPOMDP can be found in Appendix G. Additional experimental results can be found in Appendix H.
We run PGPE and GPOMDP for K = 2000 iterations with batch size N = 100 on three environments from the MuJoCo (Todorov et al., 2012) suite: Swimmer-v4 (T = 200), Hopper-v4 (T = 100), and HalfCheetah-v4 (T = 100). For all the environments, the deterministic policy is linear in the state and the noise is Gaussian. We consider σ²_† ∈ {0.01, 0.1, 1, 10, 100}. More details are in Appendix H.1. From Figure 1, we note that as the exploration parameter grows, the distance of J_P(θ_K) and J_A(θ_K) from J_D(θ_K) increases, coherently with Theorems 5.1 and 5.2. Among the tested values for σ_P and σ_A, some lead to the highest values of J_D(θ_K). Empirically, we note that PGPE delivers the best deterministic policy with σ²_P = 10 for Swimmer and with σ²_P = 1 for the other environments. GPOMDP performs best with σ²_A = 1 for Swimmer and with σ²_A = 10 in the other cases. These outcomes agree with the theoretical results in showing that there exists an optimal value for σ_†. We can also appreciate the trade-off between GPOMDP and PGPE w.r.t. the parameter dimensionality d_Θ and the horizon T by comparing the best values of J_D found by the two algorithms in each environment. GPOMDP is better than PGPE in Hopper and HalfCheetah. This can be explained by the fact that such environments are characterized by higher values of d_Θ. Instead, in Swimmer, PGPE performs better than GPOMDP. This can be explained by the higher value of T and the lower value of d_Θ. [Figure 1, panels (a)–(d): J_†(θ_K) versus σ²_† for (a) PGPE on HalfCheetah, (b) GPOMDP on HalfCheetah, (c) PGPE on Hopper, and (d) GPOMDP on Hopper, comparing J_D with J_P or J_A.]
[Figure 1 panels: (e) PGPE on Swimmer. (f) GPOMDP on Swimmer.] Figure 1. Variance study on MuJoCo (5 runs, mean ± 95% C.I.). 9." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02384v1.json b/abs_9K/test_abstract_short_2405.02384v1.json new file mode 100644 index 0000000000000000000000000000000000000000..edb5fb4e9f0804b97ce76100678592ce6b958538 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02384v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.02384v1", + "title": "CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding", + "abstract": "Predictive Coding (PC) is a theoretical framework in cognitive science\nsuggesting that the human brain processes cognition through spatiotemporal\nprediction of the visual world. Existing studies have developed spatiotemporal\nprediction neural networks based on the PC theory, emulating its two core\nmechanisms: Correcting predictions from residuals and hierarchical learning.\nHowever, these models do not show the enhancement of prediction skills on\nreal-world forecasting tasks and ignore the Precision Weighting mechanism of PC\ntheory. The precision weighting mechanism posits that the brain allocates more\nattention to signals with lower precision, contributing to the cognitive\nability of human brains. This work introduces the Cognitive Diffusion\nProbabilistic Models (CogDPM), which demonstrate the connection between\ndiffusion probabilistic models and PC theory. CogDPM features a precision\nestimation method based on the hierarchical sampling capabilities of diffusion\nmodels and weight the guidance with precision weights estimated by the inherent\nproperty of diffusion models. We experimentally show that the precision weights\neffectively estimate the data predictability. 
We apply CogDPM to real-world\nprediction tasks using the United Kingdom precipitation and ERA surface wind\ndatasets. Our results demonstrate that CogDPM outperforms both existing\ndomain-specific operational models and general deep prediction models by\nproviding more proficient forecasting.", + "authors": "Kaiyuan Chen, Xingzhuo Guo, Yu Zhang, Jianmin Wang, Mingsheng Long", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Predictive Coding (PC) is a theoretical framework in cognitive science\nsuggesting that the human brain processes cognition through spatiotemporal\nprediction of the visual world. Existing studies have developed spatiotemporal\nprediction neural networks based on the PC theory, emulating its two core\nmechanisms: Correcting predictions from residuals and hierarchical learning.\nHowever, these models do not show the enhancement of prediction skills on\nreal-world forecasting tasks and ignore the Precision Weighting mechanism of PC\ntheory. The precision weighting mechanism posits that the brain allocates more\nattention to signals with lower precision, contributing to the cognitive\nability of human brains. This work introduces the Cognitive Diffusion\nProbabilistic Models (CogDPM), which demonstrate the connection between\ndiffusion probabilistic models and PC theory. CogDPM features a precision\nestimation method based on the hierarchical sampling capabilities of diffusion\nmodels and weight the guidance with precision weights estimated by the inherent\nproperty of diffusion models. We experimentally show that the precision weights\neffectively estimate the data predictability. We apply CogDPM to real-world\nprediction tasks using the United Kingdom precipitation and ERA surface wind\ndatasets. 
Our results demonstrate that CogDPM outperforms both existing\ndomain-specific operational models and general deep prediction models by\nproviding more proficient forecasting.", + "main_content": "Introduction Predictive Coding (PC) is a theoretical construct in cognitive science, positing that the human brain cognizes the visual world through predictive mechanisms (Spratling, 2017; Hohwy, 2020). [*Equal contribution. 1School of Software, BNRist, Tsinghua University. Kaiyuan Chen. Correspondence to: Mingsheng Long. Preliminary work.] The PC theory elucidates that the brain hierarchically amends its perception of the environment by anticipating changes in the visual world. Researchers have developed computational models based on the PC theory to simulate the brain's predictive mechanisms (Keller & Mrsic-Flogel, 2018). Neuroscientists employ these models to empirically validate the efficacy of the PC theory and to find new characteristics. Precision weighting, a pivotal feature of the PC theory, suggests that the brain assigns more attention to signals with lower precision by using precision as a filter in weighting prediction errors. With the advancement of deep learning, predictive learning has emerged as one of the principal learning methods (Rane et al., 2020; Bi et al., 2023). Neural networks are now capable of making effective predictions on video data (Shi et al., 2015; Wang et al., 2017; Ho et al., 2022c). Deep video prediction models have rich applications, such as weather forecasting (Ravuri et al., 2021; Zhang et al., 2023) and autonomous driving simulation (Wang et al., 2018; Wen et al., 2023). Researchers design cognitively inspired video prediction models utilizing the PC theory. PredNet (Lotter et al., 2020), which employs multi-layer ConvLSTM (Shi et al., 2015) networks to predict the next frame in a video sequence, is responsible for predicting the residual between the outcomes of a network layer and the ground truth values. 
However, the predictive capability of PredNet does not show significant improvement over non-hierarchical video prediction models and has not been validated on real-world video prediction tasks. We posit that the hierarchical modeling mechanism in PredNet is not effectively implemented. PredNet directly targets low signal-to-noise-ratio residuals as learning objectives, which complicates the learning process and fails to extract fundamentally distinct features between layers. Additionally, PredNet lacks the capability to model precision, leading to uniform weighting when learning residuals across different regions. This results in redundant noise information becoming a supervisory signal and hinders the model's ability to learn from important information. In this study, we propose PC-inspired Cognitive Diffusion Probabilistic Models (CogDPM), which align the main features of PC theory with Diffusion Probabilistic Models (DPMs), a specialized branch of deep generative models. [arXiv:2405.02384v1 [cs.NE] 3 May 2024] The CogDPM framework innovatively abstracts the multi-step inference process characteristic of Diffusion Probabilistic Models into a hierarchically structured model, where each layer is responsible for processing signals at distinct spatiotemporal scales. This hierarchical approach allows for a progressive enhancement in the model's interpretation of sensory inputs, actively working to reduce prediction errors through iterative refinement. A key feature of the CogDPM framework is its ability to estimate spatiotemporal precision weights based on the variance of states in each hierarchical layer. This methodology plays a crucial role in optimizing the overall precision of predictions, and represents a novel advancement in predictability modeling. 
We verify the effectiveness of the precision weights as well as the prediction skills of CogDPM on real-world spatiotemporal forecasting tasks. To verify the precision weights, we use synthetic motion datasets of both rigid bodies and fluids. Results show that the precision weights exhibit higher salience in hard-to-predict regions. To validate the prediction capabilities of CogDPM, we apply it to real-world tasks including precipitation nowcasting (Shi et al., 2015; Ravuri et al., 2021) and high wind forecasting (Barbounis et al., 2006; Soman et al., 2010). We evaluate CogDPM through case studies focusing on extreme weather events and scientific numerical metrics. CogDPM outperforms the operational domain-specific models FourCastNet (Pathak et al., 2022) and DGMR (Ravuri et al., 2021) as well as general deep predictive models. We demonstrate that CogDPM has strong extreme-event prediction capabilities and verify the effectiveness of CogDPM's precision estimates, which provide useful information for weather-driven decision-making. In summary, we identify the following advantages of CogDPM: • CogDPM aligns diffusion probabilistic models with Predictive Coding theory, inherently integrating hierarchical prediction error minimization with precision-weighting mechanics. • CogDPM delivers skillful and distinct prediction results, particularly in scientific spatiotemporal forecasting, demonstrating a marked improvement in probabilistic forecasting metrics. • CogDPM presents a novel method for predictability estimation, providing an index of confidence for probabilistic forecasting. 2. Related Work Predictive Learning. Predictive learning is a subfield of machine learning that utilizes historical data to make predictions about future events or outcomes. 
As an important aspect of human cognition that plays a crucial role in our ability to perceive and understand the world, spatiotemporal predictive learning has triggered substantial research efforts, such as ConvLSTM (Shi et al., 2015), PredRNN (Wang et al., 2017), and ModeRNN (Yao et al., 2023). Recently, diffusion models (Ho et al., 2020) have been successfully applied to video generation (Ho et al., 2022a) to capture spatiotemporal correlations, showing promise as a spatiotemporal predictive learning framework. Predictive Coding. In neuroscience, predictive coding is a theory of brain function about how brains create predictions about sensory input. Rao & Ballard translate the idea of predictive coding into a computational model based on extra-classical receptive-field effects, and show the brain mechanism of trying to efficiently encode sensory data using prediction. Further research in neuroscience (Friston, 2009; Clark, 2013; Emberson et al., 2015; Spratling, 2017) presents different interpretations of predictive coding theory. Predictive Coding Neural Networks. The development of deep learning has given rise to a variety of deep predictive networks with cognition-inspired mechanisms. PredNet (Lotter et al., 2016) implements hierarchical predictive error with ConvLSTM for spatiotemporal prediction using principles of predictive coding. CPC (Oord et al., 2018; Henaff, 2020) and MemDPC (Han et al., 2020) incorporate contrastive learning in the latent space via a predictive-coding-based probabilistic loss. PCN (Wen et al., 2018; Han et al., 2018) proposes a bi-directional and recurrent network to learn hierarchical image features for recognition. Such models introduce the motivation of predictive coding in their task-specific manners. However, these works ignore precision weighting, a pivotal mechanism in PC theory. Besides, these works have not explored a proper PC-based framework for diffusion models. 3. 
Method Spatiotemporal forecasting involves extracting patterns from a sequence of vector fields $c_{-N_0:0}$ and providing the future evolution $x_{1:N}$. We give a brief introduction to the framework of predictive coding and propose our CogDPM for implementing Predictive Coding in spatiotemporal forecasting. To avoid confusion, we use the superscript $N$ to index moments in time, and the subscript $t$ to denote the ordinal number of the inference steps in the diffusion model. 3.1. CogDPM via Predictive Coding Figure 1a presents a conceptual demonstration of a predictive coding (PC) system. Figure 1. a, A general predictive coding framework. The system recognizes the sensation fields with hierarchical error units and expectation units and generates the predictions and precision maps during the process. b, Cognitive Diffusion Probabilistic Models (CogDPM) framework, providing predictions and precision weights with a multi-step denoising process. c, Updates of latent states with precision-weighted predictive error. Based on PC theory, we propose Cognitive Diffusion Probabilistic Models (CogDPM) for spatiotemporal forecasting based on multi-step denoising (Ho et al., 2020), which realizes the core mechanisms of hierarchical inference and prediction error minimization. Fig. 1b shows the framework of CogDPM, which takes past observations as input to forecast the evolution of future fields and estimate the corresponding prediction error. Hierarchical Inference. Predictive coding theory describes that the brain makes spatiotemporal predictions of the sensations through hierarchical inference with multilayer organized estimators (Walsh et al., 2020). While different layers of the PC system are responsible for processing features at different spatial scales, the hierarchical system gradually performs prediction error minimization and converges on a final consistent prediction (Wiese & Metzinger, 2017). 
CogDPM aligns the multi-step inference of a DPM with the hierarchical inference of the PC system. In the inference phase of CogDPM, the forecast is gradually generated in the hidden-state evolution process from $x_T, x_{T-1}, \ldots$ to $x_0$, where $x_T$ is a Gaussian prior and $x_0$ indicates the generated target distribution of the forecast. CogDPM inherits the property of DPMs that different inference steps have varying spatial and temporal scales of feature expression (Zheng et al., 2022). In the initial stages of inference, the model yields holistic and vague results. As it approaches the final steps, the model shifts its focus towards supplementing detailed information, which is also aligned with the hierarchical property of the PC system. In each internal inference step, the guidance of the diffusion model plays a similar role to the error units of the PC system, taking the observation sequence as input and strengthening the correlation between generated results and observations (Dhariwal & Nichol, 2021). Prediction Error Minimization. Each layer in the PC system outputs two key components: predictions for future sensations and estimations of prediction errors (van Elk, 2021). This process is enabled by interactions between two functionally distinct neural sub-components in the layer: expectation units and error units (Walsh et al., 2020). The expectation unit updates expected sensory states from the previous level to the error units, without directly receiving sensory-driven signals as input. The error unit receives and analyzes the discrepancies between perceptual and expected sensory states to compute the error, which is then fed back to the expectation unit in the next layer. The goal of the information transfer between multiple layers is to minimize prediction errors, ultimately resulting in more accurate environmental perceptions. 
CogDPM couples a generative DPM $G_\theta$ with a perceptual DPM $P_\theta$, where $\theta$ represents their shared parameters. The previous state $x_t$ is the shared input of both models, while the observations $c$ are attached only to the perceptual DPM. With the previous state as observation, the perceptual DPM processes sensory stimuli and thus aligns with the bottom-up process in the PC system. The generative DPM, by comparison, performs the top-down prediction based on conceptual knowledge. Fig. 1c provides a detailed schematic diagram of a single step in CogDPM. Given the outputs $G_\theta(x_t)$ and $P_\theta(x_t, c)$ separately for each step $t$, the guidance for predictive error minimization can be expressed by: $\mathrm{Guidance}[x_t] = P_\theta(x_t, c) - G_\theta(x_t)$, (1) i.e., the difference between sensations and predictions. 3.2. Precision Weighting in CogDPM Precision weighting stands as the pivotal mechanism for filtering information transmitted between adjacent layers. It posits that the brain expends more effort in comprehending imprecise information, recognizing that sensory input often contains a substantial proportion of redundant information, which does not necessitate repetitive processing (Hohwy, 2020). During each error minimization phase of the predictive coding (PC) approach, the error unit generates precision maps. These maps selectively filter the signal transmitted to the subsequent layer, assigning greater weight to signals characterized by higher imprecision. Following precision weighting in PC theory, our goal is to design a model of imprecision for each denoising step of CogDPM. We therefore delve into the progressive denoising mechanism in the backward process of DPMs. In each denoising step for $x_t$, the model predicts the noise towards the corresponding ground truth $x_0$ (Song et al., 2020). 
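Equation (1) reduces to a scaled difference between the conditional and unconditional noise predictions once both posterior means are expanded; a minimal numpy sketch, where the array shapes and function names are illustrative assumptions:

```python
import numpy as np

def posterior_mean(x_t, eps_pred, alpha_t, alpha_bar_t):
    """One-step denoised mean, the form shared by P_theta and G_theta (Eqs. 5-6)."""
    return (x_t - (1 - alpha_t) / np.sqrt(1 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_t)

def guidance(x_t, eps_cond, eps_uncond, alpha_t, alpha_bar_t):
    """Eq. (1): Guidance[x_t] = P_theta(x_t, c) - G_theta(x_t)."""
    p = posterior_mean(x_t, eps_cond, alpha_t, alpha_bar_t)    # perceptual, with context c
    g = posterior_mean(x_t, eps_uncond, alpha_t, alpha_bar_t)  # generative, no context
    return p - g
```

Note that `x_t` cancels in the subtraction, so the guidance is just the difference of the two noise predictions scaled by $(1-\alpha_t)/(\sqrt{\alpha_t}\sqrt{1-\bar{\alpha}_t})$.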
The model usually shifts $x_t$ into $x_{t-1}$ within a tiny step and recursively performs the process to get $x_0$, but it can also directly obtain $x_0$ within a single larger step. If the direct predictions from step $t$ and from step $t+1$ with the generative DPM $G_\theta$ differ significantly for a certain spatiotemporal region, the single step produces a signal inconsistent with previous steps, indicating the imprecision of the generative model in that region of the current state. Hence, we use the fluctuation field of the direct predictions $x_0$ from $\{x_t, \ldots, x_{t+k-1}\}$ to estimate such imprecision of state $x_t$ for each coordinate, formulated by Eq. (2): $U[x_t] = \mathrm{Var}\left[\mathbb{E}_{G_\theta}[x_0 \mid x_t], \ldots, \mathbb{E}_{G_\theta}[x_0 \mid x_{t+k-1}]\right]$, (2) where $\mathrm{Var}$ stands for the variance field along the denoising steps, and $k$ is a hyperparameter for the window length. In this way, CogDPM provides a model of the inverse precision field for multiscale spatiotemporal coordinates in the inference steps. Since only past observations are given in forecasting tasks, this precision is a good substitute for the actual precision to weight the minimization. We implement precision weighting in the CogDPM framework as formulated in Eq. (3), $x_{t-1} = G_\theta(x_t) + f(U[x_t]) \cdot \mathrm{Guidance}[x_t]$, (3) where $f$ is a parameter-free normalization function shown in Eq. (8). Precision weighting helps to control the balance between diversity and alignment with the observations, with larger guidance increasing the alignment and decreasing the diversity or the quality of generations. Through this precision weighting mechanism, CogDPM strategically allocates greater guidance intensity to regions with lower predictability, thereby enhancing local precision in a focused manner. Computational details. The framework of a standard DPM starts with $x_0$ sampled from the data distribution, and latent states $\{x_1, x_2, \ldots, x_T\}$ following the forward process along a Markov chain as in Eq. (4). 
$q(x_{t+1} \mid x_t) = \mathcal{N}\left(\sqrt{\alpha_t}\, x_t, \sqrt{1-\alpha_t}\, I\right)$, (4) where $\{\alpha_t\}_{t=1,2,\ldots,T}$ are constant parameters. Each latent state is a corrupted estimation of the future inputs with the three-dimensional shape of $N \times H \times W$. In each step of the backward process, we update the latent state with the denoising network $\epsilon_\theta$. We denote the sensation input as $c$, which has a shape of $N_0 \times H \times W$. The perceptual model $P_\theta$ and the generative model $G_\theta$ can be performed separately as Eqs. (5) and (6): $P_\theta(x_t, c) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, c)\right)$, (5) $G_\theta(x_t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, \varnothing)\right)$, (6) where $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$ and $\epsilon_\theta$ is the denoising network of the DPM. CogDPM provides inverse precision estimation with Eq. (2), and $\mathbb{E}_{G_\theta}[x_0 \mid x_t]$ can be computed as Eq. (7): $\mathbb{E}_{G_\theta}[x_0 \mid x_t] = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, \varnothing)\right)$. (7) For implementation, we push $G_\theta(x_t)$ into the estimation queue with a maximal queue length of $k$, and estimate the precision with Eq. (2). Thus, we can merge $G_\theta(x_t)$ and $P_\theta(x_t, c)$ with respect to the control of precision with Eq. (3). Considering numerical stability, we normalize the inverse precision field $U(x_t)$ and clip the value to a fixed range. The formulation of $f$ is the following: $f(w) = \lambda \cdot \mathrm{clip}\left(\frac{w - \bar{w}}{\sigma(w)}, 0, 1\right) + 1$, (8) where $\bar{w}$ and $\sigma(w)$ are the mean and standard deviation of $w$, and $\lambda$ is a constant that controls the guidance strength. Finally, we merge $G_\theta(x_t)$ and $P_\theta(x_t, c)$ with the guidance weighted by inverse precision as in Eq. (3). 
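Equations (2), (7), and (8) translate directly into numpy; in the sketch below the zero-variance fallback is an added safeguard of mine, not part of the paper's formulation, and all names are illustrative:

```python
import numpy as np

def direct_x0(x_t, eps_pred, alpha_bar_t):
    """Eq. (7): single-step estimate of x_0 from the current latent x_t."""
    return (x_t - np.sqrt(1 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

def inverse_precision(x0_queue):
    """Eq. (2): per-coordinate variance over the last k direct x_0 estimates."""
    return np.var(np.stack(list(x0_queue), axis=0), axis=0)

def precision_weight(w, lam=1.0):
    """Eq. (8): normalize the inverse-precision field, clip to [0, 1],
    and shift so the weight is at least 1 (pure guidance) everywhere."""
    std = np.std(w)
    if std == 0:  # degenerate field: fall back to a uniform weight (added safeguard)
        return np.ones_like(w)
    return lam * np.clip((w - np.mean(w)) / std, 0.0, 1.0) + 1.0
```

By construction the weight lies in $[1, 1+\lambda]$, so every coordinate receives at least the plain guidance and low-precision coordinates receive up to $\lambda$ times more.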
The pseudo-code of the inference process of the CogDPM framework is shown in Algorithm 1. Objective function. CogDPM follows the training schema of diffusion probabilistic models (Ho et al., 2020) that predicts the noise from the corrupted inputs. We denote the loss term as $L(\theta)$. The denoising U-Net $\epsilon_\theta$ has parameters $\theta$, and takes the corrupted future observations $x_s$, contexts $c$, and the scalar diffusion step $s$ as input. We adopt the L1 loss to minimize the error between the injected noise and the prediction of the denoising U-Nets: $L(\theta) = \mathbb{E}_{t, x_0, \epsilon, c}\left[\left\|\epsilon - \epsilon_\theta\left(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, c, t\right)\right\|_1\right]$ (9) To jointly train the conditional and unconditional models, $c$ is replaced by $Z \sim \mathcal{N}(0, I)$ with 10% probability.
Algorithm 1 Inference Process of the CogDPM framework
Input: context input $c$, denoising model $\epsilon_\theta$, maximal queue length $L$
$x_T \sim \mathcal{N}(0_x, I_x)$
Define a free estimation queue $Q_{free}$ (initially empty)
for $t = T$ to $1$ do
  $\epsilon_c \sim \mathcal{N}(0_c, I_c)$
  $\epsilon^{cond}_t = \epsilon_\theta(\hat{x}_t, c)$ {Network output with condition $c$.}
  $\epsilon^{free}_t = \epsilon_\theta(\hat{x}_t, \epsilon_c)$ {Network output without condition.}
  $P_\theta(x_t, c) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon^{cond}_t\right)$
  $G_\theta(x_t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon^{free}_t\right)$
  $\hat{x}_{t\to 0} = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon^{free}_t\right)$ {Estimate $x_0$ from $x_t$.}
  Push $\hat{x}_{t\to 0}$ into $Q_{free}$
  if the length of $Q_{free}$ exceeds $L$ then drop the last term from $Q_{free}$
  Get the inverse precision estimation $w = f(\mathrm{Var}(Q_{free}))$
  $x_{t-1} = G_\theta(x_t) + w \cdot (P_\theta(x_t, c) - G_\theta(x_t))$ {Prediction error minimization with precision weighting.}
end for
Output: $x_0$
4. 
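Algorithm 1 can be sketched end-to-end in a few lines; the `eps_model` callable below is a stand-in for the trained denoising U-Net (unconditional calls pass a Gaussian context, mirroring the 10% replacement used in training), and all names are illustrative assumptions:

```python
import numpy as np
from collections import deque

def cogdpm_sample(eps_model, c, shape, alphas, lam=1.0, max_queue=5, seed=0):
    """Sketch of Algorithm 1: precision-weighted predictive-error minimization."""
    rng = np.random.default_rng(seed)
    alpha_bars = np.cumprod(alphas)
    x = rng.normal(size=shape)                   # x_T ~ N(0, I)
    queue = deque(maxlen=max_queue)              # free-estimation queue Q_free
    for t in range(len(alphas) - 1, -1, -1):
        a, ab = alphas[t], alpha_bars[t]
        eps_cond = eps_model(x, c, t)            # network output with condition c
        eps_free = eps_model(x, rng.normal(size=np.shape(c)), t)  # unconditional
        p = (x - (1 - a) / np.sqrt(1 - ab) * eps_cond) / np.sqrt(a)   # P_theta
        g = (x - (1 - a) / np.sqrt(1 - ab) * eps_free) / np.sqrt(a)   # G_theta
        queue.append((x - np.sqrt(1 - ab) * eps_free) / np.sqrt(ab))  # direct x_0
        u = np.var(np.stack(list(queue)), axis=0)  # inverse precision, Eq. (2)
        std = np.std(u)
        if std > 0:                               # Eq. (8) normalization
            w = lam * np.clip((u - np.mean(u)) / std, 0.0, 1.0) + 1.0
        else:                                     # degenerate field (added safeguard)
            w = np.ones_like(u)
        x = g + w * (p - g)                       # Eq. (3)
    return x
```

With a deterministic toy denoiser this runs as-is; swapping in a trained conditional ε-network would recover the full method.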
Experiments We demonstrate that by incorporating the novel design inspired by the cognitive predictive process, CogDPM can deliver more skillful and improved results on tasks of scientific spatiotemporal field prediction. 4.1. Synthetic Data Experiments In this section, we compare the predictive performance of CogDPM with other mainstream deep predictive networks and investigate the interpretability of precision weighting within the CogDPM framework in the context of spatiotemporal prediction. We expect a high correlation between the precision estimation and the predictability of CogDPM. The inverse precision estimator should allocate more attention to regions with higher prediction difficulty. Benchmarks. We conduct experiments on the MovingMNIST dataset (Wu et al., 2021), which simulates the motion of rigid bodies, and the Turbulence flow dataset, which models fluid dynamics. The MovingMNIST dataset is generated with the same method as (Wu et al., 2021). We create sequences with 20 frames, and each frame contains three handwritten digits. The motion of the digits consists of translation, reflection, and rotation. Models predict the next 16 frames from 4 consecutive context frames. The turbulent flow dataset is proposed by (Rui et al., 2020). We follow the same dataset parameters as Rui et al. and generate sequences with 15 frames and 64 × 64 grids in each frame. Four frames are taken to predict the next 11 frames. We have selected a diverse array of deep spatiotemporal forecasting models as baselines for our study. These include the Transformer-based spatiotemporal forecasting model FourCastNet (Pathak et al., 2022), RNN-type networks such as MotionRNN (Wu et al., 2021) and PredRNN-v2 (Wang et al., 2022), the physics-inspired predictive model PhyDNet (Guen & Thome, 2020), and a predictive DPM model that employs naive classifier-free guidance (Ho & Salimans, 2021) and utilizes the same network architecture as CogDPM. 
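A MovingMNIST-style sequence can be synthesized in a few lines; the sketch below bounces a bright square instead of real digits, a deliberate simplification of the Wu et al. protocol, purely for illustration:

```python
import numpy as np

def moving_square_sequence(n_frames=20, size=64, sprite=8, seed=0):
    """Generate one bouncing-sprite video, shape (n_frames, size, size)."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, size - sprite, size=2).astype(float)  # top-left corner
    vel = rng.uniform(-3, 3, size=2)                            # pixels per frame
    frames = np.zeros((n_frames, size, size), dtype=np.float32)
    for k in range(n_frames):
        x, y = int(pos[0]), int(pos[1])
        frames[k, y:y + sprite, x:x + sprite] = 1.0
        pos += vel
        for d in range(2):                      # reflect the sprite off the borders
            if pos[d] < 0 or pos[d] > size - sprite:
                vel[d] = -vel[d]
                pos[d] = np.clip(pos[d], 0, size - sprite)
    return frames
```

The first 4 frames of such a sequence would play the role of the context window and the remaining 16 the prediction target, matching the split described above.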
For the evaluation metrics, we have chosen the neighborhood-based CRPS (Continuous Ranked Probability Score), CSI (Critical Success Index), and FSS (Fractional Skill Score), which are commonly used in scientific forecasting tasks. The CRPS metric emphasizes the ensemble forecasting capabilities of the model, with lower values indicating better predictive performance. The CSI and FSS metrics, on the other hand, focus on assessing the accuracy of the model's predictions in peak regions, with higher values denoting stronger predictive capabilities. The implementation details of these metrics are provided in Appendix D, and we continue to employ them in the subsequent experiments on real-world datasets. Numerical Results. Table 1 presents the numerical evaluation results for the two datasets. Here, w denotes the window size employed in the neighborhood-based assessment method, while avg and max represent the average and maximum values obtained from this method, respectively. For each dataset, the four columns are CRPS ↓ (w8, avg), CRPS ↓ (w8, max), CSI ↑ (w5), and FSS ↑ (w5):
Methods | MovingMNIST | Turbulence
FourCastNet | 0.0619, 0.2288, 0.1915, 0.3261 | 0.0098, 0.0119, 0.3761, 0.6558
MotionRNN | 0.0377, 0.1232, 0.4859, 0.6758 | 0.0037, 0.0046, 0.7235, 0.9354
PhyDNet | 0.0325, 0.0983, 0.6161, 0.7969 | 0.0079, 0.009, 0.5456, 0.8254
PredRNN-v2 | 0.027, 0.0774, 0.688, 0.8471 | 0.0033, 0.0042, 0.7529, 0.9507
DPM | 0.0323, 0.082, 0.6959, 0.822 | 0.0023, 0.0096, 0.6725, 0.9668
CogDPM (ours) | 0.027, 0.0697, 0.7365, 0.8588 | 0.0023, 0.0034, 0.7962, 0.9722
Table 1. 
Numerical Evaluation of Prediction Skills on MovingMNIST and Turbulence Datasets. [Figure 2 rows: CogDPM predictions; inverse precision; prediction/temporal residuals; MC sampling, for MovingMNIST (left) and Turbulence (right).] Figure 2. Predictions and inverse precision of CogDPM on the rigid-body MovingMNIST dataset (left) and the Turbulence flow dataset (right). The CogDPM model demonstrates consistent improvements over the baseline models in terms of CRPS, which measures the average ensemble forecasting capability, as well as the CSI and FSS indicators, which assess the accuracy of the model's predictions in peak regions. Additionally, when compared to the DPM model based on naive classifier-free guidance, CogDPM exhibits superior performance. This underscores the beneficial impact of the precision weighting mechanism on the model's predictive efficacy. Interpretability of precision weights. Figure 2 presents the outcomes of the CogDPM model. The initial two rows delineate the ground-truth images alongside the corresponding prediction results generated by CogDPM. The third row illustrates the prediction residuals, representing the discrepancies between the actual and predicted data depicted in the preceding rows. The fourth row features images that overlay the inverse precision map, highlighting the top 20% of values with a black contour line, against a backdrop of the residual map. 
The fifth row shows the precision map estimated by Monte Carlo sampling, which estimates the prediction confidence from the variation among multiple independent predictions with different noise priors (Zhang, 2021). CogDPM provides reasonable predictions on both datasets. In the prediction of rigid-body motion, the estimated inverse precision effectively encompasses the prediction residuals, which are primarily located at the edges of objects. The edges of objects present a greater challenge for prediction compared to blank areas or the interior of objects. This outcome aligns with our expectations for the estimation of the precision map. Precision estimated with MC sampling works similarly but produces more false-positive regions in frames 12 and 14. In the prediction of fluid motion, regions with large temporal residuals exhibit higher accelerations, indicating increased predictive difficulty. The estimated inverse precision indeed covers the temporal residuals well, meeting our expectations. We observe that in both fluid and rigid-body motion prediction tasks, the precision weights of CogDPM exhibit varying styles, yet consistently depict the model's confidence in the current case. 
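The Monte Carlo baseline discussed here estimates confidence from the spread of independent samples drawn with different noise priors; a minimal sketch, where the `sample_fn` stub stands in for a full reverse diffusion pass and all names are illustrative:

```python
import numpy as np

def mc_uncertainty(sample_fn, context, n_samples=8, seed=0):
    """Per-pixel variance across independent predictions, each started from a
    different Gaussian prior (the MC-sampling uncertainty baseline)."""
    rng = np.random.default_rng(seed)
    preds = [sample_fn(context, rng) for _ in range(n_samples)]
    return np.var(np.stack(preds, axis=0), axis=0)
```

This makes the over-confidence failure mode concrete: when the sampler is (nearly) deterministic given the context, the variance collapses to zero everywhere and carries no signal, whereas CogDPM's inverse precision is derived from the denoising trajectory itself.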
[Figure 3 panels: a, reanalysis versus CogDPM and FourCastNet wind-speed predictions, unpredictability estimation (inverse precision), and MC uncertainty at T = 0h, 12h, 24h, 36h, 48h, binned from <4 m/s to >14 m/s; b, numerical score curves over the prediction interval.] Figure 3. Experiments on high wind forecasting. a, A case study of the ERA5 wind forecast from 2017-03-04 18:00. High wind and tornadoes struck the midwestern USA at 2017-03-06 18:00 (T = 48h) (Twin Cities, 2017). CogDPM provides alarming forecasts, covering the states with the most severe weather reports, Iowa and Missouri. CogDPM's precision indicates the credibility of the predictions, helping forecasters to identify missing and false-positive regions. b, Numerical scores on the ERA5 wind dataset from 2017-01-01 to 2019-12-31. We report CSI with 12 m/s (first) and 16 m/s (second) thresholds, RMSE (third), and CRPS across four ensembles (fourth). In comparison, the MC sampling method almost fails in this case due to the over-confidence of the prediction result. Differences among multiple predictions carry no significant signal, only random noise. CogDPM, by contrast, is not affected, because its precision describes the continuous strengthening of the model's confidence during the hierarchical inference. 4.2. Surface Wind Forecasting Experiments Benchmarks. 
We first evaluate our model by applying it to the task of surface wind forecasting, using the ERA5 reanalysis dataset (Hersbach et al., 2023). Accurate wind field forecasting is crucial for various applications in the energy and weather domains. Ensemble forecasting is a key technique for providing forecasters with more useful information: it produces multiple predictions together with the confidence of those predictions. We show that CogDPM not only provides better ensemble forecast results, but also estimates prediction confidence with its precision weights. We choose real-world operational metrics for evaluation. In the meteorology domain, forecasters focus on evaluating the risk of high wind and confirming the time to issue extreme weather warnings. For this purpose, we use the Critical Success Index (CSI) to measure the consistency between heavy-wind regions in forecasts and ground truths. In the energy domain, accurate wind field forecasting supports the prediction of wind power, which is essential for controlling fluctuations in clean energy generation (Marug\u00e1n et al., 2018). Absolute wind speed is the dominant factor affecting the power production of a wind turbine (Port\u00e9-Agel et al., 2013); thus, we consider pixel-wise Root Mean Square Error (RMSE) and the Continuous Ranked Probability Score (CRPS) on wind speed for the evaluation of this scenario (Barbounis et al., 2006).

Figure 4. Experiments on precipitation nowcasting. Case study on an extreme precipitation event starting on 2019-07-24 at 03:15 in the UK timezone; CogDPM successfully predicts the movement and intensity variation of the squall front, while DGMR produces results with early dissipation.
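The CSI used above is the standard exceedance-based score, hits / (hits + misses + false alarms), at a chosen wind-speed threshold. A minimal sketch (our own helper, not the paper's code):

```python
def csi(pred, obs, threshold):
    """Critical Success Index for exceedance of a threshold
    (e.g., 12 m/s wind speed): hits / (hits + misses + false alarms)."""
    hits = misses = false_alarms = 0
    for p, o in zip(pred, obs):
        pred_event, obs_event = p >= threshold, o >= threshold
        hits += pred_event and obs_event
        misses += (not pred_event) and obs_event
        false_alarms += pred_event and (not obs_event)
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")
```

Correct negatives do not enter the score, so CSI rewards overlap of the forecast and observed high-wind regions rather than overall accuracy.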
Appendix D gives the detailed implementation of these metrics.

Results. We use the ERA5 reanalysis surface wind data and crop patches centered on the US, spanning from 1979 to 2021. We evaluate predictions for the next 48 hours at 6-hour intervals, using the observations from the past 24 hours. We compare the proposed method with FourCastNet (Pathak et al., 2022), a domain-specialized network for reanalysis field forecasting, and with predictive recurrent networks for deterministic video prediction. FourCastNet provides ensemble forecasts based on Gaussian disturbances of the initial states, following (Evensen, 2003). Figure 3a shows a case study starting from 2017-03-04 18:00. The results from FourCastNet indicate a failure to accurately forecast the growing high-wind region, and the high-wind region is underestimated in the 48-hour forecast. In contrast, results from CogDPM not only locate the high-wind region more accurately, but also provide intensity estimates much closer to the ground truth, supporting the need for 48-hour-ahead precautions. CogDPM is capable of providing alarming forecasts around 2017-03-06 18:00, when high wind and tornadoes attacked the Mideast USA (Summary of the March 06, 2017 Severe Weather Outbreak; Earliest Known Tornado in Minnesota\u2019s History; https://www.weather.gov/mpx/SevereWeather_06March2017).

Figure 5. Experiments on precipitation nowcasting. Numerical verification scores on the sampled United Kingdom precipitation dataset in 2019. CRPS is computed with four ensembles for spatial pooling sizes of 1 km x 1 km (top left) and 2 km x 2 km (top right); economic value with a 20 mm/h accumulated-rain threshold (bottom left); radially averaged power spectral density of predictions at 90 minutes (bottom right). CogDPM surpasses the operational forecast model DGMR in ensemble forecasting precision and forecast skillfulness.

We also visualize the inverse precision fields corresponding to the forecasts, since confidence estimation provides key information for decision-making. In the forecast for the first 24 hours, the uncertainty fields given by FourCastNet are relatively dispersed and not closely related to the evolution of the associated wind field. From there out to 48 hours, FourCastNet produces unreasonable estimates for the windless area in the upper-right corner. The inverse precision fields given by CogDPM have much closer correlations to the weather process. In the 48-hour forecast, CogDPM underestimated the forecast intensity in Wyoming and Colorado, but allocated lower precision to that region. Figure 3b shows that CogDPM outperforms baseline methods on CSI, particularly for heavier wind thresholds. For the measurement of RMSE, we take the mean across eight ensemble forecasts for all methods. Although DPMs are not directly optimized by a Mean Squared Error (MSE) loss, the ensemble-mean results are competitive with predictive models trained with MSE losses. CogDPM exhibits a lower CRPS across all prediction times, indicating its ability to effectively generate ensemble forecasts.
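The CRPS values reported for small ensembles can be computed with the standard empirical estimator, CRPS = E|X - y| - (1/2) E|X - X'|. The helper below is our illustrative sketch; the spatially pooled variant discussed in the text additionally averages fields over spatial windows first.

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for a small ensemble at one grid point:
    mean |x_i - y| minus half the mean pairwise member distance."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (m * m)
    return term1 - 0.5 * term2
```

For a single deterministic member this reduces to the absolute error, which is why CRPS is a convenient common scale for comparing deterministic and ensemble forecasts.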
Our results demonstrate that CogDPM is capable of making predictions under severe conditions, supported by the probabilistic forecasting ability of the PEM process, while deterministic models avoid predicting severe cases to reduce the risk of making mistakes.

4.3. Precipitation Nowcasting Experiments

Benchmarks. We evaluate our model on the precipitation nowcasting task using the United Kingdom precipitation dataset (Ravuri et al., 2021). Precipitation nowcasting aims to predict high-resolution precipitation fields up to two hours ahead, which provides socioeconomic value for weather-dependent decision-making (Ravuri et al., 2021). Precipitation data are extremely unbalanced across spatiotemporal scales, demanding that nowcasting models focus on the vital parts of the field. Fig. 4a shows a case study selected by the chief meteorologist from the Met Office (Ravuri et al., 2021), which involves a squall line sweeping across the United Kingdom. We choose DGMR, a data-driven method that forecasts precipitation with a generative adversarial network, as a strong baseline for skillful nowcasting (Ravuri et al., 2021). DGMR is also the operational method deployed by the Met Office of the United Kingdom.

Results. In Figure 4, our results accurately forecast both the trajectory and the intensity fluctuations of the squall line, depicted by the red precipitation line in the top-right segment. CogDPM\u2019s forecasts consistently show the squall line progressing over 30 and 60 minutes, followed by dissipation at the 90-minute mark, mirroring actual events. Conversely, predictions from DGMR indicate a rapid dissipation of the squall line within 30 minutes, and significantly weaker outcomes are projected at the 60-minute mark.
We posit that the suboptimal performance of the DGMR model is attributable to the simultaneous use of a generative loss and a pixel-wise alignment loss during its training, which leads to an unstable training process while retaining the dissipation drawback of deterministic alignment. While the generative loss alone is capable of simulating realistic meteorological processes, it falls short in accurately predicting the extent of precipitation, and it is not used alone in DGMR. In contrast, CogDPM does not require additional deterministic alignment during training, but enhances precision with precision-weighted guidance during the inference steps. We present additional case studies in Appendix F. We further explore numerical evaluations in Fig. 5, with metrics covering accuracy, realism, and diversity. The Continuous Ranked Probability Score (CRPS) measures the alignment between a probabilistic forecast and the ground truth. We also report the spatially aggregated CRPS (Ravuri et al., 2021) to test prediction performance across different spatial scales. Details of these metrics can be found in the Extended Data. The first row in Fig. 5 shows that CogDPM consistently outperforms baseline models over the whole time period. We adopt the decision-analytic model to evaluate the economic value of ensemble predictions (Ravuri et al., 2021); curves in Figure 5 with greater area under the curve provide better economic value, and CogDPM outperforms baseline models in this regard. Radially averaged power spectral density (PSD) evaluates the variation of spectral characteristics across spatial scales; CogDPM achieves the minimal gap with the ground-truth characteristics. The superior performance of CogDPM stems from its diffusion models\u2019 ability to emulate the hierarchical inference of predictive coding, resulting in smaller prediction errors compared to single-step forecasting models.
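The radially averaged PSD reduces a 2-D power spectrum to a function of wavenumber magnitude by averaging power over annuli. The binning step can be sketched as below; computing the centered 2-D power spectrum itself (e.g., with an FFT) is omitted, and `radial_average` is our illustrative name.

```python
import math

def radial_average(power, n_bins):
    """Radially average a centered 2-D power spectrum:
    bin each cell by its distance |k| from the center and
    average the power within each annular bin."""
    n = len(power)
    c = (n - 1) / 2.0            # spectrum center
    max_r = math.hypot(c, c)     # farthest (corner) distance
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for i in range(n):
        for j in range(n):
            r = math.hypot(i - c, j - c)
            b = min(int(r / max_r * n_bins), n_bins - 1)
            sums[b] += power[i][j]
            counts[b] += 1
    return [s / k if k else 0.0 for s, k in zip(sums, counts)]
```

A forecast whose radial PSD tracks the ground truth across wavelengths preserves small-scale structure instead of blurring it away, which is the property the PSD panel checks.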
Furthermore, the integration of precision weighting allows the model to dynamically assess the precision of its inputs and adjust the intensity of conditional control accordingly. This targeted approach effectively reduces errors in areas that are challenging to predict, thereby enhancing the model\u2019s accuracy in delineating boundaries and extreme regions. 5. Discussion CogDPM is related to classifier-free diffusion models (Ho & Salimans, 2021), which enhance class guidance by combining a conditional DPM and an unconditional DPM. The CogDPM framework builds the connection between classifier-free diffusion models and predictive coding. We also introduce a precision estimation method based on the reverse diffusion process and use precision to control the guidance strength across spatiotemporal scales. An ablation study in Appendix E shows the enhancement in prediction skill of the CogDPM framework compared with the vanilla CFG method. Active inference (Parr et al., 2019) is also a widely discussed theory in the predictive coding framework, which states that a cognitive system actively interacts with its environment to minimize prediction error. Active inference is omitted in this work; we leave a computational predictive coding model with both active inference and precision weighting as future work. 6." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02426v1.json b/abs_9K/test_abstract_short_2405.02426v1.json new file mode 100644 index 0000000000000000000000000000000000000000..93dee5edea353db28e7d058507daf19dedae4dc9 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02426v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.02426v1", + "title": "Generalized Solution for Double-Porosity Flow through a Graded Excavation Damaged Zone", + "abstract": "Prediction of flow to boreholes or excavations in fractured low-permeability\nrocks is important for resource extraction and disposal or sequestration\nactivities. 
Analytical solutions for fluid pressure and flowrate, when\navailable, are powerful, insightful, and efficient tools enabling parameter\nestimation and uncertainty quantification. A flexible porous media flow\nsolution for arbitrary physical dimension is derived and extended to double\nporosity for converging radial flow when permeability and porosity decrease\nradially as a power law away from a borehole or opening. This distribution can\narise from damage accumulation due to stress relief associated with drilling or\nmining. The single-porosity graded conductivity solution was initially found\nfor heat conduction, the arbitrary dimension flow solution comes from\nhydrology, and the solution with both arbitrary dimension and graded\npermeability distribution appeared in reservoir engineering. These existing\nsolutions are here combined and extended to two implementations of the\ndouble-porosity conceptual model, for both a simpler thin-film mass transfer\nand more physically realistic diffusion between fracture and matrix. This work\npresents a new specified-flowrate solution with wellbore storage for the\nsimpler double-porosity model, and a new more physically realistic solution for\nany wellbore boundary condition. A new closed-form expression is derived for\nthe matrix diffusion solution (applicable to both homogeneous and graded\nproblems), improving on previous infinite series expressions.", + "authors": "Kristopher L. Kuhlman", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "physics.geo-ph", + "86A05" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Prediction of flow to boreholes or excavations in fractured low-permeability\nrocks is important for resource extraction and disposal or sequestration\nactivities. 
Analytical solutions for fluid pressure and flowrate, when\navailable, are powerful, insightful, and efficient tools enabling parameter\nestimation and uncertainty quantification. A flexible porous media flow\nsolution for arbitrary physical dimension is derived and extended to double\nporosity for converging radial flow when permeability and porosity decrease\nradially as a power law away from a borehole or opening. This distribution can\narise from damage accumulation due to stress relief associated with drilling or\nmining. The single-porosity graded conductivity solution was initially found\nfor heat conduction, the arbitrary dimension flow solution comes from\nhydrology, and the solution with both arbitrary dimension and graded\npermeability distribution appeared in reservoir engineering. These existing\nsolutions are here combined and extended to two implementations of the\ndouble-porosity conceptual model, for both a simpler thin-film mass transfer\nand more physically realistic diffusion between fracture and matrix. This work\npresents a new specified-flowrate solution with wellbore storage for the\nsimpler double-porosity model, and a new more physically realistic solution for\nany wellbore boundary condition. A new closed-form expression is derived for\nthe matrix diffusion solution (applicable to both homogeneous and graded\nproblems), improving on previous infinite series expressions.", + "main_content": "Introduction Fluid flow through damage-induced fracture networks in otherwise low-permeability crystalline rocks (e.g., granite, argillite or halite) is of interest to geothermal energy production (Tao et al, 2021), radioactive waste disposal (Tsang et al, 2005), hydrogen storage (AbuAisha and Billiotte, 2021), and compressed air energy storage (Kim et al, 2012). Rock damage around an excavation (i.e., the Excavation Damaged Zone, EDZ; Davies and Bernier (2005)) increases the connected porosity, and leads to increased permeability. 
Fractured rock often has higher porosity and permeability than intact rock. Damage near a borehole or excavation will decrease the relative contribution from flow in the lower-permeability far field, and will confound the estimation of hydrologic properties using approaches that assume uniform homogeneous distributions of permeability and porosity. There is a need for a flexible analytical solution for flow to a borehole or excavation in the presence of damage that includes wellbore storage, double-porosity flow, and variable flow dimension. This is most evident in a mechanically weak, low-permeability medium like salt, but should also apply to other low-permeability fractured rocks like granite or shale. In salt, the far-field (i.e., undamaged) permeability is unmeasurably low (Beauheim and Roberts, 2002) due to salt\u2019s tendency to creep shut any unsupported openings. The permeability around a borehole in salt is derived from accumulated damage due to stress redistribution around the excavation itself (Wallace et al, 1990; Stormont et al, 1991; Cosenza, 1996; Hou, 2003; Kuhlman, 2014). Stormont et al (1991) presented brine and gas permeability data measured in salt for packer-isolated intervals of small boreholes before and after a central 1-meter-diameter borehole was drilled (i.e., a mine-by experiment). Figure 1 shows these data support the conceptual model of permeability and porosity decaying away from an excavation. Cosenza (1996) proposed the power-law model for permeability and porosity plotted in the figure. These data show porosity and permeability decrease with distance from the central excavation. Two lines are shown fit to the data; one is a monomial power law, the other includes an additive background term. The two curves differ primarily away from the excavation (r/rw \u22653), where larger uncertainties in estimated porosity and permeability exist, for three reasons.
First, the access drift EDZ (test conducted in the floor of a 5-m wide room) is superimposed on the 1-m borehole EDZ. Second, the small-diameter (2.5-cm) measurement boreholes themselves each have a small EDZ overprinted on the 1-m borehole EDZ. Lastly, the apparent background permeability may represent the measurement limit of the packer system used (i.e., compliance of the packer inflation elements and working fluid). Especially in salt, the undisturbed background permeability is near zero, and is difficult to measure consistently in the field (Beauheim and Roberts, 2002). The power-law distribution of permeability matches the more certain near-field permeability distribution, and is conceptually more elegant than a finite domain or a flow domain with piece-wise heterogeneous properties (i.e., a higher-permeability EDZ adjacent to lowerpermeability intact rock). Other investigations have also shown porosity and permeability decaying away with distance from an excavation in crystalline rocks (Shen et al, 2011; Cho et al, 2013; Ghazvinian, 2015) and sedimentary rocks (Perras et al, 2010; Perras and Diederichs, 2016). Fig. 1 Permeability and porosity observations around a 1-m borehole (radial distance scaled by excavation radius) in salt from small-scale mine-by experiment (data from Stormont et al (1991)) Salt permeability has been related to both the confining and shear stresses (Reynolds and Gloyna, 1960; Lai, 1971; Stormont and Fuenkajorn, 1994; Alkan, 2009). Confining stresses reduce fracture aperture and bulk permeability, while shear stresses are associated with increased bulk permeability. Aydan et al (1993) present solutions for radial and tangential plane stress and strain (i.e., dilatation or a change in porosity) around a circular excavation. 
Strain is proportional to $r_D^{-2}$ or $r_D^{-3}$ (where $r_D$ is radial distance into the formation scaled by the excavation size), depending on whether the region is experiencing elastic (exponent 2) or plastic (exponent \u22483) deformation. These relationships illustrate a possible behavior of rock in the EDZ. The true extent of the EDZ depends on drilling or excavation method, borehole or tunnel geometry, state of stress, and rock mechanical properties (Hudson et al, 2009). Softer or weaker sedimentary rocks like argillite or halite typically have a larger EDZ than stiffer or stronger rocks like granite. There are several well-known empirical power-law relationships between porosity and permeability in fractured or granular media (e.g., Kozeny, 1927; Carman, 1937) and many studies have discussed their applicability (David et al, 1994; Kuhlman and Matteo, 2018). Permeability in fractured rocks is more sensitive to small changes in porosity than granular rocks (i.e., fractured rocks have higher pore compressibility resulting in larger exponents in porosity-permeability relationships). Based on evidence from these observations, graded dimensionless porosity is assumed to follow

$$n(r) = n_0 \left( \frac{r}{r_w} \right)^{-\eta}, \qquad (1)$$

where $r_w$ is the borehole or excavation radius [m], $n_0 = n(r_w)$ is the maximum porosity at the borehole wall, and $\eta$ is a dimensionless exponent (see Table 1 for a list of physical variables and notation). Using the same form, the graded permeability can be represented as

$$k(r) = k_0 \left( \frac{r}{r_w} \right)^{-\kappa}, \qquad (2)$$

where $k_0 = k(r_w)$ is the maximum permeability [m$^2$] at the borehole wall and $\kappa$ is another dimensionless exponent. Based on lab measurements on fractured granite, the empirical relationship $\kappa \approx 3\eta$ has been proposed (Kranz et al, 1979; David et al, 1994).
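Equations (1) and (2) are the same monomial form with different exponents; a one-line helper makes the grading concrete (names are ours, for illustration):

```python
def graded_property(r, r_w, value_at_wall, exponent):
    """Power-law graded property (equations 1 and 2):
    porosity n(r) = n0 * (r / r_w)**-eta, or
    permeability k(r) = k0 * (r / r_w)**-kappa."""
    return value_at_wall * (r / r_w) ** (-exponent)
```

With the exponents fit to the salt data of Figure 1, permeability falls off far faster than porosity, so most of the flow resistance sits a short distance beyond the borehole wall.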
The Stormont et al (1991) salt data (Figure 1) support \u03b7 = 4.5 and \u03ba = 17, which shows a somewhat faster-decaying permeability (\u03ba = 3.8\u03b7) than seen in granitic rocks. The power-law permeability and porosity distribution conceptual model presented here is an alternative to flow models using wellbore skin (Streltsova, 1988; Pasandi et al, 2008), a finite domain (Gelbard, 1992; Lin et al, 2016), or low-permeability non-Darcy flow with a threshold gradient (Liu, 2014, 2017). These three conceptualizations all lead to reduced contributions of flow from the far field, but only borehole skin can account for observed distributions of higher porosity or permeability near the excavation, which are important when analyzing pressure or flowrate data at early time. The contribution from lower permeability in the far field is more important at late time. Finite domains and skin can have analytical flow solutions, but low-permeability non-Darcy flow does not typically lend itself to analytical solutions. Barker (1988) developed a generalized solution for converging flow to a borehole with variable non-integer dimension, D. This conceptualization has been used to characterize flow in fractured systems, where lower-dimension (i.e., D < 3) results associated with discrete fractures are more common than higher-dimension results (Beauheim et al, 2004; Le Borgne et al, 2004; Bowman et al, 2013; Ferroud et al, 2018). Doe (1991) extended the solution of Barker (1988) to the conceptualization where permeability varies with radial distance, through analogy with the heat conduction literature (Carslaw and Jaeger, 1959). A single-porosity flow solution is derived here with power-law variable properties, like the approach of Doe (1991) (who did not present a derivation).
The single-porosity solution is then readily extended to a double-porosity conceptualization, using first the approach of Warren and Root (1963) for thin-film mass transfer between fractures and matrix, then the more physically realistic matrix diffusion approach of Kazemi (1969). Double-porosity flow is a common and efficient conceptualization in fractured rocks (Aguilera, 1980; van Golf-Racht, 1982; Da Prat, 1990). The medium is conceptualized as two communicating, physically overlapping continua: fractures with high permeability (but little to no storage) and matrix or intact rock with significant storage (but little to no flow) (Barenblatt and Zheltov, 1960; Barenblatt et al, 1960). Many extensions to the basic double-porosity conceptual model exist, including multiple matrix or fracture porosities, and different assumptions about the geometry or underlying physics governing flow in the fractures or matrix (Chen, 1989; Kuhlman and Heath, 2021). The Warren and Root (1963) solution simplifies the exchange between matrix and fractures to a mass-transfer thin-film approximation, leading to numerous analytical solutions (Aguilera, 1980; Chen, 1989). It is commonly used for this reason, even though it is well known that spatial pressure gradients in matrix blocks are important, as the matrix is low-permeability and would therefore be expected to experience steep, slowly changing gradients. A series representation of the Kazemi (1969) solution is used here, an extension of the multirate mass transfer model to double-porosity flow (Kuhlman et al, 2015). The more physically correct (but more difficult to solve) solution can be represented by an infinite series of porosities, which can either represent an infinite number of Warren-Root-type matrix porosities or, if the coefficients are chosen specifically, a single Kazemi-type matrix diffusion porosity.
More recently, Wang et al (2021) have developed a semi-analytical solution for flow in a double-porosity formation for the case when non-Darcian flow is significant. Moutsopoulos et al (2022) have provided analytical and semi-analytical solutions for two classical problems in flow of unconfined double-porosity aquifers, based on Moutsopoulos (2021). De-Smedt (2022) presented an analytical solution for flow in double-porosity media for fractional flow dimensions, which is a generalization of De-Smedt (2011). Hayek et al (2018) presented a semi-analytical solution for flow due to pumping a double-porosity aquifer via a constant-pressure boundary condition (without wellbore storage) where permeability varies as a power law. The fractal reservoir flow problem (Chang and Yortsos, 1990) is also analogous to the radially variable properties approach presented here, but the governing equations of the two problems are only equivalent when the spectral exponent (\u03b8 in Chang and Yortsos (1990)) in the fractal problem is zero. The fractal reservoir governing equation is typically solved approximately, since the additional terms due to a non-zero spectral exponent do not readily allow closed-form analytical solution. In the next section, the governing equations and boundary conditions are developed for the variable-dimension single-porosity flow problem (Doe, 1991). This solution is mapped onto the modified Bessel equation, allowing solutions for both specified pressure (type-I) and specified flowrate with wellbore storage (type-III) boundary conditions. These more general single-porosity solutions are shown to reduce to several well-known special cases. The single-porosity solutions are then extended to a simpler Warren-Root-type double-porosity model for type-I (Hayek et al, 2018) and type-III (new) boundary conditions, and then to a new Kazemi-type double-porosity model.
The Kazemi series solution approach is then summed analytically to arrive at a new closed-form expression for the response in Laplace space, a solution that is new for both graded and homogeneous domains. Finally, a summary and discussion of the limitations of the new solutions is given. The approach taken here, representing the porosity and permeability of fractured rocks as power-law distributions, was first developed by Delay et al (2007), and first pursued by the author for applications in deep (> 3 km) borehole disposal of radioactive waste in basement rock (Brady et al, 2017; Kuhlman et al, 2019). The approach is also applicable to flow in salt surrounding excavations, like those in mine-by experiments (Stormont et al, 1991).

2 Development of Flow Problem

To introduce and contrast with the dual-porosity solution, the single-porosity solution is developed first. To make a single solution for Cartesian linear, cylindrical, and spherical geometries, a variable-dimension approach like Barker (1988) is used, including variable permeability and porosity, like Doe (1991). The governing equation for slightly compressible time-dependent change in pressure $p$ [Pa] in a general 1D coordinate (Barker, 1988) is

$$n(r) c \frac{\partial p}{\partial t} = \frac{1}{r^m} \frac{\partial}{\partial r} \left[ \frac{k(r) r^m}{\mu} \frac{\partial p}{\partial r} \right], \qquad (3)$$

where $c$ is bulk compressibility [1/Pa] and the dimensionless parameter $m$ is 0 for a Cartesian strip, 1 for a cylinder, and 2 for a sphere (i.e., $m = D - 1$, where $D$ is the dimension). The derivative of the bracketed term in (3) is expanded via the chain rule; starting from (2), $\mathrm{d}k/\mathrm{d}r = -\kappa k(r)/r$ is substituted along with the definitions of $k(r)$ and $n(r)$, to get

$$n_0 c \left( \frac{r}{r_w} \right)^{-\eta} \frac{\partial p}{\partial t} = \frac{k_0}{\mu} \left( \frac{r}{r_w} \right)^{-\kappa} \left[ \frac{m - \kappa}{r} \frac{\partial p}{\partial r} + \frac{\partial^2 p}{\partial r^2} \right]. \qquad (4)$$

For converging radial flow in a semi-infinite domain, the relevant wellbore boundary conditions are constant-pressure (type-I), constant-flux (type-II), or constant-flux with wellbore storage (type-III in Laplace space). The initial, far-field, and source borehole boundary conditions for a borehole in an infinite symmetric domain are

initial: $p(r, t = 0) = 0$
far-field: $p(r \to \infty, t) < \infty$
wellbore type-I: $p^{I}(r = r_w, t) = p_1(t)$; or
wellbore type-II: $\frac{A_m k_0}{\mu} \left. \frac{\partial p^{II}(t)}{\partial r} \right|_{r = r_w} = Q(t)$; or
wellbore type-III: $\frac{A_m k_0}{\mu} \left. \frac{\partial p^{III}(t)}{\partial r} \right|_{r = r_w} = Q(t) + \frac{A_c}{\rho g} \frac{\partial p_w(t)}{\partial t}$, (5)

respectively. See Appendix A for the definition of the source borehole boundary condition terms. These boundary conditions represent a homogeneous uniform initial condition, a requirement that the solution remains finite at large distance, and a specified pressure or pressure gradient at the source ($r = r_w$). The type-II boundary condition (specified flowrate) is a special case ($\sigma = 0$) of the wellbore storage boundary condition (flowrate linearly proportional to the change in pressure), so it is not developed further.

2.1 Dimensional Analysis

A solution is derived for equation (4), using the approach of Doe (1991), which was based on analogy with the heat conduction literature (Carslaw and Jaeger, 1959). Reducing the governing equation (4) to dimensionless form using the characteristic time $T_c = n_0 c L_c^2 \mu / k_0$ and characteristic length $L_c = r_w$ leads to

$$r_D^{\kappa - \eta} \frac{\partial p_D}{\partial t_D} = \frac{m - \kappa}{r_D} \frac{\partial p_D}{\partial r_D} + \frac{\partial^2 p_D}{\partial r_D^2}, \qquad (6)$$

where the dimensionless quantities $r_D = r / L_c$, $t_D = t / T_c$, and $p_D^{\{I,III\}} = p / p_c^{\{I,III\}}$ are used (see Table 2 for a summary of dimensionless quantities).
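The nondimensionalization above can be written out directly; a small sketch with illustrative variable names:

```python
def characteristic_scales(n0, c, mu, k0, r_w):
    """Characteristic length and time used to scale (4) into (6):
    L_c = r_w and T_c = n0 * c * L_c**2 * mu / k0."""
    L_c = r_w
    T_c = n0 * c * L_c ** 2 * mu / k0
    return L_c, T_c

def dimensionless(r, t, L_c, T_c):
    """Dimensionless radius r_D = r / L_c and time t_D = t / T_c."""
    return r / L_c, t / T_c
```

Scaling absorbs n0, c, mu, and k0 into T_c, leaving only m, kappa, and eta as parameters of the dimensionless governing equation.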
The characteristic pressure change is given by $p_c^{I} = \hat{p}_1$, where $p_1(t) = \hat{p}_1 f_t$ separates the time-dependent specified pressure into a constant characteristic pressure and a dimensionless time behavior (for a constant specified pressure, $f_t = 1$). The dimensionless type-I initial and boundary conditions are

$$p_D(r_D, t_D = 0) = 0, \quad p_D(r_D \to \infty, t_D) < \infty, \quad p_D^{I}(r_D = 1, t_D) = f_t. \qquad (7)$$

Using $p_c^{III} = r_w \hat{Q} \mu / (A_m k_0)$, where $Q(t) = \hat{Q} f_t$ similarly separates the time-dependent volumetric flowrate into a constant characteristic flowrate and a dimensionless time behavior, the dimensionless type-III source borehole boundary condition is

$$\left. \frac{\partial p_D^{III}}{\partial r_D} \right|_{r_D = 1} = f_t + \sigma \frac{\partial p_D^{III}}{\partial t}, \qquad (8)$$

where $\sigma$ is a dimensionless wellbore storage coefficient (see Appendix A), and the same initial and far-field conditions apply as in the type-I case.

2.2 Laplace Transform

Taking the dimensionless Laplace transform $\bar{f}(s) = \int_0^\infty e^{-s t_D} f(t_D) \, \mathrm{d}t_D$ of the governing partial differential equation (6) (without loss of generality assuming a zero initial condition) leads to the ordinary differential equation

$$\frac{\mathrm{d}^2 \bar{p}_D}{\mathrm{d}r_D^2} + \frac{m - \kappa}{r_D} \frac{\mathrm{d}\bar{p}_D}{\mathrm{d}r_D} - s \bar{p}_D r_D^{\kappa - \eta} = 0, \qquad (9)$$

assuming $\kappa$, $\eta$, and $m$ are not functions of time, where $s$ is the dimensionless Laplace transform parameter. The transformed type-I and far-field boundary conditions (7) are

$$\bar{p}_D(r_D \to \infty) < \infty, \quad \bar{p}_D^{I}(r_D = 1) = \bar{f}_t, \qquad (10)$$

where $\bar{f}_t$ represents the Laplace transform of the boundary condition\u2019s time behavior. For a unit step change at $t = 0$ (where $f_t = 1$, a typical assumption), $\bar{f}_t = 1/s$. Other temporal behaviors are simply handled, including a step change at a non-zero time, an exponentially decaying source term, an arbitrary piecewise-constant or piecewise-linear behavior, or a sinusoidal source term (Kruseman and de Ridder, 1994; Mishra et al, 2013).
The transformed wellbore-storage boundary condition is

$$\left. \frac{\mathrm{d}\bar{p}_D^{III}}{\mathrm{d}r_D} \right|_{r_D = 1} = \bar{f}_t + \sigma s \bar{p}_D^{III}, \qquad (11)$$

which now more clearly resembles a type-III boundary condition.

2.3 Numerical Inverse Laplace Transform

The governing equations and associated boundary conditions are solved exactly in Laplace space, then numerically inverted back to the time domain using one of several viable approaches (Kuhlman, 2013). The equations were rapidly prototyped and inverted using the Python library mpmath (Johansson et al, 2017), which provides arbitrary-precision special functions and numerical inverse Laplace transform algorithms. A Fortran program was also developed to facilitate plotting and parameter estimation, implementing the inversion algorithm of de Hoog et al (1982). Python and Fortran implementations of the solution are available at https://github.com/klkuhlm/graded.

3 Solution of Flow Problem

3.1 Mapping onto Modified Bessel Equation

The governing ordinary differential equation (9) can be made equivalent to a form of the modified Bessel equation after a change of variables first used by Lommel (1868) for the standard Bessel equation. Appendix B illustrates an analogous change of variables to the modified Bessel equation. Comparing (9) to this scaled version of the modified Bessel equation (41), they are equivalent given the following correspondences:

$$\alpha = \frac{1}{2} (\kappa - m + 1), \quad \gamma = \frac{1}{2} (\kappa - \eta + 2), \quad \nu = \sqrt{\frac{\alpha^2}{\gamma^2}} = \frac{\kappa - m + 1}{\kappa - \eta + 2}, \quad \beta = \sqrt{\frac{s}{\gamma^2}} = \sqrt{\frac{4s}{(\kappa - \eta + 2)^2}}. \qquad (12)$$
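The correspondences in (12) are simple arithmetic in kappa, eta, m, and the Laplace parameter s; a sketch (function name ours):

```python
import math

def bessel_correspondences(kappa, eta, m, s):
    """Parameters mapping ODE (9) onto the modified Bessel equation,
    per the correspondences in (12); assumes kappa - eta + 2 != 0."""
    alpha = 0.5 * (kappa - m + 1)
    gamma = 0.5 * (kappa - eta + 2)
    nu = (kappa - m + 1) / (kappa - eta + 2)  # sqrt(alpha**2 / gamma**2)
    beta = math.sqrt(s / gamma ** 2)          # sqrt(4 s / (kappa - eta + 2)**2)
    return alpha, gamma, nu, beta
```

For the homogeneous cylindrical case (kappa = eta = 0, m = 1) this collapses to alpha = 0, gamma = 1, nu = 0, and beta = sqrt(s), recovering the classical K0(sqrt(s) r_D) form of the radial-flow solution in Laplace space.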
The transformed modified Bessel equation has the general solution (37) y = z\u03b1 [AI\u03bd (\u03b2z\u03b3) + BK\u03bd (\u03b2z\u03b3)] , (\u03b3 \u0338= 0) , (13) where A and B are constants determined by the boundary conditions and I\u03bd(z) and K\u03bd(z) are the firstand second-kind modified Bessel functions of non-integer order and real argument (McLachlan, 1955; Bowman, 1958; Spanier and Oldham, 1987; DLMF, 2023). The finiteness boundary condition (10) requires A = 0 to keep the solution finite as rD \u2192\u221e, since the first-kind modified Bessel function grows exponentially with increasing real argument, leaving \u00af pD (rD) = r\u03b1 DBK\u03bd (\u03b2r\u03b3 D) , (14) 6 \fwhich is not defined for \u03b3 = 0 (i.e., \u03ba\u2212\u03b7 = \u22122, which is unrealistic because \u03ba is larger than \u03b7 for physical reasons), and B is determined by the Laplace-space source borehole boundary conditions. 3.2 Constant-Pressure (Type-I) at Borehole The borehole boundary condition (rD = 1) for specified change in pressure leads to the solution (the Warren and Root (1963) double porosity solution for this wellbore boundary condition is equivalent to Hayek et al (2018)) \u00af pI D(rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) K\u03bd (\u03b2) (15) and its radial gradient (i.e., proportional to flow of fluid into the borehole) d\u00af pI D drD = \u00af ftr\u03b1\u22121 D \u0014 (\u03b1 \u2212\u03b3\u03bd) K\u03bd (\u03b2r\u03b3 D) K\u03bd (\u03b2) + \u03b2\u03b3r\u03b3 D K\u03bd\u22121 (\u03b2r\u03b3 D) K\u03bd (\u03b2) \u0015 , (16) using a recurrence relationship for the derivative of the Bessel function in terms of Bessel functions of adjacent orders (DLMF, 2023, \u00a710.29.2). Restricting \u03ba \u2265\u03b7 (i.e., permeability decreases as fast or faster than porosity), then \u03b3 > 0 and \u03b1 = \u03b3\u03bd (for \u03b3 < 0, \u03b1 \u2212\u03b3\u03bd = 2\u03b1). 
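The correspondences in (12) are simple algebra and are worth encoding directly; the helper below (our code, mirroring (12)) returns the Bessel-equation parameters for given porosity and permeability exponents, dimension, and Laplace parameter:

```python
import math

def bessel_correspondences(kappa, eta, m, s):
    """Map the graded-media ODE onto the modified Bessel form via (12).
    Returns (alpha, gamma, nu, beta); requires kappa - eta != -2
    so that gamma != 0."""
    alpha = 0.5 * (kappa - m + 1.0)
    gamma = 0.5 * (kappa - eta + 2.0)
    nu = alpha / gamma            # nu = (kappa - m + 1)/(kappa - eta + 2)
    beta = math.sqrt(s) / gamma   # beta = sqrt(s / gamma^2)
    return alpha, gamma, nu, beta
```

For the constant-property cylindrical case (kappa = eta = 0, m = 1) this gives alpha = nu = 0, gamma = 1, and beta = sqrt(s), recovering the classical order-zero Bessel solution discussed later in the text.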
This physically motivated restriction on parameters simplifies (16) to d\u00af pI D drD = \u221as \u00af ftr\u03b1+\u03b3\u22121 D K\u03bd\u22121 (\u03b2r\u03b3 D) K\u03bd (\u03b2) , (17) since \u03b2\u03b3 = \u221as for \u03b3 > 0. When evaluated in the source borehole (rD = 1), the solution simplifies further. Figure 2 shows plots of the predicted pressure gradient at rD = 1 due to a constant-pressure condition there (top row) and the predicted decrease in pressure radially away from the boundary (values of \u03b7, \u03ba, and m for each simulation are listed in the caption and title of each figure). Both rows of plots show the variability with the porosity exponent (\u03b7, given by the line color) and the permeability exponent (\u03ba = \u03b7\u03c4, given by the line type). The same results are shown for Cartesian linear (m = 0), cylindrical (m = 1), and spherical (m = 2) geometries in three columns. For a given set of parameters, a higher-dimensional domain (larger m) leads to a slower drop in produced fluids at any time. The highest sustained flowrate for all dimensions is achieved with constant properties in space (i.e., the red curve \u03b7 = \u03ba = 0). More negative exponents in the porosity and permeability power-laws lead to more rapid decrease in flowrate, as the contribution to flow from large radius vanishes when the exponent increases in magnitude. These types of responses might be mis-interpreted as being associated with lower permeability (which would also lead to a faster decrease in flowrate) using a model with constant properties and a fixed dimension. In the source well (top row of subplots), the effect of \u03ba is different and are predicted to reverse between dimensions. For \u03b7 = 3 (black lines), the \u03ba = {3, 6, 9} cases are swapped between m = 1 and m = 2. For \u03b7 = 2 (blue lines), the \u03ba cases are swapped between m = 0 and m = 1. The bottom row of figures shows the predicted pressure with distance at tD = 10. 
At locations away from the source well (rD > 1), changes in the porosity exponent, \u03b7, have relatively less impact than changes in the permeability exponent, \u03ba (different colored solid lines are close together, while colored lines of different line type are widely separated). The dimensionality (m) has a smaller effect at locations away from the source borehole than it had on the gradient predicted at the source borehole. 3.3 Constant-Flowrate with Wellbore Storage (Type-III) The wellbore-storage boundary condition for the specified flowrate solution at rD = 1 results in the general solution (that is new for any double-porosity solution with power-law variation in material properties) \u00af pIII D (rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) (\u03b1 \u2212\u03b3\u03bd + \u03c3s) K\u03bd (\u03b2) + \u03b2\u03b3K\u03bd\u22121 (\u03b2), (18) which can be simplified using \u03b1 = \u03b3\u03bd and \u03b2\u03b3 = \u221as to \u00af pIII D (rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) \u221asK\u03bd\u22121 (\u03b2) + \u03c3sK\u03bd (\u03b2). (19) 7 \fFig. 2 Type-I flowrate (top row at rD = 1) and pressure (bottom row at rD > 1 and tD = 10) solution at borehole for m = 0, 1, 2 (Cartesian, cylindrical, and spherical) and at different radial distances. Line color indicates \u03b7; line type indicates \u03ba/\u03b7. Line segments in top row illustrate slopes of 1/2, 1, and 3/2. Analogous to the results for the Type-I solution but only showing the m = 1 and m = 2 cases, Figure 3 shows the predicted pressure through time at the boundary for a specified flowrate at the boundary. Figure 3 results are for no wellbore storage (\u03c3 = 0), while Figure 4 shows the same results with nonzero wellbore storage (all model parameters listed in caption or title of each figure). 
Wellbore storage is important at early time, leading to a smaller predicted change in pressure, with the predicted response giving a characteristic 1 : 1 slope on log-log plots before formation storage contributes significantly to the flow (i.e., pumping in a bathtub). Wellbore storage makes more of a difference (i.e., shows a larger deviation from \u03c3 = 0 case) for larger \u03b7 (and \u03ba, since \u03ba = 2\u03b7). 3.4 Parameter Combinations Yielding Simpler Solutions When \u03b7 = \u03ba = 0, permeability and porosity are constant in space; in this case (9) simplifies to d2\u00af pD dr2 D + m rD d\u00af pD drD \u2212s\u00af pD = 0, (20) 8 \fFig. 3 Type-II solution (Type-III with \u03c3 = 0) at borehole for m = 1, 2 (cylindrical and spherical). Line color indicates \u03b7; line type indicates \u03ba/\u03b7. which is the dimensionless form of the equation solved by Barker (1988). In this case \u03b3 = 1, \u03b1 = (1\u2212m)/2, \u03bd = \u03b1, and \u03b2 = \u221as. The solution in Laplace-space under these conditions becomes \u00af pD (rD) = r\u03bd DBK\u03bd \u0000\u221asrD \u0001 , (21) which was found by Barker (1988, Eqn. 15). When \u03b7 = \u03ba = m = 0 the time-domain solution simplifies to pD(t) = 1/ \u221a \u03c0t, because \u03bd = 1/2 and \u03bd \u22121 = \u22121/2, the numerator and denominator of (17) are equal since K\u03bd(z) \u2261K\u2212\u03bd(z). Another simplification occurs when m = \u03ba = \u03b7, not necessarily zero. In this case, the permeability and porosity decrease at the same rate radially that the surface area of the domain grows in size (A0 \u221d1, A1 \u221drD, A2 \u221dr2 D), resulting in an equivalent Cartesian coordinate system, d2\u00af pD dr2 D \u2212s\u00af pD = 0, (22) which has a solution in terms of sin(\u221asrD) and cos(\u221asrD) or exp(\u00b1\u221asrD) and typically has an explicit inverse Laplace transform. In this case \u03b1 = \u03bd = 1/2, \u03b3 = 0, and \u03b2 = \u221as. 
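The reduction to exponential (and sin/cos) solutions when nu = 1/2 rests on the half-integer identity K_{1/2}(z) = sqrt(pi/(2z)) exp(-z), together with the symmetry K_nu(z) = K_{-nu}(z) used above. The sketch below checks both numerically via the standard integral representation K_nu(z) = \int_0^\infty exp(-z cosh t) cosh(nu t) dt (our implementation, stdlib quadrature only; adequate for z near 1, not a production Bessel routine):

```python
import math

def bessel_k(nu, z, t_max=30.0, n=30000):
    """Modified Bessel K_nu(z) via its integral representation,
    using the composite trapezoidal rule on [0, t_max]."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        expo = -z * math.cosh(t)
        if expo < -700.0:
            continue  # integrand has underflowed; contribution negligible
        total += w * math.exp(expo) * math.cosh(nu * t)
    return total * h
```

Because cosh(nu t) is even in nu, the representation makes K_nu = K_{-nu} manifest, which is exactly the property invoked for the m = kappa = eta = 0 result p_D(t) = 1/sqrt(pi t).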
When \u03bd = n \u00b1 1 2 (for n integer), the modified Bessel functions become modified spherical Bessel functions (DLMF, 2023, \u00a710.47), and when \u03bd = \u00b1 1 3, they become Airy functions (DLMF, 2023, \u00a79.6). These additional special cases are not handled differently here (i.e., the more general solution in terms of modified Bessel functions is still valid), since in the case given here \u03bd varies with \u03ba, \u03b7, and m (12). 4 Extension of Solution to Double Porosity 4.1 Mass-Transfer Coefficient Approximation Beginning with the Warren and Root (1963) formulation for double-porosity (i.e., high-conductance fractures and high-capacity matrix), the power-law permeability and porosity distributions are incorporated. 9 \fFig. 4 Type-III solution at borehole (rD = 1), for m = 1, 2 (cylindrical and spherical). Line color indicates \u03b7; line type indicates \u03c3. All curves for \u03ba/\u03b7 = 2. The equations for double-porosity flow in the fractures and matrix are 1 rm \u2202 \u2202r \u0014kf \u00b5 \u2202pf \u2202r \u0015 = nrcr \u2202pr \u2202t + nfcf \u2202pf \u2202t \u02c6 \u03b1kr \u00b5 (pf \u2212pr) = nrcr \u2202pr \u2202t (23) where \u02c6 \u03b1 is the shape factor [1/m2] of Warren and Root (1963), subscript f indicates fracture, and subscript r indicates matrix (rock). The matrix equation does not involve a spatial gradient of pressure, nor a matching of pressure and flux at the boundary, but simply a difference between the fracture and matrix pressure (i.e., the mass transfer coefficient approximation often used for heat transfer across thin films). This behavior is sometimes referred to in the petroleum engineering literature as \u201csteady-state\u201d flow between the fracture and matrix (Da Prat, 1990), but it also represents one-dimensional diffusion in the matrix with a thin-film mass-transfer approximation between the fracture and matrix reservoirs, analogous to Newton\u2019s law of cooling. 
Substituting the permeability ki = ki0 \u0010 r rw \u0011\u2212\u03bai and porosity ni = ni0 \u0010 r rw \u0011\u2212\u03b7i (i \u2208{f, r}), then converting to dimensionless form using an analogous approach to Warren and Root (1963), where \u03c9 = nf0cf/ (nr0cr + nf0cf) is the dimensionless fracture storage coefficient and \u03bb = \u02c6 \u03b1krr2 w/kf is the dimensionless interporosity exchange coefficient. Finally, taking the Laplace transform of both equations results in the pair of ordinary differential equations \u0014d2\u00af pfD dr2 D + m \u2212\u03baf rD d\u00af pfD drD \u0015 r\u2212\u03baf = (1 \u2212\u03c9)r\u2212\u03b7r D \u00af pmDs + \u03c9r\u2212\u03b7f D \u00af pfDs \u03bb (\u00af pfD \u2212\u00af prD) r\u2212\u03bar D = (1 \u2212\u03c9)r\u2212\u03b7r D \u00af prDs. (24) Solving for matrix pressure in the matrix equation, \u00af prD = \u00af pfD\u03bbr\u2212\u03bar D / \u0002 (1 \u2212\u03c9)sr\u2212\u03b7r D + \u03bbr\u2212\u03bar D \u0003 , and substituting this into the fracture equation leads to a single equation solely in terms of dimensionless 10 \fFig. 5 Type-I flowrate solution at borehole (left) and Type-II solution for pressure (\u03c3 = 0, right), for m = 1 (cylindrical). Line color indicates \u03bb; line type indicates \u03c9. Laplace-domain fracture pressure \u0014d2\u00af pfD dr2 D + m \u2212\u03baf rD d\u00af pfD drD \u0015 r\u2212\u03baf = r\u2212\u03b7r D \u00af pfD ( (1 \u2212\u03c9)sr\u2212\u03bar D \u03bb (1 \u2212\u03c9)sr\u2212\u03b7r D + \u03bbr\u2212\u03bar D ) + \u03c9r\u2212\u03b7f D \u00af pfDs. (25) To force the term in curly brackets in (25) to be independent of rD, \u03bar = \u03b7r is assumed. Setting \u03bar and \u03b7r equal to \u03b7f allows rD and \u00af pfD to be similar form to previous solutions. 
Simplifying the subsequent notation \u03baf \u2192\u03ba, \u03b7r \u2192\u03b7, and \u00af pfD \u2192\u00af pD results in d2\u00af pD dr2 D + m \u2212\u03ba rD d\u00af pD drD = r\u03ba\u2212\u03b7 D \u00af pD \u0014 (1 \u2212\u03c9)s\u03bb (1 \u2212\u03c9)s + \u03bb + \u03c9s \u0015 , (26) which is the same form as (9). This solution corresponds to the same scaled Bessel equation, with only the definition of \u03b2 changing to \u03b2W R = s\u0014 \u03bb \u03bb/(1 \u2212\u03c9) + s + \u03c9 \u0015 s \u03b32 . (27) Any more general spatial behavior of matrix properties (e.g., \u03b7r \u0338= \u03bar) would not be solvable with the same approach. This limitation still makes physical sense, as the the most important terms to vary with space are the fracture permeability and the matrix storage. Setting \u03ba = \u03b7 = 0 and m = 1 results in the Warren and Root (1963) solution. Figure 5 shows typical solution behaviors for the cylindrical (m = 1) case for Type-I and Type-II wellbore boundary conditions, for \u03b7 = 3 and \u03ba = 6. Figure 6 shows behavior from the \u201cmiddle\u201d curve in Figure 5 (\u03bb = 10\u22125 and \u03c9 = 10\u22124), for a range of porosity and permeability exponents similar to those shown in Warren and Root (1963), listed in the figure caption. 11 \fFig. 6 Type-I flowrate solution at borehole (left) and Type-II solution for pressure (\u03c3 = 0, right), for m = 1 (cylindrical). All curves are for \u03bb = 10\u22125 and \u03c9 = 10\u22124 (middle curves shown in Figure 5). Line color indicates \u03b7; line type indicates \u03ba/\u03b7. 4.2 Matrix Diffusion The matrix diffusion problem of Kazemi (1969) is more physically realistic (Aguilera, 1980; Da Prat, 1990), but it is typically solved numerically or via late-time approximations (De Swaan, 1976), rather than analytically like Warren and Root (1963). 
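Since the only change for the Warren and Root (1963) double-porosity case is the redefinition of beta in (27), the modification is one line of code. A sketch (our function names), with the two limiting single-porosity behaviors as checks:

```python
import math

def beta_warren_root(s, lam, omega, gamma):
    """Double-porosity beta of (27): the Laplace parameter s is scaled
    by the Warren-Root fracture-matrix transfer term before entering
    the modified Bessel argument."""
    bracket = lam / (lam / (1.0 - omega) + s) + omega
    return math.sqrt(bracket * s) / gamma

def beta_single(s, gamma):
    """Single-porosity beta = sqrt(s)/gamma from (12)."""
    return math.sqrt(s) / gamma
```

As lam grows large (instantaneous fracture-matrix equilibrium) beta_warren_root tends to the single-porosity sqrt(s)/gamma, while lam = 0 (no exchange) leaves only fracture storage, beta = sqrt(omega s)/gamma.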
The series approach of Kuhlman et al (2015) is used here to represent matrix diffusion in a single matrix continuum through the sum of an infinite series of Warren-Root matrix continua, and the infinite sum is then analytically summed. The generalization of (23) to multiple matrix continua starts with 1 rm \u2202 \u2202r \u0014kf \u00b5 \u2202pf \u2202r \u0015 = N X j=1 njcj \u2202pj \u2202t + nfcf \u2202pf \u2202t \u02c6 \u03b1jkj \u00b5 (pf \u2212pj) = njcj \u2202pj \u2202t j = 1, 2, . . . N, (28) where N is the number of matrix continua (one additional equation for each continuum). Similarly taking the Laplace transform of this set of equations, solving for \u00af pf, substituting the matrix equations into the fracture equation, and simplifying the notation leads to d2\u00af pD dr2 D + m \u2212\u03ba rD d\u00af pD drD = r\u03ba\u2212\u03b7 D \u00af pD\u03c9s(1 + \u00af g), (29) where \u00af g = N X j=1 \u02c6 \u03bejuj s + uj (30) is a matrix memory kernel (Haggerty and Gorelick, 1995), \u02c6 \u03bej is related to the storage properties of each matrix continuum (analogous to \u03c9 of Warren and Root (1963)), and uj is related to the interporosity flow coefficient of each matrix continuum (analogous to \u03bb of Warren and Root (1963)). The Laplacespace memory kernel approach is flexible, and is used elsewhere in hydrology and reservoir engineering (Herrera and Yates, 1977; Haggerty et al, 2000; Schumer et al, 2003). Equation (29) can be simplified to 12 \fWarren and Root (1963) with a particular choice of \u00af g and N = 1, and to the solution for a triple-porosity reservoir (Clossman, 1975) with a different choice of \u00af g and N = 2 (Kuhlman et al, 2015). When N \u2192\u221ein (30), the it is more convenient to specify the mean and variance of the parameter distributions than the individual parameters associated with each porosity. Several different distributions are possible (Haggerty and Gorelick, 1995). 
In the form presented by Kuhlman et al (2015), the parameters are specified as the infinite series uj = (2j \u22121)2\u03c02\u03bb 4(1 \u2212\u03c9) \u02c6 \u03bej = 8(1 \u2212\u03c9) (2j \u22121)2\u03c9\u03c02 j = 1, 2, . . . N \u2192\u221e (31) which leads to the Kazemi (1969) solution for matrix diffusion. The parameters \u03bb and \u03c9 have the same definitions as in Warren and Root (1963). Setting \u03ba = \u03b7 = 0 results in the solution of Kuhlman et al (2015). The new governing equation is the same form and the modified Bessel function solution, only requiring re-definition of \u03b2 as \u03b2KZ = v u u t \" N X j=1 \u03c9\u02c6 \u03bejuj uj + s + \u03c9 # s \u03b32 , N \u2192\u221e. (32) Substituting the definitions of u and \u02c6 \u03be from (31) and simplifying leads to \u03b2KZ = v u u t \" N X j=1 2\u03bb W 2 j \u03bb/(1 \u2212\u03c9) + s + \u03c9 # s \u03b32 , N \u2192\u221e, (33) where Wj = \u03c0(2j \u22121)/2. This is similar in form to (27) but the term in the denominator grows as the index increases, illustrating how the series solution approximates the Kazemi (1969) solution through an infinite series of modified Warren and Root (1963) matrix porosities. Further simplifying the approach of Kuhlman et al (2015), the infinite series in (33) can be evaluated in closed form using residue methods (Wolfram Research, Inc., 2021), resulting in \u03b2KZ = v u u t \"r \u03bb(1 \u2212\u03c9) s tanh r s(1 \u2212\u03c9) \u03bb ! + \u03c9 # s \u03b32 , (34) where tanh(\u00b7) is the hyperbolic tangent. This closed-form expression derived here is more accurate and numerically more efficient than truncating or accelerating the infinite series in (32), which is an improvement over the series presented in Kuhlman et al (2015) for graded or homogeneous domains. 
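The claim that the closed-form (34) replaces the truncated series can be verified numerically: partial sums of (33) approach the tanh expression monotonically from below (all terms are positive). A sketch with illustrative parameter values (our function names):

```python
import math

def beta_kazemi_series(s, lam, omega, gamma, N):
    """Truncated series form (33), with W_j = pi*(2j - 1)/2."""
    total = 0.0
    for j in range(1, N + 1):
        W2 = (math.pi * (2 * j - 1) / 2.0) ** 2
        total += 2.0 * lam / (W2 * lam / (1.0 - omega) + s)
    return math.sqrt((total + omega) * s) / gamma

def beta_kazemi_closed(s, lam, omega, gamma):
    """Closed-form (34): the infinite series summed analytically via tanh."""
    x = math.sqrt(s * (1.0 - omega) / lam)
    bracket = math.sqrt(lam * (1.0 - omega) / s) * math.tanh(x) + omega
    return math.sqrt(bracket * s) / gamma
```

The slow O(1/N) tail of the series is visible here: thousands of terms are needed for a few digits, which is why the closed form is both more accurate and cheaper to evaluate.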
Figure 7 illustrates the transition from the Warren and Root (1963) (N = 1) to the Kazemi (1969) series approximation for increasing terms (N = {2, 10, 100, 1000}, heavy colored solid lines) and the expression for the infinite sum (34) (heavy black dashed line) for flow to a specified flux (type-II, \u03c3 = 0) cylindrical (m = 1) borehole of constant material properties (\u03ba = \u03b7 = 0). The bounding Theis (1935) behavior is shown for the fracture and matrix compressibilities (thin red dashed lines). 5 Applications and Limitations A general converging radial flow solution for specified flowrate or specified wellhead pressure was derived for domains with power-law variability in porosity and permeability due to damage. The single-porosity version has already been presented by Doe (1991), and a solution for constant-pressure condition without wellbore storage was derived by Hayek et al (2018), but the specified-flowrate double-porosity solution with wellbore storage presented here is new. The infinite series approximation to Kazemi was summed analytically, resulting in a new closed-form expression of the series presented in Kuhlman et al (2015), which is an improvement for both graded and homogeneous properties. The newly developed analytical solutions are more general (i.e., several existing solutions are special cases of the new solution) and include more behaviors typical in well-test solutions (i.e., wellbore storage, positive skin, double porosity), 13 \fFig. 7 Type-II solution for pressure at source borehole (\u03c3 = 0), for m = 1 (cylindrical) for different number of terms. All curves are for \u03bb = 10\u22125, \u03c9 = 10\u22124, \u03ba = \u03b7 = 0. while still being straightforward and parsimonious (i.e., as few free parameters as possible) in their implementation. The basic flow solution assumes linear single-phase flow of a fluid in a slightly compressible formation. 
The double-porosity solution assumes the fractures are high permeability, with low storage capacity, while the matrix (i.e., intact rock between fractures) is high storage capacity with low permeability. These assumptions are representative for analytical solutions to subsurface porous media flow problems in the hydrology and petroleum engineering literature, and are shared by the solutions of Barker (1988), Doe (1991), Warren and Root (1963), Kazemi (1969), and Kuhlman et al (2015). To apply this analytical solution to observed data, either observed data would be transformed into dimensionless space, or the analytical solution could be transformed to dimensional space, then a parameter estimation routine would be used to minimize the model-data misfit, and possibly explore the uncertainty or uniqueness of the solution. The solution method developed to solve these solutions uses numerical inverse Laplace transforms and runs quickly enough to be used in parameter estimation (e.g., Monte Carlo methods that require hundreds of thousands of evaluations). The analytical solution might be of most use with parameter estimation to fit observations, but the non-uniqueness of the curves may make estimation of unique physical parameters difficult, without further physical or site-specific constraints. Realistically, the parameters in the Bessel equation may be estimable (i.e., \u03b1, \u03b2, \u03b3, and \u03bd defined in (12)), but without defining the flow dimension (m) or the relationship between the porosity and permeability exponents (\u03c4 = \u03ba/\u03b7), it may be difficult to identify all the parameters from data alone, since many the curves have similar shapes, unlike classical Type curves (Bourdet et al, 1989). 
14 \fAc borehole cross-sectional area m2 Am borehole cylindrical surface area m2 c bulk compressibility 1/Pa ft time variability \u2212 g gravitational acceleration m/s2 h hydraulic head m k permeability m2 Lc characteristic length (rw) m m dimension (D \u22121) \u2212 n porosity \u2212 p change in pressure Pa s Laplace transform parameter \u2212 Q volumetric flowrate m3/s r distance coordinate m rw borehole or excavation radius m \u02c6 \u03b1 Warren and Root (1963) shape factor 1/m2 \u03b7 porosity power-law exponent \u2212 \u03ba permeability power-law exponent \u2212 \u03c1 fluid density kg/m3 \u00b5 fluid viscosity Pa \u00b7 s Table 1 Physical Properties and Parameters pD scaled pressure p/pc tD scaled time tk0/n0cL2 c\u00b5 rD scaled distance r/Lc \u03bb interporosity exchange coefficient \u02c6 \u03b1krr2 w/kf \u03c3 wellbore storage coefficient Ac/(rwn0c\u03c1gAm) \u03c9 fracture storage coefficient nf0cf/(nr0cr + nf0cf) Table 2 Dimensionless Quantities Statements and Declarations Funding The author thanks the U.S. Department of Energy Office of Nuclear Energy\u2019s Spent Fuel and Waste Science and Technology program for funding. Conflicts of Interest The author has no competing interests to declare. Availability of Data and Material No data or materials were used by the author in the preparation of the manuscript. Code Availability The source code of Fortran and Python implementations of the program are available from the author upon request. Acknowledgments This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). 
The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish 15 \for reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan. The author thanks Tara LaForce from Sandia for technically reviewing the manuscript. 6 Appendix A: Wellbore Storage Boundary Condition The wellbore-storage boundary condition accounts for the storage in the finite borehole arising from the mass balance Qin \u2212Qout = Ac \u2202hw \u2202t . Qin [m3/s] is volumetric flow into the borehole from the formation, Qout is possibly time-variable flow out of the well through the pump (Q(t) [m3/s]), and \u2202hw \u2202t is the change in hydraulic head [m] (hw = pw \u03c1g + z) of water standing in the borehole through time, pw is change in pressure [Pa] of water in the borehole, \u03c1 is fluid density [kg/m3], z is an elevation datum [m], and g is gravitational acceleration [m/s2]. Ac is the cross-sectional surface area of the pipe, sphere or box providing storage (it may be a constant or a function of elevation); for a typical pipe, it becomes Ac = \u03c0r2 c, where rc is the radius of the casing where the water level is changing. The mass balance is then Amk0 \u00b5 \u2202p \u2202r \f \f \f \f r=rw \u2212Q(t) = Ac \u03c1g \u2202pw \u2202t , (35) where Am is the area of the borehole communicating with the formation. For the integer m considered here these are A0 = b2, A1 = 2\u03c0rwb, A2 = 4\u03c0r2 w (b is a length independent of the borehole radius). 
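The dimensionless wellbore storage coefficient listed in Table 2, sigma = Ac/(rw n0 c rho g Am), follows directly from the mass balance (35). A sketch for the cylindrical (m = 1) case, where Am = 2 pi rw b and Ac = pi rc^2 (all numerical values below are illustrative, not from the paper):

```python
import math

def sigma_wellbore_storage(r_c, r_w, b, n0, c, rho, g=9.81):
    """Dimensionless wellbore storage sigma = Ac / (rw * n0 * c * rho * g * Am)
    for a cylindrical (m = 1) completion: Ac = pi*rc^2, Am = 2*pi*rw*b.
    Units as in Table 1 (c in 1/Pa, rho in kg/m^3, lengths in m)."""
    A_c = math.pi * r_c ** 2
    A_m = 2.0 * math.pi * r_w * b
    return A_c / (r_w * n0 * c * rho * g * A_m)
```

Note the scaling: sigma grows with the square of the casing radius r_c (more standing water to fill), consistent with the bathtub limit sigma -> infinity and the formation-only limit sigma -> 0.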
Assuming the change in water level in the borehole (hw = pw/ (\u03c1g)) is equal to the change in formation water level (h = p/ (\u03c1g)), this can be converted into dimensionless form as \u2202pD \u2202rD \f \f \f \f rD=1 \u2212ft = \u03c3 \u2202pD \u2202t , (36) where \u03c3 = Ac/ (rwn0c\u03c1gAm) is a dimensionless ratio of formation to wellbore storage; \u03c3 \u21920 is an infinitesimally small well with only formation response, while \u03c3 \u2192\u221eis a well with no formation response (i.e., a bathtub). 7 Appendix B: Transformation of Modified Bessel Equation Following the approach of Bowman (1958), alternative forms of the Bessel equation are found, this approach is a simplification of the original approach of Lommel (1868). An analogous approach is applied here to \u201cback into\u201d the desired modified Bessel equation. The equation satisfied by the pair of functions y1 = x\u03b1I\u03bd (\u03b2x\u03b3) , y2 = x\u03b1K\u03bd (\u03b2x\u03b3) (37) is sought, where \u03b1, \u03b2, \u03b3, and \u03bd are constants. Using the substitutions \u03b6 = yx\u2212\u03b1 and \u03be = \u03b2x\u03b3 gives \u03b61 = I\u03bd (\u03be) and \u03b62 = K\u03bd (\u03be), which are the two solutions to the modified Bessel equation (DLMF, 2023, \u00a710.25.1), \u03be d d\u03be \u0012 \u03be d\u03b6 d\u03be \u0013 \u2212(\u03be2 + \u03bd)\u03b6 = 0. (38) Given \u03be d d\u03be \u0012 \u03be d\u03b6 d\u03be \u0013 = x \u03b32 d dx \u0012 x d\u03b6 dx \u0013 , (39) and x d dx \u0012 x d\u03b6 dx \u0013 = y\u2032\u2032 x\u03b1\u22122 \u2212(2\u03b1 \u22121) y\u2032 x\u03b1\u22121 + \u03b12y x\u03b1 , (40) the standard-form equation satisfied by y is y\u2032\u2032 + (1 \u22122\u03b1) y\u2032 + \u03b12y x\u03b1 \u2212 \u0012 \u03b22\u03b32x2\u03b3\u22122 \u2212\u03b12 \u2212\u03bd2\u03b32 x2 \u0013 y = 0. 
(41) 16 \fThis equation can be compared to the Laplace-space ordinary differential equation (9), allowing direct use of the product of powers and modified Bessel function (37) as solutions (13)." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02478v1.json b/abs_9K/test_abstract_short_2405.02478v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7257407242beb7b190451dc9da4c0313fbd266d5 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02478v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.02478v1", + "title": "Continuous Learned Primal Dual", + "abstract": "Neural ordinary differential equations (Neural ODEs) propose the idea that a\nsequence of layers in a neural network is just a discretisation of an ODE, and\nthus can instead be directly modelled by a parameterised ODE. This idea has had\nresounding success in the deep learning literature, with direct or indirect\ninfluence in many state of the art ideas, such as diffusion models or time\ndependant models. Recently, a continuous version of the U-net architecture has\nbeen proposed, showing increased performance over its discrete counterpart in\nmany imaging applications and wrapped with theoretical guarantees around its\nperformance and robustness. In this work, we explore the use of Neural ODEs for\nlearned inverse problems, in particular with the well-known Learned Primal Dual\nalgorithm, and apply it to computed tomography (CT) reconstruction.", + "authors": "Christina Runkel, Ander Biguri, Carola-Bibiane Sch\u00f6nlieb", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.IV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Neural ordinary differential equations (Neural ODEs) propose the idea that a\nsequence of layers in a neural network is just a discretisation of an ODE, and\nthus can instead be directly modelled by a parameterised ODE. 
This idea has had\nresounding success in the deep learning literature, with direct or indirect\ninfluence in many state of the art ideas, such as diffusion models or time\ndependant models. Recently, a continuous version of the U-net architecture has\nbeen proposed, showing increased performance over its discrete counterpart in\nmany imaging applications and wrapped with theoretical guarantees around its\nperformance and robustness. In this work, we explore the use of Neural ODEs for\nlearned inverse problems, in particular with the well-known Learned Primal Dual\nalgorithm, and apply it to computed tomography (CT) reconstruction.", + "main_content": "Introduction Computed Tomography (CT) is an ubiquitous imaging technique in modern medicine that allows for imaging of patients using X-rays. In brief, CT relies on measuring a series of images corresponding to the attenuation of X-rays by the object of interest (a human, in medicine), by rotating the X-ray source and detector around the patient, typically around a full circle. CT reconstruction thus refers to the problem of obtaining the image that produced the measurements, often called sinograms. This problem, mathematically modelled by the Radon transform (line integrals over a domain), is ill-posed, as, in general, breaks the three conditions that define a well-posed problem: Firstly the solution is not continuously dependant on the measurement, as small changes in the measured sinogram will represent large changes in the image. Secondly it has no unique solution in the general Preprint. 1 arXiv:2405.02478v1 [cs.LG] 3 May 2024 \fsense, particularly in undersampled or limited view tomography. Finally, under sufficient measurement noise, there may be no solution. 
This theoretical analysis has direct implications in medicine, as signal noise is directly related to X-ray dose, which ideally should be reduced as much as possible: X-rays are ionizing radiation, which leads to cell death and can increase the likelihood of cancer arising in living tissue. Similarly, the non-uniqueness can be an issue: while most standard CT is performed using full circular trajectories, clinical applications like breast tomosynthesis or image-guided surgery often have physical limitations on the scanning range of the CT machine, and thus inherently cannot acquire all the data required to ensure uniqueness of solutions. In practice, it is thus rare to reduce both the dose and the scanning range, as the resulting reconstruction is often unusable. But, if a sufficiently robust reconstruction method can be found, dose reduction becomes feasible. Classically (and very often, clinically) the method used to solve the CT reconstruction problem is the Filtered Backprojection (FBP) algorithm, an approximation of the inverse of the aforementioned Radon transform. As this method assumes continuous sampling with no noise, it performs sufficiently well under those conditions, but degrades rapidly with increased noise and undersampling. Other methods have been proposed, based on the variational regularization approach [1, 2], that, by using the physics of CT, iteratively solve the CT reconstruction problem, generally with much better performance against noise, particularly under appropriate choices of regularization, such as Total Variation. In recent years, these methods have been enhanced with data-driven methods, i.e. Machine Learning (ML). A variety of methods have been proposed for data-driven CT reconstruction, but in this work we focus on the Learned Primal Dual (LPD). The goal of this work is a robustness enhancement of learned methods, showcasing a proof of concept using LPD.
The motivation of this work is driven by Neural Ordinary Differential Equations (Neural ODEs), a way to interpret the typical convolutions and layers that convolutional neural networks (CNNs) are made of as a discretization of a continuous ODE. This continuation of the discrete layers produces better-performing networks that are provably robust to noise and have been shown to outperform their discrete counterparts in practical scenarios (see e.g. [3, 4]). Given that noise rejection is a key feature of a good CT reconstruction solver, this work proposes to bring together data-driven models and Neural ODEs to further enhance their performance. We propose the Continuous LPD (cLPD), an idea that, in principle, can be implemented in any other data-driven inverse problem. 2 Methods In this section we first introduce CT reconstruction, then the LPD algorithm and Neural ODEs. This leads to the novelty of this work, the cLPD and its architecture. 2.1 Variational formulation of reconstruction Mathematically, one can write the CT reconstruction problem as seeking to recover a function (the image) x : R3 → R from the measured projections, described by the Radon transform as y(ℓ) = ∫ℓ x(z) dz, ℓ ∈ L, where L represents the lines in R3 from the X-ray source to each detector, defined by the scanner geometry and rotation. This is often linearized and discretized as Ax = y + ẽ (1) where A represents the integral over the lines (often referred to as the forward operator), x is a vector representing the pixel values, y is a vector representing the measured sinogram values and ẽ is the noise or error, either from measurement or from the linearization.
To solve (1) in a robust manner, the variational regularization approach has found significant success in the literature, proposing the following optimization: x̂ = arg min_x D(y, Ax) + R(x) (2) where D measures the data fidelity between the measurement and the image estimate (most commonly the ℓ2 distance in CT, due to the characteristics of the noise) and R is a regularization function that promotes images of desired properties, also called a prior. The optimization literature has proposed many methods to solve (2), given particular choices of D and R. These methods have been shown to outperform the FBP algorithm under most conditions, given appropriate choices of functions and parameters. 2.2 Data-driven methods: Learned Primal Dual In recent years, neural networks (NNs) have been proposed to solve problems like the CT problem in (1). Many methods can be posed as a post-processing of a reconstruction, generally the FBP, as x̄ = N_θ(x̂) (3) with N_θ a NN parametrized by θ. While these produce solutions x̄ of high quality, the solutions are not guaranteed to have small D(y, Ax̄) (i.e. to fit the measured data), as there is no such constraint in N_θ. Thus data-driven model-based methods were proposed in the literature, attempting to combine data-driven methods with algorithms that make explicit use of the physics knowledge of the model, A. While several methods exist, in this work we focus on the LPD [5]. LPD was formulated starting from the Primal Dual Hybrid Gradient (PDHG) algorithm [6], which solves (2) using classical methods and can be expressed as in Algorithm 1, with an appropriate initialization of x0 (e.g. the FBP) and z0 (often zero). This algorithm uses proximal operators, defined as prox_{τF}(x) = arg min_u F(u) + (τ/2)‖u − x‖²₂. (4) Algorithm 1 Primal Dual Hybrid Gradient. Input: x0, z0, σ > 0, τ > 0, ρ ∈ [0, 1]. 1: for i = 1, ... do 2: z_{i+1} ← prox_{σD}(z_i + σA x̄_i) 3: x_{i+1} ← prox_{τR}(x_i − τAᵀ z_{i+1}) 4: x̄_{i+1} ← x_{i+1} + ρ(x_{i+1} − x_i) 5: end for LPD thus proposes to replace these proximal operators, and also the update step for x̄_{i+1}, with NNs, leading to: Algorithm 2 Learned Primal Dual. Input: x0, z0. 1: for i = 1, ..., I do 2: z_{i+1} ← Γ_{θ_i^d}(z_i, A x̄_i, y) 3: x_{i+1} ← Λ_{θ_i^p}(x_i, Aᵀ z_{i+1}) 4: end for In Algorithm 2, the number of iterations I is predefined (hence the common name of unrolled method), and the networks Γ_{θ_i^d} and Λ_{θ_i^p} are therefore defined by a different set of parameters θ_i in each iteration. In practice z_{i+1} and x_{i+1} are often composed of several channels, but only one of them is used to update the respective variable. Interestingly, these primal and dual networks require small parametrizations; as the intuition of replacing a proximal suggests, they do not need to represent a complex transform, only a small step change. In comparison to a typical NN, LPD uses the operator A, thus limiting the results to the space of valid images. It has been shown in simulated studies that LPD outperforms most well-known classical variational methods and post-processing NN methods of the form of (3), leading to many variations being proposed [7–9]. It is important to note that while LPD has the form of a classical optimizer with convergence guarantees, such properties are lost once it is parametrized with a network [10]. It is more appropriate to see the entirety of Algorithm 2 as a single network LPD_θ(y). The LPD is finally trained on a set of training data T_j = (x_j, y_j), j ∈ [1, J], by minimizing the empirical loss L(θ) = (1/J) Σ_{j=1}^{J} ‖LPD_θ(y_j) − x_j‖, (5) using typical minimization algorithms from the machine learning literature, such as Adam.
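As a concrete check of the proximal operator definition in Eq. (4), the sketch below evaluates its closed form for a quadratic regularizer. This is a minimal NumPy toy of our own, not the paper's code; `prox_l2` and `objective` are illustrative names, and note that Eq. (4) as written places τ on the quadratic term.

```python
import numpy as np

# Proximal operator per Eq. (4): prox_{tau F}(x) = argmin_u F(u) + (tau/2)||u - x||^2.
# For F(u) = (lam/2)||u||^2, setting the gradient lam*u + tau*(u - x) to zero
# gives the closed form u* = tau * x / (lam + tau).

def prox_l2(x, lam, tau):
    """Proximal map of F(u) = (lam/2)||u||^2 under Eq. (4)'s convention."""
    return tau * x / (lam + tau)

def objective(u, x, lam, tau):
    """The function minimized inside the prox definition."""
    return 0.5 * lam * np.sum(u**2) + 0.5 * tau * np.sum((u - x)**2)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
lam, tau = 0.5, 2.0
u_star = prox_l2(x, lam, tau)

# The closed form should beat small random perturbations of itself.
for _ in range(100):
    u_pert = u_star + 1e-3 * rng.normal(size=5)
    assert objective(u_star, x, lam, tau) <= objective(u_pert, x, lam, tau)
```

In PDHG (Algorithm 1), such closed-form proximal maps are what each iteration evaluates; LPD replaces them with small learned networks.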
For the purpose of this work, however, we are interested in how Γ_{θ_i^d} and Λ_{θ_i^p} are constructed. As is standard in imaging applications, these are constructed as a series of discrete convolutions. While this method of constructing NNs is overwhelmingly the standard, there is evidence that one can obtain better results if these convolutions are modelled by a continuous function, rather than a discrete operation. This continuous representation was proposed and named Neural Ordinary Differential Equations, or Neural ODEs [11]. 2.3 Neural Ordinary Differential Equations Neural ordinary differential equations (Neural ODEs) as introduced in [11] are based on the fact that neural networks like ResNet [12] can be seen as an Euler discretisation of a continuous transformation [13–15]. Every discrete layer thus computes x_{t+1} = x_t + f_{θ_t}(x_t) for a parametrised function f_{θ_t} and an input x_t. By reducing the size of the steps, i.e., adding more layers to the network, in the limit the network f_θ describes the dynamics of the hidden units as the following ordinary differential equation (ODE): ∂x(t)/∂t = f_θ(x(t), t). (6) The output of the network x(T) can thus be computed by solving the ODE initial value problem at time T via standard ODE solvers. Computing the backward step to obtain gradients during training requires backpropagating through the solver. As this is memory-inefficient, due to the solver possibly needing hundreds of function evaluations, Chen et al. [11] introduced the adjoint method. The adjoint method treats the solver as a black box and uses a second ODE going backward in time, starting from the gradients of the original output with respect to the loss function. Using automatic differentiation, the gradients with respect to the parameters can be calculated in a memory-efficient way.
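The Euler view of residual layers can be checked numerically. The toy below is our own illustration with a scalar linear field a·x in place of a learned f_θ: stacking updates x ← x + h·f(x) is explicit Euler integration, and refining the step size drives the result toward the exact ODE flow x(T) = x0·exp(aT).

```python
import numpy as np

# A stack of residual layers x_{t+1} = x_t + h * f(x_t) is an explicit Euler
# discretization of dx/dt = f(x). For f(x) = a * x the exact flow is known,
# so we can watch the discrete network converge to it as layers are added.

def euler_flow(x0, a, T, n_layers):
    h = T / n_layers          # step size shrinks as depth grows
    x = x0
    for _ in range(n_layers):
        x = x + h * (a * x)   # one "residual layer"
    return x

x0, a, T = 1.0, -0.7, 1.0
exact = x0 * np.exp(a * T)
coarse = euler_flow(x0, a, T, n_layers=4)
fine = euler_flow(x0, a, T, n_layers=4096)

assert abs(fine - exact) < abs(coarse - exact)  # more layers, smaller error
assert abs(fine - exact) < 1e-3
```

The continuous blocks in the cLPD exploit exactly this limit, with an adaptive ODE solver taking the place of the fixed layer stack.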
Neural ODEs are known to be memory- and parameter-efficient and robust to noise while providing theoretical underpinnings from the theory of ordinary differential equations. 2.4 The Continuous Learned Primal Dual The aim of the continuous learned primal dual algorithm (cLPD) is to combine the advantages of the classical learned primal dual algorithm with those of neural ODEs. The continuous learned primal dual therefore replaces the discrete convolutional blocks in both networks Γ_{θ_i^d} and Λ_{θ_i^p} by continuous neural ODE blocks Γ^c_{θ_i^d} and Λ^c_{θ_i^p} (see Algorithm 3). As neural ODEs have proven to be more robust to noise, the noise inherent in the data can be handled better, a feature that is particularly useful for CT reconstruction. 2.5 Network Architecture The network architecture for both the dual and primal iterates of the continuous learned primal dual algorithm is shown in Figure 1. We define the ODE by using five convolutional layers with parametric ReLU (PReLU) activation functions for the primal and dual iterates. Algorithm 3 Continuous Learned Primal Dual. Input: x0, z0. 1: for i = 1, ..., I do 2: z_{i+1} ← Γ^c_{θ_i^d}(z_i, A x̄_i, y) 3: x_{i+1} ← Λ^c_{θ_i^p}(x_i, Aᵀ z_{i+1}) 4: end for (a) Dual iterates, Γ_{θ_i^d}. (b) Primal iterates, Λ_{θ_i^p}. Figure 1: Network architecture for both the dual and primal iterates of the (continuous) learned primal dual algorithm. Each of the rectangles describes a convolution and an ODE for the LPD and cLPD, respectively. The number of input channels is denoted below the box and the kernel size is specified in the middle of the rectangle. 3 Experimental Setup To emphasise the advantages of our continuous learned primal dual algorithm, we conduct experiments on the following radiation doses and geometries: 1. Clinical setting: We first test the clinical setting, i.e., a clinical radiation dose on a full circle. 2.
Reduced dose setting: An ongoing challenge in CT reconstruction is minimising the radiation dose per patient. This can be achieved by either reducing the X-ray dose or decreasing the number of angles that are measured. We thus test the following experimental settings: a) Extreme low dose, full circle: Reducing the X-ray dose while measuring over the full circle. b) Sparse angle, clinical dose: Reducing the number of angles to measure while keeping the clinical X-ray dose. c) Sparse angle, extreme low dose: Reducing both the number of angles to measure and the X-ray dose. 3. Restricted setting: Clinicians are also interested in a restricted setting. In this setting, it is not possible to measure the full circle but just up to a very limited angle, which drastically increases the difficulty of reconstructing images. a) Limited angle, clinical dose: We first test the restricted setting on a clinical X-ray dose. b) Limited angle, extreme low dose: Additionally, we then try the limited angle setting on an extreme low X-ray dose. In the following, we analyse the results for the experimental settings above for our continuous learned primal dual algorithm, the standard learned primal dual with discrete layers, and filtered backprojection as comparison to a classical method. We train both the cLPD and LPD with a batch size of 2, a learning rate of 10^-4 and the original LPD parameters used in [5] for 100 epochs on the LIDC-IDRI dataset [16] using the Adam optimiser [17]. 4 Experimental results This section details the results of the experiments. 1. Clinical setting: For the clinical setting, the continuous learned primal dual performs on par with the classical learned primal dual algorithm. The structural similarity index measure (SSIM) for the standard LPD algorithm is slightly higher than for the continuous version, while in terms of the peak signal-to-noise ratio (PSNR) cLPD outperforms LPD.
The cLPD and LPD perform significantly better than FBP, both in terms of image quality metrics as well as visual results (see Subfigure 2a). 2. Reduced dose setting: To analyse the effect that a reduced dose has on the proposed algorithm, we additionally test an extreme low dose and sparse angle geometry. a) Extreme low dose, full circle: When decreasing the dose while measuring over the full circle, similarly to the clinical setting, cLPD and LPD perform on par, while the average SSIM and PSNR decrease from 0.61 to 0.58 and 34 to 32, respectively. Comparing both algorithms to FBP, FBP is not able to handle the increased noise level (see visual results in Subfigure 2b) while both cLPD and LPD reconstruct denoised images. b) Sparse angle, clinical dose: Reducing the number of angles to measure while keeping the X-ray dose at a clinical dose, the continuous version of the learned primal dual outperforms the classical LPD and FBP both in terms of SSIM and PSNR (see Table 1). Visually, the FBP is not able to reconstruct any details of the image while cLPD and LPD are able to preserve most of the features. c) Sparse angle, extreme low dose: Further reducing the dose by decreasing both the X-ray dose and the number of angles to measure, cLPD outperforms both the classical learned primal dual and the FBP algorithm in terms of SSIM, PSNR and visual results. With increasing amounts of noise, the reconstructions of both cLPD and LPD get more blurry and less detailed, while the FBP algorithm produces noisy results without any high-level features. 3. Restricted setting: Analysing a restricted setting, we obtain the following results: a) Limited angle, clinical dose: Firstly testing on a clinical dose, in the restricted setting our proposed continuous learned primal dual outperforms the classical learned primal dual and FBP to an even greater extent.
While the average SSIM and PSNR of the reconstructions produced by cLPD dropped by 0.09 and 5.26, respectively, compared to 2.c), the average SSIM and PSNR of the LPD reconstructions decreased by 0.13 and 7.01, respectively – highlighting the robustness of the cLPD algorithm to noise. The visual results in Subfigure 2e further highlight these advantages of the cLPD. Even in a restricted setting our method is able to preserve low-level features like the shape of the lungs while introducing barely any artifacts. The LPD and FBP algorithms, however, reconstruct artifact-heavy images that do not resemble the target reconstructions. b) Limited angle, extreme low dose: Secondly, testing on an extreme low dose, the performance gap between our cLPD algorithm and both the standard LPD and FBP persists. Similarly to the previous setting, the visual results (see Subfigure 2f) highlight the robustness to noise of the continuous version of the learned primal dual algorithm. (a) Visual results for clinical setting (1.). (b) Visual results for extreme low dose, full circle setting (2.a)). (c) Visual results for sparse angle, clinical dose setting (2.b)). (d) Visual results for sparse angle, extreme low dose setting (2.c)). (e) Visual results for limited angle, clinical dose setting (3.a)). (f) Visual results for limited angle, extreme low dose setting (3.b)). Figure 2: Visual results for a randomly picked image of the test set for all experimental settings. We show the results of our cLPD, the standard LPD, FBP and the target reconstruction from left to right. With increasing noise levels, our approach (cLPD) outperforms the LPD more and more significantly. Both cLPD and LPD outperform FBP in all experimental settings. In the case of a limited angle geometry, cLPD reconstructs artifact-free results while the standard LPD starts to blur. For high noise levels, and for the restricted setting especially, FBP is unsuitable as it introduces artifacts.
Table 1: Overview of mean structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR) and their standard deviations for the experimental settings described in Section 3. For experimental settings in which the noise level is comparatively low (1. and 2.a)), our proposed algorithm (cLPD) performs as well as its standard version (LPD). In these cases, both the cLPD and LPD outperform the classical FBP. With increasing noise levels, the advantages of Neural ODEs come into play and the cLPD outperforms both LPD and FBP (2.c)-3.b)). Experimental setting | Algorithm | Mean SSIM (↑) | Mean PSNR (↑). 1. Full angle, clinical dose: cLPD 0.6140 ± 0.1263, 34.1787 ± 3.1489; LPD 0.6157 ± 0.1245, 34.1159 ± 3.1387; FBP 0.0602 ± 0.0207, 16.8117 ± 1.8200. 2.a) Full angle, extremely low dose: cLPD 0.5773 ± 0.1287, 32.3713 ± 2.5793; LPD 0.5790 ± 0.1251, 32.2299 ± 2.5228; FBP 0.0213 ± 0.0086, 11.2341 ± 1.8579. 2.b) Sparse angle, clinical dose: cLPD 0.5627 ± 0.1269, 31.2625 ± 2.2977; LPD 0.5571 ± 0.1169, 30.8406 ± 2.1796; FBP 0.0108 ± 0.0044, 8.1548 ± 1.7939. 2.c) Sparse angle, extremely low dose: cLPD 0.5316 ± 0.1232, 29.6664 ± 1.9520; LPD 0.5265 ± 0.1185, 29.2588 ± 1.8851; FBP 0.0024 ± 0.0012, 2.6769 ± 1.8622. 3.a) Limited angle, clinical dose: cLPD 0.4465 ± 0.1099, 24.4042 ± 1.6947; LPD 0.3951 ± 0.0937, 22.4654 ± 1.7108; FBP 0.0103 ± 0.0045, 7.7079 ± 1.8264. 3.b) Limited angle, extremely low dose: cLPD 0.4371 ± 0.1081, 24.0181 ± 1.6651; LPD 0.3823 ± 0.0933, 22.2501 ± 1.6938; FBP 0.0037 ± 0.0018, 3.4242 ± 2.0003. 5 Discussion and" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02696v1.json b/abs_9K/test_abstract_short_2405.02696v1.json new file mode 100644 index 0000000000000000000000000000000000000000..ba7bc6e4b19f95789fba26f31b7978230cc193b5 --- /dev/null +++
b/abs_9K/test_abstract_short_2405.02696v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.02696v1", + "title": "DiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model", + "abstract": "Latent Diffusion Models (LDMs) enable a wide range of applications but raise\nethical concerns regarding illegal utilization.Adding watermarks to generative\nmodel outputs is a vital technique employed for copyright tracking and\nmitigating potential risks associated with AI-generated content. However,\npost-hoc watermarking techniques are susceptible to evasion. Existing\nwatermarking methods for LDMs can only embed fixed messages. Watermark message\nalteration requires model retraining. The stability of the watermark is\ninfluenced by model updates and iterations. Furthermore, the current\nreconstruction-based watermark removal techniques utilizing variational\nautoencoders (VAE) and diffusion models have the capability to remove a\nsignificant portion of watermarks. Therefore, we propose a novel technique\ncalled DiffuseTrace. The goal is to embed invisible watermarks in all generated\nimages for future detection semantically. The method establishes a unified\nrepresentation of the initial latent variables and the watermark information\nthrough training an encoder-decoder model. The watermark information is\nembedded into the initial latent variables through the encoder and integrated\ninto the sampling process. The watermark information is extracted by reversing\nthe diffusion process and utilizing the decoder. DiffuseTrace does not rely on\nfine-tuning of the diffusion model components. The watermark is embedded into\nthe image space semantically without compromising image quality. The\nencoder-decoder can be utilized as a plug-in in arbitrary diffusion models. 
We\nvalidate through experiments the effectiveness and flexibility of DiffuseTrace.\nDiffuseTrace holds an unprecedented advantage in combating the latest attacks\nbased on variational autoencoders and Diffusion Models.", + "authors": "Liangqi Lei, Keke Gai, Jing Yu, Liehuang Zhu", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Latent Diffusion Models (LDMs) enable a wide range of applications but raise\nethical concerns regarding illegal utilization.Adding watermarks to generative\nmodel outputs is a vital technique employed for copyright tracking and\nmitigating potential risks associated with AI-generated content. However,\npost-hoc watermarking techniques are susceptible to evasion. Existing\nwatermarking methods for LDMs can only embed fixed messages. Watermark message\nalteration requires model retraining. The stability of the watermark is\ninfluenced by model updates and iterations. Furthermore, the current\nreconstruction-based watermark removal techniques utilizing variational\nautoencoders (VAE) and diffusion models have the capability to remove a\nsignificant portion of watermarks. Therefore, we propose a novel technique\ncalled DiffuseTrace. The goal is to embed invisible watermarks in all generated\nimages for future detection semantically. The method establishes a unified\nrepresentation of the initial latent variables and the watermark information\nthrough training an encoder-decoder model. The watermark information is\nembedded into the initial latent variables through the encoder and integrated\ninto the sampling process. The watermark information is extracted by reversing\nthe diffusion process and utilizing the decoder. DiffuseTrace does not rely on\nfine-tuning of the diffusion model components. The watermark is embedded into\nthe image space semantically without compromising image quality. 
The\nencoder-decoder can be utilized as a plug-in in arbitrary diffusion models. We\nvalidate through experiments the effectiveness and flexibility of DiffuseTrace.\nDiffuseTrace holds an unprecedented advantage in combating the latest attacks\nbased on variational autoencoders and Diffusion Models.", + "main_content": "INTRODUCTION The strides made in latent diffusion models [10, 17, 28, 35] have substantially elevated the capacity for synthesizing photorealistic content in image generation and profoundly impact text-to-image [32, 46], image editing [5, 24], in-painting [21, 31], super-resolution [12, 33], content creation [26, 27] and video synthesis [4, 16]. Relevant commercial applications are becoming mainstream creative tools for designers, artists, and the general public. However, contemporary text-to-image generation models, such as Stable Diffusion and Midjourney, can generate a multitude of novel images as well as convincing depictions of fabricated events for malicious purposes. Criminals might utilize LDMs to produce insulting or offensive images, which shall be disseminated to spread rumors and pose a substantial threat to societal security. The hazards of deepfakes, impersonation and copyright infringement are also prevalent issues associated with current generative models. The potential illicit use of text-to-image models has spurred research for embedding watermarks in model outputs. Watermarked images contain signals imperceptible to humans but are marked as machine-generated. Copyright information of the model and the identity information of the model users will be embedded into images. Extracting watermarks from AI-generated images enables the detection of model copyrights and tracing unauthorized users. False and harmful images can be promptly identified and removed from platforms and unauthorized users of the model can be traced through the extraction of image information, which mitigates the potential harm caused by AI-generated content. 
Existing research on image watermarking has tended towards post-processing solutions. The core concept involves embedding the watermark into the image with minimal adjustments, emphasizing subtlety and intricacy. For instance, the watermark implemented in Stable Diffusion [8] operates by altering a particular Fourier frequency within the generated image. This type of watermark faces a key trade-off between watermark robustness and image quality. For diffusion model watermarks, some researchers have proposed embedding fixed messages into generated images by fine-tuning diffusion model components like the U-Net [30] or variational autoencoders. However, this approach only allows embedding fixed information into the generated images, requiring re-finetuning of the diffusion model when the embedded information needs to be changed. Moreover, if the model owner distributes the diffusion model to a large number of users, each distributed model must be fine-tuned separately, resulting in significant consumption of computational resources and time. Additionally, when the model requires iterative updates, the stability of the watermark becomes unreliable due to adjustments in model parameters. Recent studies [48] have demonstrated that methods involving the random addition of noise to images to disrupt watermarks, followed by image reconstruction using diffusion models, can effectively remove a significant portion of post-processing watermarking schemes. This poses new challenges to the robustness of watermarking. To address the aforementioned challenges and achieve high extraction accuracy, robustness and image quality, we propose a new watermarking scheme called DiffuseTrace. DiffuseTrace differs fundamentally from previous watermarking methods.
DiffuseTrace embeds the watermark into the latent variables of the model, subtly influencing the sampling phase of the model. The watermark is embedded at the semantic level prior to image generation, without any post-processing of the generated images. We specialize in a watermarking scheme that can be seamlessly integrated into a wide range of latent diffusion models. DiffuseTrace can serve as a plug-and-play solution across various diffusion models. Taking practical application scenarios into account, we categorize the roles involved in model usage into two types: model producers and model users. Model producers train and possess all pre-trained models, including diffusion models, watermark encoders, watermark decoders. Model producers assign specific binary identity information to each user. By providing APIs, model producers offer generative model services to users. When malicious images resembling model-generated content or images suspected of copyright infringement appear on art platforms, news outlets or other sharing platforms, model producers can trace illegal usage or infringement-involved users by extracting watermark information from the generated images. For watermark modules, we control the distribution of the watermark through an encoder and dynamically allocate a watermark close to the standard normal distribution for each user. Since the data distribution and sampling process remain consistent with the original model, the generated images can achieve transparent watermark embedding with semantic consistency. Human inspection cannot distinguish watermark samples from random samples. Through transforming images into latent variables and inversely diffusing them to obtain the initial latent variables, the watermark can be decoded through a decoder. 
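To make the idea of unifying watermark bits with the initial latent concrete, here is a deliberately simplified sign-based toy of our own devising, not DiffuseTrace's learned encoder-decoder: each bit fixes the sign of one Gaussian coordinate, so over uniformly random messages each coordinate is still marginally standard normal, and decoding reduces to reading signs from the recovered latent.

```python
import numpy as np

# Toy stand-in for embedding an n-bit message into an initial latent:
# sample half-normal magnitudes and let each bit choose the sign.
# Marginalized over random messages, each coordinate is N(0, 1).

def embed(bits, rng):
    """Map a bit list to a latent vector whose signs carry the message."""
    magnitudes = np.abs(rng.normal(size=len(bits)))
    signs = np.where(np.array(bits) == 1, 1.0, -1.0)
    return magnitudes * signs

def decode(latent):
    """Recover the message by reading the sign of each coordinate."""
    return (latent > 0).astype(int).tolist()

rng = np.random.default_rng(42)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
latent = embed(bits, rng)

assert decode(latent) == bits
```

Sign decoding only fails on a coordinate if a perturbation flips its sign, which loosely mimics why decoding after (imperfect) diffusion inversion can still succeed; the real scheme learns this mapping jointly with a decoder instead of hard-coding it.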
Considering the diverse processing stages in the flow of image data as well as the potential bias introduced by the inverse diffusion of the diffusion model, we employ adversarial training and fine-tuned the watermark decoder to enhance the robustness of watermark extraction. The primary contributions of this work are outlined as follows: (1) Among diffusion watermarking schemes based on initial hidden variables, DiffuseTrace is the first scheme that embeds robust multi-bit watermarks. DiffuseTrace is embedded at the semantic level of diffusion-model-generated images without relying on the trade-off between image quality and watermark robustness. It exhibits evident advantages over post-processing methods in terms of image quality. (2) Compared to the state-of-the-art post-processing watermarking and diffusion model watermarking schemes, DiffuseTrace not only exhibits significant performance in common image processing but also shows remarkable robustness against attacks based on variational autoencoders and diffusion models. The paper provides a thorough analysis at the theoretical level regarding the superior watermark robustness of DiffuseTrace. (3) The proposed universal watermark module for latent diffusion models can be seamlessly integrated across different versions of diffusion models. The watermark message of DiffuseTrace can be flexibly modified without being affected by model fine-tuning or model update iterations. Our code is open source: https://anonymous.4open.science/r/DiffuseTrace6DED. Paper Organization. The overview of this paper is organized as follows: The basic introduction of DiffuseTrace is shown in Section 1. The background of DiffuseTrace are summarized in Section 2. In Section 3, we introduce the problem formulation for DiffuseTrace. In Section 4, we demonstrate DiffuseTrace in detail. In Section 5, We have provided a detailed theoretical exposition and security analysis of the proposed scheme. 
In Section 6, we summarize and analyze the experimental results. In Section 7, we present the related work on watermarking for LDMs. In Section 8, we summarize the DiffuseTrace watermarking scheme. 2 BACKGROUND 2.1 Diffusion Model based Image Generation Diffusion models progressively transition a sample x from the true data distribution p(x) to stochastic noise and adeptly reverse this process through iterative denoising of the noisy data [17]. A typical diffusion model framework involves a forward process that progressively diffuses the data distribution p(x, c) towards the noise distribution p_t(z_t, c) for t ∈ (0, T], where c denotes the conditional context. The conditional Gaussian distribution of the diffusion process can be formulated as: p_t(z_t | x) = N(z_t | α_t x, σ_t² I), (1) where α_t, σ_t ∈ R+. α_t and σ_t are the strengths of the signal and the noise respectively, decided by a noise scheduler. z_t = α_t x + σ_t ε is the noisy data. It has been proved that there exists a denoising process with the same marginal distribution as the forward process [35]. The estimate of the score can be derived as: ∇_{z_t} log p_t(z_t, c) ≈ (α_t x_θ(z_t, c) − z_t) / σ_t².
(2) Figure 1: Methods of DiffuseTrace. (Step1) Train the DiffuseTrace Encoder through resampling methods to generate latent variables approximately following a standard normal distribution, and jointly train the decoder to decode the information. M: random n-bit messages. (Step2) Keep the encoder fixed and train the decoder. Randomly select prompts for the diffusion model denoising process to generate images. Decode the images after passing through the attack layer to obtain latent variables and execute diffusion inversion to extract the initial latent variables. Compare the message decoded from the initial latent variables with the initial message to build a reconstruction loss for fine-tuning the decoder. (Step3) Assign a watermark message w and generate initial watermarked latent variables by the encoder to generate images. (Step4) Extract the watermark message after inverting the images and trace the source through statistical testing. Specifically, given a noise-predicting diffusion model parameterized by θ, which is typically structured as a U-Net [30], training can be
formulated as the following noise prediction problem:

$$\min_\theta \; \mathbb{E}_{x, t, \epsilon} \left\| \hat{\epsilon}_\theta(\alpha_t x + \sigma_t \epsilon, t) - \epsilon \right\|_2^2, \tag{3}$$

where $t$ refers to the time step and the ground-truth noise $\epsilon \sim \mathcal{N}(0, I)$ is standard Gaussian. Recently, LDMs [28] have streamlined inference by performing the denoising process within the latent space of a pre-trained variational autoencoder (VAE) [6]; the diffusion model reconstructs images through this latent state. During the inference phase, stable diffusion models take both a latent seed and a text prompt as input. The U-Net progressively removes noise from the random latent image representation guided by text embeddings, and the noise residual from the U-Net is used in conjunction with a scheduler algorithm to produce a denoised latent. When synthesizing images, a crucial technique, classifier-free guidance, is adopted to enhance the quality of generated images:

$$\tilde{\epsilon}_\theta(t, z_t, c) = w \, \hat{\epsilon}_\theta(t, z_t, c) + (1 - w) \, \hat{\epsilon}_\theta(t, z_t, \phi), \tag{4}$$

where the guidance scale $w$ can be modified to regulate the influence of conditional information on the produced images, striking a balance between quality and diversity, and $\hat{\epsilon}_\theta(t, z_t, \phi)$ denotes the unconditional prediction obtained with an empty prompt.
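The guidance combination in Eq. (4) is a simple linear blend of the two noise predictions. A minimal sketch, assuming the common $w \, \epsilon_c + (1 - w) \, \epsilon_\phi$ form; the function name `cfg_combine` is ours:

```python
import numpy as np

def cfg_combine(eps_cond: np.ndarray, eps_uncond: np.ndarray, w: float) -> np.ndarray:
    """Classifier-free guidance blend, Eq. (4): w = 1 recovers the purely
    conditional prediction; w > 1 extrapolates away from the unconditional one."""
    return w * eps_cond + (1.0 - w) * eps_uncond

eps_c = np.array([1.0, 2.0])   # conditional noise prediction (toy values)
eps_u = np.array([0.0, 0.0])   # unconditional (empty-prompt) prediction
guided = cfg_combine(eps_c, eps_u, w=7.5)
```

With $w > 1$ the blend pushes the prediction further in the conditional direction, which is why large guidance scales trade diversity for prompt adherence.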
2.2 Diffusion Denoising and Inversion

The well-trained diffusion model leverages a diverse range of samplers to generate samples from noise and execute the denoising procedure. A notable denoising method is the Denoising Diffusion Implicit Model (DDIM) [34], which stands out for its efficiency and deterministic output: DDIM accomplishes denoising in significantly fewer steps, reproducing the image $x_0$ with, e.g., 50 inference steps compared to the standard 1000-step process. Formally, at each denoising step $t$, DDIM utilizes a learned noise predictor $\epsilon_\theta$ to estimate the noise $\epsilon_\theta(x_t)$ added to $x_0$, which leads to the estimation of $x_0$ as follows:

$$\hat{x}_0 = \frac{x_t - \sqrt{1 - \bar{\alpha}_t} \, \epsilon_\theta(x_t)}{\sqrt{\bar{\alpha}_t}}. \tag{5}$$

The estimated noise $\epsilon_\theta(x_t)$ is then recombined with the approximated $\hat{x}_0$ to compute $x_{t-1}$:

$$x_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \, \hat{x}_0 + \sqrt{1 - \bar{\alpha}_{t-1}} \, \epsilon_\theta(x_t). \tag{6}$$

DDIM also incorporates an inversion mechanism [10], which facilitates the reconstruction of the noise representation $x_T$ from an image $x_0$. The recovered $x_T$ should be mappable back to an image approximating $x_0$.
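The DDIM update of Eqs. (5) and (6), and the inversion step it gives rise to (formalized in Eq. (7) below), can be sketched with NumPy. This is an illustrative sketch on toy arrays, not the paper's code; function names are ours. The key point it demonstrates: with a fixed noise prediction, the denoising and inversion maps are exact inverses of each other.

```python
import numpy as np

def estimate_x0(x_t, eps_pred, a_bar_t):
    """Eq. (5): clean-image estimate from the current noisy state."""
    return (x_t - np.sqrt(1.0 - a_bar_t) * eps_pred) / np.sqrt(a_bar_t)

def ddim_denoise_step(x_t, eps_pred, a_bar_t, a_bar_prev):
    """Eq. (6): deterministic DDIM update toward the clean image."""
    return (np.sqrt(a_bar_prev) * estimate_x0(x_t, eps_pred, a_bar_t)
            + np.sqrt(1.0 - a_bar_prev) * eps_pred)

def ddim_invert_step(x_t, eps_pred, a_bar_t, a_bar_next):
    """Inversion step in the style of Eq. (7): push the state back toward
    noise, reusing the same noise prediction."""
    return (np.sqrt(a_bar_next) * estimate_x0(x_t, eps_pred, a_bar_t)
            + np.sqrt(1.0 - a_bar_next) * eps_pred)

x0 = np.array([1.0, -2.0, 0.5])
eps = np.array([0.3, -0.1, 0.2])
x_t = np.sqrt(0.6) * x0 + np.sqrt(0.4) * eps      # noisy state at alpha_bar = 0.6
x_up = ddim_invert_step(x_t, eps, 0.6, 0.3)       # invert to noisier alpha_bar = 0.3
x_back = ddim_denoise_step(x_up, eps, 0.3, 0.6)   # denoise back to alpha_bar = 0.6
```

In practice `eps_pred` comes from the U-Net and varies per step, so the roundtrip is only approximate; that approximation error is exactly what Section 5.3 models as a detection-region offset.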
Based on the assumption that $x_{t-1} - x_t \approx x_{t+1} - x_t$, the DDIM inversion can be formulated as:

$$\hat{x}_{t+1} = \sqrt{\bar{\alpha}_{t+1}} \, x_0 + \sqrt{1 - \bar{\alpha}_{t+1}} \, \epsilon_\theta(x_t). \tag{7}$$

Essentially, this update mirrors the denoising step in Equation 6, applied in the forward (noising) direction. Diffusion inversion, even zero-text inversion within conditional diffusion, can still achieve decent accuracy, and the method is applicable to deterministic sampling methods like DPM++ [20]. Our watermarking scheme leverages this property of diffusion inversion.

Conference acronym 'XX, June 03–05, 2024, Woodstock, NY. Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu.

3 PROBLEM FORMULATION

3.1 Threat Model

In this paper, we consider two parties: the defender and the adversary. The defender is the owner of the generative model, which is deployed as an online service based on a latent diffusion model. The defender's core objectives are to protect the copyright of the model and to trace illegal usage through the model outputs. Conversely, the adversary's objective is to disrupt the watermark information in the model output and circumvent the copyright protection and tracing mechanisms of the model.

Adversary's Motivation. The adversary's motivation stems from two aspects. Firstly, training a latent diffusion model requires gathering a significant amount of data, expertise in architecture and algorithms, and numerous failed experiments, all of which are expensive; the model parameters are therefore proprietary business assets. Since generative model services are deployed online, adversaries may manipulate images to destroy the watermark information and redistribute the outputs of these services on cloud platforms, effectively becoming commercial competitors.
Secondly, attackers may exploit online generative services to generate insulting or offensive images for malicious purposes, such as fabricating fake news or spreading rumors, and remove the watermarks from these images to evade tracing.

Adversary's Background Knowledge. We assume that adversaries can access the victim's latent diffusion model in a black-box manner: they can query the model with data samples and obtain the corresponding responses. Specifically, we characterize the adversary's background knowledge along two dimensions: knowledge of the architecture of the victim's diffusion model, and watermark removal capability. For the architecture, we assume adversaries can access it, since such information is typically publicly available. Regarding watermark removal capability, we assume adversaries can manipulate images using techniques such as Gaussian blur, color jittering and image compression. Meanwhile, we consider adversaries who possess the capability to perform state-of-the-art watermark removal attacks using variational autoencoders and diffusion models.
3.2 Image Watermarking and Verification

Formally, the validation scheme for generative image watermarking in Diffusers is defined as follows. The generative image watermarking verification scheme is a tuple $\mathrm{Verification} = \langle Trn, Emb, Eva, Vrf \rangle$ of processes:

A train process $Trn(D, Arc[\cdot], Enc[\cdot], \mathcal{L}) = \{Enc[W], Dec[W]\}$ is a fine-tuning or training process that takes training data $D = \{x_d, y_d\}$ as input and outputs the models $Enc[W]$ and $Dec[W]$ by minimizing a given loss $\mathcal{L}$.

An embedding process $Emb(prm, Arc[\cdot], Enc[\cdot], Lat, Sig) = Pic[Sig]$ is an inference process that embeds the signature $Sig$ into the latent variables through the encoder and performs inference through the model $Arc[\cdot]$ to output the watermarked image $Pic[Sig]$.
A quality evaluation process $Eva(Arc[\cdot], M, Lat, \epsilon) = \{True, False\}$ evaluates whether the discrepancy is less than a predefined threshold, i.e., $|M(Arc[W, Sig], Lat, Enc[\cdot]) - M| \le \epsilon$, where $M(Arc[W, Sig], Lat, Enc[\cdot])$ denotes the image fidelity or semantic consistency tested against a set of watermarked latents and $M$ is the target generation performance.

A verification process $Vrf(Img, Sig, Atk, Dec[\cdot], \epsilon) = \{True, False\}$ checks whether the expected signature $Sig$ of a given generated image can be successfully verified by the decoder $Dec[\cdot]$ in the face of image attacks.

Watermark Detection. DiffuseTrace embeds a k-bit secret message $m \in \{0, 1\}^k$ into the watermarked image. The watermark detection algorithm includes an extractor that recovers the hidden signal $m'$ from the watermarked image, and uses statistical testing with a threshold $\tau \in \{0, 1, 2, \ldots, k\}$ on the extracted bits.
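The false positive rate of this thresholded bit-matching test (formalized in Eq. (8) below) is a binomial tail probability and can be checked numerically with the standard library alone. A minimal sketch; the function name `fpr` and the choice of operating point are ours:

```python
from math import comb

def fpr(k: int, tau: int) -> float:
    """Eq. (8): P(E(m, m') > tau | H0), where under H0 each of the k
    extracted bits matches the embedded bit independently with prob. 1/2."""
    return sum(comb(k, i) for i in range(tau + 1, k + 1)) / 2 ** k

# Operating point discussed in the paper: k = 48 bits, at least 34 matches.
p_value = fpr(48, 33)   # probability of >= 34 matches by pure chance
```

For $k = 48$ the chance probability of 34 or more matching bits is well below the 0.01 cutoff, which is why that threshold controls the false positive rate.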
If the number of matching bits $E(m, m') \ge \tau$, the image is marked as watermarked. Formally, we test the hypothesis $H_1$: the image was generated by DiffuseTrace, against the null hypothesis $H_0$: the image was not generated by DiffuseTrace. Under $H_0$, we assume the extracted bits $m'_1, m'_2, \ldots, m'_k$ are independent and identically distributed Bernoulli random variables with probability 0.5, so $E(m, m')$ follows a binomial distribution $B(k, 0.5)$. The Type I error (false positive rate, FPR) equals the probability of $E(m, m')$ exceeding $\tau$, derived from the binomial cumulative distribution function; it has a closed form using the regularized incomplete beta function $I_x(a; b)$:

$$\epsilon_1(\tau) = \mathbb{P}(E(m, m') > \tau \mid H_0) = \frac{1}{2^k} \sum_{i=\tau+1}^{k} \binom{k}{i} = I_{1/2}(\tau + 1, k - \tau). \tag{8}$$

If the null hypothesis $H_0$ cannot be rejected at a p-value of 0.01, we consider the image to be without a watermark. In practice, for a watermark of 48 bits ($k = 48$), at least 34 bits must match to confirm the presence of the watermark. This provides a reasonable balance between detecting genuine watermarks and avoiding false positives.

3.3 Objectives for Watermarking

DiffuseTrace should have the following properties:

Robustness against Watermark Attacks: Watermarked images may undergo various image processing operations; even after such post-processing, the watermark should still be fully recoverable.
DiffuseTrace should withstand watermark removal attacks such as Gaussian noise, color jittering, Gaussian blur and others. Meanwhile, DiffuseTrace should be able to defend against the latest watermark attacks based on state-of-the-art variational autoencoder and diffusion model techniques.

Generalizability: Considering the cost of embedding fixed information into fine-tuned models, DiffuseTrace should adjust the embedded message flexibly, be compatible with various versions of diffusion models, and remain unaffected by model fine-tuning or update iterations.

Fidelity: The impact on the model's output before and after watermarking should be minimized to the greatest extent possible. The images generated by DiffuseTrace should maintain consistency with the original model in terms of semantic consistency and image quality, and the watermarked samples should exhibit no significant differences in visual and semantic quality compared to normal samples.

The goal is to design a watermark that is flexible, robust to post-processing, generalizable, and does not compromise the quality or semantic consistency of the image. Additionally, it should remain unaffected by model fine-tuning or update iterations.

4 PROPOSED WATERMARKING SCHEME

4.1 Overview

The overview of our method is shown in Figure 1. As described in the first section, we have three objectives:

• The watermark is embedded into the initial latent variables at the semantic level without altering semantic consistency or image quality.

• Watermark messages can be modified flexibly without retraining or fine-tuning the model.

• The watermark is robust against various image processing techniques and state-of-the-art watermark removal methods.

The core idea of DiffuseTrace is to embed the watermark into the initial latent variables.
The initial latent space is divided into multiple watermark regions, with each region corresponding to a portion of the watermark information. To ensure both lossless image quality and semantic consistency, the embedded watermark should approximate a standard normal distribution and be extractable by the decoder. Specifically, DiffuseTrace consists of a watermark encoder and a watermark decoder. The model owner encodes the initial latent variables through the watermark encoder. The latent variables are then processed through a scheduler guided by prompts and denoised through a U-Net; afterward, they are decoded by a variational autoencoder into watermarked images. For extraction, the watermarked images, possibly subjected to an attack layer, are encoded back into the latent space; the original latent variables are restored through diffusion inversion, and the watermark is extracted from them by the decoder.

4.2 Pre-training Watermark Encoder-Decoder

We pretrain the encoder-decoder structure for watermark embedding and extraction. The training objective of the encoder is to construct a unified representation of watermark information and latent variables under a standard Gaussian distribution, based on the watermark information and the embedded watermark region. Specifically, when the binary identity information of a user is input to the encoder, it produces watermark-embedded latent variables that adhere to a standard Gaussian distribution. The following explains why watermarked latent variables should conform to a standard normal distribution. When the latent generator of the LDM samples latent variables $z$ from noise, the function of the U-Net is to iteratively denoise Gaussian noise matrices within the diffusion cycle, guided by text and timesteps.
By subtracting the predicted noise from the random Gaussian noise matrices, these matrices are eventually transformed into the latent variables of the image. Since the noise introduced during the training of the U-Net follows a normal distribution, the initial latent variables of the LDM inference process should ideally approximate a standard normal distribution. Likewise, when training a variational autoencoder to encode images into latent variables, one of the training objectives is for the latent variables to adhere roughly to a standard normal distribution. More precisely, the training set for the U-Net is built by repeatedly adding noise from a standard normal distribution to images; with a sufficient number of iterations, the noised images converge close to a standard normal distribution. Hence, during the denoising generation phase, the initial noise is chosen to conform to a standard normal distribution. If the initial noise deviates significantly from the standard normal distribution, it may degrade image quality and semantic consistency; watermarked latent variables that are closer to a standard normal distribution better conform to the standard denoising process. Because sampling from a distribution is non-differentiable and blocks gradient descent, the encoder employs the reparameterization technique to generate watermark-embedded latent variables. Considering the difficulty of explicitly partitioning watermark regions at the trillion level, we adopt an implicit partitioning of the watermark regions: sampled latent variables are constrained by Kullback-Leibler divergence to approximate a standard normal distribution, and each watermark message independently maps to a portion of the probability distribution. The specific principles are detailed in our theoretical analysis (Section 5). The decoder network is the inverse of the encoder.
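To make the idea of implicit region partitioning concrete, here is a deliberately simplified toy, not the paper's learned encoder: each bit selects a half-space (the sign of one latent coordinate), the magnitude is drawn from $|\mathcal{N}(0,1)|$, and decoding reads the occupied half-space back. Over random messages the marginal of each coordinate remains standard normal, which is the property the KL constraint enforces for the learned encoder.

```python
import numpy as np

def embed_bits(bits: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Toy region partition: bit b_i picks the half-space sign(z_i) = 2*b_i - 1;
    sampling the magnitude from |N(0, 1)| keeps the marginal standard normal
    when messages are uniform."""
    signs = 2.0 * bits - 1.0
    return signs * np.abs(rng.standard_normal(bits.shape))

def decode_bits(z: np.ndarray) -> np.ndarray:
    """Inverse map: the occupied half-space identifies each bit."""
    return (z > 0).astype(int)

rng = np.random.default_rng(7)
msg = rng.integers(0, 2, size=48)   # a random 48-bit message
z = embed_bits(msg, rng)            # watermarked initial latent (toy, 48-dim)
```

The real scheme replaces this hand-built partition with a trained encoder/decoder pair so that regions are carved implicitly and robustly, but the sign toy shows why embedding can be distribution-preserving.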
The training objective of the decoder is to extract the watermark information from the initial latent variables. The encoder and decoder are jointly trained so that the encoder ultimately produces watermark-embedded latent variables conforming to a standard normal distribution, while the decoder outputs the corresponding watermark information given these latent variables. According to the analysis in Section 5.2, reconstructing the watermark amounts to maximizing the expected probability of the watermark $w$ given the latent variable. The message reconstruction loss $\mathcal{L}_w$ is the mean square error (MSE) between the original watermark message and the decoded message:

$$\mathcal{L}_w = \mathrm{MSE}(m, m'), \tag{9}$$

where $m$ is the original watermark message and $m'$ is the message decoded from the latent variables. For the loss on the initial latent variable distribution, we compute the Kullback-Leibler (KL) divergence between the distribution of the latent variables and the standard normal distribution. KL divergence measures how one probability distribution diverges from the expected probability distribution: for two probability distributions $P$ and $Q$ of a discrete random variable $\xi$, the KL divergence from $P$ to $Q$ is defined as:

$$D_{\mathrm{KL}}(P \parallel Q) = \sum_i P(i) \ln\left(\frac{P(i)}{Q(i)}\right). \tag{10}$$

According to the analysis in Section 5.1, we assume the encoder output follows a normal distribution, denoted $p_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$, and denote the standard normal distribution by $p_2 \sim \mathcal{N}(0, 1)$.
The distribution loss $\mathcal{L}_{dist}$ used in this paper is then:

$$\mathcal{L}_{dist} = \mathrm{KL}(p_1 \parallel p_2) = -\frac{1}{2}\left[2 \log \sigma_1 + 1 - \sigma_1^2 - \mu_1^2\right]. \tag{11}$$

Of the two loss terms, $\mathcal{L}_w$ ensures the correct decoding of the watermark information, while $\mathcal{L}_{dist}$ constrains the initial distribution of the latent variables, thereby preserving the quality and semantic consistency of the images. The encoder and decoder are jointly trained by minimizing:

$$\mathcal{L} = \lambda_1 \mathcal{L}_w + \lambda_2 \mathcal{L}_{dist}, \tag{12}$$

where $\lambda_1$ and $\lambda_2$ are weighting constants. The trained encoder is therefore capable of embedding information into latent variables that approximately adhere to a standard normal distribution, while the decoder, as the inverse of the encoder, extracts the watermark.

4.3 Decoder Fine-Tuning

According to the analysis in Section 5.3, the following factors contribute to decoding imprecision throughout the watermark embedding-extraction process: (1) the diffusion inversion process approximates the difference between adjacent-step latent variables; (2) since the prompt is unavailable in practical decoding scenarios, we use zero-text inversion; (3) the image may be altered and manipulated by various image processing techniques. These factors cause inevitable deviations in the inferred initial latent variables; in essence, they result in a global shift of the initial latent variables in the semantic space of the images.
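The pre-training losses of Section 4.2 (Eqs. 9, 11, 12) can be written out directly. A minimal NumPy sketch under the paper's diagonal-Gaussian assumption; the per-dimension averaging and the weight values are our own illustrative choices:

```python
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, sigma: np.ndarray) -> float:
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)), Eq. (11):
    -(1/2) * [2*log(sigma) + 1 - sigma^2 - mu^2], averaged over dimensions."""
    return float(np.mean(-0.5 * (2.0 * np.log(sigma) + 1.0 - sigma ** 2 - mu ** 2)))

def total_loss(msg, msg_decoded, mu, sigma, lam1=1.0, lam2=0.1) -> float:
    """Eq. (12): weighted sum of the message MSE (Eq. 9) and the
    distribution loss (Eq. 11)."""
    l_w = float(np.mean((np.asarray(msg) - np.asarray(msg_decoded)) ** 2))
    return lam1 * l_w + lam2 * kl_to_standard_normal(mu, sigma)
```

At $\mu = 0, \sigma = 1$ the distribution loss vanishes, confirming that an encoder whose output is exactly standard normal pays no distribution penalty.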
The decoder can accurately extract most of the watermark information, but samples located at the edges of a watermark region exhibit significant inaccuracies. During the fine-tuning phase of the decoder, the objectives are to adapt to the shift of the watermark region and to accurately extract the watermark from attacked samples. Specifically, we fix the watermark encoder and the diffusion model. To simulate real-world image processing, an attack layer perturbs the images generated from randomly chosen prompts; the perturbations include randomly added Gaussian noise, Gaussian blur, color jittering and image compression. This adversarial training enhances the robustness of the watermark decoder against image processing. After inverting the processed images, we obtain the modified initial latent variables, and we fine-tune the decoder using the mean squared error between the decoded messages and the original watermark messages as the loss function.

4.4 Error Correction Mechanism

The scheme clearly delineates the watermark regions, but during watermark detection, the shifts caused by inversion and by the image processing used in adversarial training can make watermark detection areas overlap. This results in bit errors for samples at the edges of watermark regions where overlap occurs. We provide detailed reasons and explanations in the security analysis (Section 5.4), where we elucidate the necessity of employing error correction codes.

Recursive Systematic Convolutional (RSC) Codes: RSC codes provide a systematic approach to encoding and decoding bitstreams, allowing for error correction and adaptive recovery of the original message from corrupted data.
Concretely, given an input bitstream $m$, the RSC encoder transforms it into another bitstream $m + c_1 + c_2 \ldots c_k$, where each $c_i$ is a bitstream of the same length as $m$ and the symbol $+$ denotes concatenation. A higher encoding ratio can withstand a greater proportion of errors but results in a longer encoded bitstream. During decoding, if the modified bitstream $m' + c_1 \ldots c_i$ is input to the RSC decoder and the error rate of the encoded stream is below a certain threshold, the original message $m$ can be recovered. We utilize this property to correct the watermark encoding.

Turbo Codes [3]: Turbo codes can tolerate more bit errors than other codes at the same bit rate. A typical Turbo code consists of two convolutional codes and an interleaver. The primary function of the interleaver is to shuffle the outputs of the two convolutional codes, increasing the independence of each code and thus enhancing error correction performance; during decoding, an iterative algorithm estimates and rectifies errors to further improve correction. In our experiments, we use Turbo codes as the error correction code to enhance the stability of watermark extraction. The specific process is as follows: the model owner assigns identity information to the model user and encodes it, with redundancy, into an identity code using a Turbo code; this code is embedded by the encoder, the latent variables are denoised into images and later inverted, the watermark information is extracted, and the redundant identity code is error-corrected to restore the initial identity information.
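The role the error correction code plays can be illustrated with a much simpler code. The sketch below uses a repetition code with majority-vote decoding as a stand-in for the RSC/Turbo codes the paper actually uses: same encode-with-redundancy, decode-and-correct pattern, but far weaker correction capability. All names here are ours.

```python
import numpy as np

def rep_encode(bits: np.ndarray, r: int = 3) -> np.ndarray:
    """Repeat every bit r times; the redundancy plays the role the
    Turbo/RSC redundancy plays in the paper's pipeline."""
    return np.repeat(bits, r)

def rep_decode(coded: np.ndarray, r: int = 3) -> np.ndarray:
    """Majority vote over each group of r repeats corrects up to
    floor(r/2) flipped bits per group."""
    groups = coded.reshape(-1, r)
    return (groups.sum(axis=1) > r // 2).astype(int)

msg = np.array([1, 0, 1, 1, 0])
coded = rep_encode(msg)
coded[0] ^= 1            # one bit flipped in the "channel" (extraction error)
recovered = rep_decode(coded)
```

As in the paper's pipeline, the corrupted codeword still decodes to the original identity message as long as the per-group error rate stays under the code's threshold; Turbo codes simply push that threshold much higher at the same rate.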
The error correction mechanism merges several watermark regions into a unified part, correcting initial latent variables located at the boundaries of watermark detection regions and thereby enhancing the robustness of watermark detection.

5 THEORETICAL ANALYSIS OF THE PROPOSED SCHEME

5.1 Unified Representations of Watermark Regions and Latent Variables

Based on the initial requirements, we aim to establish a unified representation for watermark information and latent variable regions. To each watermark $W$, specific distributions of latent variables are assigned. This setting ensures that every image generated by the model can be attributed to the initial distribution of its latent variables. Formally, we let the diffusion model and the watermark model share the same latent space. For specific parts of this latent space, we can sample and extract watermark features based on a probability function $P(z)$. We assume a family of deterministic functions $f(z; \theta)$, parameterized by a vector $\theta$ in some space $\Phi$, where $f : Z \times \Phi \rightarrow X$. When $\theta$ is fixed and $z \sim \mathcal{N}(0, 1)$, $f(z; \theta)$ generates latent variables that conform to a standard Gaussian distribution. By adopting this approach, we can construct watermark distribution regions corresponding to specific watermark messages. These regions coincide with the latent variables of the diffusion model, achieving a unified representation of both; additionally, the watermark distribution conforms to a standard normal distribution. The embedding process solely alters the selection of the initial latent variables, preserving semantic consistency and image quality.
We aim to optimize $\theta$ such that $z$ can be sampled from $P(z)$ while ensuring it closely matches the watermark $W$. To formalize this mathematically, the objective of DiffuseTrace is to maximize the probability of each $W$ throughout the entire watermark extraction process. This objective stems from the principle of maximum likelihood: if the decoder is capable of reconstructing the watermark from the latent variables, it is also likely to reconstruct the watermark from similar samples and unlikely to reconstruct it from dissimilar ones. To make the dependence of $P(z)$ on $W$ explicit, we replace $f(z; \theta)$ with $P(W \mid z; \theta)$. The probability density function can be formalized as follows:

$$P(W) = \sum_z P(W \mid z; \theta) P(z). \tag{13}$$

The output distribution conforms to a Gaussian distribution after watermark embedding, so $P(W \mid z; \theta)$ satisfies:

$$P(W \mid z; \theta) = \mathcal{N}(W \mid f(z; \theta), \sigma^2 I). \tag{14}$$

After embedding the watermark, the latent variables have mean $f(z; \theta)$ and covariance $\sigma^2 I$, where the scalar $\sigma$ is a hyperparameter.

5.2 The Implicit Allocation of Watermarks

Essentially, we need to partition the standard normal distribution such that each partition can accurately reconstruct the original watermark.
For a 48-bit watermark, the latent space must be divided into over two hundred eighty-one trillion ($2^{48}$) regions, so manually determining the watermark encoding regions is infeasible given the complexity of explicitly partitioning them under the standard normal distribution. This implicit partitioning problem is analogous to the challenge variational autoencoders face in fitting distributions to data. As outlined in [11], any distribution in $d$ dimensions can be generated by mapping a set of $d$ normally distributed variables through a sufficiently complex function. For $P(W)$, with the watermark partitioned into this many blocks, most sampled $z$ contribute almost nothing to $P(W)$, since $P(W \mid z)$ is close to zero for most $z$. The approximation of the prior distribution can be simplified by introducing the posterior distribution $q(z \mid w)$. Computing the KL divergence between the posterior and prior distributions, we obtain:

$$D[q(z \mid w) \parallel p(z \mid w)] = \mathbb{E}_{z \sim q}[\log q(z \mid w) - \log p(z \mid w)]. \tag{15}$$

As in the derivation of the variational evidence lower bound, a Bayesian transformation yields the watermark reconstruction evidence lower bound:

$$\log p(w) \ge \mathbb{E}_{z \sim q}[\log p(w \mid z)] - D[q(z \mid w) \parallel p(z)]. \tag{16}$$

The first term represents maximizing the expected probability of the watermark $w$ given the latent variable $z$, i.e., the loss incurred by the watermark decoder in reconstructing the watermark.
The second term makes the approximate posterior distribution over the latent space $z$ resemble the prior distribution, i.e., the watermark latents generated by the encoder should be as close as possible to the standard normal distribution.

5.3 Offset of the Watermark Detection Region

As stated in Equation 7, diffusion inversion attributes the generated image to its initial latent variables. The inversion rests on approximating $x_{t-1} - x_t$ by $x_{t+1} - x_t$. While unconditional diffusion inversion can yield accurate results, an excessive guidance scale in conditional diffusion amplifies the errors introduced by null-text diffusion inversion [23]. In fact, after extracting the semantic embeddings of the images, conducting a forward pass after each inversion step and applying gradient descent can enhance the effectiveness of the inversion. Let the current latent variable be $z_{t_i}$, and let $z_{t_{i+1}}$ be obtained by inverting $z_{t_i}$. Solving for $z_{t_{i+1}}$ under the guidance of the extracted semantic embeddings can be expressed as minimizing:

$$\nabla_{z_{t_{i+1}}} \left\| z_{t_i} - z'_{t_i} \right\|_2^2. \tag{17}$$

Theoretically, refining the decoder by restoring the initial latent variables in this way would yield better results. However, considering the computational overhead of the gradient descent process, we instead accept the inaccuracy of diffusion inversion under zero text and define this inaccuracy as an offset of the watermark detection region. The purpose of fine-tuning is then to learn the offset vector $p$.
The watermark encoder trained with maximum likelihood has the following property: similar samples have more similar latent variables, while dissimilar samples lie farther apart in the latent space. The distance between the latent variables obtained after inversion and the initial latent variables should be smaller than a threshold epsilon to guarantee detection accuracy. Once the watermark regions are segmented, the watermark detection area is offset by diffusion inversion, and the refinement target becomes: min_theta E_{(x_i, y_i)~D} [ max L(theta, inv(deno(x_i)) + p_i, y_i) ] (18) Here x_i denotes a specific latent variable and y_i denotes the watermark region that x_i belongs to. In Equation 18, deno represents the diffusion denoising process (Equation 6), inv represents the exact inversion process, and p_i denotes the offset of the watermark detection area caused by the approximation in Equation 7. After fine-tuning, the watermark decoder should satisfy p < epsilon to ensure detection accuracy. Since images may undergo various treatments, including blurring, Gaussian noise, and color transformations, such attacks can push samples at the edges of a watermark region outside it, reducing detection accuracy. Essentially, this processing does not move the watermark's region, but correcting for it notably helps recover edge samples that would otherwise evade detection. Adversarial training can appropriately expand the watermark detection region against such attacks.
Therefore, the refinement target can be further transformed into: min_theta E_{(x_i, y_i)~D} [ max L(theta, inv(deno(x_i) + delta) + p_i, y_i) ] (19) The variable delta represents the deviations caused by various attacks on the image: images produced by such attacks are semantically similar to the original but deviate from it in the semantic space. Correcting for this further improves the watermark decoder's detection accuracy on edge samples. \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu 5.4 Security Analysis Based on the above analysis, DiffuseTrace divides the watermark region into multiple contiguous areas. Suppose image processing changes the image relative to the original, and that this change corresponds to a shift of epsilon_p in the latent variable space. Let the initial latent variable corresponding to the original image be Z_0 in X. As long as Z_T + epsilon_p lies in X, the watermark verification succeeds. For initial latent variables close to the center of a watermark region, the distance to other watermark regions is D >> epsilon_p, and verification is straightforward. However, for samples at the edges of a watermark region, Z_T + epsilon_p may fall outside X. In the detection phase, we therefore effectively expand the detection area of each watermark, treating an outer radius r around each watermark region as part of the region itself.
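The expanded-radius detection described above can be sketched as nearest-region decoding with an acceptance radius. This is an illustrative toy only: the region centers and the radius below are hypothetical placeholders, not the paper's actual partition of the 2^48 watermark regions.

```python
from math import dist

def detect(z_inv, centers, radius):
    """Map an inverted latent z_inv to the watermark region with the
    nearest center, accepting the match only if z_inv falls within the
    (adversarially expanded) detection radius of that region.

    centers: dict mapping a bit string to a hypothetical region center.
    Returns the decoded bit string, or None (no watermark detected).
    """
    bits, center = min(centers.items(), key=lambda kv: dist(z_inv, kv[1]))
    return bits if dist(z_inv, center) <= radius else None

# Toy 2-D latent space with two regions; real latents are high-dimensional.
centers = {"00": (0.0, 0.0), "11": (4.0, 4.0)}
```

Enlarging `radius` recovers edge samples perturbed by image processing, but, as the text notes, pushing it past the inter-region distance d would let one region's edge samples fall into another's detection range.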
This process can be formalized as: detect(z_T + epsilon_p) = judge(z_0 + epsilon_p in (X + r)) (20) The size of r depends on the magnitude of the perturbations applied to adversarial samples during adversarial training. Expanding the watermark regions does increase, to some extent, the risk of overlap between different watermark regions. Let d be the distance between two watermark regions. Since the encoder remains fixed, the watermark regions themselves do not change; however, because of inversion-induced global shifts and image processing, the detection area after inversion corresponds to a displaced initial region. If r <= d, adversarial training enhances the robustness of the watermark, ensuring that even edge latent variables still yield the watermark. Security Analysis Without Attack. If the magnitude r of adversarial training exceeds d, the watermark of an edge sample may fall within the detection range of another region, causing bit errors. Indeed, during training, adversarial samples in boundary regions steer the model in the wrong direction, while correct samples in those regions pull it back, so the accuracy on boundary samples stays above fifty percent but is unstable and fluctuates. To correct such errors, we employ error correction codes. As mentioned in Section 4.4, if the error rate of boundary samples is within an acceptable range, error correction codes can restore the original information. Essentially, this approach uses additional redundancy to rectify errors, effectively merging multiple regions into one. Security Analysis of Image Processing.
In our scheme, we consider image manipulations under which the same image undergoes a bounded offset in the latent space, i.e., an offset smaller than a certain threshold. If the corresponding change in the latent space is less than d, adversarial training ensures that both central and marginal latent variables can successfully decode the information. Common image manipulations such as Gaussian transformations, color jittering, brightness variations, and image compression all keep the image's position in the latent space within this acceptable range, so DiffuseTrace effectively defends against such attacks. Even under severe manipulations, such as JPEG compression to 10 percent quality, contrast increased to 8, or brightness increased to 6, DiffuseTrace maintains a certain level of accuracy. Security Analysis of VAE-based Attacks and Diffusion-based Attacks. The core idea behind VAE-based and diffusion-based attacks is disrupt-and-reconstruct: disruption adds noise to the image, and reconstruction removes the noise through a diffusion model. Such attacks succeed because the primary objective of most watermarking schemes is to add minimal watermark noise to the image while still being able to extract the watermark information. These methods often use the LPIPS loss [47] or differences in the image's color channels as the loss function, aiming to keep the SSIM and PSNR metrics of the final image high. Reconstruction attacks exploit this by continuously adding noise to gradually degrade the stability of the watermark; eventually, the reconstruction process generates an image that is visually indistinguishable from the watermarked one but no longer carries the watermark.
While some watermarking schemes, such as StegaStamp, sacrifice image quality and rely heavily on adversarial training to enhance their stability, there is no defense against reconstruction attacks once the reconstruction steps become sufficiently numerous; in fact, reconstruction attacks can even produce images that are clearer than the watermarked samples. A watermark based on the initial latent variables, by contrast, operates primarily at the semantic level, allowing stable watermark extraction as long as the image does not change significantly in the latent space. Noise-based attacks on diffusion-model watermarks do not actually alter the original semantic content of the image, so the image's initial position in the latent space can still be discerned, which makes the scheme resistant to such reconstruction attacks. This is precisely the advantage of DiffuseTrace in combating attacks. 6 EXPERIMENTS 6.1 Experimental settings Datasets. In the experiments, we utilized the following datasets: \u2022 Real Photos: 500 images randomly selected from MS-COCO [18], which contains over 328K images along with their annotations. \u2022 AI-Generated Image Prompts: 500 prompts randomly sampled from Diffusion Prompts, a database of approximately 80,000 prompts filtered and extracted from image finders. \u2022 AI-Generated Images: 500 images and prompts randomly chosen from StableDiffusionDB [41]. This dataset contains images generated by Stable Diffusion from prompts and hyperparameters provided in actual user interactions. Watermark Baselines. For traditional watermarking schemes, we selected DwtDct [1] and DwtDctSvd [8], the latter deployed in Stable Diffusion as a watermark, with an embedding capacity of 48 bits. For post-processing watermarking schemes based on encoder-decoder/GAN structures, we chose RivaGAN [44], HiDDeN [51], and StegaStamp [37], with embedding capacities of 32, 48, and 48 bits respectively.
For watermarking schemes based on variational autoencoders, we chose Stable Signature [13] and SSL Watermark [14], each with an embedding capacity of 48 bits. Additionally, for watermarking schemes based on latent variables, we chose Tree-Ring with a watermark radius of 10; since Tree-Ring is a zero-bit watermarking scheme, we used p-values as its detection metric. \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY
Table 1: Bit Accuracy/Detection Accuracy Under Image Processing
Method | Brightness 2.0 | Noise 0.05 | Contrast 2.0 | Hue 0.25 | JPEG 50 | Blur 7*7 | Resize 0.3 | BM3D 30
Traditional Wm. DwtDct | 0.601/0.000 | 0.801/0.642 | 0.497/0.000 | 0.479/0.000 | 0.488/0.000 | 0.582/0.092 | 0.493/0.000 | 0.498/0.000
Traditional Wm. D.Svd | 0.612/0.042 | 0.850/0.999 | 0.718/0.118 | 0.485/0.000 | 0.498/0.000 | 0.989/0.999 | 0.506/0.000 | 0.632/0.084
Enc.-Dec. Wm. RivaGan | 0.975/0.999 | 0.960/0.994 | 0.832/0.992 | 0.984/0.999 | 0.773/0.801 | 0.867/0.924 | 0.504/0.000 | 0.858/0.873
Enc.-Dec. Wm. Hidden | 0.964/0.999 | 0.971/0.994 | 0.979/0.999 | 0.992/0.999 | 0.849/0.823 | 0.816/0.852 | 0.825/0.873 | 0.626/0.168
Enc.-Dec. Wm. S.Stamp | 0.937/0.999 | 0.979/0.999 | 0.972/0.999 | 0.995/0.999 | 0.952/0.999 | 0.981/0.999 | 0.972/0.999 | 0.980/0.999
VAE-Based Wm. S.Signa | 0.971/0.999 | 0.976/0.996 | 0.965/0.999 | 0.954/0.994 | 0.806/0.809 | 0.781/0.822 | 0.513/0.011 | 0.604/0.013
Latent-Based Wm. SSLWm. | 0.927/0.999 | 0.627/0.124 | 0.975/0.999 | 0.942/0.997 | 0.547/0.000 | 0.997/0.999 | 0.844/0.901 | 0.620/0.224
Latent-Based Wm. Ours | 0.942/0.999 | 0.915/0.999 | 0.959/0.999 | 0.982/0.999 | 0.912/0.999 | 0.966/0.999 | 0.922/0.999 | 0.902/0.999
Table 2: Image Semantic Quality and Undetectability Evaluation. The table demonstrates the impact of adding semantic watermarks on image quality through two no-reference metrics, NIQE and PIQE. Semantic consistency before and after adding the DiffuseTrace watermark is evaluated through the CLIP metric.
Dataset | Method | NIQE (lower is better) | PIQE (lower is better) | CLIP (higher is better) | Bit/Detect
DiffusionDB | No-Watermark | 4.91 | 28.21 | 0.342 | 0.511/0.000
DiffusionDB | Tree-ring (rad 10) | 5.32 | 30.28 | 0.332 | -/0.999
DiffusionDB | Tree-ring (rad 20) | 6.64 | 37.33 | 0.301 | -/0.999
DiffusionDB | DiffuseTrace (16) | 4.22 | 29.08 | 0.344 | 0.999/0.999
DiffusionDB | DiffuseTrace (32) | 5.04 | 29.77 | 0.339 | 0.992/0.999
DiffusionDB | DiffuseTrace (48) | 4.72 | 28.41 | 0.340 | 0.984/0.999
MS-COCO Prompts | No-Watermark | 3.85 | 33.28 | 0.335 | 0.504/0.000
MS-COCO Prompts | Tree-ring (rad 10) | 4.32 | 34.28 | 0.324 | -/0.999
MS-COCO Prompts | Tree-ring (rad 20) | 5.64 | 38.33 | 0.291 | -/0.999
MS-COCO Prompts | DiffuseTrace (16) | 4.12 | 33.25 | 0.333 | 0.999/0.999
MS-COCO Prompts | DiffuseTrace (32) | 3.81 | 30.21 | 0.326 | 0.994/0.999
MS-COCO Prompts | DiffuseTrace (48) | 4.17 | 32.34 | 0.330 | 0.990/0.999
Diffusion Prompts | No-Watermark | 4.88 | 29.72 | 0.326 | 0.488/0.999
Diffusion Prompts | Tree-ring (rad 10) | 5.32 | 30.28 | 0.327 | -/0.999
Diffusion Prompts | Tree-ring (rad 20) | 5.94 | 37.33 | 0.303 | -/0.999
Diffusion Prompts | DiffuseTrace (16) | 4.93 | 28.42 | 0.358 | 0.999/0.999
Diffusion Prompts | DiffuseTrace (32) | 5.11 | 30.18 | 0.353 | 0.999/0.999
Diffusion Prompts | DiffuseTrace (48) | 4.70 | 26.33 | 0.328 | 0.984/0.999
The corresponding bit capacity of DiffuseTrace is 48 bits. Considering the additional overhead of redundancy codes, no error correction codes were used in the comparative experiments. Attack Baselines. To thoroughly evaluate the robustness of DiffuseTrace, we test it against a comprehensive set of baseline attacks representing common image processing, VAE-based attacks, and diffusion-based attacks. Specifically, the set of attacks employed in our testing includes: \u2022 brightness and contrast changes of 2.0 \u2022 addition of Gaussian noise with a standard deviation of 0.05 \u2022 adjustment of the hue by 0.25 \u2022 JPEG compression with a quality setting of 50 \u2022 the BM3D denoising algorithm with a noise standard deviation of 30 \u2022 Gaussian blur with kernel size 7 and standard deviation of 1 \u2022 two variational autoencoder (VAE) based image compression models, Bmshj18 [2] and Cheng20 [7], both with compression factors of 3 \u2022 a stable-diffusion-based image regeneration attack, Zhao23 [49], with 40 denoising steps. Evaluation Metrics.
The two main objectives of incorporating watermarks are copyright protection and user tracing. We therefore use the p-value as the criterion for copyright detection and bit accuracy as the criterion for user tracing. We set a decision threshold that rejects the null hypothesis at p < 0.01, which requires correctly detecting 24/32 or 34/48 bits for the corresponding methods; otherwise, the image is deemed to carry no watermark. Semantic consistency. Since our watermark is added before image generation, it is reflected at the semantic level. We therefore use the CLIP score [25] to evaluate the semantic consistency between generated images and their prompts. This metric compares the semantic quality of images generated with and without watermark embedding in DiffuseTrace, in order to assess the fidelity of the embedding. Image Quality. We evaluate image quality with two no-reference metrics, the Natural Image Quality Evaluator (NIQE) score [22] and the Perceptual Image Quality Evaluator (PIQE) score [38]. These metrics compare images without watermarks to watermarked images, quantifying the loss the watermark imposes on the image and the invisibility of the watermark. 6.2 Semantic and image quality evaluation In Table 2, we evaluate the impact of watermark embedding on image quality and semantic consistency. The experiment used the stable-diffusion-2-1-base model [29] with 25 inference steps at a guidance scale of 5. The results indicate no significant differences in the NIQE and PIQE quality metrics across different watermark bit lengths. Additionally, the semantic alignment of generated images, as assessed by CLIP scores, remains similar to that of the original model.
This suggests that DiffuseTrace does not rely on the trade-off between image quality and watermark robustness typical of post-processing watermarks. Since images are generated [Figure 2 panels: bit accuracy and TPR@1%FPR versus attack intensity for Brightness, Noise, Contrast, JPEG, Blur, Resize, BM3D, VAE-based (Cheng20), and Diffusion-based (Zhao23) attacks, compared against Stable Signature, DwtDctSvd, SSL Watermark, and StegaStamp.] Figure 2: The figure illustrates the performance of DiffuseTrace in response to various attacks of different intensities, measured by bit
accuracy and TPR@0.01FPR. It also compares the TPR@0.01FPR of baseline watermarks such as DwtDctSvd, SSL Watermark, and Stable Signature under the corresponding attacks. The watermark capacity for each scheme in the figure is 48 bits. entirely through the correct sampling process, with properly distributed variables and no subsequent modification of the images, the DiffuseTrace approach exhibits significant advantages in image quality over post-processing solutions. Compared with other methods that embed watermarks in the latent space, SSL Watermark remains a post-processing solution. Tree-Ring alters the distribution of the initial latent variables by embedding the watermark in the frequency domain of the latent space through a Fourier transform; however, the U-Net cannot recover the losses incurred in this process. Consequently, as the watermark radius increases, the generated images suffer significant losses in both quality and semantic consistency. 6.3 Robustness evaluation against image processing In Table 1, we evaluate the robustness of the watermark against various image processing attacks. DiffuseTrace demonstrates stability and robustness when faced with common image processing attacks, achieving a watermark detection rate close to 100 percent and average bit accuracy above 90 percent under common attacks. Compared with post-processing and VAE-based watermarking schemes, DiffuseTrace shows excellent stability under heavy image compression and resizing. StegaStamp remains highly robust in the comparison, since it sacrifices image quality and uses error correction codes to compress 96 bits into 48 bits, leaving a relatively large error-correction margin. Stable Signature, a watermark designed specifically for diffusion models, remains stable under most attacks but is vulnerable to denoising algorithms and heavy image processing.
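The p < 0.01 decision rule used in these evaluations can be reproduced from a simple binomial null model: for an unwatermarked image, each extracted bit is assumed to match the target watermark independently with probability 0.5. A minimal stdlib-only sketch (the exact thresholds it yields are close to, but not necessarily identical to, the method-corrected 24/32 and 34/48 bits quoted above):

```python
from math import comb

def detection_threshold(n_bits: int, fpr: float = 0.01) -> int:
    """Smallest number of matching bits k such that
    P[Binomial(n_bits, 0.5) >= k] <= fpr under the no-watermark null."""
    total = 2 ** n_bits
    tail = 0
    # Walk k downward from n_bits, accumulating the upper-tail mass;
    # the first k whose tail exceeds fpr means k+1 was the threshold.
    for k in range(n_bits, -1, -1):
        tail += comb(n_bits, k)
        if tail / total > fpr:
            return k + 1
    return 0
```

An image is then declared watermarked only if at least `detection_threshold(48)` of its 48 decoded bits match, which is what keeps the false-positive rate at or below 1%.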
Furthermore, we conducted detailed experiments across a range of image-perturbation amplitudes and compared our method with the latent-based SSL Watermark and the VAE-based Stable Signature at maximum attack intensity. The results show that DiffuseTrace has a significant advantage in stability under image processing compared with the other methods. 6.4 Robustness evaluation against VAE-based attacks and diffusion-based attacks. In Table 3, we evaluate the accuracy of various watermarking schemes when faced with deep-learning-based attacks. The experiments cover VAE-based attacks and diffusion-based attacks, the latter being the latest class of watermark attacks. The table reveals that the majority of schemes cannot withstand this type of reconstruction attack, and Stable Signature proves fragile. The results make it evident that DiffuseTrace has a significant advantage in countering VAE-based attacks and attacks utilizing diffusion models: SSL Watermark and Stable Signature both exhibit low watermark detection rates, indicating that they resist neither VAE-based nor diffusion-based attacks. Under diffusion-based attacks, every scheme except DiffuseTrace suffers a significant drop in bit accuracy. In subsequent experiments we increased the intensity of the diffusion attack; even a VAE attack with a quality coefficient of 1 or a diffusion-based attack with 150 denoising steps did not fundamentally affect the stability of the watermark, and only DiffuseTrace maintained accuracy under high-intensity reconstruction.
Reconstruction attacks maintain semantic accuracy while continuously adding noise to destroy the watermark, then repeatedly reconstructing and restoring the image to obtain a watermark-free version. However, this process essentially alters neither the semantic consistency of the image nor, to any significant degree, the initial latent variables recovered by inversion. Therefore, DiffuseTrace remains stable under reconstruction attacks.
Table 3: Bit Accuracy/Detection Accuracy Under Deep-learning-based Attacks
Method | VAE A. Bmshj18 [2] | VAE A. Cheng20 [7] | Diffusion A. Zhao23 [49]
Traditional Wm. DwtDct | 0.524/0.000 | 0.517/0.012 | 0.489/0.000
Traditional Wm. D.Svd | 0.504/0.000 | 0.512/0.013 | 0.523/0.014
Enc.-Dec. Wm. RivaGan | 0.611/0.063 | 0.632/0.070 | 0.588/0.070
Enc.-Dec. Wm. Hidden | 0.621/0.170 | 0.641/0.198 | 0.497/0.009
Enc.-Dec. Wm. S.Stamp | 0.979/1.000 | 0.965/1.000 | 0.852/0.927
VAE-based Wm. Stable Signature | 0.616/0.224 | 0.682/0.409 | 0.541/0.014
Latent-based Wm. SSL Wm. | 0.623/0.123 | 0.631/0.144 | 0.655/0.149
Latent-based Wm. Tree-ring | -/0.993 | -/0.991 | -/0.997
Latent-based Wm. Ours | 0.972/1.000 | 0.967/1.000 | 0.970/1.000
6.5 Ablation Experiments In this section, we experimentally quantify how several key hyperparameters discussed in the theoretical analysis (Section 5) affect the inversion inaccuracy. We consider the guidance scale used during generation, the number of inference steps used during inversion, and the model version, in order to demonstrate the effectiveness of DiffuseTrace. Ablation on Guidance Scale. In Section 5.3 we explained why the guidance scale introduces errors into the experiments; here we quantify its impact on the DiffuseTrace watermarking scheme. For this ablation the scheduler is set to the DPM++ scheduler [20], with both the inference steps and reverse inference steps set to 20.
We adjust the guidance scale to assess its influence on the results. As shown in Figure 3, as the guidance scale increases during the inference stage, bit accuracy gradually decreases, while detection accuracy remains stable over the guidance-scale range of 0 to 20. This demonstrates the robustness of the watermark: users can freely adjust the guidance scale during image generation while still ensuring traceability. A diffusion model deployed as a service can thus expose the guidance-scale hyperparameter to users without significantly affecting watermark detection. Ablation on Inference Steps. The ablation on reverse inference steps used the DPM++ scheduler with 20 inference steps and a guidance scale of 5, adjusting the number of reverse inference steps to assess their impact. The results in Figure 3 indicate that the watermark detection rate stabilizes after 5 reverse inference steps, and even with only 2 steps a good detection rate is maintained. The number of reverse inference steps therefore does not significantly affect detection accuracy, so during the detection phase a small number of reverse inference steps can be used to extract the watermark efficiently. 7 RELATED WORK 7.1 Detection of AI-Generated Images It is difficult for humans to distinguish between real and fake images, and realistic fake images intensify concerns about the dissemination of disinformation. To tackle this problem, various fake-image detection approaches have been proposed. A typical approach [15, 36, 39] involves extracting temporal, frequency, and texture features from images.
Subsequently, a feature extraction network is constructed to train a binary classifier that distinguishes AI-generated images from real ones. However, this kind of detector exhibits a noticeable performance drop when applied to diffusion models. For diffusion-based AI-generated image detection [40], a pre-trained diffusion model allows a more accurate reconstruction of the characteristics of images generated through the diffusion process: by reconstructing the diffusion process, differences between real images and diffusion-generated images can be detected, thereby enabling the detection of AI-generated images. Adding a watermark to generated images is another way to identify AI-generated content. 7.2 Image Watermarking The strategy of adding watermarks to images to protect intellectual property has a long history in computer vision. Traditional image watermarking methods typically embed watermarks into appropriate frequency components of the image, using techniques such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) [1], or Singular Value Decomposition (SVD) [19].
Deep-learning-based approaches such as HiDDeN [51] and StegaStamp [37] have demonstrated competitive robustness against various geometric transformations. [Figure 3 panels: (left) ablation on guidance scale, (right) ablation on reverse inference steps, each plotting bit accuracy and TPR@1%FPR for Stable-diffusion-v1-4 and Stable-diffusion-2-1-base.] Figure 3: The figure (left) illustrates the ablation experiment on the guidance scale, where increasing the guidance scale gradually decreases the watermark's bit accuracy while the watermark detection rate remains stable. The figure (right) shows the ablation on reverse inference steps, where the detected bit rate stabilizes after two inference steps. These methods usually employ deep-learning encoders and extractors to embed and extract watermarks, respectively. The aforementioned watermarking methods primarily post-process existing images: the core idea is to achieve robustness against various attacks while minimizing the impact on visual quality. Post-processing methods therefore face a trade-off between watermark stability, watermark capacity, and image quality. Diffusion-model watermarking can be divided into three main types. Watermark embedding during the training phase. Here watermarks are embedded into the training data: the data is encoded with the watermark during training, and a decoder is trained to extract it. In the detection phase, all images generated by the diffusion model carry encoded binary strings.
Watermark [50] is a representative approach. Methods of this kind typically place stringent demands on watermark embedding, requiring the watermark to be incorporated into a substantial dataset of images, followed by training the entire model. Watermark incorporation during the fine-tuning phase. The main purpose of these methods is to fuse the watermark component into a model component so that it cannot be separated at distribution time: watermarks are incorporated into model components during fine-tuning. For instance, Stable Signature [13] and FSwatermark [43] fine-tune the variational autoencoder so that all generated images carry the watermark, which is akin to integrating the watermark into the final generation stage. Watermark embedding into the latent space during inference. Here, watermarks are added to the model's latent-variable space during inference. Tree-Ring [42] and ZoDiac [45] achieve this via diffusion inversion and frequency-domain transformations of the latent variables, ensuring that all generated images carry the watermark. DiffuseTrace also falls into this category: the watermark is embedded in the image before it is generated. 7.3 Image Watermarking Attacks The goal of image watermark attacks is to assess the robustness of watermark detection under realistic modifications. These attacks fall mainly into two categories: image processing attacks and deep-learning-based attacks. Image processing attacks. Common image processing techniques include adding noise, color jitter, image compression, image scaling, and Gaussian blur; image processing or compression may also use frequency-domain or 3D-transform-based approaches such as the BM3D denoising algorithm [9]. Deep-learning-based attacks. Deep-learning-based attack methods, including those based on variational autoencoders such as [2] and [7], can disrupt watermarks embedded in images.
In recent research, diffusion-based attacks [49] are used to encode the semantic features of images, add noise to disrupt the watermark, and regenerate the images. Reconstruction models exhibit prominent performance and can eliminate most watermarks injected by existing methods. 8" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02710v1.json b/abs_9K/test_abstract_short_2405.02710v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c07235690f22a113998c962ae8bd0459f8726808 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02710v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.02710v1", + "title": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning", + "abstract": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient in-context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. In addition, we studied EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective layers does not\nresult in better performance. 
By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable. Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.", + "authors": "Che Guan, Andrew Chin, Puya Vahabi", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient in-context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. In addition, we studied EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective layers does not\nresult in better performance. 
By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable. Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.", + "main_content": "Introduction There has been an overload of information with each passing day \u2013 data is more voluminous, comes in more varieties and arrives at higher velocity. The news cycle is a good example of this trend, making it more difficult to read and synthesize the vast amount of information coming our way. The advent of large language models (LLMs) has led to a substantial improvement in the effectiveness and comprehensibility of news summarization. LLMs present two ways to address downstream tasks \u2013 through prompt engineering and fine-tuning. In our research, we explore various techniques to improve model performance through better prompts and fine-tuning methods. First, we study efficient in-context learning, which we call ELearn to denote the process of the model learning through prompts. We examine the impact of LLM size, the number of shots, and various templates during in-context learning. We also select relevant samples in prompting in an attempt to improve performance. We then explore efficient methods to fine-tune LLMs. Calling this technique EFit, we test the performance of selective layer fine-tuning and LoRA in news summarization. We also utilize selective samples to improve the training set for the fine-tuning process. Finally, we combine ELearn and EFit to create ELearnFit and find that this model achieves superior performance versus either model alone. 
We make various contributions to existing research on news summarization. (The codes used in this study were derived from the class \"Deep Multi-Task and Meta Learning\" offered by Stanford School of Engineering; we implemented and adapted these foundational project codes to meet the specific requirements of our study.) Through ELearn, we find that using larger models, increasing the number of shots during prompting, and leveraging simple templates can all enhance model performance. We also show that utilizing selective relevant examples during prompting does not meaningfully impact performance. Through EFit, we find that fine-tuning the first layer of LLMs produces better outcomes as compared to fine-tuning other layers or utilizing LoRA, and leveraging more relevant training samples using selective samples does not result in better performance. The combined model, ELearnFit, leverages the best of both worlds and suggests practical implementations for practitioners, especially when using a limited number of annotated samples. 2 Related Work The evolution of news summarization techniques has been driven by advancements in NLP and the increasing availability of large-scale datasets. Early news summarization techniques relied on statistical methods, such as frequency analysis and clustering, to extract important information from news articles. These methods were limited in their ability to capture the semantics and context of the news content. With the advent of deep learning, news summarization techniques have undergone a significant transformation. Deep learning models, particularly transformer-based architectures such as BERT and GPT-3 [3, 5, 17], have demonstrated remarkable performance in various NLP tasks, including news summarization [7]. These models are able to learn complex representations of news articles and generate summaries that are both informative and coherent. Recent research in news summarization has focused on developing techniques that can handle diverse types of news articles, including long and complex articles, and generate summaries that are tailored to specific user needs and preferences. Additionally, there has been growing interest in explainable news summarization [8, 13], which aims to provide users with insights into how summaries are generated and the rationale behind the selection of specific sentences or phrases. Fine-tuning LLMs and in-context learning are two powerful techniques that have been successfully applied to summarization [2, 6, 19]. Fine-tuning LLMs [15] involves adapting the pre-trained LLM to the specific task of news summarization by fine-tuning its parameters on a smaller, task-specific dataset. This allows the LLM to leverage its learned knowledge and adapt it to the task of generating informative and coherent summaries. In-context learning is a technique where a pre-trained LM utilizes text input to define a task. By providing the model with an instruction and/or a few task demonstrations, it gains the ability to predict subsequent steps and complete additional instances of the task [3]. Furthermore, in-context learning can be viewed as a form of implicit Bayesian inference [18]. The model learns to infer a latent concept from the context and uses it to generate a response. The pretraining distribution can be seen as a mixture of hidden Markov models (HMMs), where each HMM represents a different concept. When prompted with a specific context, the model implicitly infers the latent concept that is most relevant to the context and generates a response based on that concept. To facilitate the training and evaluation of news summarization models, large-scale datasets such as CNN/Daily Mail and XSum [4, 14] have proven invaluable. 
These datasets provide a diverse collection of news articles and human-generated summaries, enabling researchers to benchmark different summarization techniques and track progress in the field. 3 Approach In this study, we utilize the XSum dataset, a large-scale collection of news articles with annotated summaries, as analyzed in Subsection 3.1, to explore various methods for enhancing prompting (ELearn) and fine-tuning (EFit) in a more efficient manner, which will be explained in Subsections 3.2 and 3.3, respectively. Furthermore, we investigate the advantages of combining these techniques through our proposed ELearnFit approach, which will be described in Subsection 3.4. To run each model, the input consists of the testing article, which may or may not be accompanied by support article-summary pair samples in the prompt. The output is the generated summary. To generate the summary, we sample from a pre-trained language model using greedy decoding, producing tokens one by one until a stop token is encountered or the maximum token limit of 100 is reached. To evaluate the performance of the model, the ROUGE-1 F1 score is employed. This metric measures the overlap between the generated summary and the reference summary. Although the main emphasis of this paper is on fine-tuning LLaMa2 models, it is worth highlighting that the strategies and techniques discussed in the following sections can be adapted to optimize the performance of other transformer-based models. 3.1 Analysis of Data and Performance of Existing Models on Leaderboard We conduct our research using the XSum dataset, which consists of a training set comprising 204,045 article-summary samples meticulously curated by the original researchers. In Figure 1, the distributions of article and summary lengths in the training set are displayed. 
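The ROUGE-1 F1 score mentioned above is the harmonic mean of unigram precision and recall between a generated and a reference summary. A minimal illustrative version is sketched below; it is not the paper's evaluation code, which likely applies additional normalization such as stemming:

```python
from collections import Counter

def rouge1_f1(generated: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between two texts."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # matches, clipped per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 4))  # -> 0.8333
```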
Due to limited resources, we face constraints (refer to Section 4.6 for more details) in using powerful GPU machines to fine-tune models using the complete training dataset. Additionally, the size and input token limits for several representative open-source GPT models, as indicated in Table 1, could pose restrictions on testing few-shot learning scenarios. Consequently, we create a smaller dataset consisting of 17,806 samples from the training set. This subset is obtained by filtering out rows from the training dataset where the combined word count of the article and summary exceeded 100. The length distributions of the filtered articles and summaries are displayed in Figure 2. Based on numerical testing observations, it has been determined that even the filtered dataset is still too large to adequately explore optimal parameters in experiments. To ensure a fair comparison across all experiments, we further reduce the dataset by selecting the initial 256 article-summary pairs as the fine-tuning set. The remaining 125 pairs are reserved for testing purposes. It\u2019s important to note that while the testing set consists of only 125 pairs, the entire filtered dataset (excluding the testing pairs) is utilized to assist the model in selecting relevant support samples for prompting and fine-tuning, as explained in subsections 4.2 and 4.4, respectively. According to the leaderboard ranking [1], the top-performing papers in news summarization achieve impressive results by assigning probability mass to candidate summaries [20] or by aligning model-generated sequences with reference sequences [12]. These approaches consistently yield Rouge-1 scores close to 0.5 across the entire testing dataset. However, in our work, we simply sample from a pre-trained LLM using greedy decoding, generating tokens iteratively until either a stop token is encountered or the maximum token limit of 100 is reached. 
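The length-based filtering described above (dropping pairs whose combined article and summary word count exceeds 100) can be sketched as follows; the function name and data layout are illustrative, not taken from the authors' code:

```python
def filter_pairs(pairs, max_words=100):
    """Keep (article, summary) pairs whose combined word count is <= max_words."""
    return [
        (article, summary)
        for article, summary in pairs
        if len(article.split()) + len(summary.split()) <= max_words
    ]

pairs = [("short article here", "tiny summary"), ("word " * 95, "word " * 10)]
kept = filter_pairs(pairs)
print(len(kept))  # the second pair (105 words combined) is dropped
```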
It is worth noting that our primary focus is on optimizing efficient techniques for in-context learning and fine-tuning in news summarization, with the specific choice of dataset and token adjustment not being crucial to the outcome of our work. Figure 1: Length Distribution of Articles and Summaries in Training Set Figure 2: Length Distribution of Filtered Articles and Summaries (Combined Word Count \u2264100) Table 1: Number of Parameters and Input Token Limits for GPT Models (Approximately 1.5 Tokens per Word). GPT2-Medium: 345 million parameters, 1,024 input tokens; Eleuther-Neo: 2.7 billion parameters, 2,048 input tokens; LLaMa2-7B: 7 billion parameters, 2,048 input tokens; LLaMa2-13B: 13 billion parameters, 4,096 input tokens. 3.2 ELearn Efficient In-Context Learning We use two simple templates to investigate the impact of templates on few-shot learning. Figure 3 illustrates the case for one-shot learning. The first template, called \"NONE,\" utilizes a single space to separate the support article, support summary, and the test article. The second template, known as \"TL;DR\" (Too Long; Didn\u2019t Read), utilizes \" TL;DR: \" to differentiate between the article and summary (note that a space intentionally appears before and after \"TL;DR:\" in most occurrences, while the final occurrence of \"TL;DR:\" has a space before it and no space after the colon; this formatting choice facilitates word generation by the language model). Additionally, a single space is used to separate the support sample from the test sample. For clarity, these separators are highlighted in green in the figure. Figure 3: Templates for One-Shot Learning: \"none\" vs \"TL;DR\" Figure 3 presents an example of a one-shot learning template. An interesting aspect to explore is the impact of different numbers of support examples in the prompt. When selecting examples, one approach is to randomly choose article-summary pairs from the training set, which generally provides diversified support examples. 
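The "TL;DR" template's spacing rules can be made concrete with a small prompt builder. This is an illustrative sketch following the spacing conventions described above, not the authors' implementation:

```python
def build_tldr_prompt(support_pairs, test_article):
    """Build a few-shot 'TL;DR' prompt: ' TL;DR: ' separates each support
    article from its summary; a single space separates samples; the final
    ' TL;DR:' (no trailing space) cues the model to generate the summary."""
    parts = [f"{article} TL;DR: {summary}" for article, summary in support_pairs]
    parts.append(f"{test_article} TL;DR:")
    return " ".join(parts)

prompt = build_tldr_prompt([("Support article.", "Support summary.")], "Test article.")
print(prompt)
```

With the "NONE" template, the same builder would simply join all fields with single spaces instead of the "TL;DR:" marker.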
Another approach is to retrieve pairs similar to a given testing article for the prompt, which may result in examples concentrated around specific content or topics. Furthermore, it is important to consider the size of language models, as it directly relates to memory usage and can potentially influence in-context learning. 3.3 EFit Efficient Fine-Tuning In LLaMa2 [16], the transformer block plays a crucial role in the transformer architecture. It comprises two main sub-layers: a self-attention layer and a feed-forward network. To construct a Transformer model, multiple Transformer blocks are repeated (32 for LLaMa2-7b and 40 for LLaMa2-13b) and stacked together. Each block processes the output of the previous block, allowing the LLaMa2 model to capture both local and global dependencies in the input sequence. However, due to the large size of the model and limited GPU resources, one approach for parameter-efficient fine-tuning is to selectively choose a specific transformer block layer, such as the first layer, to fine-tune the pre-trained weight matrix W_0^l \u2208 R^(d_1 \u00d7 d_2) to a new arbitrary weight matrix W_ft^l while freezing the remaining block layers. Another approach for parameter-efficient fine-tuning is to employ LoRA (Low-Rank Adaptation). This technique freezes the pretrained model weights and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. By doing so, the number of trainable parameters for downstream tasks is significantly reduced [9]. Mathematically, LoRA imposes constraints on the fine-tuned parameter space: W_ft^l = W_0^l + AB^\u22a4, where A \u2208 R^(d_1 \u00d7 p) and B \u2208 R^(d_2 \u00d7 p) are low-rank matrices and p << d_1, d_2. 
With LoRA, the number of parameters being fine-tuned for a single layer is (d_1 + d_2) \u00d7 p, while the original number of parameters for the single layer is d_1 \u00d7 d_2. Therefore, the ratio of parameters fine-tuned by LoRA to the original parameters is: ((d_1 + d_2) \u00d7 p) / (d_1 \u00d7 d_2) = (1/d_1 + 1/d_2) \u00d7 p. Let\u2019s take the query projection matrix (q_proj) of the self-attention layer of LLaMa2-7b as an example. The matrix has dimensions of 4,096 \u00d7 4,096, with d_1 = 4,096 and d_2 = 4,096. By applying LoRA with a rank parameter of p = 16 (where p << d_1 and p << d_2), we achieve a reduction ratio of 0.0078, indicating significant parameter reduction. LoRA proves to be most effective in saving parameters when p is much smaller than both d_1 and d_2. Furthermore, inspired by the idea of retrieval augmented generation (RAG) [10], we incorporate the selection of relevant support examples during the prompting and fine-tuning stages. This is accomplished by performing semantic search to retrieve top pairs that are similar to each individual testing article. By adopting this approach, the fine-tuning examples can be more targeted and aligned with the specific content or topics covered in the testing articles. Another alternative is to randomly select article-summary pairs, which introduces a broader range of examples for the fine-tuning process. This random selection provides diverse instances, enhancing the fine-tuned model\u2019s robustness and adaptability. 3.4 ELearnFit Combine ELearn and EFit Both the ELearn and EFit approaches, discussed earlier, have the potential to independently improve model performance. 
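The parameter-reduction arithmetic above is easy to sanity-check. The helper below is illustrative and simply evaluates the ratio formula for the q_proj example:

```python
def lora_param_ratio(d1: int, d2: int, p: int) -> float:
    """Ratio of LoRA-trained parameters, (d1 + d2) * p, to the original
    d1 * d2 parameters of a single weight matrix."""
    return (d1 + d2) * p / (d1 * d2)

# q_proj of LLaMa2-7b: a 4,096 x 4,096 matrix with LoRA rank p = 16
print(round(lora_param_ratio(4096, 4096, 16), 4))  # -> 0.0078
```

The identity ((d1 + d2) * p) / (d1 * d2) = (1/d1 + 1/d2) * p holds term by term, so either form gives the same reduction ratio.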
ELearn is preferable when there are few annotated examples available, whereas EFit may be more suitable when numerous examples are accessible. In practice, annotations are costly and often limited to a small number of examples per task. Moreover, training models with a large amount of data necessitates substantial GPU resources and time. To address these issues, we propose an approach called ELearnFit, which combines ELearn and EFit by first fine-tuning and then prompting the model. Since both ELearn and EFit have multiple parameters to optimize independently, we employ a heuristic approach. This involves selecting optimal parameters from the ELearn optimization process and then incorporating the optimal parameters from the EFit optimization process. By doing this, we effectively manage computational resources and time constraints while striving for the best parameter settings. For these experiments, prompting is conducted via random sampling of support examples in the prompt to be fed to the pre-trained model. On the other hand, fine-tuning is performed over ten iterations, with data randomly sampled from the training set without replacement in each iteration. Both the randomly sampled examples in the prompt for ELearn and the fine-tuning process for EFit introduce variability in the fine-tuning process. To comprehensively evaluate and analyze the robustness and performance of ELearn, EFit, and ELearnFit, we investigate which component contributes more to the variation. This investigation is crucial for understanding the stability and reliability of each approach across different trials and conditions. Ultimately, it will help us identify the most robust strategy that consistently delivers strong performance in the presence of variability. 4 Experiments All experiments are run on the Azure ML platform, harnessing the computational capabilities of A100 GPUs equipped with 80 gigabytes of high-bandwidth memory. 
This technological foundation provides the ideal setting for this series of investigations. In order to ensure a systematic exploration and refinement of the parameters, we conduct all experiments in a sequential manner. By employing a heuristic sequential approach, we efficiently manage computational resources and time constraints while striving for optimal parameter settings. Subsection 4.1 focuses on ELearn, and compares the results of varying LLM model size, prompt templates, and few-shot learning paradigms. Subsection 4.2 delves into the impact of selective samples for prompting on ELearn. Subsection 4.3 shifts to EFit, and explores the effectiveness of parameter-efficient fine-tuning through two distinct approaches: selective layer fine-tuning and LoRA algorithms. Subsection 4.4 sheds light on the insights gleaned from selective training samples for EFit. Subsection 4.5 analyzes the impact of combining the capabilities of ELearn and EFit, resulting in ELearnFit, and highlighting the potential for synergy between these techniques. Lastly, Subsection 4.6 compares the robustness of the various models. 4.1 Investigate ELearn by Analyzing the Influence of Model Size, Templates, and Few-shot Learning In this experiment, we compare four representative open-source GPT models: Eleuther-Neo, GPT2-medium, LLaMa2-7b, and LLaMa2-13b, explore the influence of two prompt templates (none and TL;DR), and vary the number of examples in the prompt. The results are illustrated in Figure 4, where the x-axis represents the number of examples in the prompt and the y-axis represents the Rouge-1 score. Our findings suggest that increasing the number of examples in the prompt leads to improved model performance. Notably, in the case of GPT-2 models, the zero-shot performance exceeds the one-shot performance. 
These findings align with previous studies conducted by [3, 18] on datasets such as LAMBADA, HellaSwag, PhysicalQA, and RACE-m, which reported similar observations in relation to GPT-3. Additionally, we observe that utilizing a straightforward prompt structure, specifically \"TL;DR\" (depicted in red), facilitates the model\u2019s learning process. This simplified format enables faster pattern recognition in comparison to the none template (depicted in black). Furthermore, focusing on the four models and examining their performance with the \"TL;DR\" template (depicted in red) in Figure 4, it becomes evident that LLaMa2-7b and LLaMa2-13b outperform GPT2-medium and Eleuther-Neo. This finding suggests that the larger models, LLaMa2-7b and LLaMa2-13b, possess superior capabilities in handling the summarization task, signifying their suitability for this specific application. Figure 4: Comparison of Four Language Models with Few-shot Learning using Two Templates 4.2 Enhance ELearn via Selective Samples during Prompting To further improve the performance of ELearn, inspired by the idea of RAG for prompting, we utilize semantic search to retrieve support article-summary samples that are contextually relevant to each testing article, which enables ELearn to learn from these samples in prompts and potentially generate more accurate responses. In this experiment, we broaden the range of support samples used in the prompt by including the entire filtered dataset, excluding the samples designated for testing. The outcomes obtained using this expanded scope align closely with those achieved using the original support samples from the training set, so we solely showcase the results obtained from the latter (the complete filtered dataset, excluding the 125 testing samples) in this paper. Note that the order of prompting may potentially lead to different performance results as compared to random prompt ordering. 
Research conducted by [11] demonstrates that in the QA problem, the location of relevant information within the language model\u2019s input context follows a U-shaped performance curve. Moreover, the 7B Llama-2 models are biased towards recent information, performing best when it is located at the end of the input context. However, exploring the impact of prompt order for news summarization is beyond the scope of this research paper. Figure 5 depicts that the utilization of selective samples during few-shot learning does not significantly affect the performance of the model. One potential explanation for this outcome could be that our straightforward implementation is incapable of capturing the extensive range of topics encompassed in news articles. As a result, the support samples may not adequately represent the diverse range of subjects covered by the articles in the test dataset. Figure 5: An Evaluation of In-Context Learning Methods: Comparing Random Samples vs. Selective Samples in Prompts 4.3 Investigate EFit We explore the effectiveness of parameter-efficient fine-tuning using two approaches: LoRA (LoRA4, LoRA16, and LoRA32 algorithms) and selective layers. Figure 6 shows the results of the various fine-tuned models with LoRA as well as the models fine-tuned on specific layers (while freezing the remaining layers). The results suggest that increasing the number of training examples for fine-tuning generally leads to improved performance. When there is only one support example, all algorithms perform similarly. However, with a larger number of support examples (e.g., 8 and 64), fine-tuning the first layer and fine-tuning with LoRA16 results in significantly better performance. Furthermore, when the number of support examples is limited (e.g., 8), fine-tuning the first layer of LLaMa2-7b often yields weaker results compared to fine-tuning with LoRA16. 
This is because LoRA16 makes slight modifications to each layer of LLaMa2-7b, allowing it to adapt more effectively to a small number of examples. However, as the number of support examples increases (e.g., 64), fine-tuning the first layer of LLaMa2-7b shows improved performance compared to fine-tuning with LoRA16. This is because fine-tuning the first layer allows LLaMa2-7b to learn task-specific patterns and relationships more directly, leveraging the increased amount of training data. Additionally, fine-tuning with LoRA16 outperforms both LoRA4 and LoRA32. This suggests that the decomposed weight matrix with a rank of 16 is better suited for representing features learned from news articles compared to ranks 4 and 32. Finally, the model where only the last layer is fine-tuned performs the worst, suggesting that the pre-trained and fine-tuned data sets do not fully overlap. As a result, fine-tuning the lower-level, granular features proves more effective in improving performance as compared to focusing on high-level features, given an adequate number of support examples. These findings suggest that fine-tuning the first layer of LLMs has the most impact. Figure 6: Parameter-Efficient Fine-tuning using LoRA and Selective Layer Approaches (Please note that the x-axis is logarithmically scaled for values of the number of support examples greater than 4). In practice, annotated examples may not be readily available, so we investigate the impact of sample size on model performance. In Figure 7, we observe that the Rouge-1 score reaches a local maximum around 64 training examples. Beyond that point, the performance exhibits fluctuations as the number of examples continues to increase. This finding suggests that 64 training examples could potentially represent a \"sweet spot\" for fine-tuning. 
4.4 Enhance EFit via Selective Training Samples during Fine-tuning To enhance the performance of EFit, we draw inspiration from the concepts of selecting relevant samples in the prompting phase. In this experiment, for each testing sample, we select the top 1 or top 2 most similar training samples from the entire filtered dataset, excluding the 125 testing samples. We then fine-tune the model using these selected samples. Table 2 shows the results. When fine-tuning the first layer of LLaMa2-7b, using the more similar samples during fine-tuning did not impact model performance. On the other hand, when the model is fine-tuned with LoRA16, using the more similar samples led to slightly improved performance. Interestingly, the improved results under LoRA16 are comparable to the results under the model with the fine-tuned first layer. This suggests that the LoRA16 model may benefit from having more relevant samples during fine-tuning. Figure 7: Impact of Number of Training Examples on Fine-tuning the First Layer of LLaMa2-7b (note that the x-axis has been logarithmically scaled). Table 2: Comparison of Rouge-1 (%) between EFit with Random Samples and Selective Samples. For LLaMa2-7b with the fine-tuned first layer vs. LLaMa2-7b with LoRA16: Random Sample, 36.32 vs. 32.43; Top 1 Selective Sample, 35.36 vs. 36.16; Top 2 Selective Samples, 36.62 vs. 34.38. 4.5 ELearnFit Optimize LLM by Combining ELearn and EFit We now look to combine the ELearn and EFit approaches to gain the benefits of both better prompting and fine-tuning. In this experiment, we focus on the TL;DR template in the prompt and two fine-tuned models (fine-tune the first layer of LLaMa2-7b or LLaMa2-7b with LoRA16). During each testing phase, we first fine-tune LLaMa2-7b and then apply few-shot in-context learning using different numbers (referred to as shots) of support examples (e.g., 0, 1, 2, 4, and 8 shots). The examples for in-context learning were randomly selected from the training set and incorporated into the prompts. 
The results, as depicted in Figure 8 and Figure 9, indicate that when there are limited annotations available for fine-tuning LLaMa2-7b, 4-shot learning leads to superior performance when compared to the results using fewer shots. Interestingly, both 4-shot and 8-shot learning exhibit similar performance levels. However, this performance gap disappears when there are enough examples for fine-tuning and the results with different shot learnings converge. This suggests that few-shot learning has a lesser impact when a model is effectively fine-tuned with an adequate number of examples. Said another way, having more examples in the prompt can compensate for smaller sample sizes during the fine-tuning process. Similar to our investigation of selecting relevant samples in few-shot learning for ELearn, we now test whether this approach would benefit ELearnFit during its few-shot learning phase. Figure 8: Fine-tuning LLaMa2-7b with LoRA16 and Applying Few-shot In-context Learning Figure 9: Fine-tuning the First Layer of LLaMa2-7b and Applying Few-shot In-context Learning Figures 10 and 11 show the results of applying ELearnFit after fine-tuning the first layer of LLaMa2-7b and fine-tuning with LoRA16, respectively. It is worth mentioning that when the number of training examples for fine-tuning is zero, it signifies pure in-context learning. Consistent with our findings in Section 3.2, these results suggest that randomly sampled examples offer a wider range of styles for the LLMs to effectively learn the summarization task. On the other hand, selective sampling faces challenges in capturing the desired diversity. Furthermore, it is worthwhile noting that when the model undergoes fine-tuning with LoRA16 and has an adequate number of examples (e.g., 64 examples), selective sampling demonstrates a slight improvement in overall model performance. We now use semantic search to identify the most similar training samples to fine-tune the model. 
Figure 12 shows that fine-tuning the first layer, using selective samples in training and 4-shot learning during in-context learning, exhibits slightly inferior performance as compared to the proposed combined approach, which involves 64 examples for fine-tuning and 4 examples in prompting. However, it outperforms the ELearnFit approach with 1- or 2-shot learning. Figure 10: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning the First Layer of LLaMa2-7b Figure 11: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning LLaMa2-7b with LoRA16 Additionally, as depicted in Figure 13, when fine-tuning with LoRA16, the combination of selective samples in training, and few-shot learning with four selective samples during in-context learning yields the overall best result. One possible explanation is that the use of selective samples for fine-tuning and for prompting together could potentially enhance the effectiveness of fine-tuning LLaMA2-7b with LoRA16. This proposition finds support in the comparison between Figure 8 and Figure 9. Specifically, when evaluating the performance of the 4-shot learning scenarios, an increase in the number of examples for fine-tuning from 8 to 64 results in a degradation in performance for the former, as depicted in Figure 8. In contrast, the latter exhibits a stable performance, as illustrated in Figure 9. Figure 12: Comparing Fine-tuning the First Layer of LLaMa2-7b: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting Figure 13: Comparing Fine-tuning LLaMa2-7b with LoRA16: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting 4.6 Robustness Checks In our experiment, fine-tuning is performed over ten iterations. 
In each iteration, data are randomly sampled from the training set without replacement, introducing variability in the fine-tuning process. We now assess the robustness of three approaches: ELearn, EFit, and ELearnFit. The descriptions for each model are detailed in Table 3. Figure 14 presents the results obtained from five repeated trials for each approach. The x-axis represents the nth trial, while the y-axis displays the Rouge-1 score. While we were limited to five trials due to computational constraints, additional trials could be conducted to further assess the robustness of these approaches. This experimental setup allowed us to gain insights into the performance of each approach under varying conditions and to compare their effectiveness in different scenarios.

Table 3: Model Description for Robustness Comparison
Model | In-context Learning | Fine-tuning
ELearn | 4 Shots | -
EFit_first | - | First Layer w/ 64 Examples
EFit_LoRA16 | - | LoRA16 w/ 64 Examples
ELearnFit_first | 4 Shots | First Layer w/ 64 Examples
ELearnFit_LoRA16 | 4 Shots | LoRA16 w/ 64 Examples

Table 4: Performance Details for Robustness Comparison
Model | Mean | Standard Deviation
ELearn | 0.2962 | 0.0303
EFit_first | 0.3465 | 0.0039
EFit_LoRA16 | 0.3274 | 0.0029
ELearnFit_first | 0.3441 | 0.0086
ELearnFit_LoRA16 | 0.3273 | 0.0053

Table 4 reveals that in-context learning exhibits greater variability across trials compared to the other two approaches. This is evident from the higher standard deviation observed in the ELearn results. In contrast, both EFit_first and ELearnFit_first demonstrated similar performance, although ELearnFit_first had roughly twice the standard deviation of EFit_first. A similar observation can be made for ELearnFit_LoRA16 and EFit_LoRA16. These findings further suggest that fine-tuning offers more stable performance than in-context learning.
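The robustness comparison above boils down to comparing means and standard deviations of Rouge-1 across repeated trials. A minimal sketch (the per-trial scores below are hypothetical, since Table 4 reports only the aggregates):

```python
import statistics

def trial_stats(scores):
    """Mean and sample standard deviation of Rouge-1 scores across repeated trials."""
    return statistics.mean(scores), statistics.stdev(scores)

# Hypothetical per-trial Rouge-1 scores, shaped like the trends in Table 4:
elearn_trials = [0.26, 0.33, 0.28, 0.31, 0.30]       # in-context only: more variable
efit_trials = [0.345, 0.347, 0.346, 0.348, 0.344]    # fine-tuned: more stable
```

The wider spread of the in-context-only scores mirrors the larger ELearn standard deviation reported in Table 4.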
Additionally, when the number of samples for fine-tuning is limited, the combined approach ELearnFit yields consistent and reliable performance across different trials, highlighting its potential for enhancing robustness. [Figure 14: Robustness Comparison of ELearn, EFit and ELearnFit] Limitations In this paper, we primarily directed our attention to the LLaMa2-7b model, a formidable language model consisting of 7 billion parameters. Assuming that each parameter occupies a modest 4 bytes of memory, the estimated total memory requirement for this model is approximately 27.34 gigabytes, calculated as follows: Total Memory Size = 7 \u00d7 10^9 \u00d7 4 bytes / (1024^2) \u2248 27.34 gigabytes (1), where 1 kilobyte (KB) = 1024 bytes and 1 megabyte (MB) = 1024 kilobytes. Similarly, the total memory requirements for the LLaMa2-13b and LLaMa2-70b models are approximately 51 gigabytes and 274 gigabytes, respectively. Due to limited resources on A100 GPUs, which offer up to 80 gigabytes of high-bandwidth memory, and the substantial computation time required for each experiment, we primarily focus on optimizing ELearn and EFit with the LLaMa2-7b model in this paper. However, we believe that the insights gained from this research work can be readily extended to larger language models such as LLaMa2-70b, especially when coupled with more powerful GPU resources." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02730v1.json b/abs_9K/test_abstract_short_2405.02730v1.json new file mode 100644 index 0000000000000000000000000000000000000000..de38b219cd763595cd4c0c722d3b9bf1067c2fd2 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02730v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.02730v1", + "title": "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers", + "abstract": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. 
With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.", + "authors": "Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. 
It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.", + "main_content": "Introduction Thanks to the attention mechanism that establishes long-range spatial dependencies, Transformers [32] are proved highly effective on various vision tasks including image classification [13], object detection [5], segmentation [37], and image restoration [6]. DiTs [24] introduce full transformer backbones to diffusion, which demonstrate outstanding performance and scalability on image-space and latent-space generation tasks. Recent follow-up works have demonstrated the promising prospect of diffusion transformers by extending their applications to flexible-resolution image generation [22], realistic video generation [2], et cetera. Interestingly, DiTs have discarded the U-Net architecture [26] that is universally applied in manifold previous works, either in pixel [17; 11] or latent space [25]. The use of isotropic architectures in DiTs is indeed successful, as scaled-up DiT models achieve supreme performance. However, the abandonment of the widely-applied U-Net architecture by DiTs and their improvements [16; 8; 22] on latent-space image generation tasks triggers our curiosity, because the U-Net inductive bias is always believed to help denoising. 
Hence, we rethink deploying DiTs on a canonical U-Net architecture. In order to experiment with the combination of U-Net with DiT, we first propose a naive DiT in U-Net style (DiT-UNet) and compare it with an isotropic DiT of similar size. It turns out that DiT-UNets are merely comparable to DiTs at similar computation costs. From this toy experiment, it is inferred that the inductive bias of U-Net is not fully leveraged when U-Nets and plain transformer blocks are simply combined. \u2217Equal Contribution. \u2020Corresponding Author. Preprint. Under review. arXiv:2405.02730v1 [cs.CV] 4 May 2024 [Figure 1: Comparing U-DiTs with DiTs and their improvements. We plot FID-50K versus denoiser GFLOPs (in log scale) after 400K training steps. U-DiTs achieve better performance than their counterparts.] [Figure 2: The performance of U-DiTs and DiTs of various sizes. U-DiTs perform consistently better than DiTs as training steps increase. The marker size qualitatively represents the computation cost of the model.] Hence, we rethink the self-attention mechanism in DiT-UNet. The backbone in a latent U-Net denoiser produces features in which low-frequency components dominate [27]. This discovery implies the existence of redundancies in backbone features: the attention module in the U-Net diffuser should highlight low-frequency domains. As previous theories praised downsampling for filtering high-frequency noise in diffusion [35], we seek to leverage this natural low-pass filter by performing token downsampling on the features for self-attention. Unlike previous transformer works [15; 38; 28] that downsample key-value pairs only, we radically downsample the query-key-value tuple altogether, such that self-attention is performed among downsampled latent tokens.
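As a concrete (unofficial) sketch of this query-key-value downsampling, the feature map can be split into four 2x-downsampled grids by strided sampling, attention run inside each grid, and the results merged back. Q/K/V projections and multi-head details are omitted here, so this illustrates only the token routing, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def downsampled_self_attention(feat):
    """Self-attention over 2x-downsampled token grids.

    feat: (N, N, d) feature map. The map is split into four (N/2, N/2)
    sub-grids by 2-strided sampling; attention runs within each sub-grid
    (with identity Q/K/V projections for brevity); outputs are merged back,
    so the overall tensor size and feature dimension are unchanged.
    """
    n, _, d = feat.shape
    out = np.empty_like(feat)
    for i in (0, 1):
        for j in (0, 1):
            sub = feat[i::2, j::2, :]                   # one downsampled grid
            toks = sub.reshape(-1, d)                   # ((N/2)^2, d) tokens
            attn = softmax(toks @ toks.T / np.sqrt(d))  # Q = K = V = toks
            out[i::2, j::2, :] = (attn @ toks).reshape(sub.shape)
    return out
```

Each grid holds (N/2)^2 tokens, so each attention call costs roughly 1/16 of full-scale attention and the four calls together about 1/4, which is the saving the complexity analysis in Section 3 arrives at.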
Surprisingly, when we incorporate self-attention with downsampled tokens into DiT-UNet, better results are achieved on latent U-Net diffusers with a significant reduction in computation. Based on this discovery, we scale up U-Nets with downsampled self-attention and propose a series of state-of-the-art U-shaped Diffusion Transformers (U-DiTs). We conduct manifold experiments to verify the outstanding performance and scalability of our U-DiT models over isotropic DiTs. As shown in Fig. 1 & Fig. 2, U-DiTs could outperform DiTs by large margins. Amazingly, the proposed U-DiT model could perform better than DiT-XL/2, which is 6 times larger in terms of FLOPs. 2 Preliminaries Vision Transformers. ViTs [13] introduced a transformer backbone to vision tasks by patchifying the input and viewing an image as a sequence of patch tokens, and proved their effectiveness on large-scale image classification tasks. While ViTs adopt an isotropic architecture, some following works on vision transformers [33; 21] propose a pyramid-like hierarchical architecture that gradually downsamples the feature. The pyramid architecture has proved highly effective in classification and other downstream tasks. Vision transformers are also mainstream backbones for denoising models. IPT [6] introduces an isotropic transformer backbone for denoising and other low-level tasks. Some later works [19; 18; 7] follow the isotropic convention, but other denoising works [34; 36] shift to U-Net backbones in their designs. The pioneering works of U-ViT [1] and DiT [24] introduce full-transformer backbones to diffusion as denoisers. Recent Advancements in Diffusion Transformers. Following DiTs, some works investigate the training and diffusion strategies [14; 23] of Diffusion Transformers. Other works focus on the design of the DiT backbone.
DiffiT [16] introduces a new fusion method for conditions; FiT [22] and VisionLLaMA [8] strengthen DiT by introducing LLM tricks including RoPE2D [30] and SwishGLU. These transformer-based diffusion works agree on adopting isotropic architectures on latents, i.e. the latent feature space is not downsampled throughout the whole diffusion model. The authors of DiT [24] even regard the inductive bias of U-Net as \u201cnot crucial\u201d. [Figure 3: The evolution from the DiT to the proposed U-DiT. Left (a): the original DiT, which uses an isotropic architecture. Middle (b): DiT-UNet, a plain U-Net-style DiT, which we try as a simple combination of DiT and U-Net in the toy experiment. Right (c): the proposed U-DiT, which downsamples the input features for self-attention. The downsampling operation could amazingly improve DiT-UNet with a huge cut in the amount of computation.] U-Nets for Diffusion. From canonical works [17; 29; 11; 25], the design philosophy of U-Net [26] is generally accepted in diffusion. Specifically, Stable Diffusion [25] uses a U-Net-based denoiser on the compressed latent space for high-resolution image synthesis, which is highly successful in manifold generative tasks.
Some previous trials on diffusion transformers [4; 16; 9] also adopt U-Net on pixel-space generation tasks; but strangely, they shifted to isotropic DiT-like structures for latent-space diffusion. Despite its popularity in pixel-space diffusion, the U-Net architecture is not widely accepted in recent transformer-oriented works on latent-space diffusion. Motivated by this, we are dedicated to investigating the potential of Transformer-backboned U-Net on latent-space diffusion. 3 Investigating U-Net DiTs in Latent As recapped above, the U-Net architecture is widely adopted in diffusion applications; theoretical evaluations of U-Net denoisers also reveal their advantage, as downsampling at U-Net stage transitions could filter noise that dominates high frequencies [35]. The unprecedented desertion of U-Net in favor of isotropic architectures for latent diffusion transformers is thus counter-intuitive. We rethink and elucidate the potential of transformer-backboned U-Net denoisers in latent diffusion via a toy experiment. A canonical U-Net-style DiT. To start with, we propose a naive Transformer-backboned U-Net denoiser named DiT-UNet by embedding DiT blocks into a canonical U-Net architecture. Following previous U-Net designs, the DiT-UNet consists of an encoder and a decoder with an equal number of stages. While the encoder progressively downsamples the input image stage by stage, the decoder scales the encoded image from the most compressed stage back up to the input size. At each encoder stage transition, spatial downsampling by a factor of 2 is performed while the feature dimension is doubled. Skip connections are provided at each stage transition. The skipped feature is concatenated and fused with the upsampled output from the previous decoder stage, replenishing the information lost to feature downsampling. Considering the small, cramped latent space (32\u00d732 for 256\u00d7256-sized generation), we designate 3 stages in total, i.e.
the feature is downsampled two times and subsequently recovered to its original size. In order to fit time and condition embeddings to the various feature dimensions across multiscale stages, we use independent embedders for the respective stages. In addition, we avoid patchifying the latent, as the U-Net architecture itself downsamples the latent space and there is no need for further spatial compression. Via toy experiments, we compare the proposed U-Net-style DiT with the original DiT that adopts an isotropic architecture. In order to align the model with the DiT design, we repeatedly use plain DiT blocks in each stage. Each DiT block includes a self-attention module as the token mixer and a two-layer feed-forward network as the channel mixer. We conduct the experiment by training the U-Net-style DiT for 400K iterations and comparing it with DiT-S/4, which is comparable in size. All training hyperparameters are kept unchanged. It turns out that the U-Net-style DiT only gains a limited advantage over the original isotropic DiT. The inductive bias of U-Net is insufficiently utilized.

Table 1: Toy experiments on U-Net-style DiTs (ImageNet 256\u00d7256). The naive DiT-UNet performs slightly better than the isotropic DiT-S/4; but interestingly, when we apply token downsampling for self-attention, the DiT-UNet performs better at lower cost.
Model | GFLOPs | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
DiT-S/4 | 1.41 | 97.85 | 21.19 | 13.27 | 0.26 | 0.41
DiT-UNet | 1.40 | 93.48 | 20.41 | 14.20 | 0.27 | 0.42
+ Token Downsampling | 0.90 | 89.43 | 21.36 | 15.13 | 0.29 | 0.44

Improved U-Net-style DiT via token downsampling. In seeking to better incorporate transformer attention into diffusion U-Nets, we review the role of the U-Net backbone as the diffusion denoiser. A recent work on latent diffusion models [27] conducted frequency analysis on intermediate features from the U-Net backbone, and concluded that energy concentrates in the low-frequency domain.
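The low-pass role of downsampling invoked in this argument is easy to verify numerically: 2x downsampling by averaging exactly cancels the fastest (Nyquist-rate) oscillation while preserving a slow trend. A toy check of our own, not taken from the paper:

```python
import numpy as np

n = np.arange(64)
low = np.sin(2 * np.pi * n / 64)   # slow, low-frequency trend
high = (-1.0) ** n                 # fastest representable (Nyquist) oscillation
signal = low + high

# 2x downsampling by averaging adjacent pairs acts as a crude low-pass filter;
# each (+1, -1) pair of the Nyquist component averages to zero exactly,
# so only the low-frequency trend survives in the pooled signal.
pooled = signal.reshape(-1, 2).mean(axis=1)
```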
This frequency-domain discovery hints at potential redundancies in the backbone: the U-Net backbone should highlight the coarse object from a global perspective rather than the high-frequency details. Naturally, we resort to attention with downsampled tokens. The operation of downsampling is a natural low-pass filter that discards high-frequency components. The low-pass nature of downsampling has been investigated under the diffusion scenario, with the conclusion that downsampling helps denoisers in diffusion as it automatically \u201cdiscards those higher-frequency subspaces which are dominated by noise\u201d [35]. Hence, we opt to downsample tokens for attention. In fact, attention over downsampled tokens is not new. Previous works on vision transformers [15; 38] have proposed methods to downsample key-value pairs for computation cost reduction. Recent work on training-free acceleration of diffusion [28] also applies key-value downsampling to Stable Diffusion models. But these works maintain the number of queries, and thus the downsampling operation is not completely performed. Besides, these downsampling measures usually involve a reduction of tensor size, which could result in a significant loss of information. Different from these works, we propose a simple yet radical token downsampling method for DiT-UNets: we downsample queries, keys, and values at the same time for diffusion-friendly self-attention, but meanwhile we keep the overall tensor size to avoid information loss. The procedure is detailed as follows: the feature-map input is first converted into four 2\u00d7 downsampled features by the downsampler (the downsampler design is detailed in Sec. 4.2). Then, the downsampled features are mapped to Q, K, V for self-attention. Self-attention is performed within each downsampled feature. After the attention operation, the downsampled tokens are spatially merged back together to recover the original number of tokens.
Notably, the feature dimension is kept intact during the whole process. Unlike U-Net downsampling, we are not reducing or increasing the number of elements in the feature during the downsampling process. Rather, we send four downsampled token sets into self-attention in a parallel manner. Self-attention with downsampled tokens does help DiT-UNets on the task of latent diffusion. As shown in Tab. 1, substituting downsampled self-attention for full-scale self-attention brings a slight improvement in the Fr\u00e9chet Inception Distance (FID) metric despite a significant reduction in FLOPs. Complexity analysis. Apart from the performance benefits, we are aware that downsampled self-attention could save as much as 1/3 of the overall computation cost compared to full-scale self-attention. We conduct a brief computation complexity analysis on the self-attention mechanism to explain where the savings come from. Given an input feature of size N \u00d7 N and dimension d, we denote the mapped query-key-value tuple as Q, K, V \u2208 R^(N^2 \u00d7 d). The complexity of self-attention is analyzed as X = AV with A = Softmax(QK^T), where computing A costs O(N^4 d) and computing AV costs another O(N^4 d). In the proposed self-attention on downsampled tokens, four sets of downsampled query-key-value tuples (Q\u21932, K\u21932, V\u21932) \u2208 R^((N/2)^2 \u00d7 d) perform self-attention respectively. While each self-attention operation costs only 1/16 of full-scale self-attention, the total cost for downsampled self-attention is 1/4 of full-scale self-attention. 3/4 of the computation cost of self-attention is saved via token downsampling. In a nutshell, we show from toy experiments that the redundancy of DiT-UNet is reduced by downsampling the tokens for self-attention. 4 Scaling the Model Up Based on the discovery in our toy experiment, we propose a series of U-shaped DiTs (U-DiTs) by applying the downsampled self-attention (proposed in Sec. 3) and scaling the U-Net-style DiT up. Settings.
We adopt the training setting of DiT. The same VAE (i.e. sd-vae-ft-ema) for latent diffusion models [25] and the AdamW optimizer are adopted. The training hyperparameters are kept unchanged, including global batch size 256, learning rate 1e\u22124, weight decay 0, and global seed 0. The training is conducted on the training set of ImageNet 2012 [10]. Apart from the self-attention on downsampled tokens as introduced in the toy experiment (Section 3), we further introduce a series of modifications to U-DiTs, including cosine similarity attention [20; 18], RoPE2D [30; 22; 8], depthwise conv FFN [34; 3; 38], and re-parametrization [12; 31]. The contribution of each modification is quantitatively evaluated in Sec. 6. 4.1 U-DiT at Larger Scales

Table 2: Comparing U-DiTs against DiTs on ImageNet 256\u00d7256 generation. Experiments with a superscript \u2217 are replicated according to the official code of DiT. We compare models trained for 400K iterations with the standard training hyperparameters of DiT. The performance of U-DiTs is outstanding: U-DiT-B could beat DiT-XL/2 with only 1/6 of inference FLOPs; U-DiT-L could outcompete DiT-XL/2 by 10 FIDs.
Model | FLOPs(G) | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
DiT-S/2 [24] | 6.06 | 68.40 | - | - | - | -
DiT-S/2\u2217 | 6.07 | 67.40 | 11.93 | 20.44 | 0.368 | 0.559
U-DiT-S (Ours) | 6.04 | 31.51 | 8.97 | 51.62 | 0.543 | 0.633
DiT-L/4 [24] | 19.70 | 45.64 | - | - | - | -
DiT-L/4\u2217 | 19.70 | 46.10 | 9.17 | 31.05 | 0.472 | 0.612
DiT-B/2 [24] | 23.01 | 43.47 | - | - | - | -
DiT-B/2\u2217 | 23.02 | 42.84 | 8.24 | 33.66 | 0.491 | 0.629
U-DiT-B (Ours) | 22.22 | 16.64 | 6.33 | 85.15 | 0.642 | 0.639
DiT-L/2 [24] | 80.71 | 23.33 | - | - | - | -
DiT-L/2\u2217 | 80.75 | 23.27 | 6.35 | 59.63 | 0.611 | 0.635
DiT-XL/2 [24] | 118.64 | 19.47 | - | - | - | -
DiT-XL/2\u2217 | 118.68 | 20.05 | 6.25 | 66.74 | 0.632 | 0.629
U-DiT-L (Ours) | 85.00 | 10.08 | 5.21 | 112.44 | 0.702 | 0.631

Comparison with DiTs and their improvements. In order to validate the effectiveness of the proposed U-DiT models beyond simple toy experiments, we scale them up and compare them with DiTs [24] of larger sizes.
For a fair comparison, we use the same set of training hyperparameters as DiT; all models are trained for 400K iterations. The results on ImageNet 256\u00d7256 are shown in Tab. 2, where we scale U-DiTs to \u223c6e9, \u223c20e9, and \u223c80e9 FLOPs respectively and compare them with DiTs of similar computation costs. It could be concluded from Tab. 2 that all U-DiT models outcompete their isotropic counterparts by considerable margins. Specifically, U-DiT-S and U-DiT-B could outperform DiTs of comparable size by \u223c30 FIDs; U-DiT-L could outperform DiT-XL/2 by \u223c10 FIDs. It is shocking that U-DiT-B could outcompete DiT-XL/2 with only 1/6 of the computation costs. To present the advantage of our method better, we also include the performance of U-DiTs in an FID-50K versus FLOPs plot (Fig. 1). Apart from DiTs and U-DiTs, we also include other state-of-the-art methods: SiT [23], which proposes an interpolant framework for DiTs, and SiT-LLaMA [8], which combines the state-of-the-art DiT backbone VisionLLaMA with SiT. The advantages of U-DiTs over other baselines are prominent in the plot. The results highlight the extraordinary scalability of the proposed U-DiT models. U-DiTs are also performant in generation scenarios with classifier-free guidance. In Tab. 3, we compare U-DiTs with DiTs at cfg = 1.5. For a fair comparison, we train U-DiTs and DiTs for 400K iterations under identical settings.

Table 3: Generation performance with classifier-free guidance. We measure the performance of U-DiTs and DiTs at 400K training steps with cfg = 1.5. Experiments with a superscript \u2217 are replicated according to the official code of DiT.
Model | Cfg-Scale | FLOPs(G) | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
DiT-L/2\u2217 | 1.5 | 80.75 | 7.53 | 4.78 | 134.69 | 0.780 | 0.532
DiT-XL/2\u2217 | 1.5 | 118.68 | 6.24 | 4.66 | 150.10 | 0.794 | 0.514
U-DiT-B | 1.5 | 22.22 | 4.26 | 4.74 | 199.18 | 0.825 | 0.507
U-DiT-L | 1.5 | 85.00 | 3.37 | 4.49 | 246.03 | 0.862 | 0.502
U-DiTs are also performant on conditional generation. Extended training steps. We further exploit the potential of U-DiTs by extending training to 1 million steps. Fig. 2 further demonstrates that the advantage of U-DiTs is consistent at all training steps. As the number of training steps gradually goes up to 1 million, the performance of U-DiTs keeps improving (Tab. 4). We visualize the process in which image quality gradually improves (Fig. 4). Notably, U-DiT-L at only 600K training steps could outperform DiT-XL/2 at 7M training steps without classifier-free guidance. As additionally shown in Fig. 5, U-DiT models could conditionally generate authentic images at merely 1M iterations. [Figure 4: Quality improvements of generated samples as training continues. We sample from U-DiT models trained for different numbers of iterations on ImageNet 256\u00d7256. More training does improve generation quality. Best viewed on screen.] 4.2 Ablations The design of the downsampler. The downsampling operation in the proposed U-DiT transforms a complete feature into multiple spatially downsampled features. Drawing on previous wisdom, we observe that previous works either directly perform pixel shuffling, or apply a convolution layer before pixel shuffling.
While we hold that it is much too rigid to shuffle pixels directly as downsampling, applying an ordinary convolution is hardly affordable in terms of computation costs. Specifically, ordinary convolutions are costly as extensive dense connections on the channel dimension are involved: using convolution-based downsamplers could double computation costs. As a compromise, we apply depthwise convolution instead.

Table 4: The performance of U-DiT-B and U-DiT-L models with respect to training iterations (ImageNet 256\u00d7256). The unconditional generation performance of both models consistently improves as training goes on, where U-DiT-L at 600K steps strikingly beats DiT-XL/2 at 7M steps.
Model | Training Steps | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
DiT-XL/2 | 7M | 9.62 | - | - | - | -
U-DiT-B | 200K | 23.23 | 6.84 | 64.42 | 0.610 | 0.621
U-DiT-B | 400K | 16.64 | 6.33 | 85.15 | 0.642 | 0.639
U-DiT-B | 600K | 14.51 | 6.30 | 94.56 | 0.652 | 0.643
U-DiT-B | 800K | 13.53 | 6.27 | 98.99 | 0.654 | 0.645
U-DiT-B | 1M | 12.87 | 6.33 | 103.79 | 0.661 | 0.653
U-DiT-L | 200K | 15.26 | 5.60 | 86.01 | 0.685 | 0.615
U-DiT-L | 400K | 10.08 | 5.21 | 112.44 | 0.702 | 0.631
U-DiT-L | 600K | 8.71 | 5.17 | 122.45 | 0.705 | 0.645
U-DiT-L | 800K | 7.96 | 5.21 | 131.35 | 0.705 | 0.648
U-DiT-L | 1M | 7.54 | 5.27 | 135.49 | 0.706 | 0.659

Table 5: Ablations on the choice of downsampler (ImageNet 256\u00d7256). We have tried several downsampler designs, and it turns out that the parallel connection of a shortcut and a depthwise convolution is the best fit. We avoid ordinary convolution (i.e. Conv. + PS) because channel-mixing is costly: conventional convolution-based downsamplers could double the amount of computation. The U-DiT with a conventional downsampler costs as much as 2.22G FLOPs in total.
Model | FLOPs(G) | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
Pixel Shuffle (PS) | 0.89 | 96.15 | 23.90 | 13.93 | 0.272 | 0.389
Depthwise (DW) Conv. + PS | 0.91 | 89.87 | 20.99 | 14.92 | 0.288 | 0.419
DW Conv. || Shortcut + PS | 0.91 | 89.43 | 21.36 | 15.13 | 0.291 | 0.436
We also add a shortcut that short-circuits this depthwise convolution, which has proved crucial for better performance. The shortcut adds negligible computation cost to the model, and in fact, it could be removed during the inference stage with re-parameterization tricks. The results are shown in Tab. 5. The contribution of each individual modification. In this part, we start from a plain U-Net-style DiT (DiT-UNet) and evaluate the contribution of individual components. Firstly, we inspect the advantage of downsampled self-attention. Recapping the toy experiment results in Sec. 3, replacing the full-scale self-attention with downsampled self-attention would result in an improvement in FID and a 1/3 reduction in FLOPs. In order to evaluate the improvement of downsampling via model performance, we also design a slim version of DiT-UNet (i.e. DiT-UNet (Slim)). The DiT-UNet (Slim) serves as a full-scale self-attention baseline that spends approximately the same amount (\u223c0.9 GFLOPs) of computation as our U-DiT. As shown in the upper part of Tab. 6, by comparing U-DiT against DiT-UNet (Slim), it turns out that downsampling tokens in DiT-UNet could bring a performance improvement of \u223c18 FIDs. Next, we inspect other modifications that further refine U-DiTs (lower part of Tab. 6). Swin Transformer V2 [20] proposes a stronger variant of self-attention: instead of directly multiplying the Q and K matrices, cosine similarities between queries and keys are used. We apply this design to our self-attention, which yields \u223c2.5 FIDs of improvement. RoPE [30] is a powerful positional embedding method, which has been widely applied in Large Language Models. Following the latest diffusion transformer works [22; 8], we inject 2-dimensional RoPE (RoPE2D) into queries and keys right before self-attention. The introduction of RoPE2D improves performance by \u223c2.5 FIDs.
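The inference-time removal of the shortcut mentioned above is standard structural re-parameterization: an identity shortcut around a depthwise 3x3 convolution equals a single depthwise convolution whose center tap is incremented by one. A small numerical check of our own (not the authors' code):

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """Depthwise 3x3 convolution (correlation form), zero padding, stride 1.

    x: (H, W, C) feature map; w: (3, 3, C) per-channel kernel.
    """
    h, wd, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + h, j:j + wd, :] * w[i, j]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 5, 4))
w = rng.normal(size=(3, 3, 4))

# Re-parameterization: fold the identity shortcut into the kernel's center tap,
# since the identity map is a depthwise conv whose center tap equals 1.
w_merged = w.copy()
w_merged[1, 1] += 1.0

train_out = depthwise_conv3x3(x, w) + x     # training-time: conv + shortcut
infer_out = depthwise_conv3x3(x, w_merged)  # inference-time: single conv
```

The two outputs agree exactly, which is why the shortcut can be dropped at inference with no accuracy change.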
Some recent transformer works strengthen the MLP by inserting a depthwise convolution layer between two linear mappings [34; 3; 38]. As this measure has proved effective in these works, we borrow it for our U-DiT model, improving performance by \u223c5 FIDs. [Figure 5: Generated samples by U-DiT-L at 1M iterations. It is astonishing that U-DiT could achieve authentic visual quality at merely 1 million training steps. Best viewed on screen.] As re-parametrization during training [12] could improve model performance, we apply the trick to the FFN [31] and bring an additional improvement of \u223c3.5 FIDs. Overall, based on the components mentioned above, the proposed U-DiTs could outcompete plain DiT-UNets and isotropic DiTs by large margins.

Table 6: Ablations on U-DiT components (ImageNet 256\u00d7256). Apart from the toy example in Sec. 3, we further validate the effectiveness of downsampling by comparing the U-DiT with a slimmed version of DiT-UNet at equal FLOPs. Results reveal that downsampling could bring \u223c18 FIDs of improvement on DiT-UNet. Further modifications on top of the U-DiT architecture could improve 2 to 5 FIDs each.
Model | FLOPs(G) | FID\u2193 | sFID\u2193 | IS\u2191 | Precision\u2191 | Recall\u2191
DiT-UNet (Slim) | 0.92 | 107.00 | 24.66 | 11.95 | 0.230 | 0.315
DiT-UNet | 1.40 | 93.48 | 20.41 | 14.20 | 0.274 | 0.415
U-DiT-T (DiT-UNet+Downsampling) | 0.91 | 89.43 | 21.36 | 15.13 | 0.291 | 0.436
U-DiT-T (+Cos.Sim.) | 0.91 | 86.96 | 19.98 | 15.63 | 0.299 | 0.450
U-DiT-T (+RoPE2D) | 0.91 | 84.64 | 19.38 | 16.19 | 0.306 | 0.454
U-DiT-T (+DWconv FFN) | 0.95 | 79.30 | 17.84 | 17.48 | 0.326 | 0.494
U-DiT-T (+Re-param.) | 0.95 | 75.71 | 16.27 | 18.59 | 0.336 | 0.512
" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02749v1.json b/abs_9K/test_abstract_short_2405.02749v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5e0fca4413955ce68fddb8dcef5e52caa7bc303e --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02749v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.02749v1", + "title": "Sub-goal Distillation: A Method to Improve Small Language Agents", + "abstract": "While Large Language Models (LLMs) have demonstrated significant promise as\nagents in interactive tasks, their substantial computational requirements and\nrestricted number of calls constrain their practical utility, especially in\nlong-horizon interactive tasks such as decision-making or in scenarios\ninvolving continuous ongoing tasks. To address these constraints, we propose a\nmethod for transferring the performance of an LLM with billions of parameters\nto a much smaller language model (770M parameters). Our approach involves\nconstructing a hierarchical agent comprising a planning module, which learns\nthrough Knowledge Distillation from an LLM to generate sub-goals, and an\nexecution module, which learns to accomplish these sub-goals using elementary\nactions. In detail, we leverage an LLM to annotate an oracle path with a\nsequence of sub-goals towards completing a goal. Subsequently, we utilize this\nannotated data to fine-tune both the planning and execution modules.\nImportantly, neither module relies on real-time access to an LLM during\ninference, significantly reducing the overall cost associated with LLM\ninteractions to a fixed cost. In ScienceWorld, a challenging and multi-task\ninteractive text environment, our method surpasses standard imitation learning\nbased solely on elementary actions by 16.7% (absolute). Our analysis highlights\nthe efficiency of our approach compared to other LLM-based methods. 
Our code\nand annotated data for distillation can be found on GitHub.", + "authors": "Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Cote", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "While Large Language Models (LLMs) have demonstrated significant promise as\nagents in interactive tasks, their substantial computational requirements and\nrestricted number of calls constrain their practical utility, especially in\nlong-horizon interactive tasks such as decision-making or in scenarios\ninvolving continuous ongoing tasks. To address these constraints, we propose a\nmethod for transferring the performance of an LLM with billions of parameters\nto a much smaller language model (770M parameters). Our approach involves\nconstructing a hierarchical agent comprising a planning module, which learns\nthrough Knowledge Distillation from an LLM to generate sub-goals, and an\nexecution module, which learns to accomplish these sub-goals using elementary\nactions. In detail, we leverage an LLM to annotate an oracle path with a\nsequence of sub-goals towards completing a goal. Subsequently, we utilize this\nannotated data to fine-tune both the planning and execution modules.\nImportantly, neither module relies on real-time access to an LLM during\ninference, significantly reducing the overall cost associated with LLM\ninteractions to a fixed cost. In ScienceWorld, a challenging and multi-task\ninteractive text environment, our method surpasses standard imitation learning\nbased solely on elementary actions by 16.7% (absolute). Our analysis highlights\nthe efficiency of our approach compared to other LLM-based methods. 
Our code\nand annotated data for distillation can be found on GitHub.", + "main_content": "INTRODUCTION Recently, Large Language Models (LLMs) have found applications in various fields, including multi-task learning, decision making, answering questions, summarizing documents, translating languages, completing sentences, and serving as search assistants. They showcase a remarkable ability to make predictions based on input, enabling their use in generative AI applications to produce content based on input prompts (Devlin et al., 2018; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2023; Scao et al., 2022; Patel & Pavlick, 2021; Han et al., 2021; Bommasani et al., 2021). The promising advantage of LLMs is attributed to their training on extensive text datasets, resulting in impressive capabilities. This prior knowledge can be leveraged for action planning to solve tasks in robotics and reinforcement learning (Huang et al., 2022b; Brohan et al., 2023; Liang et al., 2023). Recent works have utilized in-context learning with LLMs to provide actions in autonomous decision-making agents and interactive environments (Mahowald et al., 2023; Yao et al., 2022; Schick et al., 2023; Shen et al., 2023; Nakano et al., 2021; Park et al., 2023; Lin et al., 2023; Brohan et al., 2023). However, the extreme size of LLMs makes them computationally unaffordable for many applications. Moreover, closed-source models like ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) limit accessibility and reproducibility. Consequently, there is an increasing demand to find approaches that are less computationally intensive while still capitalizing on the knowledge embedded in LLMs. One prevalent technique is the use of Knowledge Distillation (KD) (Bucilu\u02c7 a et al., 2006; Hinton et al., 2015), wherein a smaller model is trained with guidance from a larger model. \u2217Corresponding author: Maryam Hashemzadeh. \u2020equal supervision. 
1 https://github.com/chandar-lab/SubGoal_Distillation_LLM arXiv:2405.02749v1 [cs.LG] 4 May 2024. Published at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024. Through this approach, we can leverage the knowledge in an LLM to train a more compact model with a reduced number of parameters. [Figure 1, annotated trajectory. Task description: Your task is to change the state of matter of water. First, focus on the substance. Then, take actions that will cause it to change its state of matter. Navigate_to(kitchen): open door to kitchen, go to kitchen. Pick_up(thermometer): pick up thermometer. Find(metal pot): open cupboard, pick up metal pot. Fill(metal pot, water): move metal pot to sink, activate sink, deactivate sink, pick up metal pot. Focus_on(substance in metal pot): focus on substance in metal pot. Freeze(water, metal pot): pour metal pot into metal pot, pick up metal pot, open freezer, move metal pot to freezer. Monitor_temperature(metal pot, freeze): examine substance in metal pot.] Figure 1: Example of annotating an expert trajectory with sub-goals for a particular variation of task 1-4 (change-the-state-of-matter-of). Looking only at the original trajectory (i.e., ignoring the red rows), we gather that the expert ended up changing the state of water to frozen. The expert had to navigate to the kitchen, find a thermometer and a metal pot, pour water into the pot, place it in the freezer, and continually monitor its temperature until frozen. Each of those milestones (highlighted in red) can be considered a sub-goal, encompassing a sequence of actions. Sub-goals can be shared across different tasks, facilitating generalization. We opted for a format that looks like function calls to encourage reusability (e.g., fill(metal pot, water)). Distilling knowledge from LLMs offers significant advantages, allowing for the training of specialized local models rather than depending on an LLM as a general model.
This approach not only enhances privacy, particularly for systems with security-sensitive considerations like co-pilot models, but also provides greater flexibility in tailoring models for specific tasks. Additionally, the use of a smaller model offers the advantage of versatility across diverse applications without size constraints, including on-device models and mobile apps. Another challenge with LLMs is their susceptibility to hallucinations. This tendency poses a hindrance to their effective execution of long-tail planning, especially in interactive decision-making scenarios. In our research, we leverage the knowledge of LLMs to train an autonomous agent for effective decision-making in complex interactive text environments, utilizing small language models as our policy. Knowledge Distillation facilitates the training of smaller policies, allowing seamless integration of LLM knowledge. To address the challenges at hand, adopting a two-level planning approach proves beneficial for reducing hallucination \u2013 one level for high-level reasoning to formulate sub-goals and another for low-level action planning to execute each sub-goal. Figure 1 illustrates this concept in the task of freezing water from ScienceWorld (Wang et al., 2022a). The agent\u2019s subtasks involve navigating to the kitchen, finding a thermometer and a metal pot, pouring water into the pot, placing it in the freezer, and continuously monitoring its temperature until frozen. These constitute sub-goals generated by a high-level model, with each sub-goal subsequently executed by a low-level model. The generation of sub-goals empowers an autonomous agent to expedite learning for the current task and to reuse similar sub-goals across tasks for better generalization. The contributions in this work are: \u2022 We employ Knowledge Distillation from an LLM to train a high-level policy capable of generating sub-goals without making assumptions about the specific set of sub-goals.
Notably, these sub-goals remain flexible, accommodating various sequences of actions. \u2022 We demonstrate that employing Knowledge Distillation with hierarchical policies surpasses the performance achieved by both standalone imitation learning and its combination with in-context learning. \u2022 We illustrate that this approach is more cost-effective in terms of the number of calls to an LLM compared to other methods utilizing in-context learning. \u2022 We introduce an effective approach that sidesteps the heavy computational requirements of LLMs and their restricted number of calls in interactive decision-making tasks. 2 RELATED WORK Using LLMs for Action Planning Recent works have demonstrated the ability of LLMs to perform action planning in interactive decision-making processes without any additional training (Huang et al., 2022a). ReAct (Yao et al., 2022) proposes a way of prompting an LLM with interleaved reasoning and action-taking steps, which led to the resolution of a variety of language-based reasoning and decision-making tasks. This approach empowers the model to construct high-level plans for effective action. Reflexion (Shinn et al., 2023) draws inspiration from reinforcement learning, employing a framework to reinforce language agents through linguistic feedback. At the end of each trial, it uses self-reflection to determine what went wrong with the task and keeps it in a memory. Then it leverages this information for the next trial. Some works use a programmatic LLM prompt structure with available actions and objects in an environment to translate natural language commands into robot policy code via few-shot examples (Liang et al., 2023; Singh et al., 2023). Khot et al. (2022) introduced a decomposed prompting approach wherein a task is broken down into simpler sub-tasks, allowing for recursive handling.
Subsequently, these sub-tasks are assigned to sub-task-specific LLMs, where both the decomposer and the sub-task LLMs have their own few-shot prompts. Sun et al. (2023) uses three steps (action mining, plan formulation, and plan execution) to decompose a question into a sequence of actions by few-shot prompting of LLMs. In Prasad et al. (2023), tasks are decomposed explicitly by a separate LLM through prompting when an executor is unable to execute a given sub-task. Imitation learning Some works employ imitation learning to train a language model as the agent\u2019s policy, as seen in offline decision transformers (Torabi et al., 2018). The inputs consist of states, actions, and reward-to-go values, which are fed into a transformer. This transformer then predicts actions in an autoregressive manner, utilizing a causal self-attention mask (Chen et al., 2021). Contextual Action Language Model (CALM) (Yao et al., 2020) is another work which uses a fine-tuned language model with oracle data to generate a set of candidate actions which are then passed to a policy network to select the best one. In Nakano et al. (2021), the authors fine-tune GPT-3 to address long-form questions within a web-browsing context. Human feedback is employed as a direct optimization measure for enhancing the quality of answers generated by the model. Knowledge Distillation: Knowledge Distillation (KD) typically falls into two categories: black-box KD and white-box KD. In black-box KD, only the teacher\u2019s predictions are available for guidance, while in white-box KD, we have access to the teacher\u2019s parameters (Gou et al., 2021). Recently, black-box KD has gained widespread use for fine-tuning original models using self-instruct techniques, as proposed by Wang et al. (2022b), or for smaller models (Taori et al., 2023; Chiang et al., 2023; Peng et al., 2023) through the utilization of prompt-response pairs generated by LLMs. West et al.
(2021) introduces symbolic KD from text rather than logits. This process involves the transfer of knowledge from a large, general model to a more compact commonsense model, facilitated by a commonsense corpus, yielding a commonsense knowledge graph and model. The work by Hsieh et al. (2023) trains a smaller model that outperforms an LLM using reasoning steps called rationales. They incorporated rationales as informative supervision to train smaller models with less training data. Complex interactive text environments In text-based games, an agent interacts with the environment by reading and writing text while aiming towards the end game or solving a given task. Out of the recent frameworks that deal with generating and interfacing text-based games (C\u00f4t\u00e9 et al., 2018; Hausknecht et al., 2019; Shridhar et al., 2021; Murugesan et al., 2021), we use ScienceWorld (Wang et al., 2022a), which is particularly challenging due to its large set of objects, actions, and tasks. 3 MODEL In this paper, we propose to train a hierarchical policy by combining KD from an LLM and imitation learning from expert trajectories. This section describes both modules in detail and we refer the reader to Figure 2 for a schematic view. We first formulate the problem as a POMDP (Section 3.1). Next, we describe what knowledge we are distilling from an LLM to guide the agent in accomplishing tasks (Section 3.2). Then, we detail how both the high-level and low-level policies of the hierarchical policy are trained (Section 3.3). 3.1 PROBLEM FORMULATION ScienceWorld (Wang et al., 2022a) can be defined as a partially observable Markov decision process (POMDP), where observations provide information solely on environmental changes induced by the current action.
ScienceWorld is an interactive text environment, meaning all task instructions, observations and actions are expressed in textual form. [Figure 2 example input: \u201cYour task is to boil water \u2026 ; Time: 1; Score: 0; Completed subtasks are: navigate_to(kitchen), \u2026 ; The current subtask is heat(water, metal pot); Action history: activate sink --> The sink is now activated, \u2026 ; Current environment: This room is called the kitchen. In it, you see: \u2026 ; Current inventory: a thermometer, \u2026 ; Visited rooms: hallway, \u2026 ; What action should you do next?\u201d] Figure 2: On the left, a schematic view of our approach is shown. There are two modules: the sub-goal generator and the action generator. The sub-goal generator provides a sub-goal for the action generator, which predicts the next action given the current sub-goal and history. On the right, the inputs and outputs of both modules are illustrated. The input comprises different parts including task description, completed sub-goals, current sub-goal, a history of recent actions-observations, and more, each highlighted in a distinct color. Importantly, both observations and rewards in this environment are conditioned on the ongoing task. Given a language vocabulary V and an arbitrary maximum number of tokens N, an observation is defined as o \u2208 \u2126 \u2282 V^N, a reward as r \u2208 R, and an action as a \u2208 A \u2282 V^N. Finally, a task or goal description is denoted by g \u2208 G \u2282 V^N.
We formalize the problem as a goal-augmented POMDP M = (S, V, A, \u2126, G, T, R, O, \u03b3), with S the state space, A \u2282 V^N the action space, \u2126 \u2282 V^N the observation space, G \u2282 V^N the goal space, T : S \u00d7 A \u00d7 G \u2192 S the goal-conditioned transition function, R : S \u00d7 A \u00d7 G \u2192 R the goal-conditioned reward function, O : S \u2192 V^N an (unknown) observation function mapping a state to a textual description, and \u03b3 the discount factor. We assume \u03b3 = 1 in our experiments. 3.2 DISTILLING KNOWLEDGE FROM AN LLM The initial step in training our policies is creating a dataset. This dataset should include sub-goals along with their corresponding aligned sequences of actions for each task. To generate these, we proceed as follows. We assume access to a collection of expert trajectories. We then prompt an LLM with two in-context examples. Each example is composed of the description of a task similar to the one we wish to annotate and its expert trajectory. The example also contains a set of sub-goals, with the sequences of actions linked to each sub-goal. Given the two in-context examples and a new task description with its expert trajectory, the LLM is then instructed to generate a response. The response is a set of sub-goals with their associated list of actions. The generated list of actions is used to determine which segment of the expert trajectory each sub-goal corresponds to. It is important to note that these responses are collected only for the training tasks, for which we assume access to expert trajectories. Also, it is important to point out that the LLM is not generating any novel trajectories. Figure 4 illustrates the prompt examples for task 1-1, which is boiling a given substance. To ensure more uniform sub-goals that can generalize across tasks, we opted for a format that looks like function calls.
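Sub-goals in this function-call style (e.g., navigate_to(kitchen), fill(metal pot, water)) are straightforward to parse. A minimal sketch, assuming the simple name(arg1, arg2) shape shown in the figures; the helper itself is hypothetical and not from the paper:

```python
import re

def parse_subgoal(subgoal: str):
    """Split a function-call-style sub-goal such as 'fill(metal pot, water)'
    into its name and argument list. Hypothetical helper, not from the paper."""
    match = re.fullmatch(r"\s*(\w+)\s*\((.*)\)\s*", subgoal)
    if match is None:
        raise ValueError(f"not a function-call sub-goal: {subgoal!r}")
    name, raw_args = match.group(1), match.group(2)
    # Arguments are comma-separated free text; strip surrounding whitespace.
    arguments = [a.strip() for a in raw_args.split(",")] if raw_args.strip() else []
    return name, arguments

print(parse_subgoal("fill(metal pot, water)"))  # ('fill', ['metal pot', 'water'])
print(parse_subgoal("navigate_to(kitchen)"))    # ('navigate_to', ['kitchen'])
```

One reason this format is convenient, as the semi-random ablation later in the paper exploits, is that the arguments (locations, objects) can be manipulated independently of the sub-goal name.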
Since that format was shown in the in-context examples, the LLM-generated sub-goals mimic this format as well, making them easier to parse. Since the expert trajectories for some tasks can be long (100+ actions), the generated sub-sequence of actions corresponding to each sub-goal may not align exactly with the expert trajectory. Sometimes, it might miss certain actions, while in other instances, it might include additional actions, especially when there are repeated actions in the trajectory. To address this, we use a trajectory alignment process that finds the minimal set of edits to go from the generated trajectory to the expert trajectory according to the Levenshtein distance. For each \u201cremove\u201d edit, i.e. the generated trajectory has superfluous actions, we simply remove those from the generated trajectory. On the other hand, for an \u201cadd\u201d edit, i.e. the generated trajectory is missing some actions, we prompt the LLM to generate a new sub-goal for those. An example is shown in Figure 3. [Figure 3 contents: the expert trajectory alongside the trajectory generated by the LLM, grouped into sub-goals such as Pick_up(thermometer), Fill(metal pot, water), and Freeze(water, metal pot), with missed and extra actions marked.] Figure 3: Example of a trajectory generated by the LLM deviating from the provided expert trajectory. In this example, which is for a boiling task, certain actions are omitted in the generated trajectory, indicated in blue in the left box.
To address these missing actions, we group them into sequences and prompt the LLM to generate sub-goals for them. If the generated trajectory includes additional actions, such as the green actions in the right box, we simply remove them to align with the expert trajectory. In the resulting annotated dataset, each data point follows the same format as used by Lin et al. (2023) but with the added mention of completed sub-goals and the current sub-goal. Precisely, it corresponds to: \u2022 Input: task description, number of steps, current score, completed sub-goals, current sub-goal, a history of 10 recent actions-observations, current items in the room, inventory, and the visited rooms. \u2022 Target: next action, next sub-goal. 3.3 HIERARCHICAL IMITATION LEARNING With the dataset obtained from distilling knowledge from an LLM, we can now focus on training the policies. Low-level policy: The low-level policy is a language model (LM) which is trained through imitation learning using the annotated dataset. The goal is to have a model much smaller than an LLM so it can fit on a single machine and run faster, ideally below a billion parameters. This policy learns to predict the next action given the current task description, the 10 previous observation-action pairs, the previously completed sub-goals, and the current sub-goal. We refer to this policy as the action generator. High-level policy: The high-level policy is another LM of reasonable size. It is trained using the annotated dataset to generate the next sub-goal given the previous sub-goals and a short history, i.e. the last 10 actions and observations. So the high-level policy generates sub-goals while the low-level policy generates actions. Moreover, this policy conditions on the same input information as the action generator. We call this policy the sub-goal generator. Hierarchical policy: During inference, we first leverage the high-level policy to generate a sub-goal.
This generated sub-goal is then fed into the action generator, allowing it to produce the next action aligned with the provided sub-goal. This sequential approach serves as a guiding cue for the action generator, particularly when the trajectory to achieve the goal is complex or long. Moreover, it prevents the action generator from generating actions that might cause the agent to deviate from the correct path, thereby improving the precision and relevance of the actions being generated. [Figure 4 prompt: [Example 1] [Task description] Your task is to boil water. \u2026 [Expert trajectory] Here is the goal path to achieve the goal: open door to kitchen, go to kitchen, \u2026 Provide me with the functional format of high-level sub-tasks to complete this task and their corresponding actions. [sub-goals] 1 navigate_to(kitchen): {'open door to kitchen', 'go to kitchen'} 2 pick_up(thermometer): {'pick up thermometer'} 3 find(metal pot): {'open cupboard', 'pick up metal pot'} \u2026 [Example 2] \u2026 [Current task] [Task description] Your task is to boil chocolate. \u2026 [Expert trajectory] Here is the goal path to achieve the goal: 'open door to hallway', 'go to hallway', 'open door to kitchen', 'go to kitchen', \u2026 Provide me with the functional format of high-level sub-tasks to complete this task and their corresponding actions. Verification: does the action sequence match the oracle path? If some actions are missed, add them to the prompt; if extra actions were mistakenly added, remove them; otherwise, finish.]
[Figure 4, generated sub-goals: 1 navigate_to(hallway): {'open door to hallway', 'go to hallway'} 2 navigate_to(kitchen): {'open door to kitchen', 'go to kitchen'} 3 pick_up(thermometer): {'pick up thermometer'} 4 find(metal pot): {'open cupboard', 'pick up metal pot'} 5 find(chocolate): {'open oven', 'open freezer', 'open drawer in cupboard', 'open glass jar', 'open drawer in counter', 'open fridge', 'focus on chocolate'} 6 \u2026] Figure 4: The figure demonstrates KD to generate sub-goals using an LLM. The LLM is presented with a prompt containing two in-context examples. Each example is composed of a task description in green and an expert trajectory detailing the steps to accomplish that task in blue. It also includes the expected set of sub-goals with their corresponding sequences of actions in red. Following this, we provide a new task description and trajectory, and we let the LLM generate the associated sub-goals and segmented actions. 4 EXPERIMENTS 4.1 ENVIRONMENT We chose ScienceWorld (Wang et al., 2022a) as the environment due to its complexity and the diverse range of tasks it encompasses. This environment is an interactive multi-task text-based game where the agent conducts elementary science experiments in a simulated environment. Each experiment is designed as a separate task. For example, \u201cYour task is to boil water. For compounds without a boiling point, combusting the substance is also acceptable. First, focus on the substance. Then, take actions that will cause it to change its state of matter\u201d. To complete a task, the agent must perform multiple actions and receives the result of each action as an observation and a score. The observations and actions are in text format. An observation describes the changes in the environment, and the score is a numerical value ranging from 0% to 100%, indicating the degree of completion of the current task through the current action.
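The hierarchical inference described in Section 3.3 alternates the sub-goal generator and the action generator inside the environment loop. A minimal sketch of that loop, where the toy policies and one-line environment are hypothetical stand-ins for the paper's fine-tuned FLAN-T5 models and for ScienceWorld:

```python
def run_episode(subgoal_gen, action_gen, env_step, max_subgoals=30):
    """Alternate the high-level and low-level policies: fetch a sub-goal,
    then generate actions for it until the low-level policy signals done."""
    trajectory, history = [], []
    for _ in range(max_subgoals):
        subgoal = subgoal_gen(history)
        if subgoal is None:            # no further sub-goals: task finished
            break
        history.append(("subgoal", subgoal))
        while True:
            action = action_gen(subgoal, history)
            if action is None:         # current sub-goal accomplished
                break
            observation = env_step(action)
            history.append(("action", action, observation))
            trajectory.append(action)
    return trajectory

# Toy plan mirroring two of the sub-goals in Figure 1.
PLAN = {
    "navigate_to(kitchen)": ["open door to kitchen", "go to kitchen"],
    "pick_up(thermometer)": ["pick up thermometer"],
}
ORDER = list(PLAN)

def toy_subgoal_gen(history):
    # Stand-in for the fine-tuned sub-goal generator: next planned sub-goal.
    done = sum(1 for entry in history if entry[0] == "subgoal")
    return ORDER[done] if done < len(ORDER) else None

def toy_action_gen(subgoal, history):
    # Stand-in for the fine-tuned action generator: next unexecuted action.
    taken = {entry[1] for entry in history if entry[0] == "action"}
    remaining = [a for a in PLAN[subgoal] if a not in taken]
    return remaining[0] if remaining else None

print(run_episode(toy_subgoal_gen, toy_action_gen, lambda a: "ok"))
# ['open door to kitchen', 'go to kitchen', 'pick up thermometer']
```

Note that neither policy queries an LLM at inference time, which is the fixed-cost property the paper emphasizes.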
Furthermore, ScienceWorld is a benchmark with 30 distinct tasks spanning 10 widely different science domains (Appendix A.4). For instance, in the \u201cChanges of State\u201d task, the agent is required to locate and use heating/freezing sources to alter the state of a substance (e.g., ice or chocolate). Conversely, in a task such as \u201cMendelian Genetics,\u201d the agent is tasked with determining whether a specified trait (e.g., white flower color) is dominant or recessive in a plant. These examples illustrate the substantial diversity across the domains, ranging from physical transformations to genetic analyses, underscoring the broad spectrum of challenges within ScienceWorld. On top of that, ScienceWorld has 10 different locations, more than 200 object types, and 25 action templates, which makes the search space very large for the agent. Each type of task has different variations in which the task objects, the agent\u2019s initial location, and random contents of each room are altered. 4.2 EXPERIMENTAL SETUP The environment has separate sets of variations for train and test. In the test variations, the combinations of objects and conditions are not seen in the train set. Following the experimental setup in Lin et al. (2023), if the number of variations is more than 10, we consider only the first 10 variations. Our base model for both policies is a pre-trained FLAN-T5-LARGE (Chung et al., 2022) with 770M parameters. For both policies, we used greedy decoding at inference. We also conduct an ablation study over different model sizes (Figure 5a). For fine-tuning the policies, we use all the training tasks and their variations (3600 games in total) from ScienceWorld. We vary the number of training epochs as a function of the size of the models (see Appendix A.3).
Methods | SayCan\u2217 | ReAct\u2217 | Reflexion\u2217 | Swift-only | SwiftSage\u2217 | Ours\nOverall Average | 25.22 | 19.76 | 23.40 | 46.25 | 62.22 | 65.43\nSolved Task Types | 0/30 | 0/30 | 4/30 | 4/30 | 2/30 | 11/30\nShort\u2020 | 37.24 | 28.95 | 39.19 | 79.68 | 72.81 | 91.61\nMedium | 20.06 | 21.09 | 14.73 | 35.80 | 55.34 | 62.83\nLong | 18.66 | 11.23 | 16.27 | 25.36 | 57.99 | 45.35\nTask 1-1 | 33.06 | 3.52 | 4.22 | 15.0 | 58.0 | 16.22\nTask 3-3 | 99.56 | 76.19 | 72.54 | 59.5 | 66.9 | 5.6\nTable 1: The table illustrates the overall average score (%) across all test tasks on the ScienceWorld benchmark for SayCan, ReAct, Reflexion, Swift-only, SwiftSage, and our algorithm (last column). The Solved Task Types row represents the number of task types for which an agent manages to solve all the test variations. The table also shows the average scores for tasks with a short, medium, and long expert trajectory. The rows Task 1-1 and Task 3-3 display the scores for two tasks on which our approach does not work well in comparison with the other methods. The \u2217 denotes scores reported from Lin et al. (2023), which all use ChatGPT (GPT-3.5). 4.3 BASELINE AGENTS We compare our approach with other works that leverage LLMs. Some rely only on prompting, such as SayCan, ReAct, and Reflexion, while SwiftSage also does imitation learning. Here is a brief description of each method. SayCan: the LLM initially offers a set of actions along with their respective ranks. Then, a value-based method is employed to re-rank these actions in order to determine the most rewarding action for execution (Brohan et al., 2023). ReAct: the LLM generates actions by incorporating the provided prompt and the history of generated texts. It employs reasoning traces as intermediate thought steps during the action generation to refine a plan for the upcoming steps (Yao et al., 2022).
Reflexion: the language agent reflects on the task feedback at each trial in the form of text and retains this information within an episodic memory. During the subsequent trial, it leverages the stored memory text to enhance its decision-making process (Shinn et al., 2023). SwiftSage: this method comprises two components: Swift, a fine-tuned LM to predict actions, and Sage, a module that queries an LLM for planning when the performance of Swift is inadequate (as determined by some handcrafted rules) (Lin et al., 2023). Swift-only: this is the Swift part of the SwiftSage method, which only has the fine-tuned LM to predict the actions. We consider this method a strong baseline and the most comparable to our approach, as it relies on imitation learning without the need for querying an LLM during inference. Note that all baselines use ChatGPT (GPT-3.5) as their LLM. 4.4 RESULTS AND ANALYSIS Main Results: Table 1 compares the performance of the baselines with our approach in ScienceWorld. The score for each task type is the average score (in percent) obtained over 10 test variations. Our approach demonstrates an overall performance of 65.43%, surpassing Swift-only by 16.71% (a 33.9% relative increase), and showing a slight improvement over SwiftSage of 3.3% (5.3% relative). Interestingly, our method is able to solve all test variations (i.e., gets an average score of 100%) for 11 out of the 30 task types. In contrast, SwiftSage solves them only for 2 task types, and Swift-only, only for 4 task types. Additionally, we measured the performance of the agents with respect to the length of the tasks (a proxy for task complexity). The length of a task is determined by how many actions were needed by the expert to solve it.2 Following Lin et al. (2023), we group the tasks into three categories: Short when the length is less than 20 actions, Medium when it falls between 20 and 50 (inclusively), and Long if above 50.
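The Short/Medium/Long grouping above is a simple bucketing of expert-trajectory length; as a sketch (the helper name is hypothetical):

```python
def task_length_category(n_expert_actions: int) -> str:
    """Bucket a task by expert-trajectory length, following the grouping
    of Lin et al. (2023): Short < 20, Medium 20-50 (inclusive), Long > 50."""
    if n_expert_actions < 20:
        return "Short"
    if n_expert_actions <= 50:
        return "Medium"
    return "Long"

print([task_length_category(n) for n in (12, 20, 50, 73)])
# ['Short', 'Medium', 'Medium', 'Long']
```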
As shown in Table 1, our approach outperforms other methods on short and medium tasks. (2 Expert trajectories for test tasks were not seen during training.) On long tasks, we outperform all methods except SwiftSage, which has a substantial advantage here: the longer the task, the higher the chance it triggers one of the rules for Sage to take over. As part of the comparison, there are other approaches that do not use an LLM, including DRRN (He et al., 2016), KG-A2C (Ammanabrolu & Hausknecht, 2019), CALM (Yao et al., 2020), BC (Torabi et al., 2018), and TDT (Chen et al., 2021). The results from Wang et al. (2022a) show these approaches perform poorly, below 17%, in ScienceWorld. For this reason, we did not include them here and only focus on approaches comparable with ours. A key motivation for our approach is cost-effectiveness in terms of LLM queries. During training, we make one query to ChatGPT per task to identify the sub-goals within an expert trajectory. Sometimes mismatches occur between the expert trajectory and the actions assigned to each sub-goal by ChatGPT. When that is the case, we employ dynamic programming, with a maximum of 10 attempts per task. This contrasts with other baseline methods, where the LLM is queried for each action, incurring considerably higher costs. Why is it failing on some task types? The performance of our algorithm on some tasks is low (see Table 5). In Table 1, the scores of two such tasks are presented. One contributing factor is that the test variations are very different from those in training. For instance, the objects might be very different, or the path to complete the task is very different and longer. The main culprit is the sub-goal generator, which is not able to generate good sub-goals. As a concrete example (Table 2), in the test variations for task 3-3, the agent needs to go to the kitchen and then fill a jug with water.
When looking at the transcript, we see the agent is able to go to the kitchen, but when it arrives, the sub-goal generator issues an irrelevant sub-goal, FocusOn(fountain). The agent attempts to focus on the fountain, which is a wrong action, and the game terminates with a score of 0. Another example is task 1-1 (Table 2), in which the agent should boil a substance. It should first find the substance, but since the substance is in a totally different location than those seen during training, the sub-goal generator is not able to generate a good sub-goal for this step. Consequently the agent performs other actions and exhausts all the allocated time steps. [Table 2 contents: for each of task 3-3 and task 1-1, the trajectory generated with sub-goals shown alongside the expert trajectory; e.g., for task 3-3 the agent reaches the kitchen but then receives the irrelevant sub-goal FocusOn(fountain) instead of filling the jug at the sink.] Table 2: Two instances where the performance of our algorithm is low. The first column displays the trajectory generated with sub-goals, while the second column presents the expert trajectory. Sub-goals are highlighted in dark red, accompanied by their corresponding actions, and incorrect actions are marked in red. The impact of scale: We conduct a comparison across various sizes of language models such as FLAN-T5-XL, FLAN-T5-BASE, and FLAN-T5-SMALL. Additionally, we evaluate T5-3B and T5-LARGE to determine the effectiveness of FLAN-T5 versus T5. The results are illustrated in Figure 5a. In our initial findings, we observed that FLAN-T5 outperforms T5 significantly. Moreover, our results reveal a positive correlation between the LM size and its performance \u2013 larger models generally yield better results.
Intriguingly, we observe that for smaller models (FLAN-T5-SMALL and FLAN-T5-BASE), not conditioning on sub-goals works slightly better than including them. This suggests that the sub-goal generator is not expressive enough to generate meaningful and effective sub-goals, which in turn impacts the action generator policy and leads to lower scores. The impact of sub-goals: To study the impact of the sub-goal generator's size on the overall performance, we try pairing different sizes of sub-goal generator while limiting the action generator to be small. In Figure 5b, the average scores exhibit an upward trajectory. This can be attributed to the larger sub-goal generators producing more accurate and relevant sub-goals, subsequently empowering the action generator to generate more correct actions. See Table 6 for a complete breakdown of the score per task type and per model size. Figure 5: a) Average scores across different model sizes for FLAN-T5 and T5. For the T5 model, X-Large refers to T5-3B. The larger models work better, and FLAN-T5 also performs better than T5. Dashed lines represent models that do not condition on any sub-goals (“no sg”) and are equivalent to Swift-only. b) Average scores across different sizes of sub-goal generator while the action generator is kept base (blue) or small (green). Larger sub-goal generators can significantly boost the performance of small action generators. Table 3: Average performance for randomly generated sub-goals, selected randomly (or semi-randomly) at either the first step, every 10 steps, or each step. Random: first 39.1%, every 10 steps 37.6%, each step 6.4%; Semi-random: first 53.1%, every 10 steps 43.3%, each step 14.2%. To further demonstrate the importance of the sub-goal, we generated random sub-goals and then fed them to the action generator.
This yields an average score of 6.4%, indicating that the action generator does condition on the sub-goals; consequently, with random sub-goals it cannot solve the tasks effectively. We conducted an additional experiment by altering the arguments of the sub-goals, as they have a functional format. If the argument corresponds to a location, we replaced it with another from the environment, and if it is an object, we replaced it with a randomly chosen object available at that step of the game. We named this approach semi-random sub-goals. The result for this experiment is 14.2%, showing an increase in performance compared to the random sub-goals. Table 3 shows the average scores and Table 9 shows the score for each task. Recovery from noisy sub-goals: We also assess the performance when both the action and sub-goal generators have been exposed to noisy sub-goals. More specifically, we consider two settings: applying noise 1) only at the first step, or 2) every 10 steps. In the first setting, the first sub-goal is (semi-)randomly selected, while the subsequent sub-goals are generated using the FLAN-T5-LARGE sub-goal generator. In the second experiment, a sub-goal is (semi-)randomly selected every 10 steps instead of using the sub-goal generator for all steps. Table 3 shows the overall scores for both settings, and a breakdown per task type is presented in Table 10. In both scenarios, semi-random selection (53.1% and 43.3%) yields better results, as it closely resembles the sub-goals generated by the sub-goal generator. Some tasks achieve a score of 100, indicating successful recovery from noisy sub-goals. While overall scores are lower compared to using the FLAN-T5-LARGE sub-goal generator, they are still higher than using Swift only in the first setting and closely approach it in the second setting (Appendix A.10). Generalization on held-out task types: We select one or two task types from each science domain (see highlighted ones in Table 4) to train the action and sub-goal models.
Then, we assessed their performance on the rest of the task types. We compared our algorithm against the Swift-only baseline. The average total scores are 40.63% with sub-goals vs. 36.56% for Swift-only. For unseen tasks, the scores are 27.72% with sub-goals vs. 15.25% for Swift-only. This suggests that using sub-goals helps improve generalization across unseen tasks. The scores for each task are presented in Table 11. 5 DISCUSSION AND LIMITATION In contrast to SwiftSage, which relies on interactive usage of the ChatGPT API to handle planning, our approach makes use of a trained sub-goal generator to guide the action generator. Moreover, our framework empowers the agent to retrieve a nearly optimal trajectory by supplying the appropriate sub-goal. Furthermore, our framework significantly reduces the frequency of API calls, which are both expensive and not universally accessible. ReAct, Reflexion, and SwiftSage require human annotations to correct sub-goals and predict a reasonable action. However, in our approach, we do not need human help to predict sub-goals or provide precise prompts. Generalization: In this work, our focus is on optimizing performance within the environment, and there might be potential limitations when transitioning to entirely different scenarios. If we test it in a distinct environment, the performance may not be optimal, given the fine-tuning with data specific to the ScienceWorld environment. We acknowledge that for generalization across diverse scenarios, an LLM may perform better, given its capacity to handle a broader range of inputs and contexts. Goal Modification: When the agent encounters challenges in solving the current sub-goal, it will often find itself cycling through the same sub-goal for several steps. Consequently, the action generator repeats a sequence of actions mirroring recent ones.
Sometimes the sub-goal generator will adjust the sub-goal slightly based on the input and that can be enough to get unstuck. Ideally, we would like to avoid being stuck for several steps and learn to modify the sub-goal in the right way. One strategy involves online learning, where the controller is updated based on the reward from the environment. However, this approach carries the risk of catastrophic forgetting, necessitating additional measures such as loss modification and regularization to mitigate this risk. Another approach could involve incorporating an LLM alongside the controller. If the controller fails to produce effective actions, the LLM can suggest alternative sub-goals. This might have the risk of poor sub-goals and hallucinations which rewards might help but it is still challenging in such a sparse environment. 6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02791v1.json b/abs_9K/test_abstract_short_2405.02791v1.json new file mode 100644 index 0000000000000000000000000000000000000000..117e69687e6f58522a784fda7d5e9e263fd5fe6b --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02791v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.02791v1", + "title": "Efficient Text-driven Motion Generation via Latent Consistency Training", + "abstract": "Motion diffusion models have recently proven successful for text-driven human\nmotion generation. Despite their excellent generation performance, they are\nchallenging to infer in real time due to the multi-step sampling mechanism that\ninvolves tens or hundreds of repeat function evaluation iterations. To this\nend, we investigate a motion latent consistency Training (MLCT) for motion\ngeneration to alleviate the computation and time consumption during iteration\ninference. 
It applies diffusion pipelines to low-dimensional motion latent\nspaces to mitigate the computational burden of each function evaluation.\nExplaining the diffusion process with probabilistic flow ordinary differential\nequation (PF-ODE) theory, the MLCT allows extremely few steps infer between the\nprior distribution to the motion latent representation distribution via\nmaintaining consistency of the outputs over the trajectory of PF-ODE.\nEspecially, we introduce a quantization constraint to optimize motion latent\nrepresentations that are bounded, regular, and well-reconstructed compared to\ntraditional variational constraints. Furthermore, we propose a conditional\nPF-ODE trajectory simulation method, which improves the conditional generation\nperformance with minimal additional training costs. Extensive experiments on\ntwo human motion generation benchmarks show that the proposed model achieves\nstate-of-the-art performance with less than 10\\% time cost.", + "authors": "Mengxian Hu, Minghao Zhu, Xun Zhou, Qingqing Yan, Shu Li, Chengju Liu, Qijun Chen", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Motion diffusion models have recently proven successful for text-driven human\nmotion generation. Despite their excellent generation performance, they are\nchallenging to infer in real time due to the multi-step sampling mechanism that\ninvolves tens or hundreds of repeat function evaluation iterations. To this\nend, we investigate a motion latent consistency Training (MLCT) for motion\ngeneration to alleviate the computation and time consumption during iteration\ninference. 
It applies diffusion pipelines to low-dimensional motion latent\nspaces to mitigate the computational burden of each function evaluation.\nExplaining the diffusion process with probabilistic flow ordinary differential\nequation (PF-ODE) theory, the MLCT allows extremely few steps infer between the\nprior distribution to the motion latent representation distribution via\nmaintaining consistency of the outputs over the trajectory of PF-ODE.\nEspecially, we introduce a quantization constraint to optimize motion latent\nrepresentations that are bounded, regular, and well-reconstructed compared to\ntraditional variational constraints. Furthermore, we propose a conditional\nPF-ODE trajectory simulation method, which improves the conditional generation\nperformance with minimal additional training costs. Extensive experiments on\ntwo human motion generation benchmarks show that the proposed model achieves\nstate-of-the-art performance with less than 10\\% time cost.", + "main_content": "Introduction Synthesizing human motion sequences under specified conditions is a fundamental task in robotics and virtual reality. Research in recent years has explored the text-to-motion diffusion framework [1, 2, 3] to generate realistic and diverse motions, which gradually recovers the motion representation from a prior distribution with multiple iterations. These works show more stable distribution estimation and stronger controllability than traditional single-step methods (e.g., GANs [4] or VAEs [5, 6]), but at the cost of a hundredfold increase in computational burden. Such a high-cost sampling mechanism is expensive in time and memory, limiting the model\u2019s accessibility in real-time applications. 
To mitigate inference cost, previous text-to-motion diffusion frameworks try to trade off between fidelity and efficiency from two perspectives: i) mapping length-varying and high-dimensional original motion sequences into well-reconstructed and low-dimension motion latent representations[3, 7] to reduce data redundancy and complexity, and ii) utilizing skip-step sampling strategy [3, 8] to minimize expensive and repetitive function evaluation iterations. The first perspective inspired by the excellent performance of the latent diffusion model in text-to-image synthesis, they introduce the variational autoencoder with Kullback-Leibler (KL) divergence constraints as motion representation extractor. However, unlike image data support that contains more than ten million samples, the high cost of motion capture limits the number of samples for the text-based motion generation task. As a example, the largest current human motion dataset contains no more than fifteen thousand samples after employing data augmentation. Simultaneous arXiv:2405.02791v1 [cs.CV] 5 May 2024 \foptimization of reconstruction loss and KL divergence loss, which are adversarial targets, is significantly challenging in the presence of limited training resources. To ensure high reconstruction performance, previous state-of-the-art models usually set the KL divergence weights low enough, which results in low regularity of motion representations. Such low-regularity and continuous motion representations suffer redundancy and low robustness. It can be mitigated by a sufficiently numerous repetitive function evaluation iterations, but seriously harms the generative performance in the context of extremely few sampling steps. The second perspective follows from the recently well-established diffusion solvers, which can be categorized as training-free methods and training-based methods. 
Previous study confirms that the forward diffusion process corresponds to an inverse diffusion process without a stochastic term and is known as the probabilistic flow ordinary differential equation (PF-ODE) [9]. Training-free methods constructed different discrete solvers for the special form of the PF-ODE, achieving almost a 20-fold performance improvement. These works effectively compress the sampling steps to 50-100 steps, but the fidelity of the ODE solution results is lower when the number of iterations is much smaller due to the complexity of the probability distribution of the motion sequences and the cumulative error of the discrete ODE sampling. It is still a significant gap in computational effort compared to traditional single-step motion generation models. Training-based methods usually rely on model distillation or trajectory distillation for implementation, and one promising approach is known as the consistency model. It impose constraints on the model to maintain the consistency of the output on the same PF-ODE trajectory, thus achieving a single-step or multiple-step generative mapping from the prior distribution to the target distribution. Typical PF-ODE trajectory generation methods are consistency distillation, which generates trajectories with pre-trained diffusion models, or consistency training, which simulates trajectories with the unbiased estimation of ground truth. The former relies on well-trained diffusion models as foundation models. Training these models from scratch is computationally expensive and time-consuming. Less costly consistency training frameworks avoid additional pre-trained models, but also suffer poor generation performance and even training collapse due to redundant and irregular latent representations. Moreover, existing consistency training frameworks have not sufficiently explored conditional PF-ODE trajectory. 
It results in vanilla consistency-training-based models without significant advantages over well-established multi-step diffusion samplers using classifier-free guidance. Upon the above limitations, we propose a Motion Latent Consistency Training (MLCT) framework with generates high-quality motions with no more than 5 sampling steps. Following the common latent space modeling paradigm, our motivation focuses on constructing low-dimensional and regular motion latent representations, as well as exploring the simulation of conditional PF-ODE trajectories with the consistency training model in the absence of pre-trained models. Specifically, the first contribution of this paper is to introduce a pixel-like latent autoencoder with quantization constraints, which aggregates motion information of arbitrary length to multiple latent representation tokens via self-attention calculation. It differs significantly from the widely used variational representations in that the former is bounded and discrete while the latter is unbounded and continuous. We restrict the representation boundaries with the hyperbolic tangent (Tanh) function and forces the continuous representation to map to the nearest predefined clustering center. Compared to the black-box control strategy of fine-tuning the KL divergence weights, our approach trades off the regularity and reconstruction performance of the motion latent representations more controllably via designing finite dimensional discrete latent representation space. In addition, previous practice demonstrates that the boundedness of the representations contributes to sustaining stable inference in classifier-free guidance (CFG) techniques. The second contribution of this paper is to explore a one-stage conditionally guided consistency training framework. 
The main insight is to consider unbiased estimation based on ground truth motion representations as the simulation of a conditional probability gradient and to propose an online updating mechanism for the unconditional probability gradient. To the best of our knowledge, this is the first application of classifier-free guidance to consistency training. Since it is utilized for generating trajectories, the denoiser does not need to be double computationally expensive in the derivation to get better conditional generation results. We evaluate the proposed framework on two widely-used datasets: KIT and HumanML datasets. The results of our 1, 3 and 5 number of function evaluations (NFE) generation are shown in Figure 1, along with the differences in FID metrics with existing methods. Extensive experiments indicate the effectiveness of MLCT and its components. The proposed framework achieves state-of-the-art performance in motion generation only in around 5 steps. To sum up, the contributions of this paper are as follows: \u2022 We explore a pixel-like motion latent representation relying on quantization constraints which is highly regular, well-reconstruction and bounded. \u2022 We introduce classifier-free guidance in consistency training for the first time. It is beneficial to realize more controllable motion generation as well as more stable training convergence. \u2022 Our proposed MLCT achieves state-of-the-art performance on two challenge datasets with extremely less sampling steps. 2 \f1 NFE 3 NFE 5 NFE 1 NFE 3 NFE 5 NFE Figure 1: Our model achieves better FID metrics with less inference time and allows for the generation of high-quality human motions based on textual prompts in around 5 NFE. The color of humans darkens over time. 2 Related Work Human motion generation. Human motion generation aims to synthesize human motion sequence under specified conditions, such as action categories [10, 11], audio [12, 13], and textual description [14, 2, 3]. 
In the past few years, numerous works have investigated motion generation from various generative frameworks. For example, VAE-based models [15, 16, 5] represent the motion as a set of Gaussian distributions and constrain its regularity with KL divergence. Such constraint allows it to reconstruct the motion information from the standard normal distribution, yet its results are often ambiguous. GAN-based methods [17, 4] achieve better performance by bypassing direct estimation of probabilistic likelihoods via the adversarial training strategy, but the adversarial property makes their training often unstable and prone to mode collapse. Some multi-step generative methods have emerged recently with great success, such as auto-regressive [18, 19] and diffusion methods [1, 2, 3]. In particular, the latter is gradually dominating the research frontiers due to its stable distribution estimation capability and high-quality sampling results. Motiondiffuse [1] and MDM [2] were the pioneers in implementing diffusion frameworks for motion generation. MLD [3] realizes the latent space diffusion, which significantly improves the efficiency. M2DM [7] represents motion as discrete features and diffusion processes in finite state space with state-of-the-art performance. Some recent work [8] has focused on more controlled generation with equally excellent results. These works validate the outstanding capabilities of the motion diffusion framework and receive continuous attention. Efficient diffusion sampling. Efficient diffusion sampling is the primary challenge of diffusion frameworks oriented to real-time generation tasks. DDIM [20] relaxes the restriction on Markov conditions in the original diffusion framework and achieves a 20 times computational efficiency improvement. Score-based method [9] from the same period relates the diffusion framework to a stochastic differential equation and notes that it has a special form known as the probability flow ODE. 
This is a milestone achievement. It guides the following works either to steer a simplified diffusion process through a specially designed form of ODE [21, 22, 23], or to skip a sufficiently large number of sampling steps via the more sophisticated higher-order ODE approximation solution strategy [24]. In addition to the above work, the diffusion process can be executed in lower dimensional and more regular latent spaces, thus reducing the single-step computational burden [25]. While these works have proven effective in computer vision, they have received only finite reflections in motion diffusion frameworks. Previous state-of-the-art methods such as MLD [3] and GraphMotion [8] have utilized VAE-based representations and DDIM sampling strategies. Precise and robust motion representation and efficient motion diffusion design remain an open problem. Consistency model. Consistency modeling is a novel and flexible diffusion sampling framework that allows the model to make trade-offs between extreme few steps and generation quality. Latent consistency models extend consistency distillation methods to the latent representation space, saving memory spend and further improving inference efficiency. Subsequently, VideoLCM further applies consistency distillation to video generation. Recent approaches have also investigated the application of Lora and control net to consistency modeling with impressive results. These methods rely on a strong teacher model as the distillation target, which trained from scratch requires not only a large dataset support but also a lot of computational resources. To reduce the training cost, ICM further explores and improves consistency training methods to obtain similar performance to consistency distillation without pre-trained models. However, it is 3 \fstill limited to the original pixel representation space of fixed dimensions and is applied to variance-explosion ODE frameworks. 
Consistency training methods for broader diffusion strategies in the latent representation space lack further exploration. 3 Preliminaries In this section, we briefly introduce diffusion and consistency models. 3.1 Score-based Diffusion Models The diffusion model [26] is a generative model that gradually injects Gaussian noise into the data and then generates samples from the noise through a reverse denoising process. Specifically, it gradually transforms the data distribution p_data(x_0) into a well-sampled prior distribution p(x_T) via a Gaussian perturbation kernel p(x_t|x_0) = N(x_t | α_t x_0, σ_t² I), where α_t and σ_t are specified noise schedules. Recent studies have formalized it into a continuous-time form, described as a stochastic differential equation, dx_t = f(t) x_t dt + g(t) dw_t, (1) where t ∈ [ϵ, T], ϵ and T are fixed positive constants, w_t denotes standard Brownian motion, and f and g are the drift and diffusion coefficients, respectively, which follow from f(t) = d log α_t / dt, g²(t) = dσ_t²/dt − 2 (d log α_t / dt) σ_t². (2) Previous work has revealed that the reverse process of Eq. 1 shares the same marginal probabilities with the probabilistic flow ODE: dx_t = [f(t) x_t − (1/2) g²(t) ∇_{x_t} log p(x_t)] dt, (3) where ∇_x log p(x_t) is named the score function, which is the only unknown term in the sampling pipeline. An effective approach is training a time-dependent score network S_θ(x_t, t) to estimate ∇_x log p(x_t) based on conditional score matching, parameterized as the prediction of noise or of the initial value in forward diffusion. Further, Eq. 3 can be solved in finite steps by any numerical ODE solver, such as the Euler [9] and Heun [27] solvers. 3.2 Consistency Models Theoretically, the inverse process expressed by Eq. 3 is deterministic, and the consistency model (CM) [23] achieves one-step or few-step generation by pulling together outputs on the same ODE trajectory.
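As a concrete illustration of Eqs. 1–3, a minimal numerical sketch of one probability-flow-ODE step (the toy VP-style schedule, the finite-difference derivatives, and the known score are our own assumptions for the example, not the paper's setup):

```python
import math

def pf_ode_euler_step(x, t, dt, alpha, sigma, score):
    # One Euler step of the probability-flow ODE (Eq. 3):
    #   dx_t = [f(t) x_t - 0.5 * g(t)^2 * score(x_t, t)] dt,
    # with f and g derived from the schedule as in Eq. 2; closed-form
    # derivatives are replaced here by central finite differences.
    eps = 1e-4
    dlog_alpha = (math.log(alpha(t + eps)) - math.log(alpha(t - eps))) / (2 * eps)
    dsigma2 = (sigma(t + eps) ** 2 - sigma(t - eps) ** 2) / (2 * eps)
    f = dlog_alpha                                    # f(t) = d log alpha_t / dt
    g2 = dsigma2 - 2.0 * dlog_alpha * sigma(t) ** 2   # g^2(t), Eq. 2
    return x + (f * x - 0.5 * g2 * score(x, t)) * dt

# Sanity check: for data ~ N(0, 1) under the VP kernel with
# alpha(t) = exp(-t) and sigma(t) = sqrt(1 - exp(-2t)), every marginal
# stays N(0, 1), the score is -x, and the PF-ODE drift cancels to ~0,
# so the state should barely move.
alpha = lambda t: math.exp(-t)
sigma = lambda t: math.sqrt(1.0 - math.exp(-2.0 * t))
x_next = pf_ode_euler_step(0.7, 0.5, 0.01, alpha, sigma, lambda x, t: -x)
```

The cancellation in this toy case is a useful unit test: any sign error in the g²(t) term of Eq. 2 makes the state drift visibly.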
It is more formally expressed as, S\u03b8(xt, t) = S\u03b8(xt\u2032, t\u2032) \u2248S\u03b8(x\u03f5, \u03f5) \u2200t, t\u2032 \u2208[\u03f5, T], (4) which is known as the self-consistency property. To maintain the boundary conditions, existing consistency models are commonly parameterized by skip connections, i.e., S\u03b8(xt, t) := cskip(t)x + cout(t) \u02c6 S\u03b8(xt, t) (5) where cskip(t) and cout(t) are differentiable functions satisfied cskip(\u03f5) = 1 and cout(\u03f5) = 0. For stabilize training, the consistency model maintaining target model S\u2212 \u03b8 , trained with the exponential moving average (EMA) of parameter \u03b3, that is \u03b8\u2212\u2190\u03b3\u03b8\u2212+ (1 \u2212\u03b3)\u03b8. The consistency loss can be formulated as, Lcm(\u03b8, \u03b8\u2212) = Ex,t \u0002 d \u0000S\u03b8(xtn+1, tn+1), S\u03b8\u2212(\u02c6 xtn, tn) \u0001\u0003 (6) where d(\u00b7, \u00b7) is a metric function such as mean square or pseudo-huber metric, and \u02c6 xtn is a one-step estimation from xtn+1 with ODE solvers applied in Eq. 3. 4 Motion Latent Consistency Training Framework In this section, we discuss two critical targets. The first is encoding motions with arbitrary lengths into low-dimensional and regularized latent representations of motions to align all motion dimensions. The second is introducing the conditional PF-ODE into less cost consistency training framework for few-steps and high-quality latent representation sampling. To this end, we propose a Motion Latent Consistency Training (MLCT) framework, as shown in Figure 2. It consists of an autoencoder with quantization constraints, which is used to learn various motion representations in low-dimensional and regularized latent spaces (details in Section 4.1), and a denoising network, which is used to capture the corresponding latent state distributions and to implement few-step sampling (details in Section 4.2). 4 \fMotion Latent Representation Motion Feature ... ... ... ... 
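The pieces of Eqs. 5–6 can be sketched in a few lines: a skip-connection parameterization that enforces the boundary condition, the EMA target update, and the pseudo-Huber metric. The concrete c_skip/c_out forms and the sigma_data constant below are illustrative EDM-style choices, not necessarily the ones used in this paper.

```python
import math

EPS = 0.002          # boundary time epsilon
SIGMA_DATA = 0.5     # illustrative constant (EDM-style parameterization)

def c_skip(t):
    # c_skip(EPS) = 1, so the model reduces to the identity at the boundary.
    return SIGMA_DATA ** 2 / ((t - EPS) ** 2 + SIGMA_DATA ** 2)

def c_out(t):
    # c_out(EPS) = 0, so the raw network output vanishes at the boundary.
    return SIGMA_DATA * (t - EPS) / math.sqrt(t ** 2 + SIGMA_DATA ** 2)

def consistency_output(x_t, t, raw_net):
    # Eq. 5: S(x_t, t) = c_skip(t) * x_t + c_out(t) * raw_net(x_t, t).
    return c_skip(t) * x_t + c_out(t) * raw_net(x_t, t)

def ema_update(theta_target, theta, gamma=0.95):
    # Target-network update: theta^- <- gamma * theta^- + (1 - gamma) * theta.
    return {k: gamma * theta_target[k] + (1 - gamma) * theta[k] for k in theta}

def pseudo_huber(a, b, c=0.001):
    # Metric d(x, y) = sqrt((x - y)^2 + c^2) - c from the consistency loss:
    # quadratic near zero, approximately |x - y| for large residuals.
    return math.sqrt((a - b) ** 2 + c ** 2) - c
```

The boundary behavior is the whole point of the skip form: at t = EPS the output equals the input exactly, regardless of what the raw network returns.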
[Figure 2 diagram: the encoder E maps motion to a quantized latent representation, the decoder D reconstructs it, and the score network S is trained on conditional PF-ODE trajectories (forward SDE dx_t = f(t) x_t dt + g(t) dw_t; reverse PF-ODE dx_t = [f(t) x_t − (1/2) g²(t) ∇_{x_t} log p(x_t)] dt) under the consistency property S(x_T, T, c) ≈ S(x_{t′}, t′, c) ≈ S(x_t, t, c) ≈ x_ϵ for all t, t′ ∈ [ϵ, T].] Figure 2: Our Motion Consistency model can achieve high-quality motion generation given a text prompt with around 5 steps. The color of humans darkens over time. 4.1 Encoding Motion as Quantized Latent Representation We construct an autoencoder G = {E, D} with a transformer-based architecture to realize encoding and reconstruction between motion sequences x and latent motion representations z. The core insight is that each dimension of z is sampled from a finite set M of size 2l + 1, as follows: M = {−1, …, −j/l, …, 0, …, j/l, …, 1}, j = 0, …, l. (7) To this end, we denote z ∈ R^{n,d} as n learnable tokens of dimension d, aggregating the motion sequence features via attention computation. Inspired by recent quantization work [28], we employ a hyperbolic tangent (tanh) function on the output of the encoder E to constrain the boundaries of the representation, and then quantize the result with a rounding operator R. Furthermore, the gradient of quantized items is simulated by the previous state's gradient so that gradients backpropagate normally. The latent representations z are obtained as follows: z = R(l · tanh(E(x))) / l.
(8) The standard optimization target is to reconstruct motion information from z with the decoder D, i.e., to optimize the l1 smooth error loss, Lz = Ex h d \u0010 x, D(z) \u0011 + \u03bbjd \u0010 J (x), J (D(z)) \u0011i , (9) where J is a function to transform features such as joint rotations into joint coordinates, and it is also applied in MLD [3] and GraphMotion [8]. \u03bbj is a balancing term. Compared with the traditional VAEs, the optimization target Eq. 9 does not contain a divergence adversarial term. A well-trained autoencoder G output bounded and regular motion latent representation, which in turn improves the solution space of the denoising network, and experimentally we found that this improvement is important for the convergence of consistent training. 5 \f4.2 Few Step Motion Generation via Consistency Training For conditional motion generation, Class-Free Guidance (CFG) is crucial for synthesizing high-fidelity samples in most successful cases of motion diffusion models, such as MLD or GraphMotion. Previous work introduced CFG into the consistency distillation, demonstrating the feasibility of the consistency model on conditional PF-ODE trajectories. However, they rely on powerful pre-trained teacher models, which not only involve additional training costs but performance is limited by distillation errors. Therefore, we are motivated to simulate CFG more efficiently from the original motion latent representation following the consistency training framework to alleviate the computational burden. The diffusion stage of MLCM begins with the variance preserving schedule [9] to perturbed motion latent representations x\u03f5 = z with perturbation kernel N(xt; \u03b1(t)x0, \u03c32(t)I), \u03b1(t) := e\u22121 4 t2(\u03b21\u2212\u03b20)\u22121 2 t\u03b20, \u03c3(t) := p 1 \u2212e2\u03b1(t). (10) The consistency model S\u03b8 has been constructed to predict x\u03f5 from perturbed xt in a given PF-ODE trajectory. 
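The bounded rounding step of Eq. 8 can be sketched as a plain forward pass (in training, the paper backpropagates through the rounding with the previous state's gradient, a straight-through-style trick omitted here):

```python
import math

def quantize_latent(h, l=1000):
    # Eq. 8: z = R(l * tanh(h)) / l. tanh bounds each dimension to (-1, 1),
    # and rounding snaps it to the nearest of the 2l + 1 grid points
    # {-1, ..., -1/l, 0, 1/l, ..., 1} of the finite set M (Eq. 7).
    return [round(l * math.tanh(v)) / l for v in h]

# Tiny grid (l = 4) to make the snapping visible: a saturated input maps
# to the boundary 1, and tanh(-0.3) ~ -0.291 snaps to -1/4.
z = quantize_latent([0.0, 100.0, -0.3], l=4)
```

Boundedness comes from tanh and regularity from the finite grid, which is what the text contrasts with unbounded, continuous variational representations.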
To maintain the boundary condition that S_θ(x_ϵ, ϵ, c) = x_ϵ, we employ the same skip setting as in Eq. 5, following the latent consistency model (LCM), parameterized as follows: S_θ(x_t, t, c) := (η² / ((10t)² + η²)) · x_t + (10t / √((10t)² + η²)) · S̃_θ(x_t, t, c), (11) where S̃_θ is a transformer-based network and η is a hyperparameter, usually set to 0.5. Following the self-consistency property (as detailed in Eq. 4), the model S_θ has to maintain the consistency of the output at the given perturbed state x_t with the previous state x̃_{t−Δt} on the same ODE trajectory. The latter can be estimated via the DPM++ solver: x̃_{t−Δt} ≈ (σ_{t−Δt} / σ_t) · x_t − α_t · ((α_{t−Δt} · σ_t) / (σ_{t−Δt} · α_t) − 1) · x^Φ_ϵ, (12) where x^Φ_ϵ is the estimation of x_ϵ under different sampling strategies. In particular, x^Φ_ϵ can be parameterized as a linear combination of conditional and unconditional latent representation predictions following the CFG strategy, i.e., x^Φ_ϵ(x_t, t, c) = (1 + ω) · F_θ(x_t, t, c) − ω F_θ(x_t, t, ∅), (13) where F_θ(·) is a well-trained, x_ϵ-prediction-based motion diffusion model. It is worth noting that x_ϵ can be utilized to simulate F_θ(x_t, t, c), as used in the vanilla consistency training pipeline. Furthermore, F_θ(x_t, t, ∅) can be replaced by S_θ(x_t, t, ∅) with online updating. Thus Eq. 13 can be rewritten as: x^Φ_ϵ(x_t, t, c) = (1 + ω) · x_ϵ − ω S_θ(x_t, t, ∅).
(14) The optimization objective of the consistency model S\u03b8 is that, Lc = Ex,t h 1 \u2206td \u0010 S\u03b8(xt, t, c), S\u03b8\u2212(\u02c6 xt\u2212\u2206t, t \u2212\u2206t, c) \u0011 + \u03bbcd \u0010 S\u03b8(xt, t, \u2205), x\u03f5 \u0011i , (15) where d(x, y) = p (x \u2212y)2 + \u03b32 \u2212\u03b3 is pseudo-huber metric, \u03b3 is a constant, \u03bbc is a balancing term. The target network S\u03b8\u2212is updated after each iteration via EMA. 5 Experiments 5.1 Datasets and Metrics Datasets. We evaluate the proposed framework on two mainstream benchmarks for text-driven motion generation tasks, which are the KIT [29] and the HumanML3D [5]. The former contains 3,911 motions and their corresponding 6,363 natural language descriptions. The latter is currently the largest 3D human motion dataset comprising the HumanAct12 [15] and AMASS [30] datasets, containing 14,616 motions and 44,970 descriptions. Evaluation Metrics. Consistent with previous work, we evaluate the proposed framework in four parts. (a) Motion quality: we utilize the frechet inception distance (FID) to evaluate the distance in feature distribution between the generated data and the real data. (b) Condition matching: we first employ the R-precision to measure the correlation between the text description and the generated motion sequence and record the probability of the first k = 1, 2, 3 matches. Then, we further calculate the distance between motions and texts by multi-modal distance (MM Dist). (c) Motion diversity: we compute differences between features with the diversity metric and then measure generative diversity in the same text input using multimodality (MM) metric. (d) Calculating burden: we first use the number of function evaluations (NFE) to evaluate generated performance with fewer steps sampling. Then, we further statistics the average sampling time (AST) of a single sample. 6 \fTable 1: Comparisons to state-of-the-art methods on the HumanML test set. 
We repeat all the evaluations 20 times and report the average with a 95% confidence interval. \"\u2191\" denotes that higher is better. \"\u2193\" denotes that lower is better. \"\u2192\" denotes that results are better if the metric is closer to the real motion. \u2020 denotes that classifier-free guidance is utilized, causing a double NFE. Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.511\u00b1.003 0.703\u00b1.003 0.797\u00b1.002 0.002\u00b1.000 2.974\u00b1.008 9.503\u00b1.065 TEMOS[6] 0.424\u00b1.002 0.612\u00b1.002 0.722\u00b1.002 3.734\u00b1.028 3.703\u00b1.008 8.973\u00b1.071 0.368\u00b1.018 T2M[5] 0.457\u00b1.002 0.639\u00b1.003 0.740\u00b1.003 1.067\u00b1.002 3.340\u00b1.008 9.188\u00b1.002 2.090\u00b1.083 MDM [2] 0.320\u00b1.005 0.498\u00b1.004 0.611\u00b1.007 0.544\u00b1.044 5.566\u00b1.027 9.559\u00b1.086 2.799\u00b1.072 1000 MD [1] 0.491\u00b1.001 0.681\u00b1.001 0.782\u00b1.001 0.630\u00b1.001 3.113\u00b1.001 9.410\u00b1.049 1.553\u00b1.042 1000 MLD\u2020 [3] 0.481\u00b1.003 0.673\u00b1.003 0.772\u00b1.002 0.473\u00b1.013 3.196\u00b1.010 9.724\u00b1.082 2.413\u00b1.079 100 GraphMotion\u2020[8] 0.504\u00b1.003 0.699\u00b1.002 0.785\u00b1.002 0.116\u00b1.007 3.070\u00b1.008 9.692\u00b1.067 2.766\u00b1.096 300 M2DM [7] 0.497\u00b1.003 0.682\u00b1.002 0.763\u00b1.003 0.352\u00b1.005 3.134\u00b1.010 9.926\u00b1.073 3.587\u00b1.072 100 Our 0.460\u00b1.001 0.655\u00b1.002 0.760\u00b1.006 0.232\u00b1.007 3.238\u00b1.008 9.658\u00b1.065 3.506\u00b1.008 5 Table 2: Comparisons to state-of-the-art methods on the KIT test set. The meaning of the markers is the same as in Tab. 1. 
Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.424\u00b1.005 0.649\u00b1.006 0.779\u00b1.006 0.031\u00b1.004 2.788\u00b1.012 11.08\u00b1.097 TEMOS[6] 0.353\u00b1.006 0.561\u00b1.007 0.687\u00b1.005 3.717\u00b1.051 3.417\u00b1.019 10.84\u00b1.100 0.532\u00b1.034 T2M[5] 0.370\u00b1.005 0.569\u00b1.007 0.693\u00b1.007 2.770\u00b1.109 3.401\u00b1.008 10.91\u00b1.119 1.482\u00b1.065 MDM [2] 0.164\u00b1.004 0.291\u00b1.004 0.396\u00b1.004 0.497\u00b1.021 9.191\u00b1.022 10.85\u00b1.109 1.907\u00b1.214 1000 MD [1] 0.417\u00b1.004 0.621\u00b1.004 0.739\u00b1.004 1.954\u00b1.062 2.958\u00b1.005 11.10\u00b1.143 0.730\u00b1.013 1000 MLD\u2020 [3] 0.390\u00b1.008 0.609\u00b1.008 0.734\u00b1.007 0.404\u00b1.027 3.204\u00b1.027 10.80\u00b1.117 2.192\u00b1.071 100 GM\u2020,\u2021[8] 0.429\u00b1.007 0.648\u00b1.006 0.769\u00b1.006 0.313\u00b1.013 3.076\u00b1.022 11.12\u00b1.135 3.627\u00b1.113 300 M2DM [7] 0.416\u00b1.004 0.628\u00b1.004 0.743\u00b1.004 0.515\u00b1.029 3.015\u00b1.017 11.417\u00b1.970 3.325\u00b1.370 100 Our 0.433\u00b1.007 0.655\u00b1.006 0.783\u00b1.006 0.408\u00b1.013 2.831\u00b1.018 11.179\u00b1.085 1.23\u00b1.037 5 5.2 Implementation Details Model Configuration. The motion autoencoder {E, D} and the score network S are both the transformer architecture with long skip connections [31], which is also used in MLD [3]. Specifically, both the encoder E and decoder D contain 7 layers of transformer blocks with input dimensions 256, and each block contains 3 learnable tokens. The size of the finite set M is set as 2001, i.e. l = 1000. The score network S contains 15 layers of transformer blocks with input dimensions 512. The frozen CLIP-ViT-L-14 model [32] is used to be the text encoder. It encodes the text to a pooled output w \u2208R1,256 and then projects it as text embedding to sum with the time embedding before the input of each block. Train Configuration. 
We divide the diffusion time horizon [\u03f5, T] into N \u22121 sub-intervals, setting \u03f5 = 0.002, T = 1, and N = 1000. Following the consistency model [23], we determine t_i = (\u03f5^{1/\u03c1} + ((i\u22121)/(N\u22121)) \u00b7 (T^{1/\u03c1} \u2212 \u03f5^{1/\u03c1}))^\u03c1, where \u03c1 = 2. For balanced training, we set \u03bbj to 0.001. All the proposed models are trained with the AdamW optimizer with a learning rate of 10\u22124 on a single RTX 4090 GPU. The mini-batch sizes are 64 and 128 for the autoencoder and the denoising network, which are trained for 1500 and 2000 epochs, respectively. 5.3 Comparisons to State-of-the-art Methods The test results on HumanML and KIT are shown in Tab. 1 and Tab. 2, respectively. Our framework achieves state-of-the-art generation performance. Compared to existing motion diffusion frameworks requiring 50-1000 iterations (e.g., MDM, MotionDiffuse, and MLD), our approach reduces the computational burden by more than tenfold without severely degrading generation quality. Remarkably, our inference pipeline is very concise, with no tricks such as the additional text preprocessing used in GraphMotion. Figure 3: Qualitative analysis of our model and previous models (Real, MDM, MLD, T2M-GPT, Our). We provide three textual prompts for the motion visualization results. We achieve better motion generation performance in matching some text conditions with fewer NFE. Sampling in fewer steps also does not significantly reduce the diversity and multi-modality metrics, which remain competitive. Fig. 3 compares the visualization results with previous models. 5.4 Ablation Study Table 3: Ablation study of our framework with more generation metrics under different guidance parameters. The meaning of the markers is the same as in Tab. 1.
Dataset w R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 0 0.742\u00b1.006 0.717\u00b1.028 3.051\u00b1.021 2.496\u00b1.065 0.5 0.771\u00b1.006 0.504\u00b1.021 2.885\u00b1.023 1.935\u00b1.044 1 0.775\u00b1.005 0.494\u00b1.019 2.831\u00b1.021 1.844\u00b1.049 1.5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 2 0.777\u00b1.006 0.518\u00b1.016 2.799\u00b1.023 1.612\u00b1.041 Effectiveness of each component. We explore the generative performance of the classifier-free guidance technique under different representations, and the results are reported in Fig. 4. When the guidance coefficient w equals 0, the model degenerates into a vanilla consistency model. We discover that increasing the degree of classifier-free guidance accelerates consistency training convergence and improves generation quality. The pixel-discrete motion representation via the quantized autoencoder has better convergence and generation performance compared to the continuous motion representation. In particular, under the same consistency training parameters, we have not observed significant gains in generation quality from variational constraints compared to the vanilla autoencoder. We further discuss more comprehensive generation metrics at different guidance parameters, with results reported in Tab. 3. As the guidance parameters increase, controllability and generation quality gradually improve, with a corresponding decrease in diversity.
In contrast to the larger guidance parameters employed in the traditional diffusion framework (which can usually be set to 7), we find that w greater than 2 contributes nothing further to generation quality in the consistency training framework. Figure 4: Ablation study of the quantized autoencoder employed in our framework against the conventional variational autoencoder and the vanilla autoencoder under different guidance parameters (FID on HumanML3D and KIT over training epochs). We repeat all evaluations 3 times every 50 epochs and report the average values. Table 4: Ablation study of different numbers of tokens and sizes of the representation finite set. The meaning of the markers is the same as in Tab. 1. Dataset Token l R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 2 100 0.770\u00b1.006 0.599\u00b1.025 2.870\u00b1.020 1.656\u00b1.043 2 500 0.774\u00b1.005 0.550\u00b1.019 2.829\u00b1.018 1.769\u00b1.021 2 2000 0.775\u00b1.005 0.428\u00b1.016 2.844\u00b1.019 1.645\u00b1.045 4 1000 0.781\u00b1.003 0.489\u00b1.021 2.823\u00b1.021 1.859\u00b1.044 6 1000 0.781\u00b1.004 0.465\u00b1.021 2.821\u00b1.019 1.839\u00b1.055 2 1000 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 Ablation study on different model hyperparameters. In Tab. 4, we test the model performance with different hyperparameters. Consistent with the findings of MLD, increasing the number of tokens does not remarkably increase generation quality.
Appropriately increasing the size of the finite set 2l + 1 is beneficial for improving the generation results, and the gain is no longer significant when l is larger than 1000. Table 5: Ablation study of different numbers of function evaluations. Dataset NFE R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 1 0.777\u00b1.005 0.567\u00b1.002 2.865\u00b1.013 1.424\u00b1.040 3 0.781\u00b1.005 0.409\u00b1.014 2.812\u00b1.019 1.598\u00b1.037 5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 8 0.783\u00b1.006 0.400\u00b1.015 2.810\u00b1.017 1.667\u00b1.051 10 0.786\u00b1.006 0.395\u00b1.015 2.795\u00b1.019 1.663\u00b1.049 Ablation study on different sampling steps. Our generation results at different sampling steps are further shown in Tab. 5. We obtain excellent results with few sampling steps, but increasing the number of sampling steps beyond 15 does not yield a further quality payoff; this is a common problem with consistency training. 5.5 Time Cost Table 6: Comparison of inference time with previous state-of-the-art models. Method MDM MLD T2M-GPT GraphMotion Our (NFE 5) Our (NFE 3) AST (s) 7.5604 0.0786 0.2168 0.5417 0.0141 0.0098 The consistency training method we use does not require prior training of a diffusion model, so training is inexpensive and feasible on just a single RTX 4090. On the HumanML dataset, we train the encoder in 15 hours and the denoiser in 12 hours. Benefiting from the consistency sampling strategy, our inference time is also more than tenfold lower than that of existing models. A more detailed time comparison is reported in Tab. 6.
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02801v2.json b/abs_9K/test_abstract_short_2405.02801v2.json new file mode 100644 index 0000000000000000000000000000000000000000..6c398055d36efa73e676bceaf4ebe66e6f9324a8 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02801v2.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.02801v2", + "title": "Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models", + "abstract": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. 
Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch", + "authors": "Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, Shuchang Liu", + "published": "2024-05-05", + "updated": "2024-05-07", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch", + "main_content": "INTRODUCTION In recent years, the intersection of artificial intelligence (AI) and creative arts has witnessed remarkable advancements [2], leading to the emergence of novel techniques and systems capable of producing music[1, 3, 24], images[21\u201323], and other forms of artistic expression[19] in a wide range of industries. 
With the remarkable advancements in AI-Generated Content (AIGC), there is a growing belief that it heralds a new era in AI and will have a substantial influence across the globe. arXiv:2405.02801v2 [cs.SD] 7 May 2024 MM\u201924, October 28 - November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu However, current music generation models, when tasked with image-to-music synthesis, encounter notable limitations. These models often struggle to accurately capture the ambiance and underlying emotions conveyed by the visual input. While they may produce music that aligns with the visual elements, the nuanced details and subtle cues present in the image are frequently lost in translation. This shortfall hampers the ability of existing systems to truly evoke the intended atmosphere and sentiment of the imagery, thereby limiting their effectiveness in multi-modal creative endeavors. It is evident that there exists a gap in the current state-of-the-art models concerning their proficiency in leveraging visual cues to inform the musical composition process. Natural language serves as a powerful intermediary, demonstrating significant potential in bridging different sensory modalities. Designed to interact directly with humans, large language models (LLMs) typically comprise a vast number of parameters and are trained on extensive datasets, granting them powerful comprehension and reasoning capabilities [8]. Harnessing these advantages, researchers have employed LLMs to achieve semantic understanding across multiple modalities. Despite the significant strides made in AI-driven creativity, a compelling question arises: How can we harness the formidable capabilities of LLMs to empower multi-modal tasks such as image-to-music synthesis?
This inquiry serves as the focal point of our investigation, wherein we seek to elucidate the seamless integration of LLMs into the process of generating music inspired by visual contents. In this paper, we present Mozart\u2019s Touch, a multi-modal music generation framework that harnesses the power of Large Language Models (LLMs) and pre-trained models to generate music based on visual information. An overview of the architecture is depicted in Figure 1. Mozart\u2019s Touch offers multiple advantages for image-to-music generation: By leveraging the deep understanding and generalizable knowledge of Large Language Models (LLMs) to interpret visual elements accurately, it differs from previous multi-modal end-to-end music generation methods (e.g. CoDi [26] and M2UGen [10]). Unlike traditional approaches, it requires no training of music generation models or fine-tuning LLMs, conserving computational resources and ensuring efficiency. Moreover, Mozart\u2019s Touch utilizes clear, interpretable prompts for greater transparency during the whole process, which improves overall framework explainability. Our contributions are summarized as follows: \u2022 We introduce the Mozart\u2019s Touch framework, an innovative integration of Large Language Models (LLMs) for multimodal music generation. Departing from traditional end-toend paradigms, this framework harnesses the power of LLMs to synthesize music aligned with visual inputs. \u2022 We offer a new perspective on leveraging LLMs for multimodal generation tasks. Our framework showcases a novel application of LLMs in text-to-music generation , demonstrating the potential of LLMs in understanding and bridging different sensory modalities and empowering creative processes. \u2022 We assess Mozart\u2019s Touch on the imageand video-to-audio dataset MUImage and MUVideo [11] , utilizing both objective and subjective metrics. Comparative evaluation results show that our approach outperforms existing state-of-theart methods. 
This experiment demonstrates the effectiveness of our framework and its potential as a new baseline benchmark for future work in the domain. 2 RELATED WORK 2.1 Multi-modal Large Language Model (MLLM) Due to the prevalence of research on Large Language Models (LLMs), combining LLMs with models of other modalities has also become a rising research hotspot, leading to the new field of MLLM. According to the survey [27], the key applications of MLLM include Multi-modal Instruction Tuning (M-IT), Multi-modal In-Context Learning (M-ICL), Multi-modal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning (LAVR). For Mozart\u2019s Touch, we employ Modality Bridging technology, utilizing natural language as an intermediary medium and leveraging LLMs to bridge the modality gap. VideoChat-Text [15], for example, is an end-to-end chat-centric video understanding system that uses pre-trained vision models to extract visual information such as actions and enriches the descriptions using a speech recognition model, all represented as textual information serving as a bridge. 2.2 Image Captioning Image captioning, the process of generating descriptive text (captions) that accurately and relevantly captures the content of an image, is a typical multi-modal task requiring both visual understanding and natural language generation [25]. The field of image captioning has seen significant advancements, such as the CLIP [20] and BLIP [14] models. CLIP, developed by OpenAI, has revolutionized the way computers understand images and text by efficiently learning visual concepts from natural language supervision. The main idea of CLIP is to align texts and images in the feature domain without predetermined labels for specific object categories by training on a large corpus of image-text pairs collected from the Internet. BLIP is another multi-modal framework which transfers flexibly to both vision-language understanding and generation tasks.
To pre-train a unified model with both understanding and generation capabilities, they propose a multi-modal mixture of encoder-decoder (MED) and achieve strong performance across multiple tasks, such as image captioning. 2.3 Multi-Modal Music Generation The advent of Transformer and diffusion models has promoted the development of music generation models. Many impressive works have emerged in recent years, such as MusicLM [1], MusicGen [3], Noise2Music [9] and AudioLDM 2 [17]. MusicLM and MusicGen both use an autoregressive decoder to generate music. MusicLM can generate high-quality music based on descriptive text such as emotions, styles and instruments. Noise2Music and AudioLDM 2 use diffusion models to generate music based on text that transcends fine-grained semantics and can reach deeper emotions. However, the works above all take text or audio as input to generate music, ignoring other modality information, such as image and video. Notable exceptions include CoDi [26] and M2UGen [11], which allow inputs from more modalities. CoDi (Composable Diffusion) can generate output modalities in parallel from any combination of input modalities. It first uses individual modality-specific diffusion models for images, videos, audio, and text to build a shared multimodal space, and then uses Latent Alignment [4] to achieve joint multi-modal generation. M2UGen is an LLM-based multi-modal music understanding and generation framework. It consists of multi-modal feature encoders, multi-modal understanding adapters, a bridging LLM, and generation modules to process inputs from multiple modalities such as text, images, and videos, and to generate corresponding music.
3 MOZART\u2019S TOUCH Mozart\u2019s Touch is a collaborative multi-modal AIGC framework structured as a sequential integration of three core modules: a Multi-modal Captioning Module, an LLM Understanding & Bridging Module, and a Music Generation Module. The overall architecture is illustrated in Figure 1. 3.1 Multi-modal Captioning Module The Multi-modal Captioning Module is responsible for encoding and understanding users\u2019 input, providing textual descriptions across modalities. This module employs the state-of-the-art ViT [5] and BLIP [14] models to analyze images and videos and generate descriptive captions. When users input images and videos without prompts, our framework still performs well, generating music that aptly complements the theme. However, for customization, we also permit users to input textual prompts to guide the music generation process. 3.1.1 Image Captioning Process. For image inputs, we leverage the capabilities of the Vision Transformer (ViT) and BLIP-base modules, as implemented by clip-interrogator, to analyze and generate descriptions of the images. This process involves interpreting the visual content of an image \ud835\udc3c and converting it into an image caption description \ud835\udc37caption. Given an input image \ud835\udc3c, the framework generates a caption description \ud835\udc37caption: \ud835\udc37caption = \ud835\udc53BLIP(\ud835\udc3c) (1) where \ud835\udc53BLIP denotes the BLIP model that converts images into descriptive texts. The generated image caption description \ud835\udc37caption serves as input for the subsequent process. 3.1.2 Video Process. For video inputs, we employ a two-step process to handle and interpret the content. Initially, the Video-BLIP2-Preprocessor tool is used to sample frames from the video \ud835\udc49, generating a set of frames {\ud835\udc39\ud835\udc56}.
Each frame \ud835\udc39\ud835\udc56is then processed to generate a textual description \ud835\udc37\ud835\udc56using the BLIP model, similar to the image process. This process can be formulated as: {\ud835\udc37\ud835\udc56} = {\ud835\udc53BLIP(\ud835\udc39\ud835\udc56)} (2) where \ud835\udc53BLIP denotes the BLIP model to convert frames into descriptive texts. Subsequently, to synthesize a video caption description \ud835\udc37caption of the entire video, we aggregate the frame descriptions {\ud835\udc37\ud835\udc56} and process them through Large Language Models (LLMs) to interpret and condense the video\u2019s visual and thematic content into a coherent textual representation. This process can be represented as: \ud835\udc37caption = \ud835\udc53LLM({\ud835\udc37\ud835\udc56}|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc63\ud835\udc56\ud835\udc51\ud835\udc52\ud835\udc5c) (3) where \ud835\udc53LLM denotes the LLM to integrate and interpret the set of frame descriptions into a single video description \ud835\udc37caption . The prompt used in this process is shown in Table 1. Table 1: Prompt template used to integrate the set of frame descriptions into video description. Role Content system You are about to process a sequence of captions, each corresponding to a distinct frame sampled from a video. Your task is to convert these captions into a cohesive, well-structured paragraph. This paragraph should describe the video in a fluid, engaging manner and follows these guidelines: avoiding semantic repetition to the greatest extent, and giving a description in less than 200 characters. This video caption description \ud835\udc37caption then serves as the input for subsequent process, similar to the image captioning process. 3.2 LLM Understanding & Bridging Module LLM Understanding & Bridging Module plays a pivotal role in the transition from visual to auditory art forms. 
It is tasked with converting the image/video-descriptive caption text, generated by the Multi-modal Captioning Module, into prompts useful for music generation. This conversion leverages the capabilities of Large Language Models (LLMs) to interpret the underlying mood, themes, and elements conveyed in the textual descriptions of images or videos. Why do we undertake the LLM-Bridge step? We contend that although multi-modal caption descriptions have already been produced by the Multi-modal Captioning Module, the problem of heterogeneous representations among different modalities remains unsolved. For example, image captioning models (such as BLIP) tend to generate textual representations that lean towards describing visual attributes (e.g. appearance, shape, etc.), while for music generation models (e.g. MusicGen), input descriptions of musical styles, moods and genres lead to better music generation. From this perspective, we propose the LLM Understanding & Bridging Module to align the two types of descriptions mentioned above. To enhance the specificity and relevance of the generated music, the module also optimizes the prompts with additional constraints aimed at music generation. This includes specifying the music genre and incorporating several few-shot examples provided by MusicGen. The optimization process ensures that the final music-descriptive prompt \ud835\udc37music not only reflects the mood and theme indicated by the input visuals but also adheres to the stylistic and genre-specific guidelines necessary for generating contextually appropriate music pieces.
Two type of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52, for image and video input separately, are shown in Table 2 and 3 The process can be formulated as below. Given an visual descriptive caption \ud835\udc37caption, the module generates a corresponding music-descriptive prompt \ud835\udc37music : \ud835\udc37music = \ud835\udc53LLM(\ud835\udc37caption|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52) (4) where \ud835\udc53LLM denotes the LLM to transform the descriptive texts into a coherent musical prompt that encapsulates the intended mood, themes, and potentially, the genre of the music to be generated, with the help of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52. Table 2: Prompt template for image-to-music generation. Role Content system Convert in less than 200 characters this image caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. If user provides prompt, give priority to information provided by user. You need to speculate the mood of the given image caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user a city with a tower and a castle in the background, a detailed matte painting, art nouveau, epic cinematic painting, kingslanding assistant A grand orchestral arrangement with thunderous percussion, epic brass fanfares, and soaring strings, creating a cinematic atmosphere fit for a heroic battle. 
user a group of people sitting on a beach next to a body of water, tourist destination, hawaii assistant Pop dance track with catchy melodies, tropical percussion, and upbeat rhythms, perfect for the beach By invoking LLMs through API, the model is able to distinguish semantic nuances with high accuracy while ensuring its lightweight nature. This capability not only fosters streamlined processing but also facilitates seamless deployment of model services on servers with constrained computational resources. 3.3 Music Generation Module The Music Generation Module utilizes the pre-trained model MusicGenmedium [3] to generate music pieces based on the music-descriptive prompts provided by the LLM Understanding & Bridging Module. MusicGen is designed to produce high-quality music compositions while accommodating various musical styles and preferences. By integrating MusicGen into the Mozart\u2019s Touch framework, we ensure that the generated music aligns closely with the intended mood and theme extracted from the input visuals. Table 3: Prompt template for video-to-music generation. Role Content system Convert in less than 200 characters this video caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. You need to speculate the mood of the given video caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user Two men playing cellos in a room with a piano and a grand glass window backdrop. assistant Classical chamber music piece featuring cello duet, intricate piano accompaniment, the rich harmonies blend seamlessly in an elegant and refined setting, creating a symphonic masterpiece. user A man with guitar in hand, captivates a large audience on stage at a concert. The crowd watches in awe as the performer delivers a stellar musical performance. 
assistant Rock concert with dynamic guitar riffs, precise drumming, and powerful vocals, creating a captivating and electrifying atmosphere, uniting the audience in excitement and musical euphoria. Given a music-descriptive prompt \ud835\udc37music, the Music Generation Module generates a music piece \ud835\udc40: \ud835\udc40= \ud835\udc53MusicGen(\ud835\udc37music) (5) where \ud835\udc53MusicGen represents the MusicGen model that transforms the music prompt into music composition audio. It encapsulates the complex process of interpreting the prompts and translating them into musical elements such as melody, harmony, rhythm, and texture, ensuring that the generated music pieces accurately reflect the intended mood and themes conveyed by the input visuals. 4 EXPERIMENTS In this section, we assess the image-to-music and video-to-music generation capacities of Mozart\u2019s Touch, with a discussion of the two evaluation datasets, MUImage and MUVideo, and the evaluation metrics utilized. The evaluation results show our state-of-the-art performance in the task of multi-modal music generation. 4.1 Evaluation Dataset To assess our framework\u2019s performance on image-to-music generation, we utilize the MUImage dataset proposed by M2UGen [10]. MUImage is assembled by obtaining music samples from AudioSet [6] with corresponding images, and contains 9,966 music-image pairs in total. We randomly sampled 2,500 music-image pairs from MUImage as our evaluation dataset. Table 4: Objective comparison of models for image-to-music generation. The best results are made bold.
Model | FAD_vgg\u2193 | KL\u2193 | IB Rank\u2191
M2UGen | 9.166 | 1.870 | 0.556
CoDi | 6.674 | 1.821 | 0.525
Mozart\u2019s Touch | 4.625 | 1.169 | 0.753
For the video-to-music generation task, we utilize the MUVideo dataset, which is also proposed by M2UGen. We adopted a construction method similar to that of the image-to-music generation task, yielding a corpus of 2,500 music-video pairs for evaluating the video-to-music generation task. 4.2 Evaluation Metrics For both tasks, we utilize the Frechet Audio Distance (FAD) [12], Kullback-Leibler divergence (KL), and ImageBind Rank (IB Rank) [7] as the evaluation metrics. FAD is a reference-free evaluation metric for music enhancement algorithms; a low FAD score indicates high quality of the generated music. KL measures the divergence between the label distributions of the original and the generated music; when the KL score is low, the generated audio is expected to share a similar distribution with the reference music. For these two metrics, we utilize the official PyTorch implementations, where the FAD score is supported by the VGGish model. IB Rank [7] is introduced by M2UGen to assess the alignment between the image/video modality and the generated music. First, we use the ImageBind model to obtain embeddings for the images/videos and the generated music; we then calculate their cosine similarity scores and assign each model a score based on its ranking. For IB Rank, a high score represents a relatively high ranking among the baselines. 4.3 Baselines and Details For both tasks, we compare Mozart\u2019s Touch with two baselines: CoDi [26] and M2UGen [10]. We use the open-source CoDi model and M2UGen checkpoint files to run inference. Our framework runs on one NVIDIA RTX 3090 24GB GPU, while the two baselines run on one NVIDIA V100 32GB GPU to load the whole models. 4.4 Performance Comparison Table 4 presents the performance of our framework, Mozart\u2019s Touch, and the two baseline models on image-to-music generation.
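As a concrete illustration of the IB Rank computation described above, here is a minimal NumPy sketch. The embedding inputs and the linear rank-to-score mapping (best rank receives 1.0, worst 0.0) are assumptions for illustration; the text does not pin down the exact mapping used by M2UGen.

```python
import numpy as np

def ib_rank(visual_embs, music_embs_per_model):
    """For each visual input, rank the candidate models by cosine similarity
    between the visual embedding and that model's generated-music embedding,
    then average an assumed linear rank-to-score mapping over all inputs."""
    n_models = len(music_embs_per_model)
    scores = np.zeros(n_models)
    for i, v in enumerate(visual_embs):
        sims = np.array([
            np.dot(v, m[i]) / (np.linalg.norm(v) * np.linalg.norm(m[i]))
            for m in music_embs_per_model
        ])
        order = np.argsort(-sims)  # best-aligned model first
        for rank, model_idx in enumerate(order):
            scores[model_idx] += 1.0 - rank / max(n_models - 1, 1)
    return scores / len(visual_embs)
```

A model whose music embeddings consistently point in the same direction as the visual embeddings ends up with a score near 1, matching the "high score represents a relatively high ranking" reading above.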
The results highlight significant improvements in both the quality and relevance of the music generated by our framework. Moreover, Mozart\u2019s Touch surpasses prior state-of-the-art models despite its simpler architecture. Table 5 shows the results of video-to-music generation. For this task, we observed that Mozart\u2019s Touch still outperforms the other models, indicating that our two-step captioning strategy is also highly effective. 4.5 Subjective Evaluation Although we achieve exceptional performance in the objective evaluation, we also believe that quantitative evaluation methods have great limitations for music generation tasks. The metrics above can effectively measure the quality and relevance of the generated music, but fall short in capturing creativity and human feeling, as supported by previous research [18]. Table 5: Objective comparison of models for video-to-music generation. The best results are shown in bold.
Model | FAD_vgg\u2193 | KL\u2193 | IB Rank\u2191
M2UGen | 9.047 | 1.878 | 0.552
CoDi | 5.055 | 1.195 | 0.494
Mozart\u2019s Touch | 4.339 | 1.048 | 0.787
Table 6: Subjective comparison of models for image-to-music generation. The best results are shown in bold.
Model | OVL\u2191 | REL\u2191
CoDi | 2.95 | 3.24
M2UGen | 3.77 | 3.02
Mozart\u2019s Touch | 3.74 | 3.76
Ground Truth\u2217 | 3.88 | 4.08
Table 7: Ablation study on the image-to-music generation task. The best results are shown in bold.
Model | FAD_vgg\u2193 | KL\u2193 | IB Rank\u2191
Mozart\u2019s Touch | 4.625 | 1.170 | 0.757
w/o LUBM | 3.741 | 1.121 | 0.743
Following previous similar works [13, 18], the generated samples are rated based on i) overall quality (OVL) and ii) relevance to the input image (REL). Both the OVL and REL metrics use a Likert scale [16] from one to five, where a larger number indicates better performance.
In this case, we conduct a subjective evaluation involving 125 participants, taking image-to-music generation as an example. In total, 75 questions were created for the subjective evaluation, randomly sampled from our evaluation dataset. Each question contains a video with the input image as the visual part and generated (or ground-truth) music as the audio. 20 audios are sampled from the ground truth, 20 from M2UGen, 20 from Mozart\u2019s Touch, and 15 from CoDi. Each questionnaire comprises ten randomly selected questions. Upon subsequent validation by our team, all 75 questions are covered by the 125 questionnaires in total. The subjective evaluation result is presented in Table 6. While our method slightly underperforms M2UGen on the overall-quality metric (OVL), the result shows a notable enhancement in relevance (REL) to the input image, which is consistent with our goal of generating music that aligns well with the image. MM\u201924, October 28 - November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu. 4.6 Ablation Studies To demonstrate the effectiveness of LLM-based modality bridging, we conducted a further ablation experiment, comparing the performance of the original system with and without (w/o) the LLM Understanding & Bridging Module (LUBM) on the task of image-to-music generation. As indicated in Table 7, the framework without LUBM achieves higher scores on the FAD and KL metrics; however, these two metrics measure the similarity between the ground truth and the generated audio, rather than the similarity between different modalities. On the other hand, the framework with LUBM performs better on the IB Rank metric.
This metric utilizes the ImageBind model to encode multi-modal information uniformly, thereby evaluating the similarity between the input-modality information and the generated audio, which aligns more closely with the objectives of evaluating multi-modal music generation. Therefore, we believe that there is no clear superiority or inferiority between the Mozart\u2019s Touch framework with and without LUBM. This once again emphasizes that quantitative evaluation may not always be the best approach for assessing multi-modal music generation tasks. 4.7 Case Study In this part, we conduct a case study to analyze how our LLM Understanding & Bridging Module (LUBM) mitigates the problem of heterogeneous representations among different modalities. By showcasing some representative comparative examples in Figure 2, we demonstrate that the absence of the LUBM does indeed have adverse effects on the generation results. The first example illustrates a portrait of Bach. Some keywords in the original image description disturb the generation of corresponding music, as they focus on attributes of the image rather than of the music. The second example illustrates an anime girl from the visual novel game Atri: My Dear Moments. This example shows that an insufficiency of musical attributes may also mislead the music generation, in a quite different way.
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02816v1.json b/abs_9K/test_abstract_short_2405.02816v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c0bcacead17faa2bf24fc85eebf688de32967b13 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02816v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.02816v1", + "title": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization", + "abstract": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "authors": "Hamed Zamani, Michael Bendersky", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. 
Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "main_content": "INTRODUCTION Most machine learning systems, including large generative models, are self-contained systems, with both knowledge and reasoning encoded in model parameters. However, these models do not work effectively for tasks that require knowledge grounding [46], especially in case of non-stationary data where new information is actively being produced [47, 52]. As suggested by Zamani et al. [52], this issue can be addressed when machine learning systems Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657923 are being enhanced with the capability of retrieving stored content. 
For example, in retrieval-augmented generation (RAG), as a special case of retrieval-enhanced machine learning (REML) [52], systems consume the responses provided by one or more retrieval models for the purpose of (text) generation [21, 22]. RAG models demonstrate substantial promise across various applications, including open-domain question answering [16, 21, 53], fact verification [44], dialogue systems [5, 42, 48], and personalized generation [36, 37]. Many prior studies on RAG use off-the-shelf retrieval models. For instance, Nakano et al. [25] used APIs from a commercial search engine for text generation. Glass et al. [9], on the other hand, used a term-matching retrieval model. Neural ranking models trained on human-annotated data have also been used in the literature [12, 21]. There also exist methods that only optimize the retrieval model and keep the language model parameters frozen [40]. A research direction in this area argues that optimizing retrieval models in RAG should depend on the downstream language model that consumes the retrieval results. This is also motivated by the findings presented by Salemi and Zamani [38] on evaluating retrieval quality in RAG systems. There exist solutions based on knowledge distillation [13] or end-to-end optimization based on some simplifying assumptions [35]. One of these assumptions is marginalization via top-$k$ approximation [10, 21]. In more detail, they first retrieve the top $k$ documents using off-the-shelf retrieval models, e.g., BM25 [34], and optimize retrieval models by re-scoring them, i.e., re-ranking, and feeding the documents to the downstream language model one by one, independently [21]. This is far from reality, as RAG models often consume multiple documents. This paper introduces Expected Utility Maximization for RAG, a novel framework for end-to-end RAG optimization that relaxes these simplifying assumptions.
This approach takes a utility function, which can be any arbitrary evaluation metric for the downstream generation task, such as exact match, BLEU [26], and ROUGE [23]. A major challenge in end-to-end optimization of RAG systems is that ranking and top-$k$ selection is a non-differentiable process. Hence, this prevents us from using gradient-descent-based methods for optimization. We address this issue by casting retrieval as a sampling-without-replacement process from the retrieval score distribution, which is approximated using the straight-through Gumbel-top-k approach. This stochastic approach, called Stochastic RAG, adds Gumbel noise to the unnormalized retrieval scores and uses softmax to approximate argmax [17, 18]. Stochastic RAG can be applied to any RAG application. We evaluate our models using seven datasets from a wide range of applications, ranging from open-domain question answering to fact verification to slot-filling for relation extraction as well as dialogue systems. We apply our optimization method to FiD-Light [12], which is the best-performing system on six out of these seven datasets, according to the knowledge-intensive language tasks (KILT) leaderboard as of Feb. 1, 2024.1 Our results demonstrate significant improvements on all these datasets. 2 EXPECTED UTILITY MAXIMIZATION FOR STOCHASTIC RAG Each RAG system consists of two main components: a text generation model $G_\theta$ parameterized by $\theta$ and a retrieval model $R_\phi$ parameterized by $\phi$ that retrieves documents from a large document collection $C$. The text generation model consumes the retrieval results returned by the retrieval model. End-to-end optimization of RAG systems is challenging.
This is mainly because retrieving the top $k$ documents and feeding them to the generation model is not a differentiable process [52]; thus, one cannot simply employ gradient-based optimization algorithms for end-to-end optimization of these models. In this section, we introduce stochastic expected utility maximization for end-to-end optimization of retrieval-augmented models. Let $T = \{(x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\}$ be a training set containing $n$ pairs of $x_i$ (an input text) and $y_i$ (the ground-truth output text). Let $U$ denote a utility function that takes the output generated by the RAG system $\hat{y}$ and the ground-truth output $y$ and produces a scalar value. The utility function can be any arbitrary metric, including but not limited to exact match, term-overlap F1, BLEU, and ROUGE. We assume (1) the higher the utility value, the better; (2) the utility function is bounded within the $[0, 1]$ range; and (3) $U(y, y) = 1$. We define RAG Expected Utility as follows: $$\text{RAG Expected Utility} = \frac{1}{n} \sum_{(x,y) \in T} \sum_{\hat{y} \in \mathcal{Y}} U(y, \hat{y})\, p(\hat{y} \mid x; G_\theta, R_\phi) \quad (1)$$ where $\mathcal{Y}$ is the output space, i.e., all possible output texts. In some models the output space is limited; for instance, in fact verification, the output space is often binary: the given candidate fact is either true or false. In other situations, such as free-form text generation, the output space is unlimited. To make sure that the expected utility calculation is tractable, we approximate the above equation by sampling from the unlimited space $\mathcal{Y}$.
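Equation (1) over an explicit, finite candidate set $\mathcal{Y}$ can be sketched in a few lines of Python. Here `prob` and `utility` are placeholder callables standing in for the full RAG likelihood $p(\hat{y} \mid x; G_\theta, R_\phi)$ and the utility metric $U$; both names are illustrative.

```python
def rag_expected_utility(train_pairs, candidates, prob, utility):
    """Eq. (1) over a finite candidate set Y: the mean, over training pairs
    (x, y), of sum_{y_hat in Y} U(y, y_hat) * p(y_hat | x)."""
    total = 0.0
    for x, y in train_pairs:
        total += sum(utility(y, y_hat) * prob(y_hat, x) for y_hat in candidates)
    return total / len(train_pairs)
```

With an exact-match utility and a model that puts 0.8 of its mass on the correct answer, the expected utility for that pair is simply 0.8, which matches the boundedness assumptions (1)-(3) above.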
We will explain how such samples can be obtained at the end of this section. The probability of generating any given output $\hat{y}$ in a RAG system can be modeled as: $$p(\hat{y} \mid x; G_\theta, R_\phi) = \sum_{\mathbf{d} \in \pi_k(C)} p(\hat{y}, \mathbf{d} \mid x; G_\theta, R_\phi) = \sum_{\mathbf{d} \in \pi_k(C)} p(\hat{y} \mid x, \mathbf{d}; G_\theta)\, p(\mathbf{d} \mid x; G_\theta, R_\phi) = \sum_{\mathbf{d} \in \pi_k(C)} p(\hat{y} \mid x, \mathbf{d}; G_\theta)\, p(\mathbf{d} \mid x; R_\phi) \quad (2)$$ where $\pi_k(C)$ denotes all permutations of $k$ documents selected from the retrieval collection $C$. The first step in the above equation is obtained using the law of total probability, the second step is obtained using the chain rule, and the third step follows from the fact that the probability of a result list $\mathbf{d}$ being retrieved is independent of the text generation model $G_\theta$. 1https://eval.ai/web/challenges/challenge-page/689/leaderboard. Note that considering all permutations in $\pi_k(C)$ is expensive and impractical for large collections, so we compute an approximation of this equation. We do this approximation through a stochastic process. We rewrite Equation (2) as follows: $$p(\hat{y} \mid x; G_\theta, R_\phi) = \mathbb{E}_{\mathbf{d} \sim p(\mathbf{d} \mid x; R_\phi)}\left[p(\hat{y} \mid x, \mathbf{d}; G_\theta)\right] \quad (3)$$ where $|\mathbf{d}| = k$.
Inspired by seq2seq models [43], we compute $p(\hat{y} \mid x, \mathbf{d}; G_\theta)$, the component in Equation (2), as follows: $$p(\hat{y} \mid x, \mathbf{d}; G_\theta) = \prod_{i=1}^{|\hat{y}|} p(\hat{y}_i \mid \hat{y}_{<i}, x, \mathbf{d}; G_\theta) = \exp\left(\sum_{i=1}^{|\hat{y}|} \log p(\hat{y}_i \mid \hat{y}_{<i}, x, \mathbf{d}; G_\theta)\right) \quad (4)$$ where $\hat{y}_i$ denotes the $i$th token in $\hat{y}$ and $\hat{y}_{<i}$ denotes all tokens $\hat{y}_1, \hat{y}_2, \cdots, \hat{y}_{i-1}$. The next step is to estimate $p(\mathbf{d} \mid x; R_\phi)$ in Equation (3), which represents the probability of retrieving the result list $\mathbf{d}$ in response to input $x$ using the retrieval model $R_\phi$. Most retrieval models score each query-document pair independently and then sort the documents by their relevance scores in descending order. Therefore, the probability of a document list being produced by $R_\phi$ can be modeled as a sampling-without-replacement process. In other words, assume that the retrieval model $R_\phi$ produces a retrieval score $s^\phi_{xd} \in \mathbb{R}$ for any document $d \in C$.
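The log-space trick in Equation (4) amounts to the following one-liner; the per-token conditional log-probabilities are assumed to come from the generation model.

```python
import math

def sequence_prob(token_logprobs):
    """Eq. (4): the sequence likelihood is the product of per-token
    conditional probabilities; summing log-probabilities and exponentiating
    once is the numerically stable way to compute that product."""
    return math.exp(sum(token_logprobs))
```

For long sequences one would typically stay in log space throughout and never exponentiate; the single `exp` here just mirrors the right-hand side of Equation (4).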
The sampling-without-replacement probability of a document list is then computed as: $$p(\mathbf{d} \mid x; R_\phi) = \prod_{i=1}^{|\mathbf{d}|} \frac{p(d_i \mid x; R_\phi)}{1 - \sum_{j=1}^{i-1} p(d_j \mid x; R_\phi)} \quad (5)$$ where the document-level probabilities $p(d_i \mid x; R_\phi)$ can be computed using the softmax operation: $$p(d_i \mid x; R_\phi) = \frac{\exp(s^\phi_{x d_i})}{\sum_{d \in C} \exp(s^\phi_{x d})} \quad (6)$$ This iterative document-sampling process is non-differentiable and thus cannot be used directly in gradient-descent-based optimization approaches. To address both of these problems, Kool et al. [17, 18] recently introduced Ancestral Gumbel-Top-$k$ sampling. This approach creates a tree over all items in the sampling set and extends the Gumbel-Softmax sampling approach [24] to sampling without replacement. According to [17], independently perturbing each individual document score with Gumbel noise and picking the top $k$ documents with the largest perturbed values generates a valid sample from the Plackett-Luce distribution. Gumbel perturbation itself can be done efficiently by simply drawing a sample $U \sim \text{Uniform}(0, 1)$, as $\text{Gumbel}(0, \beta) \sim -\beta \log(-\log(U))$ [24].
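Equations (5) and (6) can be checked with a small pure-Python sketch; a useful sanity check is that the probabilities of all possible ranked lists of a fixed length sum to one.

```python
import math

def softmax(scores):
    # Eq. (6): document-level probabilities from unnormalized retrieval scores.
    z = [math.exp(s) for s in scores]
    total = sum(z)
    return [v / total for v in z]

def list_prob(scores, ranking):
    """Eq. (5): sampling-without-replacement (Plackett-Luce) probability of
    retrieving `ranking`, a sequence of document indices, best first."""
    p = softmax(scores)
    prob, used_mass = 1.0, 0.0
    for i in ranking:
        prob *= p[i] / (1.0 - used_mass)  # renormalize over remaining docs
        used_mass += p[i]
    return prob
```

With three equally scored documents, any particular ordered pair has probability (1/3) * (1/2) = 1/6, as the renormalization in Equation (5) dictates.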
$$\tilde{p}(d_i \mid \phi, \theta) = \frac{\exp(s^\phi_{x d_i} + G_{d_i})}{\sum_{d \in C} \exp(s^\phi_{x d} + G_d)} \quad (7)$$ where $G_d$ denotes the Gumbel noise added when scoring document $d$. We use straight-through Gumbel-top-$k$, in which the top $k$ elements are selected from the above distribution using the $\arg\max$ operation in the forward pass, while the softmax distribution is used in the backward pass for computing the gradients. For more information on straight-through Gumbel-Softmax, refer to [14, 28]. Gumbel-top-$k$ has been used in IR systems too. For instance, Zamani et al. [51] used the Gumbel-top-$k$ trick to optimize re-ranking models conditioned on first-stage retrieval models. Selecting $\mathcal{Y}$. In Equation (1), $\mathcal{Y}$ denotes the output space, which can be unlimited for free-form text generation tasks and hence computationally intractable. In such cases, we need to estimate RAG Expected Utility by sampling from the output space. A uniformly random sample would give an unbiased estimate; however, most random samples are completely unrelated to the input, so they can be easily discriminated from the ground-truth output. Inspired by work on hard negative sampling for training ranking models [31, 49], every $N = 10{,}000$ training steps we run the RAG model being trained on the training inputs that will be used in the next $N$ steps and use beam search to return the 100 most probable outputs. We randomly sample $m = 10$ of these outputs to form $\mathcal{Y}$.
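The forward pass of straight-through Gumbel-top-k can be sketched in NumPy as follows. The scores are illustrative; in training, the backward pass would use the softmax of Equation (7) in place of the hard top-k, which is the part this sketch omits.

```python
import numpy as np

def gumbel_top_k(scores, k, rng):
    """Forward pass of straight-through Gumbel-top-k: perturb each
    unnormalized retrieval score with independent Gumbel(0, 1) noise,
    G = -log(-log(U)) with U ~ Uniform(0, 1), and keep the indices of the
    k largest perturbed scores. The result is a valid sample (without
    replacement) from the Plackett-Luce distribution over documents."""
    u = rng.uniform(low=1e-12, high=1.0, size=len(scores))
    gumbel = -np.log(-np.log(u))
    perturbed = np.asarray(scores, dtype=float) + gumbel
    return np.argsort(-perturbed)[:k]
```

Documents with much higher scores dominate the sample: with scores [5, 0, -5, 1], the first document appears in almost every sampled top-2 list.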
We then make sure that for every pair $(x, y)$ in the training set for the next $N$ steps, $y$ is included in $\mathcal{Y}$; otherwise, we randomly replace one of the sampled outputs in $\mathcal{Y}$ with $y$. The reason for doing this is to make sure that our sample contains the ground-truth output, ensuring that the model learns to produce a higher probability for the ground-truth output. Preparing $\mathcal{Y}$ for the next $N$ training steps also enables us to pre-compute the utility values $U(y, \hat{y})$ for all $\hat{y} \in \mathcal{Y}$, ensuring an efficient optimization process for RAG Expected Utility Maximization (see Equation (1)). 3 EXPERIMENTS 3.1 Data We use the Natural Questions (NQ) [19], TriviaQA [15], HotpotQA [50], FEVER [45], T-REx [7], zsRE [20], and Wizard of Wikipedia (WoW) [6] datasets from the KILT [29] benchmark. Due to the unavailability of ground-truth labels for the test sets, our experiments are conducted on the publicly accessible validation sets. As the retrieval corpus, we employ the Wikipedia dump provided with the KILT benchmark2 and adhere to the preprocessing steps outlined by Karpukhin et al. [16], where each document is segmented into passages, each constrained to a maximum length of 100 words. The concatenation of the article title and the passage text is used as a document. Note that the KILT benchmark furnishes document-level relevance labels (called Provenance) for its datasets, and these are employed for evaluating retrieval performance. In line with the preprocessing method outlined in this paper, we define all passages within a positive document as positive passages for our evaluation. For evaluating our models, we follow the standard KILT evaluation setup [29] by focusing on KILT-score metrics.
KILT-scores combine the R-Precision ($RP$) obtained by the retrieval results and the quality of the generated output text, which is evaluated using any arbitrary metric $M$ (such as EM, Accuracy, or F1). For a query set $Q$, KILT-scores are computed as follows: $$\text{KILT-M} = \frac{1}{|Q|} \sum_{q \in Q} \mathbb{1}\{RP(\mathbf{p}, \mathbf{d}) = 1\} \cdot M(y, \hat{y}) \quad (8)$$ where $\mathbf{d}$ is the retrieval result list produced by the retrieval model, $\mathbf{p}$ is the provenance label set provided by KILT, $y$ is the ground-truth output, and $\hat{y}$ is the generated text. Note that there is only one provenance label per query in most KILT datasets; FEVER and HotpotQA are the only exceptions. In FEVER, 12% of queries are associated with more than one supporting document, and all queries in HotpotQA (which focuses on multi-hop question answering) are associated with two documents. KILT-scores only evaluate the generated text if R-Precision is 1. This means that they do not solely focus on the quality of the generated text, but also make sure that relevant supporting documents are provided. We adopt the metrics recommended by the KILT benchmark, namely Exact Match (KILT-EM) for NQ, TriviaQA, and HotpotQA, Accuracy (KILT-AC) for FEVER, and F1-score (KILT-F1) for the WoW dataset. 2Retrieval corpus: https://dl.fbaipublicfiles.com/ur/wikipedia_split/psgs_w100.tsv.gz 3.2 Experimental Setup We apply the proposed optimization framework to a state-of-the-art RAG model on the KILT benchmark (i.e., FiD-Light, according to the KILT leaderboard) [29]. Therefore, we follow the experimental setup of Hofst\u00e4tter et al. [12] for FiD-Light. That means we use the multi-task relevance-sampled training set from the authors\u2019 earlier work [11] and train a dense retrieval model that is pre-trained on the MS MARCO passage retrieval data [2].
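The R-Precision gating in Equation (8) above can be sketched in a few lines; the `(rp, y_true, y_pred)` triples and the metric callable are illustrative stand-ins for per-query retrieval and generation results.

```python
def kilt_score(examples, metric):
    """Eq. (8): average the downstream metric over all queries, but count a
    query's generation quality only when its retrieval achieves an
    R-Precision of 1. Each example is a (rp, y_true, y_pred) triple."""
    gated = [metric(y, y_hat) if rp == 1 else 0.0 for rp, y, y_hat in examples]
    return sum(gated) / len(gated)
```

Note that a query with perfect generation but failed retrieval contributes zero, which is exactly the "makes sure that relevant supporting documents are provided" behavior described above.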
Given that the datasets in our experiments focus on relatively short-text generation tasks, and since all passages are less than or equal to 100 tokens, we set the input token limit for the query and passage combined at 384 tokens and for the output at 64 tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of $10^{-3}$ with the Adafactor optimizer [39]. We train our models for 50,000 steps. We cut the learning rate by half for the large language models (i.e., T5-XL). During decoding, we use beam search with a beam size of 4. All our experiments are based on the T5X framework [33] on TPUs, using T5 v1.1 as the language-model backbone [32]. For each dataset, we use the official KILT-score metric as the utility function for optimization (Equation (1)). 3.3 Results To evaluate the effectiveness of the RAG Expected Utility Maximization framework, we compare our model with the best-performing entries in the KILT leaderboard (as of February 1, 2024) according to the official KILT-score metrics. These methods use a wide range of techniques, including dense retrieval methods followed by BART or T5 for generation, generative retrieval models, retrieval-and-reranking models, pre-trained large language models without augmentation, etc. These methods and their corresponding references are listed in Table 1. For the sake of space, we do not detail their underlying methods here. The performance of these methods is obtained from the KILT leaderboard. We use FiD-Light as the main baseline in this paper, as it produces state-of-the-art results on six out of seven datasets and the proposed optimization method is applied to FiD-Light. FiD-Light is a simple extension of the Fusion-in-Decoder architecture that generates the document identifiers of relevant documents in addition to the output text and uses them at inference time for re-ranking the input result list.
According to the results presented in Table 1, employing stochastic expected utility maximization leads to improvements on all datasets. Table 1: Comparing our models with top-performing entries in the KILT leaderboard according to KILT-scores, as of February 1, 2024. The results are reported on the blind KILT test sets.
Model | NQ (KILT-EM) | HotpotQA (KILT-EM) | TriviaQA (KILT-EM) | FEVER (KILT-AC) | T-REx (KILT-AC) | zsRE (KILT-AC) | WoW (KILT-F1)
RAG [21] | 32.7 | 3.2 | 38.1 | 53.5 | 23.1 | 36.8 | 8.8
DPR + FiD [30] | 35.3 | 11.7 | 45.6 | 65.7 | 64.6 | 67.2 | 7.6
KGI [8] | 36.4 | \u2013 | 42.9 | 64.4 | 69.1 | 72.3 | 11.8
Re2G [10] | 43.6 | \u2013 | 57.9 | 78.5 | 75.8 | \u2013 | 12.9
Hindsight [27] | \u2013 | \u2013 | \u2013 | \u2013 | \u2013 | \u2013 | 13.4
SEAL + FiD [4] | 38.8 | 18.1 | 50.6 | 71.3 | 60.1 | 73.2 | 11.6
Re3val [41] | 39.5 | 24.2 | 51.3 | 73.0 | \u2013 | \u2013 | 13.5
GripRank [1] | 43.6 | \u2013 | 58.1 | \u2013 | \u2013 | 79.9 | 14.7
PLATO [3] | \u2013 | \u2013 | \u2013 | \u2013 | \u2013 | \u2013 | 13.6
FiD-Light (T5-Base, $k = 64$) | 45.6 | 25.6 | 57.6 | 80.6 | 76.0 | 81.1 | 11.9
FiD-Light (T5-XL, $k = 8$) | 51.1 | 29.2 | 63.7 | 84.5 | 76.3 | 84.0 | 13.1
Stochastic RAG with FiD-Light (T5-Base, $k = 64$) | 46.2 | 27.3 | 59.7 | 81.3 | 76.9 | 82.8 | 12.8
Stochastic RAG with FiD-Light (T5-XL, $k = 8$) | 53.0 | 31.1 | 64.7 | 84.8 | 78.3 | 87.0 | 14.2
Figure 1: Sensitivity of Stochastic RAG with FiD-Light XL to the number of samples for estimating Equation (3). Comparing against state-of-the-art baselines from the KILT leaderboard, our approach presents the best-performing result on all datasets except Wizard of Wikipedia, where only one method, named GripRank, performs slightly better than our best-performing system. Note that on another dataset (i.e., zsRE), our methods outperform GripRank by a large margin. The last two rows in Table 1 present the results for the same model with different sizes for the downstream language model.
T5-Base contains 220 million parameters, while T5-XL is a language model with 3 billion parameters. We observe that both model sizes benefit from applying stochastic expected utility maximization. As expected, the larger model exhibits better performance. That said, the performance difference between the Base and XL models is not consistent across datasets. For instance, we observe substantial relative improvements on Natural Questions (i.e., 14.5%), while improvements on T-REx are smaller (i.e., 1.8%). To provide a deeper analysis of the performance of Stochastic RAG, we vary the number of samples we take for estimating Equation (3). For the sake of visualization, we only present the results for one QA, one fact verification, and one slot-filling dataset in Figure 1. We observe that the model is robust with respect to the number of samples. That said, we sometimes observe slight improvements as we increase the sample size (e.g., on TriviaQA). 4" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02844v1.json b/abs_9K/test_abstract_short_2405.02844v1.json new file mode 100644 index 0000000000000000000000000000000000000000..44f6da868789c1c742512914b0172d6be70ce4fe --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02844v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.02844v1", + "title": "SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion", + "abstract": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. 
However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style. To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply the Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose the Diffusion-based Content Consistency Loss and\nDiffusion-based Style Consistency Loss to assist the overall framework's\ntraining. Finally, we conduct extensive experiments. The results reveal that\nour method surpasses state-of-the-art methods in both qualitative and\nquantitative comparisons, capable of generating more realistic motion\nsequences.", + "authors": "Ziyun Qian, Zeyu Xiao, Zhenyi Wu, Dingkang Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Dongliang Kou, Lihua Zhang", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style.
To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply the Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose the Diffusion-based Content Consistency Loss and\nDiffusion-based Style Consistency Loss to assist the overall framework's\ntraining. Finally, we conduct extensive experiments. The results reveal that\nour method surpasses state-of-the-art methods in both qualitative and\nquantitative comparisons, capable of generating more realistic motion\nsequences.", + "main_content": "INTRODUCTION Motion style transfer is a significant research direction in multimedia applications. The objective is to transpose the style from the style reference onto the content motion while conserving the motion content. As such, the generated motion can possess features from both the content and style motion, thus enabling swift switching between different styles for a digital humanoid\u2019s identical motion, as depicted in Figure 1. Employing this technology can dramatically enrich and heighten the realism of digital human motion. It is being broadly adopted in various multimedia contexts such as movies, games, the Metaverse, and so on. Traditional methods for motion style transfer [1, 12, 25] mainly adopt a generation framework based on GAN [7]. However, GAN training is known to suffer from instability and convergence issues, leading to difficulties in generating high-fidelity, natural motion sequences. arXiv:2405.02844v1 [cs.CV] 5 May 2024 \fPreprint, 2024, Conference Paper Ziyun Qian, et al On the contrary, the diffusion framework process during training tends to be more stable and is typically easier to converge.
Therefore, to address the aforementioned problems, we adopt the diffusion model as our generative framework and consider style motion sequences as a diffusion condition for the first time. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) framework. This framework is capable of learning motion detail features and style variations more comprehensively, generating motions with both content and style motion characteristics, thereby achieving more realistic and natural motion style transfer. However, after proposing the SMCD framework, we discovered that it failed to effectively extract the temporal information of the motion sequences, leading to the generation of disordered motion. To address this problem, we draw inspiration from the Mamba [8] model and propose the Motion Style Mamba (MSM) module. The MSM module effectively captures sequence temporal information using the Selection Mechanism, preserving long-term temporal dependencies within a motion sequence. We are the first researchers to introduce the Mamba [8] model to the field of motion style transfer. Additionally, since we propose a new framework for motion style transfer, suitable loss functions to aid in training are currently lacking. In light of this, we specially design the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss, tailoring them to the characteristics of our proposed SMCD framework. These loss functions are used to constrain the content and style of the generated motions and achieve better results. In the experiment section, we carry out extensive comparative tests against other methods. Visual effects and quantitative indicators show that the motions generated by the proposed SMCD framework possess higher naturalness and realism. Furthermore, it maintains the original motion style while generating various motions, such as walking, running, and jumping.
In summary, the main contributions of this paper are as follows:
\u2022 We propose a new motion style transfer framework, SMCD, for the first time, considering style motion sequences as conditions for diffusion to generate motions.
\u2022 We are the first to utilize the Mamba model [8] in the field of motion style transfer, and we propose the MSM module. This module is designed to better extract the temporal information of motion sequences, thereby maintaining long-term dependencies in the time sequence of motion sequences.
\u2022 Due to the lack of loss functions that fully adapt to our SMCD framework, we propose the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss for the first time to assist in training, enabling the model to achieve improved results.
\u2022 We conduct extensive experiments to evaluate our framework. The results indicate that our proposed SMCD framework surpasses state-of-the-art methods in terms of visual effects and quantitative indicators.
2 RELATED WORKS
Motion Style Transfer. Motion style transfer is a significant research area in multimedia applications. Early methods [3, 29] utilize handcrafted feature extraction to design different motion styles. These approaches, however, are inefficient and incapable of quickly generating large-scale stylized motions. Later, some methods [18, 34] attempt to employ machine learning for motion style transfer. However, these methods typically require a paired dataset for training, meaning they need a human avatar to perform the same motion in different styles, such as running in both a happy and a sad state, with nearly identical steps. Such an intricate process limited the creation of large-scale paired motion datasets. In recent years, certain methods [1, 4, 12, 25] borrow techniques from image style transfer, utilizing deep learning structures for digital human motion style transfer.
These methods do not require paired training datasets and achieve sound motion style transfer effects. However, most adopt a Generative Adversarial Network (GAN) [7] based generation framework. GAN [7] training is known to suffer from instability and convergence issues, which results in difficulties in generating realistic, high-fidelity motion sequences. To resolve these problems, we propose a diffusion-based motion style transfer framework. Furthermore, we are the first to consider style motion as a condition within diffusion, allowing more comprehensive learning of content and style features within a motion sequence. This results in a more realistic, more natural motion style transfer. Diffusion Generative Models. Diffusion consists of both a forward process and a reverse process, forming a Markovian architecture that reverses predetermined noise using neural networks and learns the underlying distribution of the data. Researchers highly favor the diffusion model for its excellent performance in various research areas, such as image generation [22, 24, 30], video generation [9], reinforcement learning [13], 3D shape generation [45], and more, benefiting from the advances in learning-based technologies [35\u201341]. Compared to GANs [7] and VAEs [15], the diffusion model exhibits promising quality not only in image tasks but also in motion generation. The work [43] is the first text-based motion diffusion model that achieves body part-level control using fine-grained instructions. Tevet et al. [26] introduce a motion diffusion model, operating on raw motion data, and learn the relationship between motion and input conditions. The method [44] presents a retrieval-augmented motion diffusion model, leveraging additional knowledge from retrieved samples for motion synthesis.
The research [33], in contrast to traditional diffusion models, devises a spatial-temporal transformer-based architecture as the core decoder, diverging from the conventional U-Net backbone, to introduce diffusion into human motion prediction. Kim et al. [14] combine improved DDPM [19] and classifier-free guidance [11], integrating diffusion-based generative models into the motion domain. The method [28] utilizes a Transformer-based diffusion model, coupled with the Jukebox, to provide motion generation and editing suitable for dance. The effort [5] employs a 1D U-Net with cross-modal transformers to learn a denoising function, synthesizing long-duration motions based on contextual information such as music and text. Flaborea et al. [6] focus on the multimodal generation capability of diffusion models and the improved mode-coverage capabilities of diffusive techniques, applying them to detect video anomalies. However, among the numerous diffusion-based frameworks, no work currently incorporates style motion as a condition and applies it to motion style transfer.
3 METHODOLOGY
Pose Representation. We categorize the motion sequences input into the Style Motion Conditioned Diffusion (SMCD) framework into two types based on function. The first type, the content motion sequence $\mathbf{m}_c \in \mathbb{R}^{4J \times N}$, has $N$ poses, and each pose $\mathbf{m}_{c_i}$ has $4J$ dimensions, i.e., $\mathbf{m}_c = \{\mathbf{m}_{c_i}\}_{i=1}^{N}$. Similarly, the second type, the style motion sequence $\mathbf{n}_s \in \mathbb{R}^{3J \times T}$, has $T$ poses, and each pose $\mathbf{n}_{s_i}$ has $3J$ dimensions, i.e., $\mathbf{n}_s = \{\mathbf{n}_{s_i}\}_{i=1}^{T}$. The content motion sequence $\mathbf{m}_c$ is represented using joint rotations, with a source style $c \in S$.
In contrast, the style of the style motion sequence $\mathbf{n}_s$ can be inferred from the relative motion of the joints; it is hence represented with $3J$-dimensional poses, with a target style $s \in S$. Here, $S$ denotes the collection of all styles, and $J = 21$ signifies the number of joints in the human skeleton. The objective of the SMCD framework is to generate a motion sequence that simultaneously possesses the content characteristics of $\mathbf{m}_c$ and the style features of $\mathbf{n}_s$, hence achieving motion style transfer.
3.1 Style Motion Conditioned Diffusion Framework
A majority of current motion style transfer methodologies [2, 12, 25] predominantly adopt a generative framework based on GAN [7]. However, during training, GAN is prone to instability and convergence issues, often resulting in disorganized, chaotic motion sequences that struggle to embody a realistic, highly natural motion style. On the contrary, the diffusion framework tends to be more stable during training and is typically easier to converge. Therefore, to address the highlighted problems, we adopt a diffusion model as our generative framework. To ensure that the diffusion framework can learn the details of motion characteristics and style variations more comprehensively, we innovatively consider the style motion sequence $\mathbf{n}_s$ as the condition $C \in \mathbb{R}^{d \times N}$ for diffusion. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) framework, achieving a more realistic and high-fidelity motion style transfer. We utilize the definition of diffusion delineated in DDPM [10], considering the forward diffusion process as a Markov noising process. By continually adding Gaussian noise to the motion sequence $\mathbf{m}_0 \in \mathbb{R}^{d \times N}$, we perturb the motion sequence, obtaining $\{\mathbf{m}_t\}_{t=0}^{T}$, i.e., the full motion sequence at noising step $t$, where $\mathbf{m}_0$ is drawn from the data distribution.
This forward noising process can be defined as follows:
$$q(\mathbf{m}_t \mid \mathbf{m}_0) \sim \mathcal{N}\left(\sqrt{\bar{\alpha}_t}\,\mathbf{m}_0,\ (1-\bar{\alpha}_t)\,\mathbf{I}\right), \quad (1)$$
where $\bar{\alpha}_t \in (0, 1)$ are monotonically decreasing constants; when $\bar{\alpha}_t$ approaches 0, we can approximate $\mathbf{m}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. We set the number of timesteps to $T = 1000$.
3.2 Motion Style Mamba Architecture
Upon introducing the SMCD framework, we observed that it performed suboptimally in extracting temporal information from motion sequences, resulting in somewhat chaotic outcomes. Drawing inspiration from the Mamba model proposed by Gu et al. [8], we propose the Motion Style Mamba (MSM) module to address this issue. This module employs a Selection Mechanism to more effectively capture the temporal dynamics of motion sequences, thereby preserving the long-term dependencies within the sequence and enhancing the efficacy of motion style transfer. To the best of our knowledge, we are the first to introduce the Mamba model to motion style transfer. The Motion Style Mamba (MSM) module primarily embeds independent temporal information into motion sequences. Before the motion sequences are input into the MSM module, the input motion sequences and timesteps are processed as follows:
$$\mathrm{Seq}_T = PE\left(\mathrm{concat}\left(\mathrm{MLP}(T),\ \mathrm{Linear}(\mathbf{n}_s),\ \mathrm{Linear}(\mathbf{m}_c)\right)\right), \quad (2)$$
where the timestep $T$ undergoes a projection through a multi-layer perceptron (MLP) comprising two linear layers succeeded by an activation layer, thereby mapping it into a continuous vector space. This process forms a latent vector that is amenable to manipulation by the Motion Style Mamba (MSM) module.
$\mathbf{n}_s \in \mathbb{R}^{3J \times T}$ denotes the style motion sequence, and $\mathbf{m}_c \in \mathbb{R}^{4J \times N}$ denotes the content motion sequence. Once processed through a linear layer, the two components are concatenated to form an augmented motion sequence. After positional encoding, this sequence is transformed into $\mathrm{Seq}_T$, which serves as the input to the Motion Style Mamba (MSM) module. Within the MSM module, the Mamba Block [8] plays the pivotal role of mapping temporal information, via the timestep $T$, onto both the content motion sequence and the style motion sequence while modulating the significance of the temporal information. Inside the Mamba Block, $\mathrm{Seq}_T$ initially passes through a residual structure equipped with an InstanceNorm (IN) layer, followed by feature extraction via Causal Conv1D [31]. The Causal Conv1D ensures that each output value depends solely on its preceding input values. Moreover, the Selective Scan constitutes the core component of the Mamba Block, enabling the model to selectively update its internal state based on the current characteristics of the input data. This further refines the focus on temporal information, facilitating the capture of temporal dependencies within the motion sequence. Utilizing the Selective Scan allows for a high degree of temporal alignment between the content motion and the style motion, thereby circumventing the rigidity that may arise from asymmetrical motion sequences in the final output.
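As a concrete illustration of the input processing in Equation (2), the NumPy sketch below builds Seq_T from a timestep, a style motion, and a content motion. The random weight matrices, the model width d, and the toy positional encoding are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
J, N, d = 21, 60, 256          # joints, frames, model width (d is an assumption)

# Illustrative stand-ins for the learned layers in Eq. (2)
W_mlp1, W_mlp2 = rng.normal(size=(1, d)), rng.normal(size=(d, d))
W_style = rng.normal(size=(3 * J, d))    # Linear(.) for the style motion n_s
W_content = rng.normal(size=(4 * J, d))  # Linear(.) for the content motion m_c

def embed_inputs(t, n_s, m_c):
    """Seq_T = PE(concat(MLP(t), Linear(n_s), Linear(m_c))) -- Eq. (2)."""
    # two linear layers with an activation map the timestep to a latent token
    tok_t = np.maximum(t @ W_mlp1, 0.0) @ W_mlp2            # (1, d)
    seq = np.concatenate([tok_t, n_s @ W_style, m_c @ W_content], axis=0)
    pos = np.arange(len(seq))[:, None] / len(seq)           # toy positional encoding
    return seq + pos

seq_T = embed_inputs(np.array([[0.5]]),
                     rng.normal(size=(N, 3 * J)),   # style poses (3J-dim)
                     rng.normal(size=(N, 4 * J)))   # content poses (4J-dim)
print(seq_T.shape)  # (121, 256): 1 timestep token + 60 style + 60 content frames
```

The key design point carried over from the text is that the timestep, style, and content streams are first projected into a shared width and only then concatenated along the frame axis.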
The structure of the Mamba Block can be delineated by the following formulas:
$$\mathrm{Mamba}_s^0 = \mathrm{LN}(\mathrm{Seq}_T), \quad (3)$$
$$\mathrm{Mamba}_s^i = \mathrm{LN}\left(\mathrm{IN}\left(\mathrm{Mamba}_s^{i-1}\right)\right) + \mathrm{IN}\left(\Phi\left(\mu\left(\mathrm{IN}\left(\mathrm{Mamba}_s^{i-1}\right)\right)\right)\right), \quad (4)$$
$$\mathrm{Mamba}_{\mathrm{res}} = \mathrm{LN}(\mathrm{Seq}_T) + \mathrm{Mamba}_s^N, \quad (5)$$
where LN is the linear layer, IN is an Instance Normalization layer, $\Phi$ is the Selective Scan module, $\mu$ denotes the Causal Conv1D layer [31], and $\mathrm{Mamba}_s^i$ denotes the Mamba Block at the $i$-th iteration of the cyclic process. In particular, $\mathrm{Mamba}_s^0$ denotes the input presented to the Mamba Block, and $\mathrm{Mamba}_{\mathrm{res}}$ represents the output of the residual network that incorporates the Mamba Block as a constituent element.
Figure 2: (Left) Overview of the Style Motion Conditioned Diffusion (SMCD) framework. The model inputs a content motion sequence $\mathbf{m}_c$ with $N$ poses at a noising step $t$, as well as $t$ itself, and a style motion sequence $\mathbf{n}_s$ considered as condition $C$. The Motion Style Mamba (MSM) module predicts the stylized motion $\hat{\mathbf{m}}_0$ in each sampling step. (Right) Sampling with the MSM. Given $\mathbf{n}_s$ as condition $C$, we sample random noise $\mathbf{m}_T$ at the dimensions of the desired motion, then iterate from $T = 1000$ to 1. In each step $t$, the MSM predicts the stylized motion $\hat{\mathbf{m}}_0$ and diffuses it back to $\mathbf{m}_{t-1}$.
After the Mamba Block consolidates the temporal information and motion sequences, the result is fed into a Multi-Head Attention (MHA) mechanism, followed by a residual network augmented with a position-wise Feed-Forward Network, which enhances the efficacy of the style transfer process:
$$\sigma = \mathrm{IN}\left(\mathrm{LN}\left(\mathrm{Mamba}_{\mathrm{res}}\right)\right) + \mathrm{MHA}\left(\mathrm{LN}\left(\mathrm{Mamba}_{\mathrm{res}}\right)\right), \quad (6)$$
where $\sigma$ refers to the output of the residual network that incorporates the MHA.
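The residual structure of Equations (3) to (5) can be sketched in a few lines of NumPy. The normalization, the causal convolution, and the gated scan below are toy stand-ins for the learned LN/IN layers, Causal Conv1D, and Selective Scan; the MHA path of Equation (6) is omitted. This is a shape-level illustration under those assumptions, not the authors' module.

```python
import numpy as np

def norm(x):
    # stand-in for the learned LN / IN layers in Eqs. (3)-(5)
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-5)

def causal_conv1d(x, k=4):
    # mu(.): each output frame depends only on the current and k-1 past frames
    pad = np.concatenate([np.zeros((k - 1, x.shape[1])), x], axis=0)
    return np.stack([pad[i:i + k].mean(axis=0) for i in range(len(x))])

def selective_scan(x):
    # Phi(.): toy input-gated state update standing in for Mamba's selective scan
    gate = 1.0 / (1.0 + np.exp(-x))
    state, out = np.zeros(x.shape[1]), np.empty_like(x)
    for i in range(len(x)):
        state = (1.0 - gate[i]) * state + gate[i] * x[i]  # selective state update
        out[i] = state
    return out

def mamba_res(seq_T, n_iters=2):
    h = norm(seq_T)                                               # Eq. (3)
    for _ in range(n_iters):                                      # Eq. (4), iterated
        h = norm(norm(h)) + norm(selective_scan(causal_conv1d(norm(h))))
    return norm(seq_T) + h                                        # Eq. (5)

out = mamba_res(np.random.default_rng(1).normal(size=(121, 32)))
print(out.shape)  # (121, 32): the block is shape-preserving
```

The point of the sketch is the double residual path: the scan branch refines a normalized copy of the sequence, while the skip from Seq_T in Equation (5) keeps the original temporal layout available to the downstream attention stage.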
The ultimate output of the MSM module, $M_{\mathrm{MSM}}$, can be articulated through the following equation:
$$M_{\mathrm{MSM}} = \mathrm{FFN}(\sigma) + \mathrm{IN}(\sigma), \quad (7)$$
where FFN denotes the position-wise Feed-Forward Network.
Figure 3: Architecture of the Motion Style Mamba (MSM) Module.
3.3 Training Objectives
Our objective is to synthesize a motion sequence of length $N$ that embodies both the characteristics of the content motion and the style motion under the given condition $C$, the style motion sequence $\mathbf{n}_s \in \mathbb{R}^{3J \times T}$. We model the distribution $p(\mathbf{m}_0 \mid C)$ as the reversed diffusion process of iteratively cleaning $\mathbf{m}_T$. To better handle lengthy motion sequences and enhance computational efficiency, we propose the Motion Style Mamba (MSM) module. After the noise $\mathbf{m}_t$, the noising step $t$, and the motion condition $C$ are fed into the MSM module, we can directly predict the original motion sequence $\hat{\mathbf{m}}_0$, i.e., $\hat{\mathbf{m}}_0 = \mathrm{MSM}(\mathbf{m}_t, t, C) = \mathrm{MSM}(\mathbf{m}_t, t, \mathbf{n}_s)$, without having to predict the noise $\epsilon_t$ as in [10] (see Figure 2, right).
Furthermore, we introduce the simple loss proposed by Ho et al. [10] to encourage the predicted motion sequence $\hat{\mathbf{m}}_0$ to be as consistent as possible with the original motion sequence $\mathbf{m}_0$:
$$\mathcal{L}_{\mathrm{simple}} = \mathbb{E}_{\mathbf{m}_0,\, t \sim [1,T]}\left[\left\|\mathbf{m}_0 - \mathrm{MSM}(\mathbf{m}_t, t, \mathbf{n}_s)\right\|_2^2\right]. \quad (8)$$
Additionally, in light of the unique characteristics of the style motion conditioned diffusion framework proposed in this paper, we specially design the Diffusion-based Content Consistency Loss (Eq. 9) and the Diffusion-based Style Consistency Loss (Eq. 10).
Diffusion-based Content Consistency Loss. When the input content motion sequence $\mathbf{m}_c$ and style motion sequence $\mathbf{n}_s$ share the same style ($c = s$), it would be ideal for the generated motion to closely resemble the content motion $\mathbf{m}_c$, regardless of the content of the style motion $\mathbf{n}_s$. Since loss functions that fully adapt to our SMCD framework are lacking, and taking the above observation into account, we propose the Diffusion-based Content Consistency Loss under the style motion conditioned diffusion framework for the first time, aiming to constrain the motion content. In each iteration, two motion sequences with the same content are randomly selected from the dataset $M$ to serve as the style motion and the content motion, respectively. Subsequently, the Diffusion-based Content Consistency Loss is computed using the following formula:
$$\mathcal{L}_{\mathrm{dcc}} = \mathbb{E}_{\mathbf{m}_c, \mathbf{n}_c \sim M}\left\|\mathrm{MSM}(\mathbf{m}_c, t, \mathbf{n}_c) - \mathbf{m}_c\right\|_1. \quad (9)$$
Two fundamental differences exist between our loss function and the Content Consistency Loss proposed by Aberman et al. [2]: (1) our loss function is diffusion-based, and the timestep $t$ can control the forward noising process of the motion; (2) the style motion in our loss function acts as a condition for diffusion, aligning more closely with the overall framework of this paper.
Diffusion-based Style Consistency Loss. Following the same line of thinking as the Diffusion-based Content Consistency Loss, we also propose the Diffusion-based Style Consistency Loss for the first time. In each iteration, we randomly select two motion sequences with the same style from the dataset $M$ as the style motion and the content motion, respectively.
The generated motion should be closer to the style motion $\mathbf{n}_s$. We calculate the Diffusion-based Style Consistency Loss using the following formula:
$$\mathcal{L}_{\mathrm{dsc}} = \mathbb{E}_{\mathbf{n}_c, \mathbf{n}_s \sim M}\left\|\mathrm{MSM}(\mathbf{n}_c, t, \mathbf{n}_s) - \mathbf{n}_s\right\|_1. \quad (10)$$
Geometric losses. Geometric losses are frequently adopted in motion generation [20, 23, 27, 28] to enhance the physical realism of the motion, prompting the model to generate more naturally coherent motions. We employ three expected geometric losses, which control (1) positions, (2) foot contact, and (3) velocities:
$$\mathcal{L}_{\mathrm{pos}} = \frac{1}{N}\sum_{i=1}^{N}\left\|FK\left(\mathbf{m}_0^i\right) - FK\left(\hat{\mathbf{m}}_0^i\right)\right\|_2^2, \quad (11)$$
$$\mathcal{L}_{\mathrm{foot}} = \frac{1}{N-1}\sum_{i=1}^{N-1}\left\|\left(FK\left(\hat{\mathbf{m}}_0^{i+1}\right) - FK\left(\hat{\mathbf{m}}_0^i\right)\right)\cdot f_i\right\|_2^2, \quad (12)$$
$$\mathcal{L}_{\mathrm{vel}} = \frac{1}{N-1}\sum_{i=1}^{N-1}\left\|\left(\mathbf{m}_0^{i+1} - \mathbf{m}_0^i\right) - \left(\hat{\mathbf{m}}_0^{i+1} - \hat{\mathbf{m}}_0^i\right)\right\|_2^2, \quad (13)$$
where $FK(\cdot)$ is the forward kinematic function that converts joint angles into joint positions, and the superscript $i$ denotes the motion frame index. $f_i \in \{0, 1\}^J$ is the binary foot contact mask for each frame $i$, indicating whether the foot is in contact with the ground. It is set according to the binary ground-truth data and mitigates foot sliding by offsetting the velocity when contact occurs. Our total training loss is a combination of the above six losses:
$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{simple}} + \mathcal{L}_{\mathrm{dcc}} + \mathcal{L}_{\mathrm{dsc}} + \mathcal{L}_{\mathrm{pos}} + \mathcal{L}_{\mathrm{vel}} + \mathcal{L}_{\mathrm{foot}}. \quad (14)$$
4 EXPERIMENT
In this section, we conduct extensive experiments comparing the method presented in this paper with state-of-the-art methods in terms of visual effects and quantitative metrics.
Subsequently, we also test the effectiveness of the SMCD framework in transferring motion to unseen styles to assess the model's generalizability in practical applications. Finally, we conduct extensive ablation experiments to validate the effectiveness of each component within the SMCD framework.
4.1 Implementation Details
We train and test on the Xia dataset [34]. The motion clips in this dataset cover 8 motion styles and 5 motion contents. We downsample the original 120 fps motion data to 60 fps and obtain approximately 1500 motion sequences in total. Our framework is implemented in PyTorch and trained on an NVIDIA A800 with a batch size of 512, using the AdamW optimizer [17]. Each training run takes about 10 hours.
4.2 Visual Effect Comparison
We qualitatively compare visual effects in motion style transfer from three aspects: style expressiveness, content preservation, and motion realism. This comparison involves our proposed SMCD framework, the method proposed by Aberman et al. [1], and Style-ERD [25]. Due to the scarcity of open-source work in motion style transfer, our comparison is limited to these two methods. The content motion and style motion adopted in the experiments originate from the dataset proposed by Xia et al. [34]. Under ideal circumstances, the model should be capable of transferring the style of the style motion onto the content motion while preserving the content of the content motion; hence, the generated motion sequence should embody both content and style motion characteristics. As seen in Figure 4, we conduct three sets of motion style transfers. The results show that the motions generated by our SMCD framework reflect the style more realistically while retaining the original content, demonstrating higher style expressiveness and content preservation. On the other hand, the frameworks [1] and [25] struggle to transfer the motion style effectively.
Regarding motion realism, motions generated by our SMCD framework are more realistic. In contrast, the other two methods exhibit flaws at the ankles, shoulders, and other areas, as highlighted by red boxes in Figure 4.
Figure 4: A visual comparison of the SMCD framework with the methods proposed by Aberman et al. [1] and Style-ERD [25] (old walk into neutral style; proud walk into sexy style; strutting run into old style). The image depicts the flaws in the generated motions, denoted by red boxes.
4.3 Quantitative Evaluation
Inspired by MoDi [21], we adopt the following metrics to evaluate our framework quantitatively:
\u2022 FID (Fr\u00e9chet Inception Distance): This metric measures the difference between the distributions of generated and real motions in the latent space to evaluate the quality of generated motions. The lower the FID score, the smaller the distribution difference between the generated and real motions, indicating higher quality of the generated motion.
\u2022 KID (Kernel Inception Distance): Similar to FID, it utilizes convolution to extract motion features when calculating the distance between feature statistics. Compared with FID, the KID score is more sensitive to the local structure and details of generated motions. A lower KID score indicates higher quality of the generated motion.
\u2022 Diversity: Evaluates the degree of diversity of the generated movements. The higher the value, the more diverse the generated movements, indicating better generation outcomes.
We conduct quantitative comparison experiments on the Xia dataset [34], as demonstrated by the results in Table 1. The quantitative comparison results on the BFA dataset [2] can be seen in the supplementary material.
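For intuition on the FID metric described above, the sketch below computes the Fréchet distance between Gaussian statistics of two feature sets. It uses a diagonal-covariance simplification (an assumption made here to avoid the matrix square root of the full covariance product) and random stand-in features rather than real motion embeddings.

```python
import numpy as np

def fid_diagonal(feats_real, feats_gen):
    """Frechet distance between two Gaussians under a diagonal-covariance
    simplification: d^2 = ||mu_r - mu_g||^2 + sum(v_r + v_g - 2*sqrt(v_r*v_g)).
    (The full FID instead takes a matrix square root of Sigma_r @ Sigma_g.)"""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    v_r, v_g = feats_real.var(axis=0), feats_gen.var(axis=0)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.sum(v_r + v_g - 2.0 * np.sqrt(v_r * v_g)))

rng = np.random.default_rng(0)
same = fid_diagonal(rng.normal(size=(500, 8)), rng.normal(size=(500, 8)))
shifted = fid_diagonal(rng.normal(size=(500, 8)),
                       rng.normal(loc=2.0, size=(500, 8)))
print(same < shifted)  # True: a distribution shift raises the score
```

This matches the behavior relied on in the evaluation: features drawn from the same distribution give a score near zero, while any mean or variance mismatch between generated and real motions pushes the score up.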
Due to the limited availability of publicly accessible datasets in motion style transfer, we only compare on these two mainstream datasets. Table 1 reveals that our proposed SMCD framework surpasses the baselines [2, 25] on most metrics, achieving optimal results. This success stems from our SMCD framework and MSM module, which excel in learning content and style motion features and fusing them effectively. At the same time, these elements maintain the long-term temporal dependencies within the motion sequence, leading to the generation of more realistic motion sequences.
Table 1: A quantitative comparison with state-of-the-art methods on the Xia dataset [34]. The best scores are emphasized in bold.
Method | FID\u2193 | KID\u2193 | Diversity\u2191
Aberman et al. [2] | 19.405 | 0.953 | 2.639
Style-ERD [25] | 17.682 | 0.869 | 2.595
Ours | 16.676 | 0.768 | 2.602
4.4 Generalizability
Our model is capable of extracting styles from any given motion clip. However, in practical applications within the multimedia field, motion style transfer models will likely encounter style categories outside the training dataset. In such cases, whether the model can transfer unseen styles determines its generalization and usability. To compare the generalizability of our proposed SMCD framework with other methods, we train the model on the Xia dataset [34] with the angry-labeled motions excluded. Then, we conduct tests on a dataset that includes angry-style motions. The results, as shown in Figure 5, illustrate that when faced with the unseen motion style angry, our SMCD framework can still learn its characteristics. Our framework achieves better motion style transfer effects than [1] and [25]. The other two methods in the comparison exhibit flaws when transferring unseen styles, as indicated by the red boxes in Figure 5.
The results of the generalizability comparison indicate that our framework is more generalizable and practical. Its ability to perform effectively in various multimedia fields, such as movies, games, and the Metaverse, distinguishes it from other methods. 4.5 Ablation Studies To verify the necessity of each component in our model, we conduct extensive ablation experiments, removing the MSM module and the loss functions Lsimple, Ldcc, and Ldsc in turn to train the model, and then use the same evaluation metrics as in the quantitative evaluation for validation. As shown in Table 2, removing any one component significantly degrades all evaluation metrics of the SMCD framework, with the most noticeable drop in motion style transfer performance when the MSM module is removed. In addition, we present the motions generated by the ablated models, as illustrated in Figure 6. It can be observed that the motion has many flaws and does not effectively reflect the style. The ablation results affirm the effectiveness of each component in our SMCD framework; all play integral roles and are indispensable. To further compare the motion style transfer performance of our proposed MSM module with other modules, we replace the MSM module with each of four alternatives: STGCN [42], Transformer Encoder [32], iTransformer [16], and Mamba [8], and retrain the framework for comparative experiments. We use the same evaluation metrics as mentioned above to assess the performance. Table 2: Ablation experiments on various components of the SMCD framework. The best scores are highlighted in bold. Setting FID\u2193 KID\u2193 Diversity\u2191 Ours w/o Lsimple 17.546 0.831 2.158 Ours w/o Ldcc 22.410 1.168 2.473 Ours w/o Ldsc 20.294 1.030 1.931 Ours w/o MSM 23.330 1.458 1.433 Ours 16.676 0.768 2.602 Table 3: Comparison results between the MSM module and other modules. The best scores are highlighted in bold. Module FID\u2193 KID\u2193 Diversity\u2191 STGCN [42] 21.119 1.021 2.269 Transformer [32] 18.977 0.952 2.080 iTransformer [16] 19.177 0.862 2.392 Mamba [8] 20.962 0.925 2.579 MSM (Ours) 16.676 0.768 2.602 As shown in Table 3, our MSM module outperforms all other modules on all quantitative evaluation metrics, fully demonstrating its superiority in achieving a better motion style transfer effect. We hypothesize that this success is due to the MSM module\u2019s superior ability to capture the temporal information and stylization characteristics of motion sequences, thereby effectively transferring styles while maintaining the long-term dependencies within the sequence. Due to space limitations, more ablation results are provided in the supplementary materials. 4.6 User study In addition to the qualitative and quantitative comparisons, we conduct a user study to perceptually evaluate the realism, style expressiveness, and content preservation of our style transfer results. As detailed below, we recruit 50 volunteers to respond to a questionnaire consisting of three types of questions. In the first part, we assess the realism of the generated motions. Motions depicting the same type of content and style (such as a depressed walk) are presented to the volunteers. The motions originate from four different sources: (1) our original Xia dataset [34], (2) results generated by method [2], (3) results generated by Style-ERD [25], and (4) results generated by our framework. Note that (2), (3), and (4) are all generated using similar inputs. Participants are asked, \"Which motion above looks more like actual walking?\" and must choose one of the four motion sources. Table 4 presents the realism ratios for each method in generating motions. Notably, 85.2% of our results are judged as realistic, closely resembling the proportion for real motions from the Xia dataset [34].
Notably, this ratio is significantly higher than that of method [2] with 15.1% and Style-ERD [25] with 28.7%. [Figure 5 panels: input unseen style, input content, Aberman et al., Style-ERD, Ours; transfers shown: neutral run into angry style and angry walk into childlike style.] Figure 5: Illustration of unseen styles. Training on the dataset [34] without the angry style, then testing conventionally to evaluate generalizability when dealing with an unseen style. Red boxes highlight flaws in the generated motions. [Figure 6 panels: input style, input content, Ours w/o Ldcc, Ours w/o Ldsc, Ours (Full); transfers shown: angry walk into neutral style and sexy walk into neutral style.] Figure 6: The motion generated by the model trained after removal of Ldcc and Ldsc. Red boxes highlight flaws in the generated motions. Table 4: The user study realism ratios. Xia dataset [34] Aberman et al. [2] Style-ERD [25] Ours 88.9% 15.1% 28.7% 85.2% Content Preservation and Style Transfer. This part compares our style transfer results with those generated by Aberman et al. [2] and Style-ERD [25] regarding content preservation and style transfer. Volunteers are presented with a content input, a style input, and the results of motion style transfer from three models. They are first asked to choose which model\u2019s motion content is closer to the input content, and then which model\u2019s motion style is closer to the input style.
The results of the user study are shown in Table 5. The findings indicate that our method achieves the best content preservation and style transfer outcomes: 64.8% and 72.3% of the volunteers perceive that our method\u2019s motion content/style is closer to the input content/style. In contrast, the proportions for the other two methods [2, 25] are significantly lower than ours. Table 5: The user study for content preservation and style transfer. Evaluation Metrics Aberman et al. [2] Style-ERD [25] Ours Content Preservation 20.7% 14.5% 64.8% Style Transfer 10.9% 16.8% 72.3%" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.02905v1.json b/abs_9K/test_abstract_short_2405.02905v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9ff10e0d05842a32406b5c0352aebf496098693e --- /dev/null +++ b/abs_9K/test_abstract_short_2405.02905v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.02905v1", + "title": "Mixture of partially linear experts", + "abstract": "In the mixture of experts model, a common assumption is the linearity between\na response variable and covariates. While this assumption has theoretical and\ncomputational benefits, it may lead to suboptimal estimates by overlooking\npotential nonlinear relationships among the variables. To address this\nlimitation, we propose a partially linear structure that incorporates\nunspecified functions to capture nonlinear relationships. We establish the\nidentifiability of the proposed model under mild conditions and introduce a\npractical estimation algorithm.
We present the performance of our approach\nthrough numerical studies, including simulations and real data analysis.", + "authors": "Yeongsan Hwang, Byungtae Seo, Sangkon Oh", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "stat.ML" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "In the mixture of experts model, a common assumption is the linearity between\na response variable and covariates. While this assumption has theoretical and\ncomputational benefits, it may lead to suboptimal estimates by overlooking\npotential nonlinear relationships among the variables. To address this\nlimitation, we propose a partially linear structure that incorporates\nunspecified functions to capture nonlinear relationships. We establish the\nidentifiability of the proposed model under mild conditions and introduce a\npractical estimation algorithm. We present the performance of our approach\nthrough numerical studies, including simulations and real data analysis.", + "main_content": "Introduction Quandt (1972) introduced a finite mixture of regressions (FMR) for uncovering hidden latent structures in data. It assumes the existence of unobserved subgroups, each characterized by distinct regression coefficients. Since the introduction of FMR, extensive research has been conducted to enhance its performance, with contributions from Neykov et al. (2007), Bai et al. (2012), Bashir and Carter (2012), Hunter and Young (2012), Yao et al. (2014), Song et al. (2014), Zeller et al. (2016), Zeller et al. (2019), Ma et al. (2021), Zarei et al. (2023), and Oh and Seo (2024). However, because FMR assumes that the assignment of each data point to clusters is independent of the covariates (Hennig, 2000), its regression clustering performance can be undermined when this assignment-independence assumption is violated. Alternatively, Jacobs et al.
(1991) introduced the mixture of linear experts (MoE), allowing the assignment of each data point to depend on the covariates. Nguyen and McLachlan (2016) suggested the Laplace distribution for the error distributions, while Chamroukhi (2016) and Chamroukhi (2017) used t distributions and skew-t distributions for errors, respectively. Murphy and Murphy (2020) further extended MoE with a parsimonious structure to improve estimation efficiency. Mirfarah et al. (2021) introduced the use of scale mixtures of normal distributions for errors within MoE. Recently, Oh and Seo (2023) proposed a specific MoE variant, assuming that covariates follow finite Gaussian location-scale mixture distributions and that the response follows finite Gaussian scale mixture distributions. In spite of the extra flexibility for errors in these models, they assume linear structures in each mixture component, which may be too simple to capture the hidden latent structures. In a homogeneous population, Engle et al. (1986) introduced a partial linear model, in which a response variable Y is represented as a linear combination of specific p-dimensional covariates X plus an unspecified non-parametric function of an additional covariate U, as follows: y = x\u22a4\u03b2 + g(u) + \u03f5, (1) where U \u2282 R, \u03f5 is an error term with mean zero and finite variance, and the function g(\u00b7) is an unknown non-parametric function. This model has the advantage of interpretability, stemming from its linearity, combined with the flexibility to capture diverse functional relationships through the unspecified function g(\u00b7). The differentiation between X and U is determined either theoretically, based on established knowledge in the application field, or through methods like scatter plots or statistical hypothesis testing. Wu and Liu (2017) and Skhosana et al.
(2023) extended the FMR to accommodate a partially linear structure within a heterogeneous population. In this paper, we consider a novel approach that incorporates partially linear structures into MoE, utilizing unspecified functions based on kernel methods. This allows the proposed model to effectively capture various relationships between the response and covariates, while the latent variable depends on some covariates. This flexibility can significantly impact the estimation of regression coefficients and enhance clustering performance by mitigating misspecification problems arising from assumptions about the relationships between variables. In addition, we address the issue of identifiability in the proposed model to ensure the reliability of the outcomes derived from the proposed approach. The remainder of this paper is organized as follows. Section 2 reviews MoE and introduces the proposed models, addressing identifiability. Section 3 outlines the estimation procedure, while Section 4 deals with practical issues related to the proposed models. We present the results of simulation studies in Section 5 and apply the models to real datasets in Section 6. Finally, we provide a discussion in Section 7. 2 Semiparametric mixture of partially linear experts 2.1 Mixture of linear experts Let Z be a latent variable indicating the membership of the observations. MoE is a useful tool when exploring the relationship between the response variable and covariates in the presence of unobserved information about C heterogeneous subpopulations indexed by the latent variable Z. Jacobs et al. (1991) presented the conditional probability distribution of the response variable given the covariates as p(y|x) = \u2211_{c=1}^{C} p(Z = c | x) p(y | x, Z = c) = \u2211_{c=1}^{C} \u03c0c(x)\u03d5(y; \u03b20c + x\u22a4\u03b2c, \u03c32 c), (2) where \u03c0c(\u00b7), c = 1, . . . , C, represents a mixing probability that depends on the given covariates, with 0 < \u03c0c(x) < 1 and \u2211_{c=1}^{C} \u03c0c(x) = 1.
Additionally, (\u03b20c, \u03b2\u22a4 c ) represents a (p+1)-dimensional vector for c = 1, . . . , C, and \u03d5(\u00b7; \u00b5, \u03c32) denotes the probability density function of the normal distribution with mean \u00b5 and variance \u03c32. Regression clustering, the process of identifying the latent variable Z, holds significant importance in understanding the prediction mechanism employed by MoE. The predicted value of the response variable for a new covariate X = x is determined as E(Y | X = x) = \u2211_{c=1}^{C} \u03c0c(x) \u00b7 (\u03b20c + x\u22a4\u03b2c), where \u03c0c(x) is often called the gating network, while (\u03b20c + x\u22a4\u03b2c) is referred to as the expert network. That is, the prediction structure can be understood as an ensemble model, as shown in Figure 1, because the predicted values are obtained by combining the outcomes of the expert networks using the gating network. Consequently, selecting an appropriate latent variable Z is a crucial aspect of the MoE model. Figure 1: Prediction mechanism of MoE. MoE is applied in various fields as a machine learning model. For example, Li et al. (2019) used MoE to explain differences in lane-changing behavior based on driver characteristics. Shen et al. (2019) extended MoE to adapt to the characteristics of data for creating a translation model capable of various translation styles. Additionally, Riquelme et al. (2021) proposed Vision MoE, which maintains superior performance compared to existing models in image classification while significantly reducing estimation time. 2.2 Proposed model In this section, we introduce a semiparametric mixture of partially linear experts (MoPLE) model. The MoPLE is constructed by considering each expert network of the MoE model as a partial linear model (1), which can be defined as p(y | x, u) = \u2211_{c=1}^{C} \u03c0c(x; \u03b10c, \u03b1c)\u03d5(y; x\u22a4\u03b2c + gc(u), \u03c32 c).
(3) Here, \u03c0c(x; \u03b10c, \u03b1c) is defined as \u03c0c(x; \u03b10c, \u03b1c) = exp(\u03b10c + x\u22a4\u03b1c) / \u2211_{j=1}^{C} exp(\u03b10j + x\u22a4\u03b1j), where (\u03b10c, \u03b1\u22a4 c ) represents a (p + 1)-dimensional vector (c = 1, 2, . . . , C), with (\u03b10C, \u03b1\u22a4 C) in particular being a zero vector. When C = 1, since \u03c0C(x; \u03b10C, \u03b1C) is equal to 1, (3) simply represents a partial linear model (1). If C > 1 and gc(\u00b7) = 0, (3) is equivalent to the MoE (2). Identifiability is a fundamental concern when dealing with finite mixture models. Hennig (2000) established that the finite mixture of regressions is identifiable when the domain of X includes an open set in Rp. Additionally, Huang and Yao (2012) demonstrated that (2), with unspecified \u03c0c(x) for c = 1, 2, . . . , C, is identifiable up to a permutation of relabeling. Furthermore, Wu and Liu (2017) extended these findings by establishing the identifiability of the mixture of partially linear regressions, assuming that \u03b1 = (\u03b1\u22a4 1 , \u03b1\u22a4 2 , . . . , \u03b1\u22a4 C)\u22a4 is a zero vector in (3). Building upon these results, the following theorem establishes the identifiability of model (3). Theorem 1. Suppose that the functions gc(\u00b7), c = 1, 2, . . . , C, are continuous, and the parameter vectors (\u03b2c, \u03c32 c) are distinct in Rp+1 for c = 1, 2, . . . , C. Additionally, assume that the covariate X does not contain a constant, and none of its components can be a deterministic function of U. If the support of X contains an open set in Rp, then (3) is identifiable up to a permutation of its components for almost all (x\u22a4, u)\u22a4 \u2208 Rp+1. Proof. In (3), suppose that there exist \u02dc \u03b10k, \u02dc \u03b1k, \u02dc \u03b2k and \u02dc gk(\u00b7), k = 1, 2, . . .
, K, satisfying \u2211_{c=1}^{C} \u03c0c(x; \u03b10c, \u03b1c)\u03d5(y; x\u22a4\u03b2c + gc(u), \u03c32 c) = \u2211_{k=1}^{K} \u03c0k(x; \u02dc \u03b10k, \u02dc \u03b1k)\u03d5(y; x\u22a4\u02dc \u03b2k + \u02dc gk(u), \u02dc \u03c32 k), (4) where ( \u02dc \u03b2k, \u02dc \u03c32 k), k = 1, 2, . . . , K, are distinct. Consider the set {x \u2208 Rp : x\u22a4\u03b2c1 + gc1(u) = x\u22a4\u03b2c2 + gc2(u)} for any \u03b2c1 and \u03b2c2 (c1, c2 \u2208 {1, 2, . . . , C}), where \u03b2c1 \u0338= \u03b2c2 and \u03c32 c1 = \u03c32 c2, for a given U = u. This set represents a (p \u22121)-dimensional hyperplane in Rp. For any pair of \u03b2c1 and \u03b2c2 with \u03b2c1 \u0338= \u03b2c2 and \u03c32 c1 = \u03c32 c2, the union of a finite number of such hyperplanes, where (x\u22a4\u03b2c1, \u03c32 c1) = (x\u22a4\u03b2c2, \u03c32 c2), has zero Lebesgue measure in Rp. This fact remains true for the finite number of sets {x \u2208 Rp : x\u22a4\u02dc \u03b2k1 + \u02dc gk1(u) = x\u22a4\u02dc \u03b2k2 + \u02dc gk2(u)} for any \u02dc \u03b2k1 and \u02dc \u03b2k2 (k1, k2 \u2208 {1, 2, . . . , K}), where \u02dc \u03b2k1 \u0338= \u02dc \u03b2k2 and \u02dc \u03c32 k1 = \u02dc \u03c32 k2 for a given U = u. From Lemma 1 of Huang and Yao (2012), it can be established that (4) is identifiable when conditioned on w = (x\u22a4, u)\u22a4, under the condition that both sets of (x\u22a4\u03b2c, gc(u)) for c = 1, 2, . . . , C and (x\u22a4\u02dc \u03b2k, \u02dc gk(u)) for k = 1, 2, . . . , K are distinct. That is, if w is given, we obtain C = K, and there exists a permutation \u03c4w = {\u03c4w(1), \u03c4w(2), . . . , \u03c4w(C)} among the finite number of possible permutations of {1, 2, . . . , C} such that \u03c0c(x; \u03b10c, \u03b1c) = \u03c0\u03c4w(c)(x; \u02dc \u03b10\u03c4w(c), \u02dc \u03b1\u03c4w(c)), x\u22a4\u03b2c + gc(u) = x\u22a4\u02dc \u03b2\u03c4w(c) + \u02dc g\u03c4w(c)(u), \u03c32 c = \u02dc \u03c32 \u03c4w(c), where c = 1, 2, . . . , C. Now, let us consider any permutation \u03c4 = {\u03c4(1), \u03c4(2), . .
. , \u03c4(C)} that satisfies x\u22a4\u03b2c + gc(u) = x\u22a4\u02dc \u03b2\u03c4(c) + \u02dc g\u03c4(c)(u), \u03c32 c = \u02dc \u03c32 \u03c4(c), c = 1, 2, . . . , C, (5) for some w, and verify that such a \u03c4 must coincide with the unique \u03c4w. Suppose that \u03b2c \u0338= \u02dc \u03b2\u03c4(c) and gc(u) \u0338= \u02dc g\u03c4(c)(u). This contradicts the assumption that X cannot be a deterministic function of U. When \u03b2c \u0338= \u02dc \u03b2\u03c4(c) and gc(u) = \u02dc g\u03c4(c)(u), the set {x \u2208 Rp : x\u22a4\u03b2c = x\u22a4\u02dc \u03b2\u03c4(c)} has zero Lebesgue measure since it is a (p\u22121)-dimensional hyperplane in Rp. Because \u03b2c = \u02dc \u03b2\u03c4(c) implies gc(u) = \u02dc g\u03c4(c)(u), we obtain \u03b2c = \u02dc \u03b2\u03c4(c) and gc(u) = \u02dc g\u03c4(c)(u) for c = 1, 2, . . . , C. Since the parameter sets (\u03b2c, \u03c32 c) and (\u02dc \u03b2k, \u02dc \u03c32 k) for c, k \u2208 {1, 2, . . . , C} are distinct, the permutation \u03c4, satisfying (5) on a subset of the support of w with nonzero Lebesgue measure, is unique. Because \u03c0c(\u00b7) and \u03c0\u03c4(c)(\u00b7) are continuous and one-to-one functions, it follows that \u03b10c + x\u22a4\u03b1c = \u02dc \u03b10\u03c4(c) + x\u22a4\u02dc \u03b1\u03c4(c) for c = 1, 2, . . . , C. Moreover, as X cannot be a constant, \u03b10c = \u02dc \u03b10\u03c4(c) must hold. Consequently, this implies \u03b1c = \u02dc \u03b1\u03c4(c), except for the set {x \u2208 Rp : \u03b10c + x\u22a4\u03b1c = \u02dc \u03b10\u03c4(c) + x\u22a4\u02dc \u03b1\u03c4(c)}, which has zero Lebesgue measure in Rp, for c = 1, 2, . . . , C. Therefore, we can conclude that (3) is identifiable up to a permutation of its components. 3 Estimation Given the observed data {(yi, xi, ui)}_{i=1}^{n}, the log-likelihood function is defined as \u2113(\u0398, g) = \u2211_{i=1}^{n} log [ \u2211_{c=1}^{C} \u03c0c(xi)\u03d5{yi; x\u22a4 i \u03b2c + gc(ui), \u03c32 c} ], (6) where \u0398 is the set of all parameters and g = (g1(\u00b7), . .
. , gC(\u00b7))\u22a4. To find \u02c6 \u0398 and \u02c6 g that maximize equation (6), we propose the Expectation Conditional Maximization (ECM) algorithm (Meng and Rubin, 1993) using the profile likelihood method. The latent indicator variable Zic (c = 1, . . . , C), which equals 1 if the i-th observation belongs to the c-th latent cluster and 0 otherwise, and the complete log-likelihood function are respectively defined through \u2113c(\u0398, g) = \u2211_{i=1}^{n} \u2211_{c=1}^{C} Zic log [ \u03c0c(xi)\u03d5{yi; x\u22a4 i \u03b2c + gc(ui), \u03c32 c} ]. In the E-step of the (t + 1)th iteration of the ECM algorithm, t = 0, 1, . . ., we obtain Q(\u0398(t), g(t)) = E[\u2113c(\u0398, g) | \u0398(t), g(t)] using the posterior probability z(t+1) ic given \u0398(t) and g(t), which is represented as z(t+1) ic = E(Zic | xi, yi, \u0398(t), g(t)) = \u03c0(t) c (xi)\u03d5{yi; x\u22a4 i \u03b2(t) c + g(t) c (ui), \u03c32(t) c} / \u2211_{j=1}^{C} \u03c0(t) j (xi)\u03d5{yi; x\u22a4 i \u03b2(t) j + g(t) j (ui), \u03c32(t) j}. While keeping \u0398(t) fixed, CM-step 1 updates g(t) to the g(t+1) that maximizes the following local likelihood: \u2113h(g) = \u2211_{i=1}^{n} \u2211_{c=1}^{C} z(t+1) ic [ log \u03d5{yi; x\u22a4 i \u03b2(t) c + gc(uj), \u03c32(t) c} ] Kh(ui \u2212 uj), where j \u2208 {1, 2, . . . , n} and Kh(ui \u2212 uj) is the kernel weighting function with bandwidth h. Consequently, g(t+1) c (uj) can be calculated as g(t+1) c (uj) = \u2211_{i=1}^{n} z(t+1) ic (yi \u2212 x\u22a4 i \u03b2(t) c) Kh(ui \u2212 uj) / \u2211_{i=1}^{n} z(t+1) ic Kh(ui \u2212 uj). In CM-step 2, after fixing g(t+1) c (uj), we determine \u0398(t+1) as follows.
\u03b1(t+1) c = \u03b1(t) c \u2212 [ \u22022Q(\u0398(t), g(t+1)) / \u2202\u03b1c\u2202\u03b1\u22a4 c ]\u22121 [ \u2202Q(\u0398(t), g(t+1)) / \u2202\u03b1c ], \u03b2(t+1) c = (\u02dc X\u22a4 Z(t+1) c \u02dc X)\u22121 \u02dc X\u22a4 Z(t+1) c \u02dc y, \u03c32(t+1) c = \u2211_{i=1}^{n} z(t+1) ic (yi \u2212 x\u22a4 i \u03b2(t+1) c \u2212 g(t+1) c (ui))2 / \u2211_{i=1}^{n} z(t+1) ic. Here, \u02dc X = (I \u2212 S)X, \u02dc y = (I \u2212 S)y, Z(t+1) c is a diagonal matrix with diagonal elements z(t+1) ic, I is an n \u00d7 n identity matrix, and S is an n \u00d7 n matrix with elements Sij = z(t+1) ic Kh(ui \u2212 uj) / \u2211_{i=1}^{n} z(t+1) ic Kh(ui \u2212 uj). 4 Practical issues In practice, it is recommended to explore multiple initial values when employing the ECM algorithm, as the mixture likelihood inherently exhibits multimodality. To acquire appropriate initial values, we utilize the mixture of linear experts approach proposed by Jacobs et al. (1991) for the parameters \u03b10c, \u03b1c, \u03b2c, gc(u), and \u03c32 c, where c = 1, 2, . . . , C. Specifically, we set gc(u) to \u03b20c in (2) when employing the mixture of linear experts, where c = 1, 2, . . . , C. Multiple initial values are then selected by repeating the process of generating initial values and choosing the ones with the highest likelihood. In this study, we repeat this process 10 times to ensure suitable initial values. Furthermore, it is crucial to employ suitable methods for determining the optimal number of mixture components. In this paper, we utilized the Bayesian information criterion (BIC; Schwarz 1978), obtained as \u22122\u2113 + log(n) \u00d7 df, where \u2113 is the log-likelihood function and df is the degrees of freedom, to select the number of components. However, directly applying the BIC to the proposed model is challenging due to the complexity of calculating degrees of freedom, particularly in the presence of non-parametric functions.
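Putting the E-step and the two CM-steps of Section 3 together, one iteration can be sketched in a simplified, backfitting-style form (a Gaussian kernel, a direct weighted least squares in place of the profiled (I \u2212 S) update, and the gating update for \u03b1 omitted for brevity; all names are ours, not the authors'):

```python
import numpy as np

def ecm_step(y, X, u, pi, beta, g, sigma2, h):
    """One simplified ECM iteration for a MoPLE-style model.
    y:(n,), X:(n,p), u:(n,); pi:(n,C) gating probs, beta:(C,p),
    g:(n,C) values g_c(u_i), sigma2:(C,), h: bandwidth."""
    n, C = len(y), beta.shape[0]
    # E-step: posterior responsibilities z_ic
    mu = X @ beta.T + g
    dens = np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    z = pi * dens
    z = z / z.sum(axis=1, keepdims=True)
    # CM-step 1: kernel-weighted update of g_c at each u_j (Gaussian kernel)
    K = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    resid = y[:, None] - X @ beta.T
    for c in range(C):
        w = z[:, c:c + 1] * K
        g[:, c] = (w * resid[:, c:c + 1]).sum(axis=0) / w.sum(axis=0)
    # CM-step 2: weighted least squares for beta_c, then the variance update
    for c in range(C):
        wts = np.sqrt(z[:, c])
        beta[c] = np.linalg.lstsq(X * wts[:, None], (y - g[:, c]) * wts, rcond=None)[0]
        r = y - X @ beta[c] - g[:, c]
        sigma2[c] = (z[:, c] * r ** 2).sum() / z[:, c].sum()
    return z, beta, g, sigma2
```

Iterating this map from suitable initial values approximates the ECM fixed point; it is a sketch of the estimation logic, not a substitute for the profiled update the paper actually uses.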
Therefore, we adopt a modified approach for determining the degrees of freedom, inspired by Wu and Liu (2017), as follows: df = C \u00d7 \u03c4K h\u22121 |\u2126| { K(0) \u2212 (1/2) \u222b K2(t)dt } + (2C \u22121)(p + 1), where \u2126 represents the support of the non-parametric component covariate and \u03c4K = { K(0) \u2212 0.5 \u222b K2(t)dt } / \u222b {K(t) \u2212 0.5K(t)}2 dt. Given that the degrees of freedom depend on the bandwidth, we chose the bandwidth associated with the lowest BIC among the candidates. 5 Simulation studies In this section, we present simulation results demonstrating the performance of the proposed method compared to other estimation methods under various cases. Specifically, we consider the following methods for each simulated sample: 1. MoE: Mixture of linear experts. 2. FMPLR: Finite mixture of partially linear regressions. 3. MoPLE: Mixture of partially linear experts. FMPLR was introduced by Wu and Liu (2017) and assumes that all \u03b1 = (\u03b11, \u03b12, . . . , \u03b1C) are zero vectors. We utilize the MoEClust R package (Murphy and Murphy, 2022) for MoE, while we implement our own R program for FMPLR and MoPLE. We conduct three simulation scenarios, each comprising two mixture components, as detailed in Table 1. In each of these experiments, we assume that the covariates X and U are independent random variables following a standard uniform distribution. In the first experiment, we assume a linear relationship between Y and (X, U) within each mixture component, with the probability of observations belonging to latent clusters dependent on X. In the second experiment, we introduce partially linear relationships between Y and (X, U) while keeping the probability of observations belonging to latent clusters independent of X. In the third experiment, we also consider partially linear relationships, but with the probability of observations belonging to latent clusters dependent on X.
Hence, we can expect MoE, FMPLR, and MoPLE to be the efficient methods for Case I, Case II, and Case III, respectively. Table 1: True parameters for each simulation scenario Scenarios Gating Network Component 1 Component 2 \u03b101 \u03b111 \u03b21 g1(u) \u03c32 1 \u03b22 g2(u) \u03c32 2 Case I -0.5 2 -3 -3u 0.5 3 3u 0.25 Case II 0 0 -3 2u2 0.5 3 2 cos(\u03c0u)2 0.25 Case III -0.5 2 -3 2u2 0.5 3 2 cos(\u03c0u)2 0.25 The performance of each method is evaluated by calculating the bias as (1/r) \u2211_{j=1}^{r} (\u02c6 \u03b2c(j) \u2212 \u03b2c) and the mean square error (MSE) as (1/r) \u2211_{j=1}^{r} (\u02c6 \u03b2c(j) \u2212 \u03b2c)2, where \u03b2c and \u02c6 \u03b2c(j) are the true regression coefficient in the cth expert network and the estimate of \u03b2c from the jth sample, for c = 1, 2 and j = 1, 2, . . . , r, respectively, for every regression parameter across a total of r = 400 replicated samples, with sample sizes of n = 250, 500 and 1000. To assess the quality of the estimated nonparametric function \u02c6 g = (\u02c6 g1(\u00b7), \u02c6 g2(\u00b7)) for g = (g1(\u00b7), g2(\u00b7)), we utilize the mean absolute error (MAE) defined as MAE = D\u22121 \u2211_{d=1}^{D} |\u02c6 gc(ud) \u2212 gc(ud)|, where c = 1, 2, . . . , C. We chose {ud, d = 1, . . . , D} as grid points evenly distributed within the range of the covariate u, with D set to 100. We employ the Epanechnikov kernel function and determine regression clusters for observations using the maximum a posteriori rule. To assess the clustering performance, the Adjusted Rand Index (ARI, Hubert and Arabie, 1985) and Adjusted Mutual Information (AMI, Vinh et al., 2009) are computed.
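For reference, the ARI used above can be computed from the contingency table of true and predicted labels; a minimal sketch (our own helper, not the paper's code):

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI from the contingency table of two label vectors."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    _, ci = np.unique(labels_true, return_inverse=True)
    _, cj = np.unique(labels_pred, return_inverse=True)
    cont = np.zeros((ci.max() + 1, cj.max() + 1), dtype=int)
    np.add.at(cont, (ci, cj), 1)                       # contingency counts
    sum_comb = sum(comb(int(v), 2) for v in cont.ravel())
    sum_a = sum(comb(int(v), 2) for v in cont.sum(axis=1))
    sum_b = sum(comb(int(v), 2) for v in cont.sum(axis=0))
    total = comb(len(labels_true), 2)
    expected = sum_a * sum_b / total                   # chance-adjusted baseline
    max_index = (sum_a + sum_b) / 2.0
    denom = max_index - expected
    return 1.0 if denom == 0 else (sum_comb - expected) / denom
```

The index equals 1 for identical partitions (up to label permutation), is 0 in expectation under random labeling, and can be negative for worse-than-chance agreement.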
Note that smaller values of bias, MSE and MAE indicate better performance, while larger values of ARI and AMI signify better performance. Table 2: Performance of each method for regression coefficients in Case I (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.045 (0.016) 0.037 (0.005) 0.087 0.077 0.961 0.923 500 0.024 (0.010) 0.017 (-0.005) 0.059 0.052 0.962 0.922 1000 0.011 (-0.003) 0.009 (-0.005) 0.042 0.036 0.963 0.923 FMPLR 250 0.049 (-0.033) 0.040 (-0.023) 0.159 0.130 0.952 0.908 500 0.026 (-0.004) 0.019 (-0.034) 0.113 0.093 0.954 0.908 1000 0.014 (-0.051) 0.010 (-0.032) 0.084 0.070 0.955 0.910 MoPLE 250 0.047 (0.014) 0.040 (0.006) 0.154 0.127 0.960 0.920 500 0.024 (0.011) 0.018 (-0.006) 0.110 0.089 0.961 0.921 1000 0.011 (-0.001) 0.051 (-0.016) 0.082 0.081 0.961 0.920 Table 3: Performance of each method for regression coefficients in Case II (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.077 (-0.062) 0.120 (-0.036) 0.362 1.056 0.652 0.562 500 0.041 (-0.079) 0.056 (-0.033) 0.361 1.063 0.657 0.562 1000 0.019 (-0.053) 0.030 (-0.033) 0.356 1.063 0.664 0.565 FMPLR 250 0.069 (0.014) 0.035 (0.015) 0.169 0.231 0.737 0.639 500 0.079 (0.026) 0.043 (0.005) 0.126 0.204 0.741 0.643 1000 0.033 (0.040) 0.009 (0.001) 0.095 0.160 0.748 0.649 MoPLE 250 0.071 (0.014) 0.035 (0.012) 0.171 0.214 0.734 0.640 500 0.035 (0.006) 0.018 (0.010) 0.125 0.171 0.744 0.646 1000 0.022 (0.029) 0.031 (-0.013) 0.101 0.131 0.750 0.651 Table 4: Performance of each method for regression coefficients in Case III (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.062 (-0.074) 0.230 (-0.169) 0.361 1.100 0.641 0.529 500 0.034 (-0.079) 0.127 (-0.175) 0.357 1.074 0.645 0.529 1000 0.020 (-0.076) 0.087 (-0.207) 0.348 1.0578 0.652 0.533 FMPLR 250 0.078 (-0.118) 0.215 (-0.170) 0.182 0.270 0.661 0.556 500 0.044 (-0.132) 0.075 (-0.130) 0.146 0.203 0.671 0.562 1000 0.036 (-0.123) 0.075 (-0.145) 0.125 0.193 0.675 0.566 MoPLE 250 0.066 (0.038) 0.064 (-0.020) 0.172 0.245 0.743 0.638 500 0.038 (0.038) 0.086 (-0.039) 0.123 0.200 0.748 0.641 1000 0.017 (0.038) 0.076 (-0.045) 0.094 0.161 0.771 0.667 In Case I, MoE exhibits the best performance across all criteria, while MoPLE ranks second in terms of clustering performance. In Case II, MoPLE performs the best in terms of ARI and AMI, while FMPLR and MoPLE are competitive with regard to parameter estimation. In Case III, MoPLE performs best on almost all criteria compared to the other methods. Overall, MoPLE demonstrates competitive performance, ranking as either the best or the second-best method across all cases. 6 Real data analysis 6.1 Prestige dataset For the first real data analysis, we consider the Prestige dataset, which is available in the car package in R. It comprises 102 observations with variables such as Prestige, indicating occupational prestige from a mid-1960s social survey, Education, representing the average years of education for workers in 1971, Income, denoting the standardized average income of workers in 1971, and Occupational types, specifying occupational categories such as professional, white-collar, and blue-collar occupations. In this study, we model the response variable Y as Prestige, where X represents Education, and U represents Income. Additionally, we assume that the latent variable is associated with Occupational types. Table 5 displays the BIC values obtained by each method for the Prestige dataset. MoPLE correctly selects the expected number of components, while MoE and FMPLR yield fewer clusters than expected. The clustering performance of each method is summarized in Table 6. MoE performs the best in terms of ARI, whereas MoPLE excels in terms of AMI.
As a result, MoPLE is considered the best method, since it not only produces the expected number of clusters but also delivers competitive clustering performance. MoE is the second-best method, despite not selecting the expected number of clusters. This suggests that occupational types depend on education and that there are nonlinear relationships between prestige and income, at least within one component.

Table 5: BIC values for each method in the Prestige dataset (boldfaced numbers indicate the smallest value in each criterion)

Number of clusters | MoE | FMPLR | MoPLE
1 | 724.22 | 947.23 | 947.23
2 | 718.24 | 864.11 | 852.19
3 | 735.49 | 951.21 | 823.31
4 | 736.40 | 1042.96 | 1126.14
5 | 763.94 | 1633.26 | 1186.94

Table 6: Clustering performance for each method in the Prestige dataset (boldfaced numbers indicate the largest value in each criterion)

Index | MoE | FMPLR | MoPLE
ARI | 0.5096 | 0.0597 | 0.4779
AMI | 0.4012 | 0.0725 | 0.4506

Based on the findings from MoPLE, the clusters denoted as 1, 2, and 3 correspond to professional, white-collar, and blue-collar occupations, respectively. The estimated coefficients for Education in Classes 1, 2, and 3 are 2.331, 5.446, and 2.547, respectively. This suggests that the impact of education on prestige is most pronounced in white-collar occupations. Figure 2 illustrates the estimated gc(u) for each cluster, where c = 1, 2, 3. We note a nonlinear association between prestige and income within cluster 1, whereas clusters 2 and 3 exhibit a positive relationship between prestige and income, indicating an increasing trend. Figure 2: Estimated gc(\u00b7), c = 1, 2, 3, through MoPLE for the Prestige dataset (panels (a)-(c): Clusters 1-3). 6.2 Gross domestic product dataset In the second real data analysis, we examine the gross domestic product (GDP) dataset sourced from the STARS database of the World Bank.
This dataset comprises information from 82 countries over the period 1960 to 1987 and includes variables such as log(GDP), the logarithm of real gross domestic product in millions of dollars; log(Labor), the logarithm of the economically active population aged 15 to 65; log(Capital), the logarithm of the estimated initial capital stock in each country; and log(Education), the logarithm of the average years of education. Previously, Duffy and Papageorgiou (2000) utilized this dataset to investigate the Cobb-Douglas specification, while Wu and Liu (2017) examined how education and two other variables influence GDP using FMPLR with a fixed two-component mixture. In this paper, we investigate countries in 1975 with Y = log(GDP), X = (log(Labor), log(Capital)) and U = log(Education), and compare clustering performance. To evaluate clustering performance, we introduce a latent variable indicating whether a country was classified as advanced or developing in 1975 according to the International Monetary Fund (IMF). Table 7 and Table 8 present the BIC values and clustering performance, respectively. In Table 7, MoPLE yields the expected number of clusters, while MoE and FMPLR select more clusters than expected. In Table 8, MoPLE achieves the best results in terms of both ARI and AMI, followed by MoE.
These findings suggest that MoPLE is the most suitable method when attempting to identify clusters among countries based on their classification as advanced or developing.

Table 7: BIC values for each method in the GDP dataset (boldfaced numbers indicate the smallest value in each criterion)

Number of clusters | MoE | FMPLR | MoPLE
1 | 74.46 | 337.10 | 337.10
2 | 88.05 | 176.92 | 134.64
3 | 60.95 | 169.90 | 178.15
4 | 114.48 | 265.12 | 232.49
5 | 110.04 | 419.94 | 405.89

Table 8: Clustering performance for each method in the GDP dataset (boldfaced numbers indicate the largest value in each criterion)

Index | MoE | FMPLR | MoPLE
ARI | 0.3449 | -0.1238 | 0.7165
AMI | 0.3280 | 0.1042 | 0.6152

According to the results derived from MoPLE, the clusters labeled 1 and 2 represent advanced and developing countries, respectively. In addition, cluster 1 yields estimated coefficients for log(Labor) and log(Capital) of (0.14, 0.86), while cluster 2 yields coefficients of (0.17, 0.82). These results suggest that the impact of labor and capital on GDP does not significantly differ between advanced and developing countries. Figure 3 depicts the estimated gc(u) for each cluster, with c = 1, 2. Specifically, in cluster 1 the values of log(GDP) appear to be higher than those in cluster 2, while their shapes look similar. 7 Discussion In this paper, we propose MoPLE, which applies a partially linear structure to the expert network of MoE, replacing the linear structure. In numerical studies, MoPLE demonstrates the ability to estimate both parametric and non-parametric components effectively, not only under linear relationships between the response variable and covariates but also under non-linear relationships. Furthermore, it delivers competitive performance in terms of regression clustering. These results imply that MoPLE is a valuable model regardless of whether the data exhibit linear or non-linear relationships, excelling not only in parameter estimation but also in clustering.
While this study assumed univariate covariates for the non-parametric component, it is possible to extend this approach to higher dimensions. Nevertheless, we must acknowledge the curse of dimensionality as a limitation of non-parametric methods. One potential alternative is to structure each expert as a partially linear additive model. Furthermore, although we designate a specific variable as having a nonlinear relationship based on previous work, it is still necessary to construct statistical hypothesis tests for such nonlinear relationships, even though this may be challenging due to the presence of a hidden latent structure. Figure 3: Estimated gc(\u00b7), c = 1, 2, through MoPLE for the GDP dataset (panels (a)-(b): Clusters 1-2)." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03003v1.json b/abs_9K/test_abstract_short_2405.03003v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3c141d6893c067e675dcc941b3b6491cee616e38 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03003v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.03003v1", + "title": "Parameter-Efficient Fine-Tuning with Discrete Fourier Transform", + "abstract": "Low-rank adaptation~(LoRA) has recently gained much interest in fine-tuning\nfoundation models. It effectively reduces the number of trainable parameters by\nincorporating low-rank matrices $A$ and $B$ to represent the weight change,\ni.e., $\\Delta W=BA$. Despite LoRA's progress, it faces storage challenges when\nhandling extensive customization adaptations or larger base models. In this\nwork, we aim to further compress trainable parameters by enjoying the powerful\nexpressiveness of the Fourier transform. Specifically, we introduce FourierFT,\nwhich treats $\\Delta W$ as a matrix in the spatial domain and learns only a\nsmall fraction of its spectral coefficients.
With the trained spectral\ncoefficients, we implement the inverse discrete Fourier transform to recover\n$\\Delta W$. Empirically, our FourierFT method shows comparable or better\nperformance with fewer parameters than LoRA on various tasks, including natural\nlanguage understanding, natural language generation, instruction tuning, and\nimage classification. For example, when performing instruction tuning on the\nLLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable\nparameters, compared to LoRA's 33.5M. Our code is released at\n\\url{https://github.com/Chaos96/fourierft}.", + "authors": "Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, Jia Li", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Low-rank adaptation~(LoRA) has recently gained much interest in fine-tuning\nfoundation models. It effectively reduces the number of trainable parameters by\nincorporating low-rank matrices $A$ and $B$ to represent the weight change,\ni.e., $\\Delta W=BA$. Despite LoRA's progress, it faces storage challenges when\nhandling extensive customization adaptations or larger base models. In this\nwork, we aim to further compress trainable parameters by enjoying the powerful\nexpressiveness of the Fourier transform. Specifically, we introduce FourierFT,\nwhich treats $\\Delta W$ as a matrix in the spatial domain and learns only a\nsmall fraction of its spectral coefficients. With the trained spectral\ncoefficients, we implement the inverse discrete Fourier transform to recover\n$\\Delta W$. Empirically, our FourierFT method shows comparable or better\nperformance with fewer parameters than LoRA on various tasks, including natural\nlanguage understanding, natural language generation, instruction tuning, and\nimage classification. 
For example, when performing instruction tuning on the\nLLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable\nparameters, compared to LoRA's 33.5M. Our code is released at\n\\url{https://github.com/Chaos96/fourierft}.", + "main_content": "Introduction Large foundation models (LFMs) have demonstrated exceptional performance on tasks of multiple domains, including natural language processing (NLP) (Liu et al., 2019; He et al., 2020; Radford et al., 2019; Brown et al., 2020; Li et al., 2022) and computer vision (CV) (Liu et al., 2023a;b; Singh et al., 2022; Rombach et al., 2022). Owing to their impressive capabilities, fine-tuning LFMs for a wide range of downstream tasks has become prevalent (Wang et al., 2022; Taori et al., 2023; Qiu et al., 2020). *Equal contribution 1Hong Kong University of Science and Technology (Guangzhou) 2Hong Kong University of Science and Technology 3Sun Yat-sen University 4International Digital Economy Academy 5AI Lab, Tencent. Correspondence to: Jia Li . Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). Figure 1. Summary of the performance (y-axis) of fine-tuning methods with different numbers (x-axis) of trainable parameters on NLP (left) and CV (right) tasks. The left side shows the instruction tuning task, where the LLaMA2-7B model is fine-tuned with Alpaca and evaluated by GPT-4. The right side shows the image classification task, where the Vision Transformer (ViT) is fine-tuned and tested on the DTD dataset. Black circles (\u25cf) represent the Full Fine-tuning (FF) method. Orange circles (\u25cf) represent the LoRA method with r = {32, 64, 128} (left) and r = {8, 16, 32} (right). Blue circles (\u25cf) represent our proposed method with n = {1000, 2000} (left) and n = {3000, 10000} (right).
Under the full fine-tuning paradigm, the new model adapted to each customized task typically contains as many parameters as the original model (Qiu et al., 2020; Raffel et al., 2020; Chen et al., 2024; Gao et al., 2024). As models grow larger and customization needs expand, the demand for storing fine-tuned checkpoints rises, resulting in both costly storage and memory consumption. As a popular way to address this issue, LoRA (Hu et al., 2021) represents the weight change with two low-rank matrices A and B, i.e., W0 + \u2206W = W0 + BA. Despite LoRA's superb performance, its still-large number of trainable parameters brings high IT infrastructure consumption, which affects both public communities and individual users. For the former, an intuitive example is that a LoRA adapter (fine-tuned weights) for a specific style of the stable diffusion model (Rombach et al., 2022) requires about 40MB of memory. This necessitates that LFM communities (e.g., Civitai (Civitai, 2024)) bear high storage and bandwidth costs to cater to a large user base. For the latter, fewer parameters mean direct RAM savings when loading fine-tuned weights in mobile apps, enabling sufficient customization for individual users (Zhou et al., 2022). To this end, we naturally ask the question: How can we aggressively compress trainable parameters even further for fine-tuning LFMs?
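The question above is motivated by the compressive power of the Fourier basis: a handful of spectral coefficients can reconstruct a frequency-sparse signal exactly. A self-contained toy demonstration (pure-Python DFT on a hypothetical 1D signal, not the paper's implementation):

```python
import cmath
from math import cos, sin, pi

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * pi * k * j / n) for j in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * pi * k * j / n) for k in range(n)) / n).real
            for j in range(n)]

n = 32
# smooth signal: only 4 of its 32 DFT coefficients are nonzero
x = [cos(2 * pi * 3 * j / n) + 0.5 * sin(2 * pi * 5 * j / n) for j in range(n)]

X = dft(x)
keep = set(sorted(range(n), key=lambda k: -abs(X[k]))[:4])
X_sparse = [X[k] if k in keep else 0.0 for k in range(n)]

x_rec = idft(X_sparse)
err = max(abs(a - b) for a, b in zip(x, x_rec))  # tiny: 4 numbers recover all 32 samples
```

Weight changes of real models are of course not exactly frequency-sparse, but, as argued next, sparse spectral coefficients can still recover them effectively.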
More importantly, when dealing with more general (non-image) matrices that lack strong spatial semantics and are not frequency-sparse, Fourier transform can still handle recovery effectively (Chen & Chi, 2013; Yang & Xie, 2016). Motivated by this, we investigate the potential for updating the weight change \u2206W with its sparse spectral coefficients for fine-tuning LFMs. In this paper, we aim to aggressively reduce the number of trainable parameters for fine-tuning LFMs. To this end, we propose FourierFT (Fourier Transform for Fine-Tuning), which treats the weight change \u2206W as a matrix in the spatial domain, and learns its sparse spectral coefficients. Specifically, we first randomly select n spectral entries that are shared across all layers. For each layer, FourierFT learns n spectral coefficients located at these n selected entries and then directly applies inverse discrete Fourier transform to compute the updated \u2206W. Therefore, fine-tuning a pretrained model with Lt layers only requires storing 2n entry parameters and nLt coefficient parameters for FourierFT. Empirically, we compare our method with state-of-the-art LoRA variants and other parameter-efficient fine-tuning methods on various tasks including (1) natural language understanding (on the GLUE benchmark), (2) natural language generation (on the E2E benchmark), (3) instruction tuning (with LLaMA-family models), and (4) image classification (with vision transformers). FourierFT can always achieve comparable or even better performance than LoRA, with about 6.0%, 9.4%, 0.2% and 9.2% of LoRA\u2019s trainable parameters for these 4 tasks, respectively. For example in Figure 1, on the instruction tuning task, our FourierFT method outperforms LoRA with only 64K trainable parameters. Moreover, it achieves a comparable score to Full Fine-tuning with only 128K parameters. 2. Related Works Parameter-Efficient Fine-Tuning. 
With the rapid expansion of large foundation models (LFM), it has become challenging and important to efficiently adapt them for specific tasks. To this end, numerous methods for parameter-efficient fine-tuning (PEFT) are proposed, demonstrating impressive capabilities in both efficiency and accuracy. Existing PEFT methods are broadly partitioned into two categories: nonweight-based and weight-based methods. Non-weight-based methods do not optimize pre-trained LFMs at the weight level. Instead, they achieve fine-tunings by introducing additional modules or optimizing prompts and prefixes. Adapter tuning (He et al., 2021; Rebuffi et al., 2017; Pfeiffer et al., 2020; Houlsby et al., 2019; R\u00a8 uckl\u00b4 e et al., 2020; Lin et al., 2020) aims to introduce light-weighted neural modules, called adapters, between pre-trained layers of the base model. These methods keep the pre-trained weights frozen and efficiently fine-tune the adapters for customized tasks. Prompt tuning (Brown et al., 2020; Lester et al., 2021; Gao et al., 2020; Diao et al., 2022) and prefix tuning (Li & Liang, 2021) insert additional prompts or prefix tokens to the layers of the base model. Weight-based methods, represented by LoRA (Hu et al., 2021), introduce and then update weight changes that can be merged with the original weights to avoid inference latency. LoRA\u2019s innovation lies in the multiplication of low-rank matrices to approximate weight changes. Building upon this, AdaLoRA (Zhang et al., 2023) extends the LoRA method by distributing the parameter budget across weight matrices with importance scores. Additionally, Q-LoRA (Dettmers et al., 2023) proposes to back-propagate gradients upon LoRA through a quantized pre-trained model with 4-bit NormalFloat. Here, we focus on weight-based methods and achieve huge parameter reduction with the powerful expressiveness of Fourier basis, rather than following the low-rank structure. Sparse Fourier Transform in Deep Learning. 
Sparse Fourier transform (SFT) has flourished in various fields of deep learning (DL). The SFT technique mainly uses sparse spectral coefficients at significant (Xu et al., 2020; Ehrlich & Davis, 2019; Gueguen et al., 2018; Tang et al., 2022) or even random (Lin et al., 2014; Rawat et al., 2019; Herrmann, 2010) spectral entries for representation learning. One important application of this technique is matrix recovery. Patel et al. (2011) design a gradient-based compressed sensing method to recover images from their sparse Fourier information. Shechtman et al. (2014) propose an efficient phase retrieval method that improves data recovery using sparse Fourier coefficients. Importantly, previous works (Chen & Chi, 2013; Yang & Xie, 2016; Gao et al., 2022) show that even when the original data is not frequency-sparse, SFT can effectively recover the data with extremely few parameters. Although previous works lack studies on the recovery of the weight matrices of DL models with SFT, the aforementioned methods provide potential support for this work. Figure 2. Overview of LoRA (left) and our FourierFT (right) method. In LoRA, only low-rank (r) matrices A and B are trained. The weight change is represented by their multiplication, i.e., \u2206W = BA. For each pre-trained weight W, the theoretical number of trainable parameters in LoRA is r \u00d7 (d1 + d2).
In FourierFT, we first randomly generate the spectral entry matrix in R^{2\u00d7n}, which is shared across all layers to reduce parameter storage requirements. The complete spectral matrix is formed by a trainable coefficient vector in R^n located at the selected entries and 0s at the remaining entries. We obtain the weight change \u2206W by directly performing the inverse discrete Fourier transform (IDFT) on the updated spectral matrix. For all L adapted layers, FourierFT needs to store n \u00d7 (2 + L) parameters. 3. Method We present FourierFT (depicted in Figure 2), a parameter-efficient fine-tuning method based on the discrete Fourier transform. FourierFT follows the principle of only learning the change in the pre-trained weight, as proposed by LoRA (Hu et al., 2021). However, unlike LoRA, FourierFT does not adopt the low-rank structure but learns a set of spectral coefficients of the Fourier basis. Specifically, we randomly initialize the spectral entry matrix, which is frozen and shared across all layers. We make the spectral coefficients located at the selected entries trainable; these jointly form the spectral matrix. Lastly, we apply the inverse discrete Fourier transform to the spectral matrix, yielding its spatial-domain counterpart as the updated weight change. 3.1. Forward Pass We follow the paradigm of only learning weight changes, as adopted by LoRA-based methods (Hu et al., 2021; Dettmers et al., 2023; Zhang et al., 2023). This avoids inference latency, since the pre-trained weight and its change can be merged. Formally, we define each pre-trained weight matrix as W0 \u2208 R^{d1\u00d7d2}, and the weight change for fine-tuning as \u2206W \u2208 R^{d1\u00d7d2}. LoRA parameterizes \u2206W as a low-rank decomposition in the forward pass: h = W0 x + \u2206W x = W0 x + BA x, (1) where B \u2208 R^{d1\u00d7r} and A \u2208 R^{r\u00d7d2} with rank r \u226a min(d1, d2) are trainable matrices.
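A toy sketch of the low-rank reparameterization in Eq. 1 (illustrative dimensions and values only, not the released LoRA code):

```python
def matmul(A, B):
    # naive product of two lists-of-lists matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d1, d2, r = 4, 6, 1                      # r << min(d1, d2)
B = [[1.0], [2.0], [0.0], [-1.0]]        # d1 x r
A = [[0.5, 0.0, 1.0, 0.0, -0.5, 2.0]]    # r x d2

delta_W = matmul(B, A)       # full d1 x d2 update built from the two small factors
lora_params = r * (d1 + d2)  # 10 trainable numbers, versus d1*d2 = 24 for a dense update
```

Since the update never needs more than r*(d1 + d2) trainable numbers, storage grows linearly with d; FourierFT's point of departure is that its count n can be chosen independently of d altogether.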
The advantage of FourierFT is that the orthogonal and expressive Fourier basis enables recovery of informative weight changes. This promisingly suggests achieving comparable performance to LoRA with significantly fewer parameters. We first randomly initialize the entry matrix E \u2208 R^{2\u00d7n} containing discrete 2D spectral entries. Then we randomly initialize the coefficients c \u2208 R^n with a standard Gaussian distribution. The proposed forward pass is:

F = TODENSE(E, c),  (2)
S_{p,q} = \sum_{j=0}^{d1-1} \sum_{k=0}^{d2-1} F_{j,k} e^{i 2\pi (p j / d1 + q k / d2)},  (3)
h = W0 x + \u2206W x = W0 x + \u03b1 R(S) x.  (4)

Specifically, TODENSE in Eq. 2 denotes constructing the spectral matrix F \u2208 R^{d1\u00d7d2}, i.e., F_{j,k} = c_l if j = E_{0,l} and k = E_{1,l}, and F_{j,k} = 0 otherwise. Eq. 3 computes the spatial-domain matrix S via the inverse discrete Fourier transform, where i denotes the imaginary unit. Finally, in Eq. 4, we take the real part of the complex matrix S (denoted as R(S)) and scale it by \u03b1. Kindly note that all layers train their own c vectors, while sharing the matrix E and the value \u03b1. The pseudocode for FourierFT is shown as Algorithm 1, adhering to the PyTorch style.

Algorithm 1 PyTorch-style pseudocode for FourierFT.

class FourierFT(nn.Module):
    def __init__(
        self,
        n: int = 100,                  # number of trainable coefficients
        alpha: float = 300.0,          # scaling
        d1: int = 4096,                # input dimension
        d2: int = 4096,                # output dimension
        base_layer: nn.Module = None,  # pre-trained layer
    ):
        super().__init__()
        self.d1, self.d2, self.n, self.alpha = d1, d2, n, alpha
        self.base_layer = base_layer
        # entry initialization (no frequency bias): n flat indices -> (row, col) pairs
        idx = torch.randperm(d1 * d2)[:n]
        self.E = torch.stack([idx // d2, idx % d2], dim=0)
        # spectral coefficient initialization
        self.c = nn.Parameter(torch.randn(n), requires_grad=True)

    def forward(self, x: torch.Tensor):
        # build the dense spectral matrix (Eq. 2)
        F = torch.zeros(self.d1, self.d2, device=x.device)
        F[self.E[0, :], self.E[1, :]] = self.c
        # compute Delta_W via the inverse DFT (Eq. 3)
        Delta_W = torch.fft.ifft2(F).real * self.alpha
        # merge with the frozen pre-trained layer (Eq. 4)
        h = self.base_layer(x)
        h = h + torch.einsum('ijk,kl->ijl', x, Delta_W)
        return h

Initialization for the Entry Matrix E. Previous works lack studies on the importance of the spectral entries in the weight change. Thus, we fill this gap by introducing an adjustable frequency bias, causing entries near a favored frequency to be sampled more often. In addition to randomly sampling entries in the full d1 \u00d7 d2-sized spectral matrix (i.e., no bias), we also implement entry sampling with a bias towards a favored central frequency, e.g., low, middle, or high frequencies. Formally, we apply the Gaussian band-pass filter (Gonzales & Wintz, 1987) to model the sampling probability for the entry (u, v), 0 \u2264 u \u2264 d1-1, 0 \u2264 v \u2264 d2-1:

p(u, v) = exp( -((D^2 - f_c^2) / (D W))^2 ),  (5)

where D represents the distance from the point (u, v) to the origin (center of the matrix), f_c is the favored central frequency, and W represents the bandwidth. In Figure 3, we visualize the sampling probability map of a 768 \u00d7 768-sized spectral matrix with different f_c and W = 200. Figure 3. Visualization of entry sampling probability at different favored central frequencies f_c (shown for f_c = 0, 100, 200, 350, 480; probability scale 0 to 1). Kindly note that unless specially stated, FourierFT defaults to entry initialization with no frequency bias. 3.2. Parameter Summary We summarize the number of trainable parameters for LoRA and FourierFT in Table 1. LoRA relies on a pair of trainable matrices A and B for each layer. Let the number of layers for fine-tuning be Lt. The total number of parameters is given in Table 1.
Theoretical number of trainable parameters and storage requirements for fine-tuning. For both LoRA and FourierFT, only the query and value layers are tuned within the transformer architectures. The configurations exactly chosen in the 'Experiments' section are highlighted.

Base Model | LoRA r | LoRA # trainable | LoRA bytes | FourierFT n | FourierFT # trainable | FourierFT bytes
RoBERTa Base | 4 | 147K | 574KB | 200 | 4.8K | 18.8KB
RoBERTa Base | 8 | 295K | 1.13MB | 1000 | 24K | 94KB
RoBERTa Large | 4 | 393K | 1.5MB | 200 | 9.6K | 36.5KB
RoBERTa Large | 8 | 786K | 3MB | 1000 | 48K | 183KB
GPT-2 Medium | 4 | 350K | 1.34MB | 500 | 24K | 94KB
GPT-2 Medium | 8 | 786K | 3MB | 1000 | 48K | 188KB
GPT-2 Large | 4 | 737K | 2.81MB | 500 | 36K | 141KB
GPT-2 Large | 8 | 1.47M | 5.74MB | 1000 | 72K | 282KB
LLaMA-2 7B | 16 | 8.39M | 32.8MB | 1000 | 64K | 250KB
LLaMA-2 7B | 64 | 33.5M | 131.1MB | 2000 | 128K | 500KB
LLaMA-2 13B | 16 | 13.1M | 51.2MB | 1000 | 80K | 312KB
LLaMA-2 13B | 64 | 52.4M | 204.8MB | 2000 | 160K | 625KB
ViT Base | 8 | 295K | 1.13MB | 3000 | 72K | 281KB
ViT Base | 16 | 590K | 2.25MB | 10000 | 239K | 934KB
ViT Large | 8 | 786K | 2.93MB | 3000 | 144K | 563KB
ViT Large | 16 | 1.57M | 6MB | 10000 | 480K | 1.83MB

The total for LoRA is determined by the rank r and the dimension of weights d = d1 = d2: |\u0398|_LoRA = 2 \u00d7 d \u00d7 Lt \u00d7 r. For FourierFT, the total number takes the form: |\u0398|_FourierFT = n \u00d7 Lt. As an intuitive example, the RoBERTa Base model contains 12 transformer blocks with d = 768, resulting in Lt = 24 layers when we only fine-tune the query and value ones. Therefore, we have |\u0398|_LoRA = 294,912 for r = 8, and |\u0398|_FourierFT = 24,000 for n = 1000. In Table 1, we highlight the configurations where LoRA and our method achieve matched performance in subsequent experiments. We note that the parameter-efficiency advantage of FourierFT becomes more pronounced as the model's scale (depth and width) increases (e.g., RoBERTa Base to RoBERTa Large). This could be because |\u0398|_LoRA has an explicit linear relationship with the width d, unlike |\u0398|_FourierFT. 4.
Experiments In this section, we evaluate FourierFT in the domains of natural language processing (NLP) and computer vision (CV). For NLP, we implement FourierFT for fine-tuning (1) RoBERTa (Base & Large) on natural language understanding (GLUE, (Wang et al., 2018)), (2) GPT-2 (Medium & Large) on natural language generation (E2E, (Novikova et al., 2017)) and (3) LLaMA-family models (7B & 13B) on instruction tuning. For CV, we apply FourierFT to fine-tune (4) vision transformers (Base & Large) on image classification. Finally, we conduct ablation studies to analyze the effect of frequency bias, the parameter scalability, and the expressiveness of the Fourier basis.

Table 2. Performance of various fine-tuning methods with RoBERTa Base (RoBbase) and RoBERTa Large (RoBlarge) models on 6 datasets of the GLUE benchmark. We report the Matthews correlation coefficient (MCC) for CoLA, Pearson correlation coefficient (PCC) for STS-B, and accuracy (Acc.) for all the remaining tasks. We report the median result of 5 runs, each using different random seeds. The best results for each dataset are shown in bold. Higher is better for all metrics on the 6 datasets.

Model & Method | # Trainable Parameters | SST-2 (Acc.) | MRPC (Acc.) | CoLA (MCC) | QNLI (Acc.) | RTE (Acc.) | STS-B (PCC) | Avg.
RoBbase (FF) | 125M | 94.8 | 90.2 | 63.6 | 92.8 | 78.7 | 91.2 | 85.2
RoBbase (BitFit) | 0.1M | 93.7 | 92.7 | 62 | 91.8 | 81.5 | 90.8 | 85.4
RoBbase (AdptD) | 0.3M | 94.2\u00b10.1 | 88.5\u00b11.1 | 60.8\u00b10.4 | 93.1\u00b10.1 | 71.5\u00b12.7 | 89.7\u00b10.3 | 83.0
RoBbase (AdptD) | 0.9M | 94.7\u00b10.3 | 88.4\u00b10.1 | 62.6\u00b10.9 | 93.0\u00b10.2 | 75.9\u00b12.2 | 90.3\u00b10.1 | 84.2
RoBbase (LoRA) | 0.3M | 95.1\u00b10.2 | 89.7\u00b10.7 | 63.4\u00b11.2 | 93.3\u00b10.3 | 78.4\u00b10.8 | 91.5\u00b10.2 | 85.2
RoBbase (AdaLoRA) | 0.3M | 94.5\u00b10.2 | 88.7\u00b10.5 | 62.0\u00b10.6 | 93.1\u00b10.2 | 81.0\u00b10.6 | 90.5\u00b10.2 | 85.0
RoBbase (DyLoRA) | 0.3M | 94.3\u00b10.5 | 89.5\u00b10.5 | 61.1\u00b10.3 | 92.2\u00b10.5 | 78.7\u00b10.7 | 91.1\u00b10.6 | 84.5
RoBbase (FourierFT) | 0.024M | 94.2\u00b10.3 | 90.0\u00b10.8 | 63.8\u00b11.6 | 92.2\u00b10.1 | 79.1\u00b10.5 | 90.8\u00b10.2 | 85.0
RoBlarge (FF) | 356M | 96.4 | 90.9 | 68 | 94.7 | 86.6 | 92.4 | 88.2
RoBlarge (AdptP) | 3M | 96.1\u00b10.3 | 90.2\u00b10.7 | 68.3\u00b11.0 | 94.8\u00b10.2 | 83.8\u00b12.9 | 92.1\u00b10.7 | 87.6
RoBlarge (AdptP) | 0.8M | 96.6\u00b10.2 | 89.7\u00b11.2 | 67.8\u00b12.5 | 94.8\u00b10.3 | 80.1\u00b12.9 | 91.9\u00b10.4 | 86.8
RoBlarge (AdptH) | 6M | 96.2\u00b10.3 | 88.7\u00b12.9 | 66.5\u00b14.4 | 94.7\u00b10.2 | 83.4\u00b11.1 | 91.0\u00b11.7 | 86.8
RoBlarge (AdptH) | 0.8M | 96.3\u00b10.5 | 87.7\u00b11.7 | 66.3\u00b12.0 | 94.7\u00b10.2 | 72.9\u00b12.9 | 91.5\u00b10.5 | 84.9
RoBlarge (LoRA) | 0.8M | 96.2\u00b10.5 | 90.2\u00b11.0 | 68.2\u00b11.9 | 94.8\u00b10.3 | 85.2\u00b11.1 | 92.3\u00b10.5 | 87.8
RoBlarge (FourierFT) | 0.048M | 96.0\u00b10.2 | 90.9\u00b10.3 | 67.1\u00b11.4 | 94.4\u00b10.4 | 87.4\u00b11.6 | 91.9\u00b10.4 | 88.0

Baselines. We compare our FourierFT method with popular parameter-efficient fine-tuning (PEFT) methods. To ensure a comprehensive and fair comparison, we prioritize replicating the setups used in previous works and reusing their reported results. The involved baselines are:
\u25cf Full Fine-tuning (FF): During fine-tuning, the base model is initialized with pre-trained weights and biases, and all parameters undergo gradient updates.
\u25cf BitFit (Zaken et al., 2021): Only the bias vectors are fine-tuned while all other parameters are frozen.
\u25cf Adapter tuning: This research line was first investigated by Houlsby et al. (2019), who propose the AdapterH method. AdapterH inserts two-layer adapters between the self-attention and the FFN modules, followed by a subsequent residual connection. We also compare against three of its variants. AdapterL (Lin et al., 2020) is more parameter-efficient, with adapter layers applied only after the MLP modules and subsequent to a LayerNorm. AdapterP (Pfeiffer et al., 2020) implements the adapter layers after the feed-forward layer; this design was chosen through a grid search over all settings related to the adapter's position, number, etc. AdapterD (R\u00fcckle et al., 2020) further enhances parameter efficiency by dropping adapter layers that are not activated.
\u25cf LoRA (Hu et al., 2021): LoRA is the state-of-the-art method for PEFT. It parameterizes incremental weight updates using trainable low-rank matrices.
\u25cf DyLoRA (Valipour et al., 2022): This method trains dynamic search-free LoRA models for the best rank choice.
\u25cf AdaLoRA (Zhang et al., 2023): This method proposes SVD-based fine-tuning and prunes redundant singular values with importance-aware rank allocation.
4.1. Natural Language Understanding Models and Datasets. We evaluate our method on the GLUE benchmark (General Language Understanding Evaluation (Wang et al., 2018)), which consists of a wide range of natural language understanding (NLU) tasks, including single-sentence classification tasks, similarity and paraphrase tasks, and natural language inference tasks. We fine-tune the pre-trained RoBERTa Base and Large foundation models (Liu et al., 2019) for evaluation. Implementation Details. For both models, FourierFT is allowed to have 1000 out of 768^2 (RoBERTa Base) and 1024^2 (RoBERTa Large) trainable spectral coefficients in each layer, i.e., n = 1000.
We randomly sample the spectral entries with no frequency bias, which is shared1 across all 24 (Base) and 48 (Large) layers. For all 6 datasets in GLUE, we tune the hyperparameters of the learning rates and the scaling values. We follow the experimental setup applied in Hu et al. (2021), which involves fine-tuning only the query and value weights in each transformer block and 1We use the value 2024 as the seed for all layers. 5 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Table 3. Results from GPT-2 Medium and Large models on the E2E benchmark. We present the result from the final epoch. For all metrics, higher values indicate better performance. * indicates that the results are taken from prior works. Best results are shown in bold. Model Method # Trainable Parameters BLEU NIST METEOR ROUGE-L CIDEr GPT-2 Medium FT* 354.92M 68.2 8.62 46.2 71.0 2.47 AdptL* 0.37M 66.3 8.41 45.0 69.8 2.40 AdptL* 11.09M 68.9 8.71 46.1 71.3 2.47 AdptH* 11.09M 67.3\u00b1.6 8.5\u00b1.07 46.0\u00b1.2 70.7\u00b1.2 2.44\u00b1.01 LoRA 0.35M 68.9\u00b1.3 8.76\u00b1.06 46.6\u00b1.1 71.5\u00b1.1 2.53\u00b1.03 FourierFT 0.048M 69.1\u00b1.1 8.82 \u00b1.05 47.0 \u00b1.3 71.8 \u00b1.1 2.51\u00b1.02 GPT-2 Large FT* 774.03M 68.5 8.78 46.0 69.9 2.45 AdptL* 0.88M 69.1\u00b1.1 8.68\u00b1.03 46.3\u00b1.0 71.4\u00b1.2 2.49\u00b1.0 AdptL* 23.00M 68.9\u00b1.3 8.70\u00b1.04 46.1\u00b1.1 71.3\u00b1.2 2.45\u00b1.02 LoRA 0.77M 70.1\u00b1.3 8.83\u00b1.02 46.8\u00b1.2 72.0\u00b1.3 2.47\u00b1.02 FourierFT 0.072M 70.2\u00b1.2 8.90\u00b1.02 47.0\u00b1.2 71.8\u00b1.1 2.50 \u00b1.02 Table 4. The average scores on MT-Bench and Vicuna assessed by GPT-4. \u2020 indicates updating the layers other than lm head. Higher score is better. 
Model | Method | # Trainable Parameters | MT-Bench | Vicuna
LLaMA1-7B | LoRA\u2020 | 159.9M | 5.05\u00b1.3 | 6.85\u00b1.4
LLaMA1-7B | LoRA | 33.5M | 4.99\u00b1.3 | 6.81\u00b1.3
LLaMA1-7B | FourierFT | 0.064M | 5.09\u00b1.6 | 6.85\u00b1.8
LLaMA1-13B | LoRA\u2020 | 250.3M | 5.28\u00b1.6 | 7.02\u00b1.3
LLaMA1-13B | LoRA | 52.4M | 5.21\u00b1.4 | 6.97\u00b1.4
LLaMA1-13B | FourierFT | 0.08M | 5.23\u00b1.3 | 7.14\u00b1.5
LLaMA2-7B | LoRA\u2020 | 159.9M | 5.19\u00b1.1 | 7.38\u00b1.3
LLaMA2-7B | LoRA | 33.5M | 5.20\u00b1.3 | 7.35\u00b1.6
LLaMA2-7B | FourierFT | 0.064M | 5.18\u00b1.3 | 7.49\u00b1.4
LLaMA2-13B | LoRA\u2020 | 250.3M | 5.78\u00b1.2 | 7.89\u00b1.5
LLaMA2-13B | LoRA | 52.4M | 5.80\u00b1.2 | 7.89\u00b1.6
LLaMA2-13B | FourierFT | 0.08M | 5.82\u00b1.3 | 7.92\u00b1.5
fully fine-tuning the classification head. We provide the hyperparameters in Table 9 in the Appendix. Results. Results are summarized in Table 2. Following Hu et al. (2021), Zhang et al. (2023) and Valipour et al. (2022), we specify the number of trainable parameters for the fine-tuned layers, excluding the classification head. We report the median over 5 random seeds, where the best epoch is selected for each run. In general, FourierFT achieves better or on-par performance compared with baseline methods while using significantly fewer trainable parameters. Notably, FourierFT outperforms all baselines, including full fine-tuning, for RoBERTa Base on CoLA and for RoBERTa Large on RTE. As mentioned in Section 3.2, the parameter count of LoRA depends on both the width and the depth of the model, resulting in a larger count growth (LoRA: 0.8M/0.3M \u2248 2.7; ours: 0.048M/0.024M = 2) compared to FourierFT. Nevertheless, FourierFT still performs comparably to LoRA, demonstrating the potential scalability of our method when applied to even larger models. 4.2. Natural Language Generation Models and Datasets. We evaluate the performance of FourierFT on the E2E natural language generation (NLG) task (Novikova et al., 2017). We fine-tune the GPT-2 (Radford et al., 2019) Medium (354M) and Large (774M) models, which are both decoder-only and have 24 and 36 transformer blocks, respectively.
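The parameter-count comparison in Section 4.1 above can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes LoRA rank r = 8 and adaptation of the query and value matrices in every transformer block (24 adapted matrices for RoBERTa Base, 48 for Large), which is consistent with the 0.3M/0.8M and 0.024M/0.048M figures quoted; the helper names are our own.

```python
def lora_params(num_matrices, hidden, r):
    # each adapted d x d weight gets two low-rank factors: d*r + r*d = 2*r*d
    return num_matrices * 2 * r * hidden

def fourierft_params(num_matrices, n):
    # n trainable spectral coefficients per adapted weight, independent of width
    return num_matrices * n

# RoBERTa Base: 12 blocks x (W_q, W_v) = 24 adapted matrices, hidden size 768
base_lora = lora_params(24, 768, 8)      # ~0.3M
base_ours = fourierft_params(24, 1000)   # 0.024M
# RoBERTa Large: 24 blocks x 2 = 48 adapted matrices, hidden size 1024
large_lora = lora_params(48, 1024, 8)    # ~0.8M
large_ours = fourierft_params(48, 1000)  # 0.048M

print(large_lora / base_lora)  # ~2.67: LoRA grows with both width and depth
print(large_ours / base_ours)  # 2.0: FourierFT grows with depth only
```

The ratio gap (roughly 2.7 versus exactly 2) is the count-growth argument made in the text: FourierFT's per-layer cost n does not scale with the hidden dimension.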
The E2E benchmark contains roughly 42,000 training, 4,600 validation and 4,600 test samples from the restaurant domain. Implementation Details. We report prior results for baselines other than LoRA. For both LoRA and our method, we fine-tune the GPT-2 Medium and Large models with a linear learning rate scheduler for 5 epochs, and we tune the batch size and learning rate. We report the average results over 3 runs, where the last epoch is selected for each run. We provide the hyperparameters in Table 10 in the Appendix. Results. We show the results in Table 3. We note that FourierFT achieves the best performance on most metrics. More importantly, FourierFT only requires 13.7% and 9.4% of the parameter count of LoRA for the GPT-2 Medium and Large models, respectively. 4.3. Instruction Tuning Models and Datasets. Instruction tuning, as described in (Ouyang et al., 2022; Wei et al., 2021; Mishra et al., 2021), refers to the process of fine-tuning a language model on a collection of paired prompts and responses. We apply LoRA and FourierFT to fine-tune the LLaMA (Touvron et al., 2023a) and LLaMA2 (Touvron et al., 2023b) families. Specifically, we consider LLaMA-7B, LLaMA-13B, LLaMA2-7B and LLaMA2-13B as base models, which are fine-tuned on the Alpaca dataset (Taori et al., 2023). Alpaca contains 51K instruction-following demonstrations generated from text-davinci-003 (GPT-3.5) (Wang et al., 2022). For evaluation, we use the fine-tuned models to generate responses to pre-defined questions from MT-Bench (Zheng et al., 2023) and Vicuna Eval (Chiang et al., 2023). GPT-4 takes these answers as input and assigns each a score out of 10. Implementation Details. For LoRA, we use r = 64 and apply two configurations: (1) updating all linear layers except the language modeling head (lm head); (2) updating only the WQ and WV matrices. For FourierFT, we only adopt the latter configuration with n = 1000.
Table 5. Fine-tuning results with ViT Base and Large models on different image classification datasets. We report the accuracy (%) after 10 epochs. Avg. represents the average accuracy of each method on all datasets. The best performance is shown in bold.
Model | Method | # Trainable Parameters | OxfordPets | StanfordCars | CIFAR10 | DTD | EuroSAT | FGVC | RESISC45 | CIFAR100 | Avg.
ViT-Base | LP | - | 90.28\u00b10.43 | 25.76\u00b10.28 | 96.41\u00b10.02 | 69.77\u00b10.67 | 88.72\u00b10.13 | 17.44\u00b10.43 | 74.22\u00b10.10 | 84.28\u00b10.11 | 68.36
ViT-Base | FF | 85.8M | 93.14\u00b10.40 | 79.78\u00b11.15 | 98.92\u00b10.05 | 77.68\u00b11.21 | 99.05\u00b10.09 | 54.84\u00b11.23 | 96.13\u00b10.13 | 92.38\u00b10.13 | 86.49
ViT-Base | LoRA | 581K | 93.19\u00b10.36 | 45.38\u00b10.41 | 98.78\u00b10.05 | 74.95\u00b10.40 | 98.44\u00b10.15 | 25.16\u00b10.16 | 92.70\u00b10.18 | 92.02\u00b10.12 | 77.58
ViT-Base | FourierFT | 72K | 93.21\u00b10.26 | 46.11\u00b10.24 | 98.58\u00b10.07 | 75.09\u00b10.37 | 98.29\u00b10.04 | 27.51\u00b10.64 | 91.97\u00b10.31 | 91.20\u00b10.14 | 77.75
ViT-Base | FourierFT | 239K | 93.05\u00b10.34 | 56.36\u00b10.66 | 98.69\u00b10.08 | 77.30\u00b10.61 | 98.78\u00b10.11 | 32.44\u00b10.99 | 94.26\u00b10.20 | 91.45\u00b10.18 | 80.29
ViT-Large | LP | - | 91.11\u00b10.30 | 37.91\u00b10.27 | 97.78\u00b10.04 | 73.33\u00b10.26 | 92.64\u00b10.08 | 24.62\u00b10.24 | 82.02\u00b10.11 | 84.28\u00b10.11 | 72.96
ViT-Large | FF | 303.3M | 94.43\u00b10.56 | 88.90\u00b10.26 | 99.15\u00b10.05 | 81.79\u00b11.01 | 99.04\u00b10.08 | 68.25\u00b11.63 | 96.43\u00b10.07 | 93.58\u00b10.19 | 90.20
ViT-Large | LoRA | 1.57M | 94.82\u00b10.09 | 73.25\u00b10.36 | 99.13\u00b10.03 | 81.79\u00b10.45 | 98.63\u00b10.07 | 42.32\u00b10.98 | 94.71\u00b10.25 | 94.87\u00b10.10 | 84.94
ViT-Large | FourierFT | 144K | 94.46\u00b10.28 | 69.56\u00b10.30 | 99.10\u00b10.04 | 80.83\u00b10.43 | 98.65\u00b10.09 | 39.92\u00b10.68 | 93.86\u00b10.14 | 93.31\u00b10.09 | 83.71
ViT-Large | FourierFT | 480K | 94.84\u00b10.05 | 79.14\u00b10.67 | 99.08\u00b10.01 | 81.88\u00b10.50 | 98.66\u00b10.03 | 51.28\u00b10.68 | 95.20\u00b10.07 | 93.37\u00b10.11 | 86.68
To ensure the feasibility of training on a single GPU, we deploy the quantization method of Dettmers et
al. (2023) for fine-tuning. We train with both methods for only one epoch, and report the average scores over all answers. We provide the hyperparameter setup in Table 11 in the Appendix. Results. The results are shown in Table 4. We find that the expressive power of the 13B models is much stronger than that of the 7B models, regardless of which fine-tuning method is used. Moreover, FourierFT closely matches or slightly exceeds LoRA\u2019s performance with less than 0.2% of its parameters. We provide practical examples containing questions, answers and reviews in Appendix D. 4.4. Image Classification Models and Datasets. We evaluate our method on the image classification task. We adopt the Base and Large versions of the popular CV foundation model, Vision Transformer (ViT) (Dosovitskiy et al., 2020). The ViTs are pretrained on the ImageNet-21K dataset (Ridnik et al., 2021). The datasets for fine-tuning include OxfordPets (37), CIFAR10 (10), DTD (47), EuroSAT (10) and RESISC45 (45) with small label spaces, as well as StanfordCars (196), FGVC (100) and CIFAR100 (100) with large label spaces; numbers in parentheses indicate the class count of each dataset. Detailed information is provided in Table 8 in the Appendix. Implementation Details. We include three baselines for evaluation: Full Fine-tuning (FF), Linear Probing (LP, fine-tuning the classification head only), and LoRA. For both LoRA and our method, only the query and value matrices of ViT are updated. We use r = 16 for LoRA and n = {3000, 10000} for FourierFT. We tune the learning rates and weight decay for all methods, and set the maximum number of training epochs to 10. We provide the hyperparameters in Table 12 in the Appendix. Results. Table 5 summarizes the results for 8 image classification datasets with the ViT Base and Large models. Both LoRA and FourierFT significantly outperform Linear Probing, demonstrating their effectiveness in the CV domain.
Our method obtains matched performance using 12.4% and 9.2% of LoRA\u2019s parameter count, with the ViT Base and Large models, respectively. Notably, when we increase the parameter count of FourierFT to 41.1% (ViT Base) and 30.6% (ViT Large) of LoRA\u2019s, it can outperform LoRA by 3.5% and 2.0%, respectively. Moreover, our method can even (slightly) outperform the Full Fine-tuning method on OxfordPets and DTD with the ViT Large model. 4.5. Study Effect of Frequency Bias. We examine how the performance is affected by the frequency bias, i.e., the central frequency fc in Eq. 5. We directly apply the optimal hyperparameters searched in Table 2 and fine-tune the RoBERTa Base on the MRPC, STS-B, CoLA and RTE datasets. From Figure 5, we note that the fine-tuning performance of FourierFT without any frequency bias can surpass most cases that are restricted by the central frequency bias. This indicates the universality of our method. Surprisingly, we find that it is always possible to obtain results better than \u201cNo bias\u201d by traversing the fc values. Since this traversal is not efficient, we do not conduct further exploration in this paper. However, we believe that making fc trainable will be a promising new direction for improving FourierFT. Parameter Scalability. We explore the relationship between the number of trainable parameters and the performance of LoRA and our method. We use the set of ranks r = {1, 2, 4, 6, 8, 15} for LoRA and n = {50, 100, 200, 1000, 6144, 12288} for FourierFT on 6 tasks of the GLUE benchmark. For both LoRA and ours, the learning rate and scaling hyperparameters are tuned. For fairness, we ensure that the number of trials for hyperparameter search is 30 for both methods.
Figure 4. Performance on the GLUE benchmark with RoBERTa Base vs. number of trainable parameters (each layer) of LoRA and ours. For all 6 datasets, we apply the setting of r = {1, 2, 4, 6, 8, 15} for LoRA and n = {50, 100, 200, 1000, 6144, 12288}. (Panels plot MRPC accuracy, CoLA Matthews correlation, RTE accuracy, STS-B Pearson correlation, SST-2 accuracy, and QQP accuracy against ln # trainable parameters.)
Figure 5. Results on 4 datasets in GLUE with different fc values, compared against the \u201cNo bias\u201d setting (RTE and MRPC accuracy, STS-B Pearson correlation, CoLA Matthews correlation).
As shown in Figure 4, our method outperforms LoRA on all 6 datasets. In detail, our method is significantly better than LoRA with the same parameter count, i.e., {r = 4, n = 6144} and {r = 8, n = 12288}. Moreover, we observe that a larger number of parameters does not always bring performance gains for LoRA. On the contrary, increasing n consistently improves the accuracy of FourierFT. On most tasks, FourierFT with n = 50 can achieve comparable or even better (MRPC, CoLA, RTE) performance than LoRA with r = 1. In this case, the parameter count of LoRA is about 31\u00d7 that of ours. Basis Expressiveness. The inverse discrete Fourier transform (IDFT) in Eq. 3 is equivalent to the matrix multiplication (Lu et al., 2021): S = B_f F B_f^\u22ba, where B_f is the transformation matrix of the IDFT that contains the Fourier basis. To evaluate its expressiveness, we replace the Fourier basis with a random and an orthogonal basis, respectively. Specifically, for F \u2208 R^(d1\u00d7d2), we initialize random bases B_r^1 \u2208 R^(d1\u00d7d1) and B_r^2 \u2208 R^(d2\u00d7d2) from the standard Gaussian distribution. Then Eq. 3 becomes S = B_r^1 F B_r^2.
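The stated equivalence between the 2-D inverse DFT and a matrix product with the Fourier basis can be checked numerically. The snippet below is a standalone verification written for this discussion, not code from the paper; for a rectangular F, two basis matrices of sizes d1 and d2 are needed (for square F they coincide, matching the B_f F B_f^T form).

```python
import numpy as np

def idft_basis(n):
    # transformation matrix of the 1-D inverse DFT: B[j, k] = exp(2*pi*i*j*k/n) / n
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n) / n

d1, d2 = 6, 8
rng = np.random.default_rng(1)
F = rng.standard_normal((d1, d2)) + 1j * rng.standard_normal((d1, d2))

B1, B2 = idft_basis(d1), idft_basis(d2)
S_matmul = B1 @ F @ B2.T   # matrix form of the 2-D IDFT
S_fft = np.fft.ifft2(F)    # library 2-D IDFT

print(np.allclose(S_matmul, S_fft))  # True
```

Swapping `idft_basis` for a Gaussian or an orthogonalized random matrix reproduces the random-basis and orthogonal-basis variants compared in the study.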
A similar way is used for the orthogonal basis. We compare FourierFT with the random basis (R-B) and orthogonal basis (O-B) on the GLUE benchmark. Table 6 shows the results. We note that the Fourier basis used in our method outperforms the random and orthogonal basis. In addition, the expressive power of the orthogonal basis is much stronger than that of the random basis. The stronger expressive power of the Fourier basis compared to the general orthogonal basis may be attributed to its effective capture of the spectral information of \u2206W. Table 6. Results with three types of basis. Model RTE CoLA Ours R-B O-B Ours R-B O-B Base 79.1 72.7(\u21938.1%) 75.6(\u21934.4%) 63.8 58.7(\u21938.0%) 60.0(\u21936.0%) Large 87.4 81.8(\u21936.4%) 83.6(\u21934.3%) 67.1 64.8(\u21933.4%) 66.1(\u21931.5%) 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03008v1.json b/abs_9K/test_abstract_short_2405.03008v1.json new file mode 100644 index 0000000000000000000000000000000000000000..680ffcd684a95e824c2806a07c78c4f986ef7937 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03008v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.03008v1", + "title": "DVMSR: Distillated Vision Mamba for Efficient Super-Resolution", + "abstract": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. 
The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Moudles(ViMM) together\nwith a residual connection. To achieve efficiency improvement while maintaining\ncomparable performance, we employ a distillation strategy to the vision Mamba\nnetwork for superior performance. Specifically, we leverage the rich\nrepresentation knowledge of teacher network as additional supervision for the\noutput of lightweight student networks. Extensive experiments have demonstrated\nthat our proposed DVMSR can outperform state-of-the-art efficient SR methods in\nterms of model parameters while maintaining the performance of both PSNR and\nSSIM. The source code is available at https://github.com/nathan66666/DVMSR.git", + "authors": "Xiaoyan Lei, Wenlong ZHang, Weifeng Cao", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. 
Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Moudles(ViMM) together\nwith a residual connection. To achieve efficiency improvement while maintaining\ncomparable performance, we employ a distillation strategy to the vision Mamba\nnetwork for superior performance. Specifically, we leverage the rich\nrepresentation knowledge of teacher network as additional supervision for the\noutput of lightweight student networks. Extensive experiments have demonstrated\nthat our proposed DVMSR can outperform state-of-the-art efficient SR methods in\nterms of model parameters while maintaining the performance of both PSNR and\nSSIM. The source code is available at https://github.com/nathan66666/DVMSR.git", + "main_content": "Introduction Single image super-resolution (SR) is a key challenge in computer vision and image processing, aiming to reconstruct a high-resolution image from a low-resolution input. Effective super-resolution aims to improve the efficiency of the SR model while maintaining reconstruction performance. *Corresponding author. Figure 1. PSNR results vs. the total number of parameters of different methods for image SR on Set5. Since the introduction of deep learning into super-resolution tasks [18], many CNN-based methods have been proposed [16, 20, 21, 46, 47, 51, 63] to improve the performance. A series of approaches [20, 37, 39, 46, 47, 50, 53, 67, 114] have been proposed for building efficient models for image SR. The majority of these efficient models focus on five factors: runtime, parameters, FLOPS, activations, and depths. To further promote the development of efficient SR, ICCV held the first competition in the AIM 2019 challenge [122]. The information multi-distillation network (IMDN) [39] proposes cascaded information multi-distillation blocks to improve the feature extraction module, which won first place in this competition.
After that, the winning solution of the AIM 2020 challenge [124], the residual feature distillation network (RFDN) [67], further improves IMDN via residual learning in the main block. In the efficient SR track of the NTIRE 2022 challenge [45], the winning solution, the residual local feature network (RLFN) [50], removes the hierarchical distillation connections of the residual feature distillation block (RFDB) [67] to reduce inference time. In the efficient SR track of the NTIRE 2023 challenge [114], the winning solution utilizes a multi-stage lightweight training strategy that combines distillation and pruning to reduce both time consumption and model size. The Transformer model, initially successful in natural language processing [100], has attracted interest from the computer vision community. Its effectiveness in high-level visual tasks (e.g., image classification [22, 72, 103]) has demonstrated its potential in super-resolution [12, 64]. Recently, Mamba [24] has demonstrated superior performance over Transformers across various sizes on large-scale real data, and exhibits linear scalability with sequence length. Despite pioneering works adopting Mamba for vision tasks [24, 85, 112], the exploration of its potential (e.g., long-range modeling capability and efficiency) in low-level vision is still in its initial stages. Different from CNN-based and transformer-based methods, our goal is to explore the long-range modeling capability and efficiency of Mamba-based methods for efficient SR. In this paper, we employ Vision Mamba as the basic architecture to enhance the model\u2019s long-range modeling capability and efficiency. Our DVMSR consists of several stacked Residual State Space Blocks (RSSB), each containing several Vision Mamba Modules (ViMM). The ViMM includes a unidirectional SSM, a residual connection, and a SiLU activation function.
These elements work together to accelerate model convergence and enhance model accuracy and efficiency. As shown in Figure 2, our method achieves a larger perceptual range than other methods. Furthermore, we utilize a distillation strategy to enhance the model\u2019s efficiency: we introduce a Mamba network with a larger number of parameters as the teacher network, extracting its knowledge to guide the learning of the student network. Extensive experiments and ablation studies have shown the effectiveness of our proposed method. Our contributions can be summarized as follows: 1. By leveraging the long-range modeling capability of Vision Mamba, we propose a lightweight model with unidirectional state space models (SSMs) for efficient super-resolution. 2. We propose a feature distillation strategy to enhance the efficiency of Vision Mamba for efficient super-resolution. 3. Extensive experiments have shown that our proposed method outperforms existing state-of-the-art (SOTA) methods in terms of parameters while maintaining comparable PSNR and SSIM performance. 2. Related Work 2.1. Lightweight Super Resolution SRCNN [18] marks the inaugural application of deep learning algorithms to Single Image Super-Resolution (SISR) [11, 12]. A series of works have explored applying SR methods in real scenarios, such as GAN-based SR [56, 128, 129], degradation modeling [107, 126, 130], multi-task learning [132] and systematic evaluation [131]. In real-world SR deployments, the computing power of the target devices, such as edge devices, is often limited. In this case, the efficiency of the SR network becomes an important aspect. Efficient Image Super-Resolution aims to reduce the computational effort and parameters of the SR network while achieving faster inference and maintaining high performance. FSRCNN [20] reduces unnecessary computational cost by utilizing a deconvolution layer as the upsampling layer.
VDSR [47] is introduced to further improve super-resolution (SR) performance. DRCN [46] achieves parameter reduction through deep recursive convolutional networks. LapSRN [53] employs a Laplacian pyramid super-resolution block for HR image reconstruction. DRRN [91] employs recursive and residual network architectures, surpassing DRCN in both performance and parameter reduction. MemNet [92] introduces a memory block to explicitly model long-term dependencies in CNN-based SR models. IDN [37] explicitly divides the preceding extracted features into two parts. IMDN [39] introduces a lightweight Information Multi-Distillation Network by constructing cascaded Information Multi-Distillation Blocks. RFDN [67] proposes the residual feature distillation network. RLFN [50] improves its speed by eliminating hierarchical distillation connections. DIPNet [114] introduces the Reparameterization Residual Feature Block, which explores the potential of complex structures during optimization while maintaining computational efficiency; it also achieved first place in the NTIRE 2023 Efficient Super-Resolution Challenge [60]. 2.2. State space models in Vision Recent research has led to a surge of interest in the state space model (SSM), which has its origins in the classic Kalman filter [44]. State Space Models (SSMs), exemplified by the Mamba architecture [24], scale linearly with sequence length when handling long-range dependencies, in contrast to Transformers. While Mamba outperforms Transformers in natural language tasks, recent research extends its applicability to vision tasks. Specifically, Mamba models are designed to capture long-range temporal dependencies in video data, enhancing video classification performance [41, 42, 80, 102]. Additionally, other works explore Mamba\u2019s applicability in vision tasks, including image classification [71, 139], biomedical image segmentation [73], remote sensing image classification [9], and multimodal learning [85].
The research conducted by [26] emphasizes Mamba\u2019s utility as a straightforward and efficient baseline for image restoration in low-level vision tasks. Our work extends this by proposing a novel network architecture that combines Mamba with distillation, achieving a trade-off between super-resolution quality and computational efficiency. 2.3. Feature Distillation Knowledge distillation stands out as a straightforward yet powerful technique for enhancing the performance of smaller models, a necessity driven by the limited computing power of deployed devices. This method involves training a smaller network (student) under the guidance of a larger network (teacher), enabling effective knowledge transfer. Unlike other compression methods, knowledge distillation can reduce network size regardless of structural differences between the teacher and student networks. The seminal work by [31] introduced the knowledge distillation (KD) method, utilizing the softmax output of the teacher network. Notably, this method can be applied across various network architectures due to matching output dimensions. Over time, intermediate-layer distillation methods have emerged, leveraging insights from the teacher network\u2019s convolutional or penultimate layers, preserving crucial feature-map localities [1, 31, 48, 115]. Moreover, there exists a wealth of research integrating distillation techniques into super-resolution tasks [38, 40, 68, 108, 138]. In this paper, we focus on adopting the output feature map of a pre-trained model as the distillation target. Through extensive experimentation, we demonstrate the effectiveness of our approach in enhancing model performance. 3. Methodology 3.1. Motivation Efficient Super Resolution (SR) is designed to transform low-quality images into high-quality counterparts, leveraging a small parameter set and minimal computational power.
ESR predominantly relies on CNNs for local feature extraction, but their limited long-range modeling hinders performance. Transformers, while proficient in global context, introduce computational complexity. Mamba excels in high-level vision tasks, supported by prior research [9, 71, 73, 85, 112, 139]. Motivated by Mamba\u2019s long-range modeling capabilities, we investigate its performance in super-resolution (SR) tasks, comparing it to CNN-based ESR methods [39, 67, 114] and a transformer-based method [64]. To elucidate Mamba\u2019s operational mechanisms, we employ a specialized diagnostic tool called LAM [13], designed specifically for SR tasks. LAM enables us to pinpoint the input pixels that contribute most significantly to the selected region. As depicted in Figure 2, the red-marked points denote informative pixels crucial for the reconstruction process. Notably, DVMSR exhibits a notably higher Diffusion Index (DI) than the other models, indicating its superior ability to leverage a broader range of pixel information and affirming its exceptional long-range modeling capability. The proposed DVMSR yields improved image details during the reconstruction process, thereby substantiating its efficacy for super-resolution tasks. 3.2. Preliminaries State space models (SSMs), such as the Mamba deep learning model, hold potential for long-sequence modeling. Inspired by continuous systems, SSMs map a 1-D function or sequence x(t) \u2208 R to y(t) \u2208 R via a hidden state h(t) \u2208 R^N. The formulation is as follows: h\u2032(t) = A h(t) + B x(t), y(t) = C h(t), (1) where N is the state size, A \u2208 R^(N\u00d7N), B \u2208 R^(N\u00d71), and C \u2208 R^(1\u00d7N). Mamba is a discrete version of this continuous system: it uses a step size \u2206 to convert the continuous parameters A and B into their discrete counterparts \u00afA and \u00afB.
The commonly used method for this transformation is the zero-order hold (ZOH), defined as follows: \u00afA = exp(\u2206A), \u00afB = (\u2206A)^(\u22121)(exp(\u2206A) \u2212 I) \u00b7 \u2206B. (2) After the discretization of \u00afA and \u00afB, the discretized version of Eq. 1 with step size \u2206 can be rewritten as: h_t = \u00afA h_(t\u22121) + \u00afB x_t, y_t = C h_t. (3) 3.3. Overall network architecture The overall network architecture of our proposed DVMSR is depicted in Figure 3. DVMSR consists of three main modules: a feature extraction convolution, multiple stacked Residual State Space Blocks (RSSBs), and a reconstruction module. Specifically, for a given low-resolution (LR) input I_LR \u2208 R^(H\u00d7W\u00d7C_in), we exploit one convolution layer to extract the first feature F_0 \u2208 R^(H\u00d7W\u00d7C), where C_in and C denote the channel numbers of the input and the intermediate feature. Then, a series of Residual State Space Blocks (RSSB) and one 3 \u00d7 3 convolution layer H_Conv(\u00b7) are utilized to perform deep feature extraction. After that, we add a global residual connection to fuse the shallow features F_0 and the deep features F_D \u2208 R^(H\u00d7W\u00d7C), and then reconstruct the high-resolution result via the reconstruction module. As depicted in Figure 3, each RSSB contains two Vision Mamba Modules (ViMM) and a 3 \u00d7 3 convolution layer with a residual connection. For the reconstruction module, the pixel-shuffle method is adopted to up-sample the fused feature. Figure 2. The LAM results are provided for various networks including both CNN-based and transformer-based methods. LAM attribution indicates the significance of each pixel in the input LR image during the reconstruction process of the patch highlighted by a box. The Diffusion Index (DI) denotes the extent of pixel involvement. A higher DI indicates a broader range of utilized pixels. Figure 3. The overall network architecture of our DVMSR. Figure 4. The structure of the Vision Mamba Module (ViMM).
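The ZOH discretization of Eq. 2 and the recurrence of Eq. 3 can be made concrete with a toy scalar-input SSM. The sketch below assumes a diagonal state matrix A (so exp(∆A) is elementwise), which is how Mamba-style SSMs are typically parameterized; all sizes and values are illustrative, not taken from DVMSR.

```python
import numpy as np

def zoh_discretize(A_diag, B, delta):
    """Zero-order hold (Eq. 2) for a diagonal state matrix.

    A_diag: (N,) diagonal of A; B: (N,); delta: scalar step size.
    """
    A_bar = np.exp(delta * A_diag)       # exp(delta * A), elementwise for diagonal A
    B_bar = (A_bar - 1.0) / A_diag * B   # (delta*A)^-1 (exp(delta*A) - I) * delta*B
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, xs):
    """Discrete recurrence (Eq. 3): h_t = A_bar h_{t-1} + B_bar x_t, y_t = C h_t."""
    h = np.zeros_like(A_bar)
    ys = []
    for x_t in xs:
        h = A_bar * h + B_bar * x_t
        ys.append(float(C @ h))
    return np.array(ys)

N = 4
A_diag = -np.linspace(1.0, 2.0, N)   # negative entries give a stable system
B = np.ones(N)
C = np.ones(N) / N
A_bar, B_bar = zoh_discretize(A_diag, B, delta=0.1)
ys = ssm_scan(A_bar, B_bar, C, np.sin(np.linspace(0.0, 3.0, 32)))
print(ys.shape)  # (32,): one output per input token
```

As delta shrinks, B_bar approaches delta * B and the recurrence converges to an Euler step of the continuous system in Eq. 1, which is the sense in which Eqs. 2 and 3 discretize it.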
3.3.1 Mamba network The design of the Mamba network is shown in Figure 4: a Vision Mamba Module (ViMM) using unidirectional sequence modeling. The input token sequence X \u2208 R^(H\u00d7W\u00d7C) is first normalized by the normalization layer. Next, we linearly project the normalized sequence, expanding the feature channels to \u03bbC. One projected branch is processed by a 1-D convolution and the SSM to compute X_1; X_1 is then gated by the other projected branch and combined with a residual connection to obtain the output token sequence X_out \u2208 R^(H\u00d7W\u00d7C), as follows: X_1 = SSM(Conv1d(Linear(LN(X)))), X_2 = SiLU(Linear(LN(X))), X_out = Linear(X_1 \u2299 X_2) + X, (4) where LN is layer normalization and \u2299 denotes the Hadamard product. 3.3.2 Distillation strategy Our method introduces a deep feature distillation strategy (Fig. 5). During the distillation stage, the teacher network, which has already accumulated rich representation knowledge, is kept frozen. By minimizing the L1 loss, we ensure alignment between the student network\u2019s features and those of the teacher. This process facilitates effective knowledge transfer from the teacher to the student network: Figure 5. The deep feature distillation pipeline of our method. L_out = \u03bb_dis L_dis + \u03bb_1 L_1, L_dis = \u2225T(I_LR) \u2212 S(I_LR)\u2225_1, L_1 = \u2225I_HR \u2212 S(I_LR)\u2225_1, (5) where \u03bb_dis and \u03bb_1 represent the coefficients of the L_dis and L_1 loss functions, respectively; both are set to 1. T represents the function of our teacher network and S denotes the function of our proposed network. I_LR and I_HR are the input LR images and the corresponding ground-truth HR images, respectively. More information on L_dis can be found in Fig. 6. 4. Experiments 4.1. Datasets and metrics In this paper, DF2K (DIV2K + Flickr2K) [98] with 3450 images is used for training the proposed model from scratch.
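Returning to Eq. 4, the ViMM data flow can be sketched as follows. This is a simplified illustration rather than the paper's implementation: the selective SSM is replaced by a causal running-mean stand-in, the Conv1d is omitted, and all weight shapes and names are our own.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def silu(x):
    return x / (1.0 + np.exp(-x))

def toy_ssm(z):
    # causal running mean over the token axis: a stand-in for the SSM scan
    return np.cumsum(z, axis=0) / np.arange(1, z.shape[0] + 1)[:, None]

def vimm_forward(X, W1, W2, W_out):
    """Eq. 4: X1 = SSM(proj(LN(X))), X2 = SiLU(proj(LN(X))), out = Linear(X1 * X2) + X."""
    Xn = layer_norm(X)
    X1 = toy_ssm(Xn @ W1)         # SSM branch (Conv1d omitted in this sketch)
    X2 = silu(Xn @ W2)            # gating branch
    return (X1 * X2) @ W_out + X  # Hadamard gate, output projection, residual

L, C, lam = 16, 8, 2              # tokens, channels, expansion factor lambda
rng = np.random.default_rng(0)
X = rng.standard_normal((L, C))
W1 = 0.1 * rng.standard_normal((C, lam * C))
W2 = 0.1 * rng.standard_normal((C, lam * C))
W_out = 0.1 * rng.standard_normal((lam * C, C))
out = vimm_forward(X, W1, W2, W_out)
print(out.shape)  # (16, 8): same shape as the input token sequence
```

The residual path means the module reduces to the identity when the projections are zero, which is part of what makes stacking many ViMMs inside RSSBs trainable.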
During testing, we select five standard benchmark datasets: Set5 [7], Set14 [117], BSD100 [75], Urban100 [36] and Manga109 [76]. The low-resolution images are generated from the ground-truth images by \u201cbicubic\u201d downsampling in MATLAB. PSNR/SSIM, calculated on the Y channel after discarding a 4-pixel boundary around the images, are reported as the quantitative metrics. 4.2. Implementation details During training, we set the input patch size to 256 \u00d7 256 and use random rotation and horizontal flipping for data augmentation. The batch size is set to 128 and the total number of iterations is 500k. The initial learning rate is set to 2 \u00d7 10^(\u22124). We adopt a multi-step learning rate strategy, where the learning rate is halved when the iteration count reaches 250000, 400000, 450000, and 475000. The Adam optimizer with \u03b2_1 = 0.9 and \u03b2_2 = 0.99 is used to train the model. Distillation training. In the teacher learning phase, we utilize the DF2K dataset with 2K resolution to train the teacher network, which comprises 8 RSSBs (each with 2 ViMM blocks) and 192 channels. During the distillation training phase, we use the DF2K dataset for the student network, which contains 4 RSSBs (each with 2 ViMM blocks) and 60 channels. 4.3. Comparison with State-of-the-art SR models We compare DVMSR with several advanced efficient super-resolution models [2, 18, 20, 37, 39, 46, 47, 50, 53, 67, 91, 92, 114, 120]. The quantitative performance comparison on several benchmark datasets [7, 36, 75, 76, 117] is reported in Table 1. Our experimental results showcase our ability to achieve smaller parameter counts while surpassing several previous methods on the five benchmark datasets. Specifically, we attain higher SSIM scores on Set5, Set14, and BSD100.
It is important to note that SSIM serves as a crucial metric, indicating how effectively our model preserves the structure and content of the images, ultimately resulting in reconstructions that closely resemble the originals. Additionally, PSNR values remain comparable across these five datasets. This evaluation underscores the effectiveness of our approach in enhancing image quality while maintaining efficiency. It is worth emphasizing that in this study we directly use the final model architecture employed in the NTIRE competition; we maintain excellent performance without unnecessarily inflating the parameter count, keeping our approach practical and scalable for real-world applications.
Model complexity comparisons between SwinIR and DVMSR. Our investigation focuses on Mamba's performance in super-resolution (SR) tasks. In Fig. 2, we show the excellent long-range modeling capabilities of our DVMSR using LAM. Additionally, we compare DVMSR with SwinIR, a transformer-based model, in terms of model complexity. SwinIR outperforms DVMSR by 0.23 dB in PSNR, but at the cost of approximately twice the number of parameters, significantly higher FLOPS, and about 20 times longer inference time. These findings suggest that Mamba-based models hold promise for efficient SR.

Table 1. Average PSNR/SSIM for scale factor 4 on the Set5, Set14, BSD100, Urban100, and Manga109 datasets. The best and second-best results are highlighted in red and blue, respectively.
Method | Params | Set5 | Set14 | BSD100 | Urban100 | Manga109 (all entries PSNR/SSIM)
Bicubic | – | 28.42/0.8104 | 26.00/0.7027 | 25.96/0.6675 | 23.14/0.6577 | 24.89/0.7866
SRCNN [18] | 8K | 30.48/0.8626 | 27.50/0.7513 | 26.90/0.7101 | 24.52/0.7221 | 27.58/0.8555
FSRCNN [20] | 13K | 30.72/0.8660 | 27.61/0.7550 | 26.98/0.7150 | 24.62/0.7280 | 27.90/0.8610
VDSR [47] | 666K | 31.35/0.8838 | 28.01/0.7674 | 27.29/0.7251 | 25.18/0.7524 | 28.83/0.8870
DRCN [46] | 1774K | 31.53/0.8854 | 28.02/0.7670 | 27.23/0.7233 | 25.14/0.7510 | 28.93/0.8854
LapSRN [53] | 502K | 31.54/0.8852 | 28.09/0.7700 | 27.32/0.7275 | 25.21/0.7562 | 29.09/0.8900
DRRN [91] | 298K | 31.68/0.8888 | 28.21/0.7720 | 27.38/0.7284 | 25.44/0.7638 | 29.45/0.8946
MemNet [92] | 678K | 31.74/0.8893 | 28.26/0.7723 | 27.40/0.7281 | 25.50/0.7630 | 29.42/0.8942
IDN [37] | 553K | 31.82/0.8903 | 28.25/0.7730 | 27.41/0.7297 | 25.41/0.7632 | 29.41/0.8942
SRMDNF [120] | 1552K | 31.96/0.8925 | 28.35/0.7787 | 27.49/0.7337 | 25.68/0.7731 | 30.09/0.9024
CARN [2] | 1592K | 32.13/0.8937 | 28.60/0.7806 | 27.58/0.7349 | 26.07/0.7837 | 30.47/0.9084
IMDN [39] | 715K | 32.21/0.8948 | 28.58/0.7811 | 27.56/0.7353 | 26.04/0.7838 | 30.45/0.9075
RFDN [67] | 550K | 32.24/0.8952 | 28.61/0.7819 | 27.57/0.7360 | 26.11/0.7858 | 30.58/0.9089
RLFN [50] | 543K | 32.24/0.8952 | 28.62/0.7813 | 27.60/0.7364 | 26.17/0.7877 | -/
DIPNet [114] | 543K | 32.20/0.8950 | 28.58/0.7811 | 27.59/0.7364 | 26.16/0.7879 | 30.53/0.9087
DVMSR (Ours) | 424K | 32.19/0.8955 | 28.61/0.7823 | 27.58/0.7379 | 26.03/0.7838 | 30.48/0.9084

Table 2. Model complexity comparisons between SwinIR and DVMSR. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090, in seconds (s). FLOPS and memory are measured when the input is 256 × 256.
PSNR is the result of testing on DIV2K.
Method | PSNR | Time (s) | Params [M] | FLOPS [G] | Activations | Memory [M]
SwinIR | 29.20 dB | 0.865 | 0.9296 | 70.7828 | 26.7387 | 1454.458
DVMSR | 28.97 dB | 0.048 | 0.4244 | 20.1680 | 26.7387 | 1094.245

4.4. Ablation Study
4.4.1 Model Parameter Analysis
Here, we train DVMSR on DIV2K for classical image SR (×4) and test it on Set5 and Set14.
Impact of ViMM number. Table 3 shows the effect of the ViMM number in each RSSB on model performance. In Experiments 1–3, PSNR/SSIM is negatively correlated with the number of ViMMs. However, when we set the ViMM number to 1 (Experiment 4), the PSNR on Set5 and Set14 decreases by 0.09 dB compared with a ViMM number of 2. There may therefore be a balance point for the ViMM number: it should not be so large that the model becomes over-complex, nor so small that the model's ability to represent the data is limited. The experimental results indicate that setting the ViMM number to 2 is appropriate.

Table 3. Impact of the ViMM number in each RSSB on the Set5 and Set14 datasets with scale factor ×4. The number of RSSBs is fixed at 4 and other parameter settings are kept consistent. The best results are highlighted.
Exp. | Params [M] | ViMM number | Set5 PSNR/SSIM | Set14 PSNR/SSIM
1 | 7.222 | 6,6,6,6 | 31.99/0.8926 | 28.44/0.7785
2 | 5.214 | 2,2,9,2 | 32.17/0.8959 | 28.63/0.7834
3 | 3.651 | 2,2,2,2 | 32.30/0.8972 | 28.68/0.7847
4 | 2.758 | 1,1,1,1 | 32.21/0.8954 | 28.59/0.7821

Impact of RSSB number. As shown in Table 4 (Experiments 1–3), the parameter count grows as the RSSB number increases, with the channel number set to 180. Along with the increase in RSSB number, the PSNR on Set5 improves significantly: Experiment 2 shows an increase of 0.26 dB over Experiment 1, and Experiment 3 an increase of 0.13 dB over Experiment 2. When we set the RSSB number to 10, the improvement is moderated, with Experiment 4 showing an increase of only 0.01 dB over Experiment 3.
Impact of channel number. We keep the ViMM number and RSSB number fixed while examining the influence of the channel number on model performance, as detailed in Table 5. Notably, our analysis reveals a diminishing improvement in model performance when the channel number is raised to 210. Thus, we conclude that setting the channel number to 192 is more suitable for optimal model performance.

Table 4. Impact of the RSSB number on the Set5 and Set14 datasets with scale factor ×4. The number of ViMMs is fixed at 2 and other parameter settings are kept consistent. The best results are highlighted.
Exp. | Params [M] | RSSB number | Set5 PSNR/SSIM | Set14 PSNR/SSIM
1 | 2.175 | 2 | 32.04/0.8938 | 28.51/0.7799
2 | 3.651 | 4 | 32.30/0.8972 | 28.68/0.7847
3 | 5.128 | 6 | 32.43/0.8987 | 28.75/0.7866
4 | 8.080 | 10 | 32.44/0.8990 | 28.77/0.7874

Table 5. Impact of the channel number on the Set5 and Set14 datasets with scale factor ×4. Other parameter settings are kept consistent. The best results are highlighted.
Exp. | Params [M] | Channel number | Set5 PSNR/SSIM | Set14 PSNR/SSIM
1 | 2.664 | 150 | 32.32/0.8971 | 28.65/0.7838
2 | 3.651 | 180 | 32.30/0.8972 | 28.68/0.7847
3 | 4.089 | 192 | 32.37/0.8977 | 28.71/0.7851
4 | 4.809 | 210 | 32.39/0.8976 | 28.71/0.7850

Table 6. Comparison of unidirectional and bidirectional SSM. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090, in seconds (s). FLOPS and memory are measured when the input is 256 × 256. PSNR is the result of testing on DIV2K.
Method | PSNR | Time (s) | Params [M] | FLOPS [G] | Activations | Memory [M]
Unidirectional SSM | 28.87 dB | 0.048 | 0.4244 | 20.1680 | 26.7387 | 1094.245
Bidirectional SSM | 28.88 dB | 0.087 | 0.4849 | 23.9429 | 26.7387 | 1451.680

4.4.2 Distillation Learning
Distillation loss. To investigate the effectiveness of the distillation loss, we tried multiple distillation strategies. Mid-level feature distillation and end-level feature distillation are presented in Figure 6.
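The distillation objective of Eq. (5) can be sketched as a small helper. This is an illustrative sketch, not the authors' code: the toy tensors and function names are assumptions, and both weights default to 1 as stated in the paper.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, standing in for the L1 norm in Eq. (5).
    return np.abs(a - b).mean()

def total_loss(teacher_out, student_out, hr, lam_dis=1.0, lam_1=1.0):
    """L_out = lam_dis * ||T(I_LR) - S(I_LR)||_1 + lam_1 * ||I_HR - S(I_LR)||_1."""
    return lam_dis * l1(teacher_out, student_out) + lam_1 * l1(hr, student_out)

# Toy check: teacher output 1.0 everywhere, student 0.0, ground truth 2.0.
t = np.ones((2, 2)); s = np.zeros((2, 2)); hr = np.full((2, 2), 2.0)
assert total_loss(t, s, hr) == 3.0   # 1.0 (distillation term) + 2.0 (reconstruction term)
```

Changing `lam_dis` reproduces the weight-ratio ablation (L_dis : L_1 of 1:1 vs. 5:1) studied in Table 7.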
As shown in Table 7, using the end-level feature distillation method tends to increase the PSNR and SSIM on the Set5 and Set14 datasets. This suggests that the features toward the end of the model may be closer to the target output of the SR task. When we alter the weight and type of the distillation loss in the mid-level feature distillation method, no changes are observed in the PSNR and SSIM values on Set5 and Set14. This indicates that it is difficult for the student model to benefit from the features of the middle layer of the teacher model, as even after modifying the weight and type of the distillation loss, there are no significant changes in the results. When we increase the weight of the distillation loss in the end-level feature distillation method, there is a slight decrease in the PSNR and SSIM on Set5 and Set14. This could be because an excessively high weight on the distillation loss introduces too many constraints, thereby affecting the model's performance.

Figure 6. Left: the structure of mid-level feature distillation; right: the structure of end-level feature distillation.

Table 7. Impact of the distillation loss. "✘" signifies that distillation is not used, and "✔" signifies that distillation is used. "mid" and "end" represent mid-level and end-level feature distillation, respectively. L_dis : L_1 represents the weight ratio of the distillation loss and the L_1 loss.
Distillation strategy | Position | Loss | L_dis : L_1 | Set5 PSNR/SSIM | Set14 PSNR/SSIM
✘ | – | – | – | 32.04/0.8940 | 28.50/0.7801
✔ | mid | L1 | 1:1 | 32.11/0.8949 | 28.56/0.7811
✔ | mid | L1 | 5:1 | 32.11/0.8949 | 28.56/0.7811
✔ | mid | L2 | 1:1 | 32.11/0.8949 | 28.56/0.7811
✔ | end | L1 | 1:1 | 32.12/0.8951 | 28.57/0.7813
✔ | end | L1 | 5:1 | 32.11/0.8950 | 28.57/0.7813

Teacher model.
When the teacher model has more parameters and richer representational capability, the knowledge it transfers to the student model is more abundant, which should lead to a more significant performance improvement of the student model on the task. To verify this, we tried two teacher models with different parameter counts; they exhibit a PSNR difference of 0.27 dB on the Set5 dataset. However, as shown in Table 8, the performance of the student model remains unchanged. This could indicate that the student model's capacity or architecture is not expressive enough to fully utilize the additional knowledge provided by the larger teacher model. Therefore, finding the balance point between the performance of the teacher model and the student model is worth exploring.

Table 8. Design of the teacher model. PSNR is the result of testing on Set5. Params is the parameter count of the teacher model; the parameters of the student model are fixed.
Method | Params [M] | Teacher model PSNR/SSIM | Student model PSNR/SSIM
DVMSR | – | – | 32.04/0.8940
DVMSR | 4.089 | 32.38/0.8977 | 32.12/0.8950
DVMSR | 7.432 | 32.65/0.9011 | 32.12/0.8950

Figure 7. Unidirectional SSM or bidirectional SSM in ViMM.

4.4.3 Unidirectional vs. Bidirectional SSM
To investigate the effectiveness of the bidirectional SSM in ESR, we evaluate its performance in terms of PSNR, time, parameters, FLOPS, activations, and memory. The architectures of the unidirectional and bidirectional SSM are presented in Figure 7. As shown in Table 6, compared with the unidirectional SSM, the improvement of the bidirectional SSM in PSNR is limited (an increase of 0.01 dB), while the inference time increases by 0.039 s. This cost is significant. Therefore, the unidirectional SSM is more suitable for the ESR task.

4.4.4 NTIRE 2024 Challenge on Efficient SR
We actively participate in the NTIRE 2024 Efficient Super-Resolution Challenge [86]. The model structure and training strategy are slightly different from those described above.
This competition aims to procure solutions that excel in overall performance metrics, encompassing inference runtime, FLOPS, and parameter count on an NVIDIA GeForce RTX 3090 GPU. The challenge also requires maintaining or exceeding a threshold PSNR, underscoring the importance of efficiency without compromising image quality. During the teacher learning phase, we train the teacher network using the DIV2K dataset at 2K resolution. The teacher architecture consists of 6 RSSB (Residual Scaling and Shifting Block) and 2 ViMM (Vision Mamba Module) blocks, each configured with 180 channels. In the subsequent distillation training phase, we amalgamate data from the DIV2K and LSDIR datasets to train the student network. The student model comprises 2 RSSB and 2 ViMM blocks with 60 channels, maintaining computational efficiency while preserving performance. Notably, the teacher network remains unchanged. We employ DIV2K [98] and LSDIR [59] to construct the training dataset, and the high-resolution (HR) images are cropped to 256 × 256 patches for training. During network optimization, we employ the L1 loss function together with the Adam optimizer. Optimization starts with an initial learning rate of 2 × 10⁻⁴ under a multi-step learning rate strategy: the learning rate is halved at the key iterations 250000, 400000, 450000, and 475000 over the 500k total iterations. This adaptive learning rate scheme improves convergence and stability over the training period. Through extensive experiments, we refine our model's architecture and training process, aiming for both efficiency and performance, as evidenced by our results in Table 9.
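The halving schedule described above (initial rate 2 × 10⁻⁴, halved at 250k, 400k, 450k, and 475k iterations over a 500k-iteration run, used in both Sec. 4.2 and the challenge setup) can be expressed as a small helper. This is a sketch of the stated schedule, not the authors' training code.

```python
def learning_rate(iteration, base_lr=2e-4,
                  milestones=(250_000, 400_000, 450_000, 475_000)):
    """Multi-step schedule: halve the base rate once per milestone reached."""
    halvings = sum(iteration >= m for m in milestones)
    return base_lr * (0.5 ** halvings)

assert learning_rate(0) == 2e-4          # before the first milestone
assert learning_rate(250_000) == 1e-4    # first halving
assert learning_rate(500_000) == 2e-4 / 16  # all four halvings applied
```

In a framework such as PyTorch, the same schedule corresponds to a multi-step scheduler with gamma 0.5 and these milestones.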
Our approach employs a novel architecture that differs from both CNNs and Transformers, providing a reference for the development of Mamba in efficient super-resolution.

Table 9. NTIRE 2024 ESR Challenge results.
Model | Val PSNR (dB) | Test PSNR (dB) | Val Time (ms) | Test Time (ms) | FLOPS (G) | Params (M)
RLFN baseline | 26.96 | 27.07 | 14.348 | 9.194 | 19.67 | 0.317
DVMSR | 26.93 | 27.04 | 40.75 | 34.634 | 20.17 | 0.424

5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03025v1.json b/abs_9K/test_abstract_short_2405.03025v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f4f22c35445e78e2a942aaa8fed226dd30b39911 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03025v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03025v1", + "title": "Matten: Video Generation with Mamba-Attention", + "abstract": "In this paper, we introduce Matten, a cutting-edge latent diffusion model\nwith Mamba-Attention architecture for video generation. With minimal\ncomputational cost, Matten employs spatial-temporal attention for local video\ncontent modeling and bidirectional Mamba for global video content modeling. Our\ncomprehensive experimental evaluation demonstrates that Matten has competitive\nperformance with the current Transformer-based and GAN-based models in\nbenchmark performance, achieving superior FVD scores and efficiency.\nAdditionally, we observe a direct positive correlation between the complexity\nof our designed model and the improvement in video quality, indicating the\nexcellent scalability of Matten.", + "authors": "Yu Gao, Jiancheng Huang, Xiaopeng Sun, Zequn Jie, Yujie Zhong, Lin Ma", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "In this paper, we introduce Matten, a cutting-edge latent diffusion model\nwith Mamba-Attention architecture for video generation.
With minimal\ncomputational cost, Matten employs spatial-temporal attention for local video\ncontent modeling and bidirectional Mamba for global video content modeling. Our\ncomprehensive experimental evaluation demonstrates that Matten has competitive\nperformance with the current Transformer-based and GAN-based models in\nbenchmark performance, achieving superior FVD scores and efficiency.\nAdditionally, we observe a direct positive correlation between the complexity\nof our designed model and the improvement in video quality, indicating the\nexcellent scalability of Matten.", + "main_content": "Introduction Recent advancements in diffusion models have demonstrated impressive capabilities in video generation [1\u20135]. It has been observed that breakthroughs in architectural design are crucial for the efficient application of these models [6\u20138]. Contemporary studies largely concentrate on CNN-based U-Net architectures [1, 4] and Transformer-based frameworks [3, 2], both of which employ attention mechanisms to process spatio-temporal dynamics in video content. Spatial attention, which involves computing self-attention among image tokens within a single frame, is extensively utilized in both U-Net-based and Transformer-based video generation diffusion models as shown in Fig. 1 (a). Prevailing techniques typically apply local attention within the temporal layers as illustrated in Fig. 1 (b), where attention calculations are confined to identical positions across different frames. This approach fails to address the critical aspect of capturing interrelations across varying spatial positions in successive frames. A more effective method for temporal-spatial analysis would involve mapping interactions across disparate spatial and temporal locations, as depicted in Fig. 1 (c). Nonetheless, this global-attention method is computationally intensive due to the quadratic complexity involved in computing attention, thus requiring substantial computational resources. 
There has been a rise in fascination with state space models (SSMs) across a variety of fields, largely due to their ability to handle long sequences of data [9–11]. In Natural Language Processing (NLP), innovations such as the Mamba model [10] have significantly improved both inference efficiency and overall model performance by introducing dynamic parameters into the SSM structure and building algorithms tailored for better hardware compatibility. The utility of the Mamba framework has been successfully extended beyond its initial applications, demonstrating effectiveness in areas such as vision [12, 13] and multimodal applications [14]. Given the complexity of processing video data, we propose to use the Mamba architecture to explore spatio-temporal interactions in video content, as shown in Fig. 1 (d). However, unlike the self-attention layer, Mamba scans do not inherently compute dependencies between tokens and therefore struggle to effectively detect localized data patterns, a limitation pointed out by [15].
† Corresponding author: Zequn Jie. Preprint. Under review. arXiv:2405.03025v1 [cs.CV] 5 May 2024
Figure 1: Different ways of spatio-temporal modeling using Mamba and Attention: (a) Spatial-Attention; (b) Local Temporal-Attention; (c) Global-Attention; (d) Global-Mamba. H, W, and F denote the height, width, and number of frames, respectively. The red token is an example query, and the blue tokens are those having information interaction with the query. The shade of blue represents the intensity of the information interaction, with darker colors representing more direct interactions. Mamba scan interactions are distance-related between tokens with linear complexity, while attention interactions are equal among these tokens with quadratic complexity. For simplicity, we only show the unidirectional Mamba scan.
Given the complementary advantages of Mamba and Attention, we introduce a latent diffusion model for video generation with a Mamba-Attention architecture, namely Matten. Specifically, we investigate the impact of various combinations of Mamba and Attention mechanisms on video generation. Our findings demonstrate that the most effective approach is to utilize the Mamba module to capture global temporal relationships (Fig. 1 (d)) while employing the Attention module to capture spatial and local temporal relationships (Fig. 1 (a) and Fig. 1 (b)). We conduct experimental evaluations of Matten on both unconditional and conditional video generation tasks. Across all test benchmarks, Matten consistently exhibits FVD scores [16] and efficiency comparable to state-of-the-art models. Furthermore, our results indicate that Matten is scalable, as evidenced by the direct positive relationship between the model's complexity and the quality of generated samples. In summary, our contributions are as follows:
• We propose Matten, a novel video latent diffusion model integrating the Mamba block and attention operations, enabling efficient and superior video generation.
• We design four model variants to explore the optimal combination of Mamba and attention in video generation. Based on these variants, we find that the most favorable approach is adopting attention mechanisms to capture local spatio-temporal details and utilizing the Mamba module to capture global information.
• Comprehensive evaluations show that Matten achieves performance comparable to other models with lower computational and parameter requirements and exhibits strong scalability.
2 Related Work
2.1 Video Generation
The task of video generation primarily focuses on producing realistic video clips characterized by high-quality visuals and fluid motion. Previous video generation work can be grouped into three categories.
Initially, a number of researchers focused on adapting powerful GAN-based image generation techniques for video creation [17–21]. Nonetheless, GAN-based methods may suffer from problems such as mode collapse, reducing diversity and realism. In addition, certain models propose learning data distributions via autoregressive models [22–25]. These methods typically yield high-quality videos and demonstrate more reliable convergence, but they are hindered by substantial computational demands. Finally, the latest strides in video generation center on systems that utilize diffusion models [26, 27, 4, 28–33, 2], which have shown considerable promise. These methods primarily use CNN-based U-Nets or Transformers as the model architecture. Distinct from these works, our method concentrates on the underexplored combination of Mamba and attention within video diffusion.
2.2 Mamba
Mamba, a new state space model, has recently gained prominence in deep learning for its universal approximation capabilities and efficient modeling of long sequences, with applications in diverse fields such as medical imaging, image restoration, graphs, NLP, and image generation [34–40]. Drawing from control systems and leveraging HiPPO initialization [41], these models, like LSSL [11], address long-range dependencies but are limited by computational demands. To overcome this, S4 [42] and other structured state space models introduce various configurations [43, 44, 9] and mechanisms [10] that have been integrated into larger representation models [45–47] for tasks in language and speech. Mamba and its iterations, like Vision Mamba [12, 13], S4ND [48], and MambaND [49], exhibit a range of computational strategies, from bidirectional SSMs to local convolution and multi-dimensionality considerations.
For 3D imaging, T-Mamba [50] tackles the challenges of orthodontic diagnosis thanks to Mamba's powerful ability to handle long-range dependencies. For video understanding, VideoMamba [51] and Video Mamba Suite [52] adapt Mamba to the video domain and address the challenges of local redundancy and global dependencies prevalent in video data. In the domain of diffusion applications using Mamba, Zigzag Mamba [53] advances the scalability and efficiency of generating visual content: it tackles the crucial problem of spatial continuity with an innovative scanning approach, incorporates text-conditioning features, and shows enhanced performance on high-resolution image and video datasets. [54] is closely related to our work, employing the Mamba block in the temporal layer of video diffusion. Diverging from previous research focused mainly on local temporal modeling, our method, Matten, is uniquely designed to encompass global temporal dimensions.
3 Methodology
Our discussion starts with a brief overview of the latent space diffusion model and the state space model in Sec. 3.1. This is followed by an in-depth description of the Matten model variants in Sec. 3.2. We then explore conditioning on the timestep or class in Sec. 3.3. Lastly, a theoretical analysis comparing Mamba with Attention mechanisms is presented in Sec. 3.4.
3.1 Background
Latent Space Diffusion Models [55]. For an input data sample x ∈ p_data(x), Latent Diffusion Models (LDMs) first utilize a pre-trained VAE or VQ-VAE encoder E to transform the data sample into a latent representation z = E(x). This transformation is followed by a learning phase in which the data distribution is modeled through diffusion and denoising steps. During the diffusion phase, noise is incrementally added to the latent encoding, producing a series of increasingly perturbed latent states z_t, where the intensity of the additive noise is indexed by the timestep t ∈ T.
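The forward noising just described, together with the noise-prediction objective L_simple, can be sketched as follows. The closed-form DDPM-style expression for z_t is an assumption for illustration (the text states only that noise is added incrementally over timesteps); the loss is the standard noise-prediction MSE.

```python
import numpy as np

def q_sample(z0, eps, alpha_bar_t):
    # Assumed DDPM forward process: z_t = sqrt(a_bar) * z0 + sqrt(1 - a_bar) * eps.
    return np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps

def l_simple(eps_pred, eps_true):
    """Noise-prediction objective: mean squared error between true and predicted noise."""
    return ((eps_true - eps_pred) ** 2).mean()

# Toy usage: noise a latent, then score two "denoisers".
rng = np.random.default_rng(0)
z0 = rng.standard_normal((4, 4))
eps = rng.standard_normal((4, 4))
zt = q_sample(z0, eps, alpha_bar_t=0.7)
assert zt.shape == z0.shape
assert l_simple(eps, eps) == 0.0   # a perfect noise predictor drives the loss to zero
```

In training, `eps_pred` would come from the denoising network ε_θ(z_t, t); here it is just compared directly for illustration.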
A specialized model such as a U-Net ε_θ is utilized as the noise-estimation network to estimate the noise perturbing the latent representation z_t during the denoising phase, minimizing the latent diffusion objective

L_simple = E_{z∼p(z), ε∼N(0,I), t} [ ‖ε − ε_θ(z_t, t)‖²₂ ].    (1)

Furthermore, the diffusion models ε_θ are enhanced with a learned reverse-process covariance Σ_θ, optimized using L_vlb as outlined by [6]. In our research, ε_θ is designed using a Mamba-based framework, and both L_simple and L_vlb are employed to refine the model's effectiveness and efficiency.
State Space Backbone. State space models (SSMs) have been rigorously validated, both theoretically and empirically, to adeptly manage long-range dependencies while scaling linearly with sequence length. Conventionally, a linear state space model is represented in the following form:

h′(t) = A(t)h(t) + B(t)x(t),
y(t) = C(t)h(t) + D(t)x(t),    (2)

which describes the transformation of a 1-D input sequence x(t) ∈ R into a 1-D output sequence y(t) ∈ R, mediated by an N-D latent state sequence h(t) ∈ R^N. State space models are crafted to integrate multiple layers of these basic equations within a neural sequence modeling architecture, allowing the parameters A, B, C, and D of each layer to be optimized via deep learning on a loss function. Here N represents the state size, A ∈ R^{N×N}, B ∈ R^{N×1}, C ∈ R^{1×N}, and D ∈ R. The process of discretization, essential for applying state space models as detailed in Eq.
2 to real-world deep learning tasks, converts the continuous system parameters A and B into their discrete equivalents Ā and B̄.

Figure 2: The original 1-D sequence Mamba block and the 2-D bidirectional Mamba block. The normalization and the residual are omitted for simplification.

This critical step typically utilizes the zero-order hold (ZOH) method, a technique well established for its efficacy. The ZOH method uses the timescale parameter Δ to bridge the gap between continuous and discrete parameters, thereby facilitating the application of theoretical models within computational settings:

Ā = exp(ΔA),
B̄ = (ΔA)⁻¹ (exp(ΔA) − I) · ΔB.    (3)

With these discretized parameters, the model outlined in Eq. 2 is adapted to a discrete framework with timestep Δ:

h_k = Ā h_{k−1} + B̄ x_k,
y_k = C h_k + D x_k.    (4)

This approach allows for the seamless integration of state space models into digital platforms. The traditional Mamba block, initially crafted for 1-D sequence processing as shown in Fig. 2, is not ideally suited for visual tasks that demand spatial cognizance. To address this limitation, Vision Mamba [13] developed a bidirectional Mamba block specifically tailored for vision-related applications. This block is engineered to handle flattened visual sequences by employing forward and backward SSMs concurrently, significantly improving its ability to process data with spatial awareness. Mamba employs a work-efficient parallel scan that effectively reduces the sequential dependencies typically associated with recurrent computations. This optimization, coupled with the strategic utilization of GPU operations, eliminates the necessity to explicitly manage the expanded state matrix.
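Eqs. (3) and (4) can be sketched concretely. The diagonal (elementwise) state matrix A is an assumption made here for illustration, in the style of Mamba/S4D, so that the matrix exponential reduces to an elementwise `exp`; the sequential loop is the plain recurrence, not the hardware-aware parallel scan.

```python
import numpy as np

def discretize(A, B, delta):
    """ZOH of Eq. (3) for diagonal A: per entry,
    (dA)^{-1} (exp(dA) - 1) * dB simplifies to (exp(dA) - 1) / A * B."""
    A_bar = np.exp(delta * A)
    B_bar = (A_bar - 1.0) / A * B
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, D, xs):
    """Discrete recurrence of Eq. (4): h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k + D x_k."""
    h = np.zeros_like(B_bar)
    ys = []
    for x_k in xs:
        h = A_bar * h + B_bar * x_k
        ys.append(float(C @ h + D * x_k))
    return np.array(ys)

# Toy 1-state example: A_bar = 0.5, B_bar = 1, C = [1], D = 0.
ys = ssm_scan(np.array([0.5]), np.array([1.0]), np.array([1.0]), 0.0, [1.0, 1.0])
assert np.allclose(ys, [1.0, 1.5])   # h: 1.0 then 0.5*1.0 + 1.0 = 1.5
```

A bidirectional block, as in Vision Mamba, would run this scan over the sequence and its reversal and sum the two outputs.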
In our study, we explore the integration of the Mamba architecture within a video generation framework, leveraging its efficiency and scalability.
3.2 The Model Variants of Matten
Consider the latent representation of a video clip, V_L ∈ R^{F×H×W×C}, where F indicates the number of frames, H the frame height, W the frame width, and C the number of channels per frame in the video's latent configuration. We transform V_L into a sequence of tokens by segmenting and reshaping it, represented as ẑ ∈ R^{(nf×nh×nw)×d}. Here, nf × nh × nw denotes the total number of tokens, each of dimension d. Adopting a strategy similar to Latte, we set nf = F, nh = H/2, and nw = W/2 to structure the data effectively. Furthermore, a spatio-temporal positional embedding p is added to the token sequence ẑ; the input to the Matten model thus becomes z = ẑ + p, facilitating complex model interactions. As illustrated in Fig. 3, we introduce four distinct variants of the Matten model to enhance its versatility and effectiveness in video processing.
Global-Sequence Mamba Block. As illustrated in Fig. 3 (a), this variant performs 3D Mamba scans over the full spatio-temporal input sequence. Following VideoMamba [51], we adopt the Spatial-First Scan for our Global-Sequence Mamba block. This straightforward operation has proven highly effective: spatial tokens are arranged by their location and stacked sequentially frame by frame. We reshape z into z_full ∈ R^{1×(nf·nh·nw)×d} as the input of the Global-Sequence Mamba block to capture spatial-first information. The Bidirectional-Mamba layer is used.
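The spatial-first flattening used here, and the spatial/temporal token views used by the interleaved variant described next, amount to plain array reshapes; the toy dimensions below are illustrative assumptions.

```python
import numpy as np

nf, nh, nw, d = 3, 2, 2, 4                  # frames, spatial grid, token dimension
s = nh * nw                                  # spatial tokens per frame
z = np.arange(nf * s * d).reshape(nf, s, d)  # tokens grouped per frame

# Spatial-first scan: stack each frame's spatial tokens into one long sequence.
z_full = z.reshape(1, nf * s, d)

# Spatial view (tokens sharing a temporal index) and temporal view
# (tokens sharing a spatial coordinate), as used by the interleaved variant.
z_s = z                                      # shape (nf, s, d)
z_t = z.transpose(1, 0, 2)                   # shape (s, nf, d)

assert z_full.shape == (1, nf * s, d)
assert np.array_equal(z_t[0], z[:, 0, :])    # same spatial position across all frames
```

The reshape to `z_full` is what makes the scan "continuous in the spatial domain but discontinuous in the temporal domain": consecutive tokens are spatial neighbors, while temporally adjacent tokens sit `s` positions apart.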
Figure 3: We introduce four model variants designed to harness spatio-temporal dynamics in videos effectively: (1) full-sequence Mamba blocks; (2) interleaved spatial and temporal Mamba blocks; (3) full-sequence scans with spatial and temporal attention; (4) full-sequence scans with temporal attention. For clarity, the embeddings shown in the diagram represent the patched and reshaped outcomes of the latent video.

Spatial and Temporal Mamba Blocks Interleaved. This variant uses the Mamba module as a substitute for the traditional attention module in Transformer-based diffusion models for video generation, as noted in studies such as [2, 56, 57]. Illustrated in Fig. 3 (b), this variant of Matten is equipped with two types of Bidirectional-Mamba blocks: spatial Bidirectional-Mamba blocks and temporal Bidirectional-Mamba blocks. The spatial blocks capture spatial details only among tokens that share the same temporal index, whereas the temporal blocks capture information across frames at the same spatial coordinate. For effective spatial processing, z is restructured into z_s ∈ R^{nf×s×d}, where s = nh × nw is the number of spatial tokens, which then serves as the input for the spatial Mamba block. We then reshape z_s into z_t ∈ R^{s×nf×d} for the temporal Mamba block to process temporal information.
Global-Sequence Mamba Block with Spatial-Temporal Attention Interleaved. Although Mamba performs efficiently in long-distance modeling, its advantages on shorter sequences are less pronounced [10] compared to the attention operation in Transformers. Consequently, we developed a hybrid block that leverages the strengths of both the attention mechanism and Mamba, as illustrated in Fig.
3 (c), which integrates Mamba and Attention computations for both short and long-range modeling. Each block is composed of Spatial Attention computation, Temporal Attention computation, and a Global-Sequence Mamba scan in series. This design enables our model to effectively capture both the global and local information present in the latent space of videos. Global-Sequence Mamba Block with Temporal Attention Interleaved. The scanning in the Global-Sequence Mamba block is continuous in the spatial domain but discontinuous in the temporal domain [51]. Thus, this variant has removed the Spatial Attention component, while retaining the Temporal Attention block. Consequently, by concentrating on a Spatial-First scan augmented with Temporal Attention shown in Fig. 3 (d), we strive to enhance our model\u2019s efficiency and precision in processing the dynamic facets of video data, thereby assuring robust performance in a diverse range of video processing tasks. 3.3 Conditional Way of Timestep or Class Drawing from the frameworks presented by Latte and DiS, we perform experiments on two distinct methodologies for embedding timestep or class information c into our model. The first method, inspired by DiS, involves treating c as tokens, a strategy we designate as conditional tokens. The second method adopts a technique akin to adaptive normalization (AdaN) [58, 7], specifically tailored for integration within the Mamba block. 
This involves using an MLP layer to compute parameters γc and βc from c, formulating the operation AdaN(f, c) = γc · Norm(f) + βc, where f denotes the feature maps in the Mamba block. Further, this adaptive normalization is implemented prior to the residual connections of the Mamba block, implemented by the transformation RCs(f, c) = αc · f + MambaScans(AdaN(f, c)), with MambaScans representing the Bidirectional-Mamba scans within the block. We refer to this advanced technique as Mamba adaptive normalization (MAdaN), which seamlessly incorporates class or timestep information to enhance model responsiveness and contextual relevance.

Table 1: FVD metrics for various video generation models across multiple datasets are presented. FVD scores for comparative baseline models, as reported in sources such as Latte, StyleGAN-V, or respective original publications, are included for reference. In this context, "Pretrained" refers to models that utilize a pretraining approach based on image generation techniques.
Method | Pretrained | FaceForensics | SkyTimelapse | UCF101 | Taichi-HD | FLOPs (G)
MoCoGAN | no | 124.7 | 206.6 | 2886.9 | - | -
VideoGPT | no | 185.9 | 222.7 | 2880.6 | - | -
DIGAN | no | 62.5 | 83.11 | 1630.2 | 156.7 | -
StyleGAN-V | no | 47.41 | 79.52 | 1431.0 | - | -
PVDM | no | 355.92 | 75.48 | 1141.9 | 540.2 | -
MoStGAN-V | no | 39.70 | 65.30 | 1380.3 | - | -
MoCoGAN-HD | yes | 111.8 | 164.1 | 1729.6 | 128.1 | -
LVDM | yes | - | 95.20 | 372.0 | 99.0 | -
Latte | yes | 34.00 | 59.82 | 477.97 | 159.60 | 5572
Matten (ours) | no | 45.01 | 53.56 | 210.61 | 158.56 | 4008

3.4 Analysis of Mamba and Attention In summary, the hyperparameters of our proposed block encompass hidden size D, expanded state dimension E, and SSM dimension N. All the settings of Matten are detailed in Table 2, covering different numbers of parameters and computation cost to thoroughly evaluate scalability performance. Specifically, the Gflop metric is analyzed during the generation of 16×256×256 unconditional videos, employing a patch size of p = 2. Consistent with [10], we standardize the SSM dimension N across all models at 16.
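The M-AdaN path described in Sec. 3.3 can be sketched on a single feature vector in pure Python. The conditioning parameters γc, βc, αc would come from an MLP over c, and the Mamba scan is replaced here by an identity stand-in; both are labeled assumptions, not the trained modules:

```python
import math

# Per-feature normalization, a stand-in for Norm(f) in AdaN(f, c).
def norm(f, eps=1e-5):
    mu = sum(f) / len(f)
    var = sum((x - mu) ** 2 for x in f) / len(f)
    return [(x - mu) / math.sqrt(var + eps) for x in f]

def madan_block(f, gamma, beta, alpha, mamba_scan):
    # AdaN(f, c) = gamma_c * Norm(f) + beta_c
    adan = [gamma * x + beta for x in norm(f)]
    # RCs(f, c) = alpha_c * f + MambaScans(AdaN(f, c))
    scanned = mamba_scan(adan)
    return [alpha * x + y for x, y in zip(f, scanned)]

# Identity "scan" for illustration only; a real block applies
# Bidirectional-Mamba scans here.
out = madan_block([1.0, 2.0, 3.0, 4.0], gamma=1.0, beta=0.0, alpha=1.0,
                  mamba_scan=lambda v: v)
```

With the identity scan, the output is simply f plus its normalized copy, which makes the residual structure of RCs easy to see.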
Both the SSM block within Matten and the self-attention mechanism in Transformer architectures are integral for effective context modeling. We provide a detailed theoretical analysis of computational efficiency as well. For a given sequence X ∈ R^{1×J×D} with the standard setting E = 2, the computational complexities of self-attention (SA), Feed-Forward Net (FFN) and SSM operations are calculated as follows:

O(SA) = 2J^2 D, (5)
O(FFN) = 4J D^2, (6)
O(SSM) = 3J(2D)N + J(2D)N^2. (7)

The term 3J(2D)N involves the calculation with B, C, and D, while J(2D)N^2 denotes the calculation with A. This demonstrates that self-attention's computational demand scales quadratically with the sequence length J, whereas SSM operations scale linearly. Notably, with N typically fixed at 16, this linear scalability renders the Mamba architecture particularly apt for handling extensive sequences typical in scenarios like global relationship modeling in video data. When comparing the terms 2J^2 D and J(2D)N^2, it is clear that the Mamba block is more computationally efficient than self-attention, particularly when the sequence length J significantly exceeds N^2. For shorter sequences that focus on spatial and localized temporal relationships, the attention mechanism offers a more computationally efficient alternative when the computational overhead is manageable, as corroborated by empirical results.

Table 2: The parameter count and FLOPs (Floating-Point Operations) associated with various model variants of Matten.
 | Variant 1 | Variant 2 | Variant 3 | Variant 4
Params (M) | 814 | 814 | 853 | 846
FLOPs (G) | 1590 | 1660 | 4008 | 3660

Figure 4: Sample videos from the different methods (Latte, Matten, PVDM) and real data on SkyTimelapse.

4 Experiments This part first describes the experimental settings, including details about the datasets we used, evaluation metrics, compared methods, configurations of the Matten model, and specific implementation aspects.
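As a numerical sanity check on Eqs. (5)-(7), the crossover between the two context-modeling costs can be computed directly. The sequence lengths and hidden size below are illustrative assumptions:

```python
# FLOP-count sketches for Eqs. (5)-(7); E = 2 gives the expanded width 2D.
def flops_self_attention(J, D):
    return 2 * J**2 * D                               # O(SA) = 2 J^2 D

def flops_ffn(J, D):
    return 4 * J * D**2                               # O(FFN) = 4 J D^2

def flops_ssm(J, D, N=16):
    return 3 * J * (2 * D) * N + J * (2 * D) * N**2   # Eq. (7)

D = 1152                              # hidden size of the largest configuration
J_long, J_short = 16 * 128 * 128, 64  # illustrative long/short sequence lengths

# SSM cost is linear in J while attention is quadratic, so Mamba wins
# once J significantly exceeds N^2 (= 256 for N = 16):
assert flops_ssm(J_long, D) < flops_self_attention(J_long, D)
assert flops_self_attention(J_short, D) < flops_ssm(J_short, D)
```

This mirrors the paper's conclusion: attention is the cheaper operation for short spatial/local-temporal sequences, while the Mamba scan dominates on long global sequences.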
Following this, ablation studies are conducted to identify optimal practices and assess the impact of model size. The section concludes with a comparative analysis of our results on four common datasets against advanced video generation methods. 4.1 Experimental Details Datasets Overview. We engage in extensive experiments across four renowned datasets: FaceForensics [59], SkyTimelapse [60], UCF101 [61], and Taichi-HD [62]. Following protocols established in Latte, we utilize predefined training and testing divisions. From these datasets, we extract video clips consisting of 16 frames, applying a sampling interval of 3, and resize each frame to a uniform resolution of 256x256 for our experiments. Evaluation Metrics. For robust quantitative analysis, we adopt the Fréchet Video Distance (FVD) [16], recognized for its correlation with human perceptual evaluation. In compliance with the methodologies of StyleGAN-V, we determine FVD scores by examining 2,048 video clips, each containing 16 frames. Baseline Comparisons. Our study includes comparisons with advanced methods to assess the performance of our approach quantitatively, including MoCoGAN [63], VideoGPT [25], MoCoGAN-HD [64], DIGAN [65], StyleGAN-V [66], PVDM [1], MoStGAN-V [67], LVDM [68], and Latte [2]. Unless explicitly stated otherwise, all presented values are obtained from the latest relevant studies: Latte, StyleGAN-V, PVDM, or the original paper. Matten Model Configurations. Our Matten model is structured using a series of L Mamba blocks, with each block having a hidden dimension of D. Inspired by the Vision Transformer (ViT) approach, we delineate four distinct configurations varying in parameter count, detailed in Table 3. Implementation Specifics. All ablation experiments adopt the AdamW optimizer, set at a fixed learning rate of 1 × 10−4. The sole augmentation technique applied is horizontal flipping.
Consistent with prevailing strategies in generative modeling [7, 8], we employ an exponential moving average (EMA) of the model weights with a decay rate of 0.99, applied over the first 50k steps and the subsequent 100k steps of the training process. The results reported are derived directly using the EMA-enhanced models. Additionally, the architecture benefits from the integration of a pre-trained variational autoencoder, sourced from Stable Diffusion v1-4.

Figure 5: Sample videos generated using various methods (Latte, Matten, PVDM, and real data) on the UCF101 dataset, highlighting the visually appealing nature of the results.

Figure 6: Sample videos generated using various methods (Latte, Matten, PVDM, and real data) on the FaceForensics dataset, highlighting the visually appealing nature of the results.

4.2 Ablation Study In this part, we detail our experimental investigations using the SkyTimelapse dataset to assess the impact of various design modifications, model variations, and model sizes on performance, as previously introduced in Secs. 3.3 and 3.2. Timestep-Class Information Injection. Illustrated in Fig. 8b, the M-AdaN approach markedly outperforms conditional tokens. We surmise this difference stems from the method of integration of timestep or class information. Conditional tokens are introduced directly into the model's input, potentially creating a spatial disconnect within the Mamba scans. In contrast, M-AdaN embeds both timestep and class data more cohesively, ensuring uniform dissemination across all video tokens, and enhancing the overall synchronization within the model. Exploring Model Variants. Our analysis of Matten's model variants, as detailed in Sec. 3.2, aims to maintain consistency in parameter counts to ensure equitable comparisons. Each variant is developed from the ground up. As depicted in Fig. 8a, Variant 3 demonstrates superior performance with increasing iterations, indicating its robustness.
Conversely, Variants 1 and 2, which focus primarily on local or global information, respectively, lag in performance, underscoring the necessity for a balanced approach in model design.

Figure 7: Sample videos generated using various methods (Latte, Matten, LVDM, and real data) on the Taichi-HD dataset, highlighting the visually appealing nature of the results.

Figure 8: Exploration of Design Choices Through Ablation Studies ((a) model variants; (b) timestep-class conditional way). We have conducted various ablation studies to identify optimal strategies for Mamba-based video diffusion models, focusing on improving FVD metrics on the SkyTimelapse dataset. For enhanced clarity, please magnify the displayed results.

Assessment of Model Size. We experiment with four distinct sizes of the Matten model (XL, L, B, and S), as listed in Tab. 3, on the SkyTimelapse dataset. The progression of their Fréchet Video Distances (FVDs) with training iterations is captured in Fig. 9. There is a clear trend showing that larger models tend to deliver improved performance, echoing findings from other studies in image and video generation [7], which highlight the benefits of scaling up model dimensions. 4.3 Comparison Experiment According to the findings from the ablation studies presented in Sec. 4.2, we have pinpointed the settings for designing our Matten, notably highlighting the efficacy of model variant 3 equipped with M-AdaN. Leveraging these established best practices, we proceed to conduct comparisons against contemporary state-of-the-art techniques. Qualitative Assessment of Results. Figures 4 through 7 display the outcomes of video synthesis using various methods across datasets such as UCF101, Taichi-HD, FaceForensics, and SkyTimelapse. Across these different contexts, our method consistently delivers realistic video generations at a high resolution of 256x256 pixels.
Notable achievements include accurately capturing facial motions and effectively handling dynamic movements of athletes. Our model particularly excels in generating high-quality videos on the UCF101 dataset, an area where many other models frequently falter. This capability underscores our method's robustness in tackling complex video synthesis challenges.

Figure 9: The impact of varying model sizes on performance is notable. Generally, enlarging the model dimensions tends to markedly enhance its effectiveness.

Table 3: Specifics of our model configurations adhere to the setups outlined for various model sizes following the ViT and DiT frameworks.
Model | Layer numbers L | Hidden size D | SSM dimension N | Params
Matten-S | 12 | 384 | 16 | 35M
Matten-B | 12 | 768 | 16 | 164M
Matten-L | 24 | 1024 | 16 | 579M
Matten-XL | 28 | 1152 | 16 | 853M

Quantitative results. Tab. 1 presents the quantitative results of each comparative method. Overall, our method surpasses prior works and matches the performance of methods with image-pretrained weights, demonstrating our method's superiority in video generation. Furthermore, our model attains roughly a 25% reduction in FLOPs compared to Latte, the latest Transformer-based model. Given the abundance of released pre-trained U-Net-based (Stable Diffusion, SDXL) or Transformer-based (DiT, PixArt) image generation models, U-Net-based or Transformer-based video generation models can leverage these pre-trained models for training. However, there are no released pre-trained Mamba-based image generation models yet, so our model has to be trained from scratch. We believe that once Mamba-based image generation models become available, they will be of great help in training our Matten.
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03085v1.json b/abs_9K/test_abstract_short_2405.03085v1.json new file mode 100644 index 0000000000000000000000000000000000000000..0eac6304f8a08eae5eedfa8ae567c597cdd59d9d --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03085v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03085v1", + "title": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation", + "abstract": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. 
This emphasizes the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.", + "authors": "Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. 
The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. This emphasizes the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.", + "main_content": "Introduction Large Language Models (LLMs) have emerged as indispensable tools for daily information acquisition, owing to their extensive knowledge base and ability to fulfil diverse user instructions [6, 47, 1]. By leveraging large-scale pre-training on massive datasets, LLMs memorize vast amounts of knowledge within their parameters as internal memory, known as parametric knowledge [33]. However, the presence of outdated or incorrect knowledge within internal memory can lead to hallucinations, hindering the performance of LLMs\u2019 inferencing process [46]. This limitation is particularly pronounced when handling long-tail knowledge for domain-specific or highly specialized queries, as the inherent difficulty in memorizing rare entities persists even in the most robust models. Consequently, the overreliance on potentially flawed parametric knowledge can significantly interfere with the reliability of LLMs\u2019 outputs, especially in scenarios with fine-grained knowledge requirements [58, 36]. Retrieval Augmented Generation (RAG) employs additional retrievers to augment LLMs with external, non-parametric knowledge, effectively expanding their internal knowledge boundaries [27, 14]. This Preprint. Under review. 
arXiv:2405.03085v1 [cs.CL] 6 May 2024 \fallows LLMs to access up-to-date, query-focused information that may not be adequately memorized within their parametric memory to alleviate the aforementioned limitations [24]. In contrast to finetuning by updating the model parameters, RAG preserves pre-trained knowledge while dynamically incorporating relevant external context. This paradigm offers greater flexibility and scalability, as the retrievers can be easily plug-and-play without modifying the underlying language model\u2019s parameters, thus circumventing complex computational hurdles [17, 16]. However, RAG is easily confused when dealing with long contextual retrieved support documents, which often consist of multiple shreds of evidence for providing vital knowledgeable context but are also accompanied by noisy and irrelevant information [56]. The distracting contexts can dilute the LLMs\u2019 attention and adversely affect their performance with misrepresentation [30, 25]. Compressing lengthy contexts to distil vital knowledge is crucial for enhancing LLMs and ensuring factually consistent responses in the RAG process. Figure 1: The examples of concept-based RAG1. Numerous studies have demonstrated that individuals tend to directly search for key concepts when reading long documents as the brain will complete the remaining details based on prior knowledge, expectations, background, and motivations [15, 22]. This selective attention to critical information allows ignoring redundant details and rearranging the text informatively [51]. As illustrated in Fig. 1, given only the key concepts of the question-related supporting documents that still enable us to grasp the crucial semantics. LLMs parameterize massive common knowledge, enabling them to exhibit a similar ability in context understanding even when the word or character-level information is disrupted [43, 7]. 
This provides the possibility of whether LLMs can comprehend scenarios solely based on discrete informative concepts. Linguistic features, such as semantic and syntactic, have significantly improved the interpretability, controllability, and diversity of Natural Language Generation (NLG) [28]. Language models can implicitly discover these features during pre-training to ensure the logic of the generated text [21]. It has been demonstrated that explicitly leveraging linguistic features for downstream tasks is beneficial, as it refactors the source documents into concise representations that reduce entropy by focusing on the critical information, thereby aiding in a comprehensive understanding of the described scenarios [41, 48, 44, 28, 23, 55]. This advantage enables the stable linguistic features to reliably assist context understanding. Inspired by the aforementioned insights, we propose enhancing RAG\u2019s performance with the crucial concepts distilled from the raw retrieved supporting documents. To effectively capture the informative concepts, we introduce Abstract Meaning Representation (AMR), a semantic formalism that encodes the meaning of serialized texts by a rooted, directed, labelled, acyclic graph [3]. Compared to other linguistic representations, AMR prioritizes semantic consistency among concepts carried by nodes when representing sentences, offering the advantage of automatically rectifying surfacelevel variations or understanding abbreviated terms, ensuring the structured concepts represent the underlying meaning to transcend the limitations of linguistic noise [59]. Specifically, we propose the concept-based RAG framework with the AMR-based concept distillation algorithm, which formats the concepts for augmenting LLMs by compressing the lengthy context to concentrate on crucial information exclusively. We empirically experiment on two open-domain Q&A datasets, PopQA [32] and EntityQuestions [40]. 
The results show that the performance of our method improves significantly as the number of supporting documents increases, outperforming baselines with various compression methods and backbone LLMs. The contributions of this paper can be summarized as follows:
• This paper proposes the concept-based RAG framework that explicitly integrates AMR, a semantic representation, to enable LLMs to focus on essential rather than messy knowledge when processing long-context retrieved supporting documents. To the best of our knowledge, this is the first research introducing AMR to enhance RAG for more reliable inference.
• We propose an AMR-based concept distillation algorithm, which compresses long-context raw supporting documents into concepts by formatting the informative nodes. The distilled concepts are more knowledge-centralized than the raw supporting documents, reducing the interference of irrelevant information during the inference process of LLMs.
• We conduct extensive experiments on open-domain Q&A datasets. The results indicate that our framework effectively enhances inference performance as the number of supporting documents increases, outperforming baselines with various context compression methods and backbone LLMs. This demonstrates its applicability in long-context RAG scenarios.
Footnote 1: The corresponding complete sentences: [1] The Outfit is a 1973 crime film directed by John Flynn. [2] It stars Robert Duvall, Karen Black, Joe Don Baker and Robert Ryan. [3] Flynn's screenplay is an adaptation of the novel of the same name by Richard Stark. [4] Two hitmen drive to Eddie Macklin's house to assassinate him as he builds a brick wall in his backyard.
2 Related Works 2.1 Long-context Understanding The increasing complexity of downstream tasks and the demand for models capable of capturing intricate dependencies have driven significant attention to the long-context understanding of LLMs [37, 19, 53].
One prominent research avenue involves modifying the basic architecture of LLMs. For instance, Dai et al.[11] introduced a segment-level recurrence mechanism with their Transformer-XL model, enabling it to retain longer contextual information than the standard Transformer structure. Similarly, Beltagy et al.[4] extended the self-attention mechanism in their Longformer model to handle longer sequences by introducing a sparse attention pattern, thereby facilitating the efficient processing of documents with thousands of tokens. However, a significant drawback of modifying model architecture is the necessity for complex re-training processes. In contrast, research on prompt compression aims to understand long-token prompts by compressing them into low-dimensional soft prompts [50, 9, 34]. While offering a more efficient alternative to architecture modification, this approach constrains the transferability of learned prompts across various LLMs. Recent research has advanced to a more intuitive level, aiming to comprehensively understand the context by directly expanding the context window or explicit compression. Chen et al.[8] introduced position interpolation to extend the context window of pre-trained LLMs, scaling LLaMA\u2019s context window to 32k tokens with few fine-tuning steps. Ding et al.[12] proposed LongRoPE to extend LLMs\u2019 context window to 2048k tokens while maintaining the performance of the original short context window through a positional and interpolation progressive extension strategy. However, the long context window raises another challenge of diluting core information with redundant data [53]. To address this, Li et al.[29] filtered out irrelevant context with low self-information for compressing the long prompts. Chuang et al.[10] proposed the Nano-Capsulator to compress original prompts into capsule prompts, decreasing inference latency across diverse LLMs. 
Compression methods can benefit the RAG by allowing LLMs to focus on essential knowledge in supporting documents [54]. 2.2 Linguistics-augmented NLG Incorporating linguistic principles into LLMs has shown promise in improving the coherence and semantic fidelity of generated text [55]. Augmentation techniques like syntactic trees [35] and lexical patterns [28] assist in linguistic feature injection, enabling language models to generate more faithful text. Ahmed et al. [2] proposed automatic semantic augmentation of prompts to enhance LLMs with tagged facts, resulting in improved code summarization performance. Zhou et al. [60] introduced InstructCTG, a framework for controlling LLMs' generation based on syntax constraints, facilitating flexibility and adaptation to new conditions without complex model modification. LLMs can be explicitly guided by leveraging linguistic insights to mitigate biases inherent in parameterized-only approaches, thereby enhancing performance in tasks demanding strict factual consistency. Abstract Meaning Representation (AMR) has proven its efficacy in enhancing downstream generation tasks by providing a structured semantic representation that encapsulates static concepts [18]. Frisoni et al. [13] integrated AMR with pre-trained language models to enhance biomedical summarization by capturing inter-entity relations. Ribeiro et al. [38] employed AMR to improve factuality evaluation in abstractive summarization by identifying content verifiability errors and subsentence-level factual inconsistencies. Shi et al. [42] proposed AMR-TST, which generates fluent and reliable texts with the target style by optimizing core concept nodes. Jangra et al. [20] preserved style-agnostic content while generating transferred text by utilizing AMR as an intermediate representation. These studies illustrate AMR's advantages in capturing essential concepts containing informative linguistic features.
3 Method 3.1 Concept-based RAG Framework This section introduces the proposed concept-based RAG framework for inference utilising the concepts distilled from the raw supporting documents. The overview of the framework is in Fig. 2. Figure 2: The overview of the concept-based RAG framework, which consists of three main components: (a) information retrieval, (b) concept distillation, and (c) concept-based inference. Given an input question Q, the (a) information retrieval component aims to utilize a retriever to return the top-K knowledgeable supporting documents D = {D1, ..., DK} relevant to Q from sources such as Wikipedia or other information repositories. At this stage, the retriever\u2019s performance significantly influences the resulting answer set A = {A1, ..., AM} [33, 14]. However, the retriever\u2019s performance is beyond this paper\u2019s scope. We hypothesize that all retrieved supporting documents D contain the correct answer corresponding to Q, expressed as a proposition: \u2200Dk \u2208D, \u2203Am \u2208A, Am \u2286Dk. The (b) concept distillation component is devised to format the concept C from the retrieved supporting document D by the proposed AMR-based concept distillation algorithm. This algorithm converts the supporting documents from continuous sequences to discrete concepts formatted from the AMR graph, denoted as G. Further details of this algorithm will be elucidated in the subsequent section. After obtaining the distilled concept C, the (c) concept-based inference component proceeds to integrate it with various backbone LLMs to derive answers A using a faithful-intensive prompt template as follows: [Refer to the following facts to answer the question. Facts: C. Question: Q]. The intensity of prompts has been demonstrated to influence LLMs\u2019 adherence to knowledge from internal memory and retrieved documents [52]. 
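A minimal sketch of how the faithful-intensive template above could be instantiated. The function name and the comma-joining of the concept set are assumptions; the wording follows the template given in the text, and the example concepts are drawn from the Fig. 1 sentences:

```python
# Assemble the faithful-intensive prompt from distilled concepts C and question Q.
def build_prompt(concepts, question):
    facts = ", ".join(concepts)              # serialize the concept set C
    return ("Refer to the following facts to answer the question. "
            f"Facts: {facts}. Question: {question}")

prompt = build_prompt(
    ["The Outfit", "1973", "crime film", "direct", "John Flynn"],
    "Who directed The Outfit?")
```

Framing the concepts as "facts" delimits the sandbox described above: the LLM is instructed to treat C as authoritative, discouraging fallback to conflicting parametric knowledge.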
Since our hypothesis is that the retrieved documents contain correct answers, we encourage the LLMs to leverage the knowledge encapsulated in C when responding to queries. This strategy helps minimize potential conflicts caused by their memorized parametric knowledge. To achieve this objective, we designate the concept as a "fact" within the instructional prompt, explicitly delineating a delimited sandbox for LLMs to presuppose the absolute correctness of the knowledge conveyed by C. This non-parametric knowledge can seamlessly integrate into LLMs in a plug-and-play manner. The overarching framework can be represented as Eq. 1. P(A|Q) = P(A|C, Q)P(C|D, Q)P(D|Q). (1) 3.2 AMR-based Concept Distillation Abstract Meaning Representation (AMR) serves as a logical formal semantic structure proficient in encapsulating common-sense knowledge necessary for representing events, time, participants, and other elements within serialized texts [39]. Given a supporting document Dk ∈ D, the AMR parser is utilized to parse Dk into the corresponding AMR graph G = <N, E>, where N represents the nodes for concepts and E denotes the edges for the correlation relationships. In this context, we utilize a
Algorithm 1: Concept Distillation Input :AMR Graph (G) Output :concept (C) 1 Function Concept_Distillation(G): 2 concept \u2190[], role \u2190[]; 3 for Gsntn in SplitSnt (G) do 4 for N in DFS(Gsntn) do 5 if IsRole(N) then 6 if IsName(N) then 7 AppendRole(HandleName(N)) 8 if IsWiki(N) then 9 AppendRole(HandleWiki(N)) 10 if IsDate(N) then 11 AppendRole(HandleDate(N)) 12 else 13 if role is not None then 14 AppendConcept(HandleRole(role)); 15 role \u2190[]; 16 AppendConcept(N); 17 if (N is Last) and (role is not None) then repeat :Algorithm.Line 5-11 18 AppendConcept(HandleRole(role)); 19 concept \u2190ConceptFormat (concept); 20 concept \u2190ConceptBacktrace (concept); 21 return C \u2190concept We propose the concept distillation algorithm to format the concepts represented in G, as described in Algorithm 1. The supporting document Dk encompasses multiple sentences (sntn), and the AMR parser can structurally parse Dk into a pre-defined multi-sentence structure. The SplitSnt(\u00b7) function is designed to partition G and organize the resulting sentence-based sub-graphs according to the sequential order. Notably, we simplify G by disregarding the agent and patient of the concepts, i.e., the edges denoting relations between the connected concepts (Frame args, ARGX). Consequently, G is streamlined into a unidirectional connecting structure. Leveraging this structure, we perform a Depth First Search, DFS(\u00b7) on the N of G to traverse the concepts while maintaining the relative positional correlation of adjacent nodes. This approach emphasizes the connection as it exists in the preceding sequential representation, and the process is elaborated in Fig. A1. Previous research has investigated the influence of context order on LLMs [30]. We delve into the various traversal methods for testing their potential impact in Section D. The AMR defines a set of roles to meticulously delineate the semantic fabric of sentences. 
This paper underscores the careful handling of three roles, namely :name, :wiki, and date-entity, employing IsRole(·) to identify the predefined roles comprehensively. The :name role signifies a property node within the AMR graph, denoting entities such as individuals, organizations, or geographic locations. When the concept expressed by :name spans multiple words, AMR parsing decomposes each word within the :name into predicate roles (:op), thereby dispersing the holistic concept across multiple nodes. During the DFS(·) traversal, such fragmented nodes can confuse LLMs due to incomplete meaning expressions. To maintain the integrity of concepts carried by :name, we introduce HandleName(·), which organizes the predicates in a stack structure. The :wiki role provides reliable external concept references sourced from Wikipedia. To standardize diverse expressions referring to the same named entities, we utilize the HandleWiki(·) function, which aligns the concepts with the corresponding definitions in Wikipedia. If the concept in :name differs from :wiki, we adopt the concept expressed by :wiki to avoid semantic ambiguity. In addition, the date-entity role depicts temporal concepts. In our algorithm, we specifically manage the roles :year, :month, and :day with HandleDate(·). This function consolidates roles under the same date-entity and translates numerical months into textual representations, turning a fragment like "19 04 2024" into the clearer expression "19 April 2024". AMR incorporates special numerical annotations for certain parsing nodes, such as work-01, where the number appended to the word distinguishes different meanings of the same word in distinct contexts, as defined in OntoNotes [49].
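A minimal sketch of the HandleDate(·) consolidation, assuming the :day/:month/:year values have already been extracted from the date-entity node (the function signature is illustrative):

```python
MONTHS = {1: "January", 2: "February", 3: "March", 4: "April",
          5: "May", 6: "June", 7: "July", 8: "August",
          9: "September", 10: "October", 11: "November", 12: "December"}

def handle_date(day=None, month=None, year=None):
    """Consolidate a date-entity's :day/:month/:year roles into one textual
    concept, translating numerical months into words (sketch of HandleDate)."""
    parts = []
    if day is not None:
        parts.append(str(day))
    if month is not None:
        parts.append(MONTHS[month])
    if year is not None:
        parts.append(str(year))
    return " ".join(parts)
```

Missing roles are simply skipped, so partial dates such as "May 1999" come out well-formed too.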
In the RAG scenario, we provide LLMs with supporting documents comprising a set of concepts. This suggests that concepts are understood in relation to relevant contexts rather than in isolation. Therefore, the proposed concept-based RAG framework depends on the contextual learning capability of LLMs to distinguish between polysemous concepts, instead of relying on intricate semantic references. The nodes belonging to the aforementioned roles are integrated into the preliminary concept set with HandleRole(·), while AppendConcept(·) directly integrates the remaining nodes based on the corresponding instances. The structure of AMR comprises a collection of canonical nodes (city-district, market-sector, etc.) designed to enforce knowledge and prevent hallucination regarding entity types. However, in the concept-based RAG scenario, the inference process is not based directly on AMR but on distilled concepts. The auxiliary semantics embedded within these nodes, which are absent in the source supporting documents, may dilute the essence of the core concepts. To address this concern, we employ ConceptFormat(·) to filter out these nodes and reduce the potential interference. Additionally, frequently occurring concepts are filtered out based on their Inverse Document Frequency (IDF). Furthermore, the selection of representations in AMR follows the principle of abstraction and generalization rather than exact lexical items. This representation may lead the nodes to ignore variations such as tense, which are informative for concept-based RAG without reference annotations. To mitigate this, we develop the ConceptBacktrace(·) function to maintain consistency with the concepts expressed in the source supporting documents.

Footnotes: 2 https://github.com/BramVanroy/multilingual-text-to-amr  3 https://catalog.ldc.upenn.edu/LDC2020T02
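The IDF-based filtering of frequently occurring concepts might look as follows; the threshold value is an illustrative assumption, since the paper does not specify one:

```python
import math
from collections import Counter

def filter_by_idf(docs_concepts, min_idf=0.5):
    """Drop concepts whose IDF falls below a threshold, i.e. concepts that
    occur in too many documents to be discriminative. The threshold is an
    illustrative assumption."""
    n_docs = len(docs_concepts)
    df = Counter(c for doc in docs_concepts for c in set(doc))
    idf = {c: math.log(n_docs / df[c]) for c in df}
    return [[c for c in doc if idf[c] >= min_idf] for doc in docs_concepts]

# "film" appears in every document, so its IDF is 0 and it is filtered out
docs = [["film", "director"], ["film", "actor"], ["film", "award"]]
filtered = filter_by_idf(docs)
```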
This function facilitates the backtracking of formatted concepts by incorporating representations from the supporting documents, ensuring they closely adhere to the original semantics without deviation. Subsequently, the backtraced concepts serve as the finalized concept set C, providing conceptual support for LLMs in RAG inference.

4 Experiments

4.1 Datasets

We conducted extensive experiments to verify the efficacy of the concept-based RAG framework on open-domain Q&A datasets: PopQA [32] and EntityQuestions [40]. Each dataset includes a label ("hasanswer") for every supporting document, indicating whether it contains the answer to the associated question. To ensure a focused evaluation, we retained only the "" pairs where hasanswer=True. This selection criterion accommodates scenarios where all retrieved documents contribute positively to answering questions, thus mitigating interference from extraneous factors. The experiments involved verifying the LLMs' inference performance with different K, which denotes the number of supporting documents per question Q. For the PopQA dataset, we filtered out questions whose subject entities have monthly Wikipedia pageviews (s_pop) ≥ 500. This step excludes frequently accessed entities, keeping the dataset focused on long-tail knowledge. This approach serves the dual purpose of preventing data contamination and encouraging LLMs to rely more on retrieved documents than on memorized knowledge, mitigating potential knowledge conflicts in the RAG process. The numbers of selected pairs under different K settings are reported in Table 1.

Table 1: Statistical results of the number of screened-out pairs from the datasets.
K                     =    1     2    3    4    5    6    7    8    9   10
PopQA [32]               738  1307  422  262  161  151  108   79   66   70
EntityQuestions [40]    1671  1127  670  454  335  264  196  166  163  103

4.2 Baselines

The baseline evaluations encompass two aspects: (1) exploration of diverse backbone LLMs, and (2) experimentation with different context compression methods. Specifically, we consider various mainstream LLMs as backbones, including GPT-Neo-1.3B, GPT-Neo-2.7B [5], OPT-1.3b, OPT-2.7b [57], bloom-560m, bloom-1b1, bloom-1b7, bloom-3b [26], LLaMA-2-7b-chat-hf, LLaMA-2-13b-chat-hf [47]. The backbone LLMs coupled with the original supporting documents serve as the Vanilla methods. For the alternative aspect, we explore three context compression methods: context keyword extraction, context summarization, and Selective Context (SelCon) [29]. These methods aim to validate the efficacy of context compression while preserving essential information for inference, emphasizing discrete key features, fluent representation, and non-redundant information, respectively. Inspired by Chuang et al. [10], we employ an open-access LLM, LLaMA-2-13b-chat-hf [47], for context keyword extraction and summarization. This process involves extracting key phrases or terms from the context and generating a concise summary of the provided content, constrained by the prompts "[Generate a short summary of the following content.]" and "[Extract a few keywords from the following content.]". The detailed prompts are available in Appendix B. SelCon enhances the efficiency of LLMs' inference by identifying and eliminating redundant content from the source context; its reduction ratio is set to 0.5 in our comparison. These baseline settings effectively demonstrate the comprehensive advantages of the proposed algorithm in capturing informative concepts when compared to various alternative compression techniques, whether generative-based or semantic-based.
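The dataset screening of Section 4.1 (keeping only pairs whose supporting documents all carry the answer, and dropping PopQA subjects with s_pop ≥ 500) can be sketched as below; the record layout is a hypothetical assumption:

```python
def screen_pairs(records, k, spop_max=500):
    """Keep question records whose first K supporting documents all have
    hasanswer=True and whose subject's monthly pageviews s_pop stay below
    the long-tail cutoff. The record layout is hypothetical."""
    kept = []
    for r in records:
        docs = r["docs"][:k]
        if (len(docs) == k
                and all(d["hasanswer"] for d in docs)
                and r.get("s_pop", 0) < spop_max):
            kept.append(r)
    return kept

records = [
    {"q": "q1", "s_pop": 120, "docs": [{"hasanswer": True}, {"hasanswer": True}]},
    {"q": "q2", "s_pop": 120, "docs": [{"hasanswer": True}, {"hasanswer": False}]},
    {"q": "q3", "s_pop": 900, "docs": [{"hasanswer": True}, {"hasanswer": True}]},
]
kept = screen_pairs(records, k=2)   # only q1 survives both filters
```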
4.3 Evaluation Metrics

We employ two metrics to evaluate the concept-based RAG: accuracy (Acc.) and integration (Intg.). Accuracy (Acc.) is determined by assessing whether the answer A matches any of the gold answers corresponding to the question Q. The integration metric (Intg.) is designed to comprehensively evaluate performance across various K of the retrieved supporting documents D. Specifically, Intg. is the area beneath each model's accuracy curve plotted against the X-axis (K). Intg. is calculated as in Eq. 2, where K ∈ [x_s, x_e], and x_s and x_e represent the minimum and maximum number of supporting documents, respectively. A higher value of Intg. indicates superior overall performance. Given that the proposed framework aims to enhance long-context RAG, we segment the evaluation of Intg. into two distinct intervals: the normal interval (I_n = [1, 10], K ∈ I_n) and the longer interval (I_l = [6, 10], K ∈ I_l). This division is intended to emphasize the effectiveness of the concept-based RAG framework, particularly in scenarios involving longer contexts.

Intg. = ∫_{x_s}^{x_e} Acc(x) dx ≈ (1/2) Σ_{i=1}^{x_e − x_s} (x_i − x_{i−1}) [Acc(x_i) + Acc(x_{i−1})]   (2)

5 Results and Analysis

The evaluation results for the PopQA and EntityQuestions datasets are depicted in Fig. 3 and Fig. 4, respectively, providing graphical trends of Acc. as K increases. Furthermore, Table 2 and Table 3 present the quantitative Intg. results for the two datasets. These tables include the calculation of Δ, quantifying the improvement achieved by our proposed method over the Vanilla methods; specifically, Δ = Intg._ours − Intg._vanilla. Detailed quantitative Acc. results are provided in Table A3 and Table A4. Section E and Section F examine the compression ratio and inference latency to demonstrate the advantages of concept-compressed contexts.

Figure 3: The evaluation results of the Acc.
↑ trends and Intg. ↑ on the PopQA dataset. The vertical axis represents Acc., and the horizontal axis represents the number of supporting documents, K. The polyline reflects the changing trend of Acc. with different K, and the area under it is Intg.

A key finding intuitively conveyed by Fig. 3 and Fig. 4 is the superior performance of our method in long-context scenarios, particularly evident when K is high. As K increases, especially within

Figure 4: The evaluation results of the Acc. ↑ trends and Intg. ↑ on the EntityQuestions dataset. The definitions of the axes and symbols are the same as in Fig. 3.

Table 2: The quantitative results of Intg. ↑ for the PopQA dataset, where the full names of the LLMs, in order, are: GPT-Neo-1.3B, GPT-Neo-2.7B, OPT-1.3b, OPT-2.7b, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, LLaMA-2-7b-chat, LLaMA-2-13b-chat. The best results are in bold, and the second-best results are underlined. Increases and decreases in Δ are marked differently.
D         K    G-1.3   G-2.7   O-1.3   O-2.7   b-560   b-1b1   b-1b7   b-3     L-7     L-13
Vanilla   In   620.68  631.39  656.68  687.15  619.86  692.68  707.25  671.88  682.30  672.03
          Il   291.08  275.32  300.85  322.23  294.94  325.37  326.29  305.91  337.19  312.62
Keywords  In   468.94  484.98  554.67  571.38  502.70  610.69  621.85  600.65  628.78  617.06
          Il   257.12  244.24  297.70  305.64  275.39  327.70  338.01  318.37  326.41  315.93
Summary   In   517.57  513.37  619.78  575.32  573.95  608.41  637.55  591.12  564.51  553.24
          Il   263.14  260.64  316.80  290.50  304.55  313.36  336.20  297.44  291.50  291.39
SelCon    In   444.29  524.54  615.78  607.12  423.22  634.81  606.15  625.66  715.90  703.29
          Il   237.49  262.78  313.39  323.69  230.20  318.64  306.72  314.07  344.10  332.51
Ours      In   625.31  652.71  668.86  688.47  608.31  686.29  698.91  681.22  738.82  716.55
          Il   322.37  321.73  329.65  344.31  314.34  347.71  355.52  344.08  357.56  339.38
Δ         In   +4.63   +21.32  +12.18  +1.32   -11.55  -6.93   -8.34   +9.34   +56.52  +44.52
          Il   +31.29  +46.41  +28.80  +22.08  +19.40  +22.34  +29.23  +38.17  +20.37  +26.76

Table 3: The quantitative results of Intg. ↑ for the EntityQuestions dataset. The LLMs' order and symbol definitions are the same as in Table 2.
D         K    G-1.3   G-2.7   O-1.3   O-2.7   b-560   b-1b1   b-1b7   b-3     L-7     L-13
Vanilla   In   531.54  605.06  602.52  634.28  488.95  594.88  608.85  619.30  607.22  632.24
          Il   247.50  284.47  277.47  299.03  222.99  266.91  284.00  289.26  289.95  287.48
Keywords  In   280.76  360.00  403.37  439.73  295.02  428.54  465.15  462.65  584.67  574.61
          Il   134.96  167.13  196.04  215.41  143.68  207.59  227.84  223.38  287.84  284.53
Summary   In   366.73  406.72  501.51  446.50  388.36  415.61  501.90  435.49  425.70  438.31
          Il   179.97  205.02  255.51  210.93  187.75  197.43  257.16  211.83  210.34  222.92
SelCon    In   298.49  405.22  471.36  468.18  215.52  460.37  451.41  539.49  623.91  641.01
          Il   144.69  195.05  231.76  223.55  108.45  214.94  217.40  261.79  295.33  304.57
Ours      In   551.50  618.18  609.88  652.48  483.02  600.72  624.53  621.36  664.18  703.67
          Il   267.12  298.74  285.06  303.49  243.55  286.20  295.45  300.29  303.39  320.87
Δ         In   +19.96  +13.12  +7.36   +18.20  -5.93   +5.84   +15.58  +2.06   +56.96  +71.43
          Il   +19.62  +14.27  +7.59   +4.45   +20.56  +19.29  +11.45  +11.03  +13.44  +33.39

the longer context setting (I_l), the Acc. of our method consistently outperforms that of the various backbone LLMs coupled with other context compression methods. This trend suggests that the concepts distilled by our method help reduce interference and enable the LLMs to concentrate on key knowledge. Moreover, the positive values of Δ in Table 2 and Table 3 for the I_l interval further underscore the improvement achieved by our framework over baseline methods when handling longer contexts. This observation emphasizes the effectiveness of the AMR-based concept distillation algorithm in capturing essential semantic information from supporting documents, thereby enabling LLMs to generate more accurate answers even when confronted with messy contexts. When the bloom-560m model serves as the backbone LLM, an interesting finding is that Δ exhibits negative trends in the I_n interval on both datasets, while SelCon does not perform ideally either.
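The Intg. metric of Eq. 2 (the trapezoidal rule over integer K) and the reported Δ improvements can be reproduced with a few lines; a minimal sketch, with the usage values quoted from Table 2:

```python
def integrated_acc(acc_by_k):
    """Trapezoidal-rule Intg. (Eq. 2) over integer K values with unit spacing."""
    ks = sorted(acc_by_k)
    return sum(0.5 * (ks[i] - ks[i - 1]) * (acc_by_k[ks[i]] + acc_by_k[ks[i - 1]])
               for i in range(1, len(ks)))

def improvement(intg_ours, intg_vanilla):
    """Delta = Intg._ours - Intg._vanilla, as reported in Tables 2 and 3."""
    return round(intg_ours - intg_vanilla, 2)

# LLaMA-2-7b-chat on PopQA, I_n interval (values quoted from Table 2)
delta_l7 = improvement(738.82, 682.30)   # matches the +56.52 entry
```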
We hypothesize that this is due to the limited ability of small-scale models to associate semantic scenarios through discrete concepts, which results in the models' inability to understand the core information expressed in the compressed supporting documents. Conversely, when coupled with advanced LLMs such as LLaMA-2, the contexts compressed by the proposed method and by SelCon yield the largest and second-largest enhancements, respectively. This observation likely arises from these large-scale models' superior contextual understanding capabilities, which corroborates our hypothesis. Regarding the improvements of Δ on the I_l interval of the two datasets, our method's enhancement on the PopQA dataset is more pronounced. This is because PopQA was released recently, and its knowledge is less likely to have been memorized by earlier models such as GPT-Neo and OPT. Moreover, the screening for long-tail knowledge further accentuates the unique scenario provided by PopQA, making it an ideal testbed for evaluating context compression methods. The proposed AMR-based concept distillation method demonstrates clear advantages over the generative compression methods of keyword extraction and summarization. While these methods utilize LLMs to generate compressed representations and show competitive results in certain cases, they may inadvertently introduce noise or lose essential details during the compression process. Moreover, the generative nature of these methods makes them inherently difficult to control, even when provided with instructions as constraints. Consequently, the generated keywords and summaries may exhibit randomness, potentially deviating from the core concepts conveyed in the original supporting documents. In contrast, our framework leverages the inherent structured semantic representation of AMR to capture the core concepts explicitly.
This semantic-level abstraction enables the framework to faithfully format the concepts and provide more reliable and informative support for the RAG process. Compared to the linguistic context compression baseline, SelCon, which identifies and prunes redundant content based on self-information computed at the lexical level, the proposed method, operating at the semantic level, achieves superior results. SelCon's effectiveness depends on determining the right granularity for redundancy removal, making it sensitive to the choice of lexical unit. In contrast, our method takes a macro view by focusing on the semantic consistency carried by the AMR structure, making it insensitive to subtle lexical bias. This characteristic enables it to serve as a reliable plug-and-play component in various RAG systems dealing with supporting documents that contain irrelevant information and potential lexical errors. The robustness of the proposed framework is demonstrated by its consistent performance improvements across various LLMs. The experimental results on both datasets showcase the generalizability of our method, irrespective of the underlying LLM architecture. This finding suggests that the concept-based RAG framework can be effectively coupled with diverse LLMs, making it a versatile solution for enhancing inference performance in long-context scenarios.
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03108v1.json b/abs_9K/test_abstract_short_2405.03108v1.json new file mode 100644 index 0000000000000000000000000000000000000000..eaf95fc244c640c51aab087d462e0f05de1d7459 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03108v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03108v1", + "title": "Impact of Postshock Turbulence on the Radio Spectrum of Radio Relic Shocks in Merging Clusters", + "abstract": "This study investigates the impact of magnetic turbulence on cosmic ray (CR)\nelectrons through Fermi-II acceleration behind merger-driven shocks in the\nintracluster medium and examines how the ensuing synchrotron radio emission is\ninfluenced by the decay of magnetic energy through dissipation in the postshock\nregion. We adopt simplified models for the momentum diffusion coefficient,\nspecifically considering transit-time-damping resonance with fast-mode waves\nand gyroresonance with Alfv\\'en waves. Utilizing analytic solutions derived\nfrom diffusive shock acceleration theory, at the shock location, we introduce a\nCR spectrum that is either shock-injected or shock-reaccelerated. We then track\nits temporal evolution along the Lagrangian fluid element in the time domain.\nThe resulting CR spectra are mapped onto a spherical shell configuration to\nestimate the surface brightness profile of the model radio relics. Turbulent\nacceleration proves to be a significant factor in delaying the aging of\npostshock CR electrons, while decaying magnetic fields have marginal impacts\ndue to the dominance of inverse Compton cooling over synchrotron cooling.\nHowever, the decay of magnetic fields substantially reduces synchrotron\nradiation. Consequently, the spatial distribution of the postshock magnetic\nfields affects the volume-integrated radio spectrum and its spectral index. 
We\ndemonstrate that the Mach numbers estimated from the integrated spectral index\ntend to be higher than the actual shock Mach numbers, highlighting the\nnecessity for accurate modeling of postshock magnetic turbulence in\ninterpreting observations of radio relics.", + "authors": "Hyesung Kang", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "This study investigates the impact of magnetic turbulence on cosmic ray (CR)\nelectrons through Fermi-II acceleration behind merger-driven shocks in the\nintracluster medium and examines how the ensuing synchrotron radio emission is\ninfluenced by the decay of magnetic energy through dissipation in the postshock\nregion. We adopt simplified models for the momentum diffusion coefficient,\nspecifically considering transit-time-damping resonance with fast-mode waves\nand gyroresonance with Alfv\\'en waves. Utilizing analytic solutions derived\nfrom diffusive shock acceleration theory, at the shock location, we introduce a\nCR spectrum that is either shock-injected or shock-reaccelerated. We then track\nits temporal evolution along the Lagrangian fluid element in the time domain.\nThe resulting CR spectra are mapped onto a spherical shell configuration to\nestimate the surface brightness profile of the model radio relics. Turbulent\nacceleration proves to be a significant factor in delaying the aging of\npostshock CR electrons, while decaying magnetic fields have marginal impacts\ndue to the dominance of inverse Compton cooling over synchrotron cooling.\nHowever, the decay of magnetic fields substantially reduces synchrotron\nradiation. Consequently, the spatial distribution of the postshock magnetic\nfields affects the volume-integrated radio spectrum and its spectral index. 
We\ndemonstrate that the Mach numbers estimated from the integrated spectral index\ntend to be higher than the actual shock Mach numbers, highlighting the\nnecessity for accurate modeling of postshock magnetic turbulence in\ninterpreting observations of radio relics.", "main_content": "Introduction

Giant radio relics found in the outskirts of galaxy clusters, such as the Sausage and Toothbrush relics, are thought to result from shocks that occur following the passage of the dark matter (DM) core during major mergers (e.g., van Weeren et al. 2010, 2016; Ha et al. 2018). They are weak quasi-perpendicular shocks with low Mach numbers (M_s ≲ 3) formed in the weakly magnetized intracluster medium (ICM) (e.g., Kang et al. 2012; Kang 2016; Kang et al. 2017). Diffuse radio emission originates from cosmic ray (CR) electrons with Lorentz factors γ ∼ 10^3 − 10^4, gyrating in microgauss-level magnetic fields. These electrons are believed to be accelerated via diffusive shock acceleration (DSA) (see Brunetti & Jones 2014; van Weeren et al. 2019, for reviews). Alternative scenarios such as adiabatic compression by shocks (Ensslin & Gopal-Krishna 2001; Ensslin & Brüggen 2002), reacceleration of fossil CR electrons by shocks (Kang et al. 2012; Pinzke et al. 2013), and reacceleration by postshock turbulence (Fujita et al. 2015; Kang 2017) have been considered as well. The DSA theory predicts that the energy spectrum of CR particles accelerated through the Fermi first-order (Fermi-I) process follows a power-law distribution, f_sh ∝ p^{−q}, where q = 4 M_s^2 / (M_s^2 − 1) (Bell 1978; Drury 1983). Consequently, this leads to a synchrotron radio spectrum, j_ν ∝ ν^{−α_sh}, with the so-called "injection spectral index", α_sh = (q − 3)/2, immediately behind the shock.
As a result, the Mach numbers of radio relic shocks can be estimated using the relation (e.g., Kang 2015):

M_rad,sh = [(3 + 2 α_sh) / (2 α_sh − 1)]^{1/2}.   (1)

Alternatively, one can determine the Mach number by observing the steepening of the volume-integrated spectrum, J_ν ∝ ν^{−α_int}, toward the so-called "integrated spectral index", α_int = α_sh + 0.5, at high frequencies. This steepening is attributed to synchrotron and inverse-Compton (IC) losses in the postshock region with a constant magnetic field strength, leading to the following relation (e.g., Kang et al. 2017):

M_rad,int = [(α_int + 1) / (α_int − 1)]^{1/2}.   (2)

© Published under Creative Commons license CC BY-SA 4.0. arXiv:2405.03108v1 [astro-ph.HE] 6 May 2024. Impact of Postshock Turbulence on Radio Relics.

However, the transition of the power-law index from α_sh to α_int takes place gradually over the broad frequency range of ∼0.1−10 GHz, depending on the shock age and postshock magnetic field strength. Furthermore, the volume-integrated emission spectrum can deviate from the simple DSA power law in the case of evolving shock dynamics and nonuniform magnetic field strength in the postshock region, as suggested by Kang (2015). Thus, the estimates of M_rad,int for observed radio relics tend to be higher than M_rad,sh (e.g., Hoang et al. 2018). On the other hand, Mach numbers inferred from X-ray observations, M_X, are sometimes found to be smaller than M_rad, i.e., M_X ≲ M_rad (e.g., Akamatsu & Kawahara 2013; van Weeren et al. 2019). This discrepancy is recognized as an unsolved challenge in understanding the origin of radio relics. Wittor et al. (2021) compiled values of M_rad and M_X for observed radio relics available in the literature, confirming the Mach number discrepancy (refer to their Figure 7).
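Equations (1) and (2) are simple enough to check numerically; a minimal sketch:

```python
def mach_from_injection_index(alpha_sh):
    """Eq. (1): shock Mach number from the injection spectral index."""
    return ((3.0 + 2.0 * alpha_sh) / (2.0 * alpha_sh - 1.0)) ** 0.5

def mach_from_integrated_index(alpha_int):
    """Eq. (2): Mach number from the integrated spectral index."""
    return ((alpha_int + 1.0) / (alpha_int - 1.0)) ** 0.5
```

For an M_s = 3 shock, q = 4.5, so α_sh = 0.75 and α_int = 1.25; both relations recover M = 3, confirming that Eq. (2) is Eq. (1) evaluated at α_int = α_sh + 0.5.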
By employing cosmological structure formation simulations, the authors confirmed the prevailing notion that the radio flux is dominated by contributions from high Mach number shocks within the ensemble associated with a particular relic, whereas X-ray emission predominantly originates from low Mach number shocks (see also Hong et al. 2015; Roh et al. 2019; Botteon et al. 2020; Domínguez-Fernández et al. 2021). Additionally, several potential solutions have been suggested to address this puzzle. These include the reacceleration of preexisting fossil CR electrons with a flat spectrum (e.g., Pinzke et al. 2013; Kang 2016; Kang et al. 2017) and acceleration by multiple shocks with different Mach numbers formed in the turbulent ICM (e.g., Inchingolo et al. 2022). As clusters form through numerous merging episodes of smaller subclusters, the gas flows within the ICM inherently become turbulent (Miniati 2015; Porter et al. 2015; Vazza et al. 2017). During active mergers, the ICM turbulence becomes transonic, and the largest turbulent eddies (L ∼ 100−500 kpc) decay into smaller ones. This process cascades into magnetohydrodynamic (MHD) turbulence and further down to kinetic turbulence through plasma instabilities, as comprehensively reviewed by Brunetti & Jones (2014). Additionally, vorticity generated behind curved ICM shocks is known to produce MHD turbulence and amplify magnetic fields in the postshock region (Ryu et al. 2008). On the other hand, numerical simulations of non-driven, decaying MHD turbulence indicate that turbulent energy dissipates within one eddy turnover time, t_dec ∼ λ_d / v_turb, where λ_d represents the largest driving scale and v_turb is the mean turbulent velocity (e.g., Mac Low et al. 1998; Mac Low 1999; Cho & Lazarian 2003).
Consequently, behind typical merger shocks, the estimated turbulent decay timescale is approximately t_dec ∼ L/u_2 ∼ (100 kpc)/(10^3 km s^{−1}) ∼ 0.1 Gyr, where L is the largest eddy size of the induced turbulence and u_2 is the characteristic postshock flow speed.1 Moreover, the interaction of preexisting turbulence with shock waves can induce corrugation of the shock front, thereby enhancing postshock turbulence on plasma kinetic scales through processes such as shock compression and turbulent dynamo mechanisms (Guo & Giacalone 2015; Trotta et al. 2023). Hybrid kinetic simulations of similar setups also indicate that postshock magnetic fluctuations exhibit a Kolmogorov spectrum and undergo substantial decay downstream due to dissipation (Nakanotani et al. 2022). Although these studies examined the plasma processes and wave-particle interactions on kinetic scales in a low-beta (β = P_B/P_g ∼ 1) plasma relevant for interplanetary shocks, we expect the same processes to operate similarly in the postshock region of ICM shocks formed in high-beta (β ∼ 100) plasma as well. The amplification of postshock magnetic fields and the subsequent decay of MHD turbulence affect the radio spectrum of relic shocks. First, CR electrons can be further energized via Fermi second-order (Fermi-II) acceleration, primarily through interaction with compressible fast-mode waves via the transit-time-damping (TTD) resonance (Brunetti & Lazarian 2007; Brunetti & Jones 2014) and with Alfvén waves via gyroresonance (Brunetti et al. 2004; Fujita et al. 2015). Additionally, the synchrotron emission scales with the magnetic field strength as j_ν ∝ B^{(q−1)/2}, typically with q ∼ 4.0−5.0, so the decay of the magnetic field B significantly reduces the synchrotron emission.

1 Throughout the paper, the subscript '2' is used for postshock quantities.
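The order-of-magnitude estimate t_dec ∼ L/u_2 ∼ 0.1 Gyr can be verified with a short unit conversion (the constants are standard cgs conversion factors):

```python
KPC_CM = 3.086e21    # centimetres per kiloparsec
GYR_S = 3.156e16     # seconds per gigayear

def eddy_turnover_gyr(scale_kpc, speed_km_s):
    """Turbulent decay timescale t_dec ~ L / u2, returned in Gyr."""
    return scale_kpc * KPC_CM / (speed_km_s * 1.0e5) / GYR_S

t_dec = eddy_turnover_gyr(100.0, 1.0e3)   # ~0.1 Gyr, as quoted in the text
```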
In this study, we explore the impact of turbulent acceleration (TA) on the evolution of the CR electron spectrum in the postshock flow, considering the decay of the magnetic fluctuations. The numerical procedure is outlined as follows:

1. We incorporate Fermi-II acceleration of CR electrons, employing simplified models for the momentum diffusion coefficient, D_pp(p). This accounts for TTD resonance with fast-mode waves and gyroresonance with Alfvén waves.

2. We track the time evolution of the CR electron population, f(p, t), by following the Lagrangian fluid element through advection in the postshock region. This is accomplished by solving the Fokker-Planck equation in the time domain. In a one-dimensional (1D) planar shock configuration, the time integration can be transformed into the spatial profile of f(p, x) through the relation x = u_2 t, where u_2 is the constant postshock speed and t is the advection time since the shock passage.

3. The synchrotron emissivity, j_ν(t), is calculated, utilizing the information for f(p, t) and B(t).

4. The surface brightness profile, I_ν(d), is estimated as a function of the distance d from the relic edge projected onto the sky plane. This is obtained by adopting a coconut-shell-shaped spherical surface, as illustrated in Figure 1.

In the next section, we provide detailed explanations of the numerical methods and working models employed to simulate the physical processes. In Section 3, we apply our approach to various examples. Specifically, we focus on scenarios involving the injection and reacceleration of CR electrons by weak shocks with Mach numbers 2.3 ≲ M_s ≲ 3. Additionally,

Figure 1. Schematic diagrams elucidating our model assumptions. (a) To model the surface of a radio relic, we employ a spherical, coconut-shell-shaped structure with an axial ratio of a/b ≳ 1 and a thickness of l_cool = u_2 t_cool.
Here, u_2 and t_cool represent the advection speed and cooling timescale in the postshock flow, respectively. Radio relics become prominent after the passage of the dark matter core during a major merger. (b) The surface brightness, I_ν(d), is estimated by integrating the volume emissivity, j_ν(x), with x = u_2 t, along a line of sight, where d is the distance from the relic edge projected onto the sky plane. I_ν(d) depends on the CR electron density, the magnetic field strength, B(x), and the momentum diffusion coefficient, D_pp(x), which decay with a timescale of t_dec. Here, B_2 and D_pp,2 are the immediate postshock values. The inset panel illustrates how the spatial profile of I_ν(d) depends on the decay timescale, t_dec, of magnetic turbulence. Here, turbulent acceleration is ignored (D_pp = 0), but synchrotron and inverse-Compton losses are included. The shell radius is R_s = 1 Mpc, and the extension angles are ψ_1 = ψ_2 = 15°.

we estimate the resulting radio emission spectra in an idealized setup. A brief summary of our findings will be presented in Section 4.

2. Physical Models and Numerical Method

Here, we consider merger-driven shocks that become radio-luminous subsequent to the DM core passage in a major binary merger, as depicted in Figure 1(a) (Ha et al. 2018). Although the shock surface evolves as a radially expanding spherical shell, we treat its dynamics as a planar shock with a constant postshock speed. This simplification is justified because the thickness of the postshock volume is on the order of l_cool ≈ u_2 t_cool ∼ 0.1 Mpc, which is much smaller than the shock radius, R_s ∼ 1−1.5 Mpc. Furthermore, the cooling timescale, t_cool ∼ 0.1 Gyr, is shorter than the typical dynamical timescale of clusters, t_dyn ∼ 1 Gyr. In such a scenario, the time integration can be transformed into a spatial profile using the relation x = u_2 t.

2.1.
Postshock Magnetic Turbulence

As outlined in the introduction, downstream of the shock front, CR electrons further acquire energy through TTD resonance with compressive fast-mode waves and gyroresonant scattering off Alfvén waves. These waves may be present in small-scale kinetic magnetic fluctuations cascaded down from MHD-scale turbulence (Brunetti & Jones 2014) or excited by plasma microinstabilities in the shock transition zone (Guo & Giacalone 2015; Trotta et al. 2023). However, the microphysics governing the excitation and evolution of MHD/plasma waves and the Fermi-II acceleration of CR electrons in high-beta ICM plasmas is quite complex and relatively underexplored (e.g., Lazarian et al. 2012). This makes it hard to formulate accurate models for the momentum diffusion coefficient, D_pp. The TA timescale due to the interaction with fast modes can be related to D_pp as

D_pp,f / p^2 ≈ 4 / τ_pp(p),   (3)

where, in general, τ_pp(p) depends on the nature and amplitude of the magnetic fluctuations, δB(x, t), in the flow. As in many previous studies (e.g., Kang et al. 2017), we take a practical approach in which a constant value, τ_pp = 0.1 Gyr, is assumed, since the detailed properties of the postshock turbulence are not well constrained. For instance, using cosmological structure formation simulations, Miniati (2015) found that in the ICM typically τ_pp ∼ 0.1−1 Gyr due to enhanced turbulence during the active phase of major mergers. Based on the work of Fujita et al. (2015), we adopt D_pp due to gyroresonance with Alfvén waves as follows:

D_pp,A / p^2 ∼ (1/9)(v_A^2 / D_xx) ∼ (1/3)(v_A/c)(v_A/l_mfp) ∼ (1/3)(v_A/c)(v_A/l_mfp,c) η_m^{−1} (p/p_0)^{q_K − 2},   (4)

where v_A = B/√(4πρ) is the Alfvén speed, D_xx ∼ c l_mfp/3 is

Figure 2.
Cooling timescales and TA timescales, all in units of 109 years: \u03c4Coul (blue) for Coulomb losses, \u03c4Syn+IC (red) for synchrotron and inverse Compton losses, \u03c4cool (black) for the total losses, \u03c4Dpf (magenta) due to fast mode waves, and \u03c4DpA (cyan) due to Alfv\u00e9n mode waves. Representative cases are considered with the following parameters: gas density n = 10\u22124 cm\u22123, magnetic field strength B = 2 \u00b5G, redshift zr = 0.2, and reduction factor, \u03b7m = 5 \u00d7 10\u22124. the spatial diffusion coefficient, \u03b7m \u223c5 \u00d7 10\u22124 is a reduction factor for waves on small kinetic scales, and p0 = 10\u22123mec is the reference momentum. The slope qK = 5/3 is adopted since Alfv\u00e9n modes of decaying MHD turbulence are expected to have a Kolmogorov spectrum (e.g., Cho & Lazarizn 2003). As a result, Dpp,A/p2 \u221dp\u22121/3B2, so TA becomes increasingly inefficient at higher momentum. In addition, Dpp,A decreases as magnetic fluctuations decay in the postshock flow. The Coulomb mean free path for thermal electrons can be estimated as lmfp,c \u223c174 kpc(ln \u039b 40 )\u22121( T 108K )2( n 10\u22124cm\u22123 )\u22121, (5) where ln \u039b \u223c40 is the Coulomb logarithm (Brunetti & Lazarian 2007). Figure 2 shows the cooling timescales for Coulomb collisions, \u03c4Coul, and synchrotron plus IC losses, \u03c4sync+IC, for a representative set of parameters for the ICM, i.e., n = 10\u22124cm\u22123, B = 2 \u00b5G, and the redshift, zr = 0.2. For radio emitting CR electrons with the Lorentz factor, \u03b3 \u223c103 \u2212104, typical cooling timescales range \u03c4cool \u223c0.1 \u22121 Gyr. The figure also compares the TA timescales due to fast modes, \u03c4Dpf, and for Alfv\u00e9n modes, \u03c4DpA. For the set of representative parameters considered here, TA with Dpp,A is more efficient compared to radiative losses for \u03b3 \u22723\u00d7103, whereas TA with Dpp,f is more efficient for \u03b3 \u2272104. 
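The fiducial value in Equation (5) can be checked directly. Below is a small illustrative sketch (our own, not code from the paper; the function name is ours):

```python
def coulomb_mfp_kpc(T_K, n_cm3, ln_lambda=40.0):
    """Coulomb mean free path of thermal electrons, Eq. (5):
    l_mfp,c ~ 174 kpc (ln Lambda / 40)^-1 (T / 1e8 K)^2 (n / 1e-4 cm^-3)^-1."""
    return 174.0 / (ln_lambda / 40.0) * (T_K / 1e8) ** 2 / (n_cm3 / 1e-4)

# Fiducial ICM values used in the text: T = 1e8 K, n = 1e-4 cm^-3
print(coulomb_mfp_kpc(1e8, 1e-4))  # 174.0
```

The quadratic temperature dependence and inverse density dependence mean the mean free path, and hence $D_{xx}$ and $D_{pp,A}$, vary strongly across the cluster outskirts.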
The full consideration of the evolution of postshock turbulence, including the vorticity generation behind a rippled shock front, additional injection of turbulence driven by continuous subclump mergers, decompression of the postshock flows, and kinetic wave-particle interactions, is beyond the scope of this study. Figure 3. The momentum distribution function, $g(p) = p^4f(p)$, is depicted in a $M_s = 3.0$ shock, based on the test-particle DSA model. The blue line represents the injected population, $f_{inj}$. The magenta dotted-dashed line illustrates a power-law spectrum of the pre-existing fossil electron population, $f_{pre}$, with a slope $s = 4.7$ and a cutoff momentum $p_{cut}/m_ec = 10^3$. The magenta dotted line displays the spectrum of the reaccelerated population, $f_{RA}$. Here, the amplitude of $f_{pre}$ is the same as that of $f_{inj}(p)$ at $p_{min} = Q_e\cdot p_{th}$, where $Q_e = 3.5$. In our calculations, $f_{inj}$ and $f_{RA}$ are deposited at the shock front ($t = 0$) in the M3In* and M3RA* models, respectively. In anticipation of the dissipation of MHD turbulence energy, we employ an exponential function to model the decay of magnetic energy and the reduction of momentum diffusion: $B(t) = B_2\cdot\exp(-t/t_{dec})$, $D_{pp}(p,t) = D_{pp,2}(p)\cdot\exp(-t/t_{dec})$, (6) where $t_{dec} = 0.1$ or $0.2$ Gyr is considered (see Table 1). Although the functional forms for the two quantities could differ, with separate values of $t_{dec}$, we opt for this simple model to reduce the number of free parameters in our modeling. In addition, we note that non-driven MHD turbulence is known to decay as a power law in time, i.e., $E_B \propto (1 + C_Bt/t_{dec})^{-\eta}$ with $\eta \sim 1$ and $C_B \sim 1$ (Mac Low et al. 1998; Cho & Lazarian 2003). Within one eddy turnover time ($t_{dec}$), the magnetic energy density decreases by a factor of $\sim2.72$ in the exponential decay model given in equation (6), and by a factor of $\sim2$ in the power-law decline model.
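The comparison of the two decay laws quoted above can be made concrete with a short sketch (ours, not from the paper); following the quoted factors of ~2.72 and ~2, the decay timescale is applied directly to the decaying quantity:

```python
import math

def exp_decay_factor(t, t_dec):
    """Factor by which a quantity decaying as exp(-t/t_dec), as in Eq. (6), has dropped."""
    return math.exp(t / t_dec)

def powerlaw_decay_factor(t, t_dec, C_B=1.0, eta=1.0):
    """Drop factor for the power-law decline E_B ∝ (1 + C_B t/t_dec)^-eta."""
    return (1.0 + C_B * t / t_dec) ** eta

# Within one eddy turnover time (t = t_dec):
print(round(exp_decay_factor(0.1, 0.1), 2))  # 2.72
print(powerlaw_decay_factor(0.1, 0.1))       # 2.0
```

The two laws agree to within ~35% over the first eddy turnover time, which motivates the paper's choice of the simpler exponential form.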
We can justify our choice since our study primarily focuses on a qualitative examination of how turbulence decay influences postshock synchrotron emission. Considering that $t_{dec}$ is a not-so-well-constrained free parameter in our model, the quantitative interpretation of our results should be taken with caution. 2.2. DSA CR Spectrum at the Shock Position We follow the time evolution of the CR distribution function, $f(p,t)$, in the Lagrangian fluid element that advects downstream with the constant postshock speed, so the spatial advection distance of the fluid element from the shock front is given as $x = u_2t$. Table 1. Model Parameters and Estimated Spectral Indices\nModel Name | $D_{pp}$ | $t_{dec}$(Myr) | $f_{inj}\propto p^{-q}$ | $\alpha^{0.61}_{0.15}$(a) | $\alpha^{3.0}_{0.61}$ | $\alpha^{16}_{3.0}$ | $M^{0.61}_{0.15}$(b) | $M^{3.0}_{0.61}$ | $M^{16}_{3.0}$\nM3InDp0 | $D_{pp}=0$ | $\infty$ | -- | 1.15 | 1.25 | 1.25 | 3.80 | 3.01 | 2.97\nM3InDp0(200) | $D_{pp}=0$ | 200 | -- | 1.02 | 1.10 | 1.18 | 11.1 | 4.51 | 3.52\nM3InDpf(200) | $D_{pp,f}$ | 200 | -- | 1.09 | 1.30 | 1.39 | 4.78 | 2.75 | 2.49\nM3InDpA(200) | $D_{pp,A}$ | 200 | -- | 1.18 | 1.18 | 1.22 | 4.45 | 3.51 | 3.20\nM3InDp0(100) | $D_{pp}=0$ | 100 | -- | 0.938 | 1.03 | 1.12 | -- | 8.68 | 4.22\nM3InDpf(100) | $D_{pp,f}$ | 100 | -- | 0.985 | 1.10 | 1.21 | -- | 4.49 | 3.21\nM3InDpA(100) | $D_{pp,A}$ | 100 | -- | 0.981 | 1.06 | 1.14 | -- | 5.80 | 3.86\nModel Name | $D_{pp}$ | $t_{dec}$(Myr) | $f_{pre}\propto p^{-s}$ | $\alpha^{0.61}_{0.15}$ | $\alpha^{3.0}_{0.61}$ | $\alpha^{16}_{3.0}$ | $M^{0.61}_{0.15}$ | $M^{3.0}_{0.61}$ | $M^{16}_{3.0}$\nM3RADp0(4.3) | $D_{pp}=0$ | 100 | $s=4.3$ | 0.938 | 1.03 | 1.12 | -- | 8.68 | 4.22\nM3RADp0(4.7) | $D_{pp}=0$ | 100 | $s=4.7$ | 0.938 | 1.03 | 1.12 | -- | 8.68 | 4.22\nM3RADpf(4.3) | $D_{pp,f}$ | 100 | $s=4.3$ | 0.985 | 1.10 | 1.21 | -- | 4.49 | 3.21\nM3RADpf(4.7) | $D_{pp,f}$ | 100 | $s=4.7$ | 0.985 | 1.10 | 1.21 | -- | 4.49 | 3.21\nM3RADpA(4.3) | $D_{pp,A}$ | 100 | $s=4.3$ | 0.981 | 1.06 | 1.14 | -- | 5.80 | 3.86\nM3RADpA(4.7) | $D_{pp,A}$ | 100 | $s=4.7$ | 0.981 | 1.06 | 1.14 | -- | 5.80 | 3.86\nThe model name consists of characters that represent the sonic Mach number, injection (In) or reacceleration (RA) cases, and the momentum diffusion models (Dp0, Dpf, and DpA). For M3In*(tdec) models, the number in parentheses is the decay timescale in units of Myr, while for M3RA*(s) models, it is the power-law slope of the preexisting CR population. The same set of models, M2.3*, for $M_s = 2.3$ shocks is also considered. (a) The spectral index, $\alpha^{\nu_2}_{\nu_1}$, is estimated from the volume-integrated spectrum, $J_\nu$, between two frequencies, $\nu_1$ and $\nu_2$, where $\nu = 0.15$, 0.61, 3.0, and 16 GHz. (b) The integrated Mach number, $M^{\nu_2}_{\nu_1}$, is estimated based on Equation (2) using $\alpha^{\nu_2}_{\nu_1}$. Note that for $\alpha^{\nu_2}_{\nu_1} < 1$, $M^{\nu_2}_{\nu_1}$ cannot be calculated. At the shock position ($t = 0$), the shock-injected spectrum, $f_{inj}(p)$, or the shock-reaccelerated spectrum, $f_{RA}(p)$, is assigned as the initial spectrum (see Figure 3). The spectrum of injected CR electrons is assumed to follow the DSA power law for $p \geq p_{min}$: $f_{inj}(p) \approx \left[\frac{n_2}{\pi^{1.5}}\,p_{th}^{-3}\exp(-Q_e^2)\right]\cdot\left(\frac{p}{p_{min}}\right)^{-q}$, (7) where $n_2$ and $T_2$ are the postshock gas density and temperature, respectively (Kang 2020). In addition, $p_{th} = \sqrt{2m_ek_BT_2}$ and $p_{min} = Q_e\,p_{th}$ with the injection parameter $Q_e = 3.5$. The usual physical constants are used: $m_e$ for the electron mass, $c$ for the speed of light, and $k_B$ for the Boltzmann constant. For the preshock population of CR electrons, we adopt a power-law spectrum with the slope $s$ for $p \geq p_{min}$: $f_{pre}(p) = f_o\cdot\left(\frac{p}{p_{min}}\right)^{-s}\exp\left(-\frac{p^2}{p_{cut}^2}\right)$, (8) where $f_o$ is the normalization factor and $p_{cut} \approx 10^3m_ec$ is a cutoff momentum due to cooling. The preexisting CR electrons may consist of fossil electrons injected by relativistic jets from radio galaxies or residual electrons accelerated in previous shock passages. If these fossil electrons are accelerated by relativistic shocks contained in relativistic jets, the power-law slope could be $s \approx 4.3$ (Kirk et al. 2000).
On the other hand, if they are accelerated by ICM shocks with $M_s \approx 2.3-3$ in the cluster outskirts, $s \approx 4.5-4.9$ (Hong et al. 2014). The reaccelerated population at the shock can be calculated by the following integration: $f_{RA}(p) = q\cdot p^{-q}\int_{p_{min}}^{p} p'^{\,q-1}f_{pre}(p')\,dp'$ (9) (Drury 1983; Kang & Ryu 2011). Except in the case of $q = s$, $f_{RA}(p) \propto p^{-r}$ with $r = \min(q,s)$, meaning $f_{RA}(p)$ adopts the harder spectrum between $p^{-q}$ and $p^{-s}$. 2.3. Model Parameters We choose shocks with Mach numbers $M_s = 2.3$ and $M_s = 3.0$ as the reference models. This selection is based on the observation that the Mach numbers of radio relic shocks detected in the cluster outskirts typically fall in the range of $2 \lesssim M_{rad} \lesssim 5$ (Wittor et al. 2021). Furthermore, numerous particle-in-cell (PIC) simulations have shown that only supercritical shocks with $M_s \gtrsim 2.3$ can effectively accelerate CR electrons in the weakly magnetized ICM characterized by $\beta \sim 50-100$ (e.g., Kang et al. 2019; Ha et al. 2021; Boula et al. 2024). Columns 1-4 of Table 1 list the model names for shocks with $M_s = 3.0$, along with the various model parameters being considered. In M3In* models, the shock-injected population given in Equation (7) is deposited at the shock location, while the reaccelerated population given in Equation (9) is used in M3RA* models. Additionally, we will present the same set of models with $M_s = 2.3$, denoted as M2.3*, in Section 3. M3InDp0 corresponds to the conventional DSA model without TA ($D_{pp} = 0$) in the postshock region with a constant magnetic field ($B_2$). The effects of decaying $B(t)$ are explored with the two values of the decay time, $t_{dec} = 100$ Myr and 200 Myr. For M3In* models, the number in parentheses represents $t_{dec}$ in units of Myr. Additionally, we investigate the dependence on the momentum diffusion models, namely $D_{pp,f}$ and $D_{pp,A}$.
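The claim that $f_{RA}$ follows the harder of the two power laws can be verified in closed form. Below is a small sketch (our own, not the paper's code); it uses the standard test-particle DSA slope $q(M) = 4M^2/(M^2-1)$, which is not written out in this excerpt but reproduces the $q = 4.5$ quoted for $M_s = 3$:

```python
import math

def dsa_slope(M):
    """Test-particle DSA slope, f(p) ∝ p^-q with q = 4 M^2 / (M^2 - 1);
    gives q = 4.5 for M = 3, as quoted in the text."""
    return 4.0 * M**2 / (M**2 - 1.0)

def f_RA(p, q, s, p_min=1.0):
    """Closed form of Eq. (9) for a pure power law f_pre = (p/p_min)^-s (q != s):
    f_RA(p) = q/(q - s) * [ (p/p_min)^-s - (p/p_min)^-q ]."""
    x = p / p_min
    return q / (q - s) * (x ** (-s) - x ** (-q))

q, s = dsa_slope(3.0), 4.7  # q = 4.5 < s, so the reaccelerated spectrum is ∝ p^-q
# Effective log-slope between p = 1e3 and 1e4 approaches min(q, s) = 4.5
slope = -math.log(f_RA(1e4, q, s) / f_RA(1e3, q, s)) / math.log(10.0)
```

The measured slope sits just below 4.5, converging to min(q, s) at large p, consistent with the statement in the text.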
Note that for the models with nonzero $D_{pp}$, the constant-$B$ case is not included, as it is incompatible with the decaying model for magnetic fluctuations. Figure 4. Evolution of the momentum distribution function, $g(p) = p^4f(p)$, at the advection times $t = 0.02, 0.04, ..., 0.2$ Gyr behind the $M_s = 3$ shock models, illustrating the postshock aging with the color-coded lines. See Table 1 for the model names and parameters. (a-c): The M3In* models are presented. The dotted line in each model represents the volume-integrated spectrum, $G(p) = p^4\cdot u_2\int_0^{t_f} f(p,t)\,dt$. (d-f): The M3RA*(4.3) models with $s = 4.3$ and $p_{cut} = 10^3m_ec$ are displayed, including the green dotted-dashed line for $f_{pre}(p)$ and the green dotted line for $G(p)$. Additionally, for comparison, $f_{pre}(p)$ and $G(p)$ for the M3RA*(4.7) models with $s = 4.7$ are shown in the magenta lines. All functions are given in arbitrary units, but the relative amplitudes among different models are valid. For all models, the decay timescale for postshock magnetic turbulence is set as $t_{dec} = 0.1$ Gyr. For M3RA* models, we explore two values of the power-law slope, $s = 4.3$ and 4.7, considering the DSA slope $q = 4.5$ for $M_s = 3$ shocks. Note that the number in the parentheses of the model names for M3RA* represents the value of $s$. 2.4. Evolution of CR Spectrum in the Postshock Flow To follow the time evolution of $f(p,t)$ along the Lagrangian fluid element, we solve the following Fokker-Planck equation: $\frac{df(p,t)}{dt} = \frac{1}{3}(\nabla\cdot u)\,p\frac{\partial f}{\partial p} + \frac{1}{p^2}\frac{\partial}{\partial p}\left[p^2b_l\frac{\partial f}{\partial p}\right] + \frac{1}{p^2}\frac{\partial}{\partial p}\left[p^2D_{pp}\frac{\partial f}{\partial p}\right] + S(p)$. (10) Here, the cooling rate, $b_l$, includes energy losses from Coulomb, synchrotron, and IC interactions. Standard formulas for these processes can be found in various previous papers, such as Brunetti & Jones (2014).
Specifically, the Coulomb interaction depends on the density of thermal electrons, $n$, synchrotron losses depend on the magnetic field strength, $B$, and the inverse Compton scattering off the cosmic background radiation depends on the redshift, $z_r$ (see Figure 2). The divergence term becomes $\nabla\cdot u = 0$ in the postshock flow in 1D plane-parallel geometry, and the source term $S(p)$ accounts for $f_{inj}(p)$ and $f_{RA}(p)$ deposited at the shock position. 3. Results 3.1. Postshock Cooling and TA of CR Electrons Figure 4 illustrates the evolution of the distribution function, $g(p) = p^4f(p)$, for the M3In*(100) and M3RA*(4.3) models. Additionally, it presents the volume-integrated spectrum, $G(p) = p^4F(p) = p^4\cdot u_2\int_0^{t_f} f(p,t)\,dt$, where $t_f = 0.2$ Gyr denotes the final advection time. The M3InDp0(100) model, which solely incorporates radiative cooling without TA, serves as a reference for comparison with other models. In Panel (a), it is evident that Coulomb loss is important only for low-energy electrons with $\gamma < 10$, whereas synchrotron + IC losses are significant for $\gamma > 10^3$. This panel demonstrates that the volume-integrated CR spectrum $F(p)$ steepens from $p^{-q}$ to $p^{-(q+1)}$ above the \u201cbreak momentum\u201d as expected: $\frac{p_{br}}{m_ec} \approx 10^4\left(\frac{t}{0.1\,{\rm Gyr}}\right)^{-1}\left(\frac{B_e}{5\,\mu{\rm G}}\right)^{-2}$, (11) where the effective magnetic field strength, $B_e^2 = B_2^2 + B_{rad}^2$, accounts for radiative losses due to both synchrotron and IC processes, and $B_{rad} = 3.24\,\mu{\rm G}\,(1+z_r)^2$ corresponds to the cosmic background radiation at redshift $z_r$. Figures 4(b-c) illustrate how TA with $D_{pp,f}$ or $D_{pp,A}$ delays or reduces the postshock cooling, enhancing $f(p)$. Consequently, the resulting spectrum, including both $f(p,t)$ and $F(p)$, deviates significantly from the simple DSA predictions that take into account only postshock cooling. As shown in Figure 2, TA with $D_{pp,A}$ is dominant for $\gamma < 10^2$, while TA with $D_{pp,f}$ becomes more effective at higher $\gamma$ for the parameters considered here. Regarding the parameter dependence, obviously, TA with $D_{pp,f}$ becomes less efficient for a greater value of $\tau_{Dpf}$. On the other hand, TA with $D_{pp,A}$ becomes more efficient with a stronger $B$ and a smaller $\eta_m$. Figures 4(d-f) present similar results for the M3RA*(4.3) models, wherein the reaccelerated spectrum $f_{RA}(p)$ with $s = 4.3$ is introduced at $t = 0$. For illustrative purposes, the normalization factor, $f_o$, is set to be the same as that of $f_{inj}(p)$ in Equation (7). Consequently, the resulting $f_{RA}$ (depicted by the blue lines at $t = 0$ in the lower panels) is larger than $f_{inj}$ (represented by the blue lines at $t = 0$ in the upper panels), as shown in the figure. For the M3RA*(4.7) models with $s = 4.7$, only $f_{pre}(p)$ and $G(p)$ are displayed in the magenta lines for comparison. In the case of the reacceleration models, both the postshock spectrum, $f(p,t)$, and the volume-integrated spectrum, $F(p)$, may not be represented by simple power-law forms, even without TA. Figure 5. (a-c): Volume-integrated spectrum, $G(p)$, for different models with $M_s = 3$. See Table 1 for the model names and parameters. In each column (from top to bottom), the lines and the model names have the same color. The M3InDp0 model (no TA and constant $B$) is displayed in the black dotted-dashed line in each panel for comparison. (d-f): Volume-integrated radio spectrum, $\nu J_\nu$, for the same models shown in the top panels. (g-i): Spectral index, $\alpha_\nu = -d\ln J_\nu/d\ln\nu$, for the same models shown in the top panels. All functions except $\alpha_\nu$ are given in arbitrary units, but the relative amplitudes among different models are valid. For all the models, the total advection time is set as $t_f = 0.2$ Gyr. 3.2. Volume Integrated Radio Emission Figures 5(a-c) compare $G(p)$ for all $M_s = 3$ models listed in Table 1.
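For orientation, Equation (11) can be evaluated directly. This is an illustrative sketch (the function name is ours), using $B_2 = 2\,\mu$G and $z_r = 0.2$ from the representative parameter set:

```python
def p_break_over_mec(t_Gyr, B2_uG=2.0, z_r=0.2):
    """Break momentum from Eq. (11): p_br/(m_e c) ≈ 1e4 (t/0.1 Gyr)^-1 (B_e/5 µG)^-2,
    where B_e^2 = B_2^2 + B_rad^2 and B_rad = 3.24 µG (1 + z_r)^2."""
    B_rad = 3.24 * (1.0 + z_r) ** 2      # inverse-Compton equivalent field
    B_e = (B2_uG ** 2 + B_rad ** 2) ** 0.5
    return 1.0e4 * (0.1 / t_Gyr) * (5.0 / B_e) ** 2

# At t = 0.1 Gyr the break sits near p_br/(m_e c) ~ 1e4, and it moves to lower
# momenta as the postshock electrons age (p_br ∝ 1/t).
```

Note that for these parameters $B_{rad} \approx 4.7\,\mu$G exceeds $B_2$, so IC losses dominate, which is the reason the aging is insensitive to the decay of $B(t)$ (see Section 4).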
For the three models without TA but with different values of $t_{dec}$, M3InDp0, M3InDp0(200), and M3InDp0(100), $G(p)$ is almost the same, since the total cooling is dominated by the IC cooling and the effects of decaying $B(t)$ are relatively minor. For comparison, $G(p)$ for M3InDp0 (no TA and a constant $B_2$) is displayed in the black dotted-dashed line in each panel. Panels (b) and (c) show the effects of TA with $D_{pp,f}$ and $D_{pp,A}$, respectively. Thus, compared with the conventional DSA model, TA due to postshock turbulence may enhance the CR electron population. In addition, the reaccelerated spectrum, $f_{RA}$ (green and magenta dotted lines), could be higher than $f_{inj}$, depending on the amplitude of the fossil electron population. For the same thirteen models depicted in Figures 5(a-c), the volume-integrated synchrotron spectrum, $\nu J_\nu$, is shown in Figures 5(d-f), while its spectral index, $\alpha_\nu = -d\ln J_\nu/d\ln\nu$, is displayed in Figures 5(g-i). Figure 6. The same as Figure 5 except that $M_s = 2.3$ models are shown. Again, in each panel, the black dotted-dashed line represents the results for M3InDp0, included for comparison. In Panels (d) and (g), the three models without TA, M3InDp0, M3InDp0(200), and M3InDp0(100), are depicted in the black, red, and blue lines, respectively. They demonstrate that the effects of decaying $B(t)$ are quite prominent due to the strong dependence of the synchrotron emissivity on the magnetic field strength. For example, $j_\nu \propto B^{(q-1)/2}$ for the power-law spectrum $f(p) \propto p^{-q}$. In the conventional DSA model with a constant $B$ (M3InDp0), the transition from $\alpha_{sh}$ to $\alpha_{int}$ occurs rather gradually around the break frequency, $\nu_{br} \approx 0.25\,{\rm GHz}\left(\frac{t_{age}}{0.1\,{\rm Gyr}}\right)^{-2}\left(\frac{B_e}{5\,\mu{\rm G}}\right)^{-4}\left(\frac{B_2}{2\,\mu{\rm G}}\right)$. (12) So one should use radio observations at sufficiently high frequencies, $\nu \gg \nu_{br}$, to estimate the Mach number given in Equation (2) using the integrated spectral index (Kang 2015). However, as depicted in the red and blue solid lines in Panel (g), this transition takes place much more gradually in the case of decaying magnetic fields with smaller $t_{dec}$. Thus, an accurate model for the postshock $B(x)$ is required to estimate the Mach number of radio relic shocks using Equation (2), considering the observational radio frequency range of $\sim0.1-30$ GHz. Figures 5(h-i) illustrate that TA with a large momentum diffusion coefficient, especially $D_{pp,A}$, could lead to a significant deviation from the simple DSA prediction with a constant magnetic field strength. We also note that, in Panels (g)-(i), the blue, green, and magenta lines (all with $t_{dec} = 100$ Myr) overlap with each other, except at very low frequencies ($\nu < 10$ MHz), whereas they differ significantly from the black ($t_{dec} = \infty$) and red ($t_{dec} = 200$ Myr) lines. This implies that the magnetic field distribution plays a significant role in governing the integrated spectral index $\alpha_\nu$ of the volume-integrated radio spectrum. In Table 1, for the M3* models, columns 5-7 list the integrated spectral index, $\alpha^{\nu_2}_{\nu_1}$, between two frequencies, $\nu_1$ and $\nu_2$, where $\nu = 0.15$, 0.61, 3.0, and 16 GHz are chosen as representative values. Moreover, columns 8-10 list the integrated Mach number, $M^{\nu_2}_{\nu_1}$, estimated based on Equation (2) using $\alpha^{\nu_2}_{\nu_1}$. For M3InDp0, the results are consistent with conventional DSA predictions except for the low-frequency case: i.e., $\alpha^{\nu_2}_{\nu_1} = 1.25$ and $M^{\nu_2}_{\nu_1} = 3$ for $\nu \gg \nu_{br}$. In the case of $\alpha^{0.61}_{0.15}$, the frequencies are not sufficiently high, resulting in the overestimation of the Mach number, $M^{0.61}_{0.15} = 3.8$ for M3InDp0.
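Equation (2) itself lies outside this excerpt, but the Table 1 entries for M3InDp0 are reproduced by the standard DSA relation between the integrated spectral index and the Mach number, $M = \sqrt{(\alpha_{int}+1)/(\alpha_{int}-1)}$. A sketch under that assumption:

```python
import math

def mach_from_alpha_int(alpha_int):
    """Mach number from the integrated spectral index (assumed form of Eq. (2)):
    M = sqrt((alpha + 1) / (alpha - 1)); undefined for alpha <= 1, matching
    footnote (b) of Table 1."""
    if alpha_int <= 1.0:
        raise ValueError("alpha_int <= 1: Mach number cannot be estimated")
    return math.sqrt((alpha_int + 1.0) / (alpha_int - 1.0))

print(mach_from_alpha_int(1.25))  # 3.0, the nu >> nu_br regime for M3InDp0
# alpha = 1.15 (the 0.15-0.61 GHz band) gives M ≈ 3.79: an overestimate
```

The steep growth of $M$ as $\alpha_{int} \to 1^+$ explains why modest changes in the integrated index (e.g., from TA or decaying $B$) translate into large Mach number errors.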
In fact, for most other models, $\alpha^{0.61}_{0.15} < 1$, so $M^{0.61}_{0.15}$ cannot be estimated. Both TA and reacceleration significantly influence the integrated spectrum $J_\nu$ and tend to generate smaller $\alpha^{\nu_2}_{\nu_1}$, resulting in higher $M^{\nu_2}_{\nu_1}$, except for M3InDpf(200) (see also Figures 5(g-i)). The M2.3* models also exhibit similar results, as can be seen in Figure 6. Figure 7. (a-c): Surface brightness profile at 0.15 GHz, $I_{0.15}(d)$, for the same M3 models presented in Figure 4. See Table 1 for the model names and parameters. In each column (from top to bottom), the lines and the model names have the same color. In the M3InDp0 model (black dotted-dashed lines), the postshock magnetic field remains constant as $B_2$ and $D_{pp} = 0$ (no TA). See Figure 1 for the adopted shape of the relic surface and the definition of the intensity, $I_\nu(d)$. The extension angles are $\psi_1 = \psi_2 = 15^\circ$. The displayed functions are given in arbitrary units, but the relative amplitudes among different models are valid. (d-f): Surface brightness profile at 0.61 GHz, $I_{0.61}(d)$, for the same models as in (a-c). (g-i): Spectral index between 0.15 and 0.61 GHz, $\alpha^{0.61}_{0.15}$, for the same models shown in the upper panels. 3.3. Surface Brightness Profile of Model Radio Relics Using the geometrical configuration of the shock surface depicted in Figure 1, we estimate the surface brightness, $I_\nu(d)$, as a function of the projected distance, $d$. In brief, a radio relic has a coconut-shell-shaped, elongated surface with an axial ratio $a/b \sim 1-1.5$ and a thickness corresponding to the cooling length of electrons, $l_{cool}$. Here, the radius of the spherical shell is set as $R_s = 1$ Mpc. Then the surface brightness or intensity is calculated by $I_\nu(d) = \int_{h_{min}}^{h_{max}} j_\nu(x)\,dh$, (13) where $h_{min}$ and $h_{max}$ are determined by the extension angles, $\psi_1$ and $\psi_2$.
As illustrated in Figure 1, the path length $h$ along the observer\u2019s line of sight reaches its maximum at $d_{peak} = R_s(1-\cos\psi_1)$. So, for the assumed model parameters, $R_s = 1$ Mpc and $\psi_1 = \psi_2 = 15^\circ$, the surface brightness peaks at $d_{peak} \approx 34$ kpc. Figure 7 presents the spatial profiles of $I_{0.15}(d)$ at 0.15 GHz and $I_{0.61}(d)$ at 0.61 GHz for the same thirteen models shown in Figure 5. The spectral index $\alpha^{0.61}_{0.15}(d)$ is calculated from the projected $I_\nu(d)$ between the two frequencies. Several points are noted: 1. The postshock magnetic field plays a key role in determining the profiles of $I_\nu(d)$ and $\alpha_\nu(d)$, as it governs the synchrotron emissivity $j_\nu$ and $D_{pp,A}$. Consequently, the results depend sensitively on the decay of $B(t)$ in the postshock region. 2. The models with postshock TA (middle and right columns) exhibit a slower decrease in $I_\nu(d)$ compared to the models without TA (left column). This occurs because TA delays the postshock cooling of electrons, resulting in a broader effective width of radio relics. In particular, the models with $D_{pp,f}$ generate greater widths than those with $D_{pp,A}$. 3. In the models with $D_{pp,A}$, the enhancement by TA is less significant due to the effects of decaying magnetic fields, distinguishing it from models with $D_{pp,f}$. 4. Panels (g-i) demonstrate that the postshock profile of $\alpha_\nu$ is independent of the injection spectrum (i.e., $f_{inj}$ or $f_{RA}$). The profile is mainly influenced by the decay profile of $B(x)$ and by TA due to $D_{pp}(p,x)$. 5. The spectral index is the smallest at the relic edge ($d = 0$), while the intensity profile peaks at $d_{peak}$ in our model setup for the relic shock surface. Therefore, in observations of radio relics, the region $d < d_{peak}$ corresponds to the postshock region rather than the preshock region. Figure 8. The same as Figure 7 except that $M_s = 2.3$ models are shown.
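The quoted peak position follows directly from the shell geometry; a minimal sketch (the function name is ours):

```python
import math

def d_peak_kpc(R_s_Mpc=1.0, psi1_deg=15.0):
    """Projected distance of the surface-brightness peak from the relic edge,
    d_peak = R_s (1 - cos psi_1), for the spherical-shell geometry of Figure 1."""
    return 1000.0 * R_s_Mpc * (1.0 - math.cos(math.radians(psi1_deg)))

print(round(d_peak_kpc()))  # 34  (R_s = 1 Mpc, psi_1 = 15 deg)
```

Because $1-\cos\psi_1$ grows roughly as $\psi_1^2/2$, the offset of the brightness peak from the relic edge is quite sensitive to the assumed extension angle.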
The M2.3* models presented in Figure 8 also exhibit similar behavior. 4. Summary Giant radio relics are thought to be generated by weak bow shocks that form after the DM core passage during major mergers of galaxy clusters. In such a scenario, CR electrons are accelerated mainly via the Fermi-I mechanism, resulting in the simple predictions of the DSA power-law spectrum, $f(p) \propto p^{-q}$, and the ensuing synchrotron radiation spectrum, $j_\nu \propto \nu^{-\alpha_{sh}}$. Although most observational aspects of radio relics are consistent with such DSA predictions, the so-called Mach number discrepancy among the Mach numbers estimated with various methods, i.e., $M_{rad,int} \gtrsim M_{rad,sh} \gtrsim M_X$, remains to be resolved. The ICM is turbulent by nature. The cascade of magnetic turbulence from large MHD scales down to small kinetic scales and the excitation and amplification of magnetic fluctuations via plasma microinstabilities behind the shock front could influence the CR energy spectrum through Fermi-II acceleration. Moreover, magnetic turbulence is expected to decay in approximately one eddy turnover time, $L/u_2 \sim 0.1$ Gyr, and decaying magnetic fields could significantly affect turbulent acceleration (TA) and the synchrotron emissivity in the postshock region. In this study, we adopt simplified models for the momentum diffusion coefficient, $D_{pp,f}$ due to fast-mode waves and $D_{pp,A}$ due to Alfv\u00e9n-mode waves, to explore the effects of TA. The CR spectrum $f_{inj}(p)$ for the shock-injected population or $f_{RA}(p)$ for the shock-reaccelerated population is deposited at the shock front at $t = 0$. Then the time evolution of $f(p,t)$ is calculated along the Lagrangian fluid element in the time domain. The results are mapped onto the spherical shell, whose geometrical configuration is depicted in Figure 1, to estimate the surface brightness profile, $I_\nu(d)$, as a function of the projected distance $d$.
The main results can be summarized as follows: 1. TA due to $D_{pp,f}$ and $D_{pp,A}$ could delay the postshock aging of CR electrons, leading to a significant deviation from the simple power-law spectrum (Figure 4) and a broader spatial width of the surface brightness of radio relics (Figure 6). 2. The postshock aging of the CR electron spectrum is insensitive to the decay of magnetic fields, since IC cooling dominates over synchrotron cooling (typically $B_{rad} > B$ in the postshock region) (Figures 5(a-c) and 6(a-c)). 3. The integrated spectral index, $\alpha_\nu$, of the volume-integrated radio spectrum depends sensitively on the postshock magnetic field distribution, whereas it is insensitive to the CR spectrum deposited at the shock front. For instance, the transition from the power-law index $\alpha_{sh}$ to $\alpha_{int}$ occurs more gradually than predicted by the simple DSA model with a constant postshock magnetic field (Figures 5(g-i) and 6(g-i)). Therefore, observational frequencies should be sufficiently high (i.e., $\nu \gg \nu_{br}$) for estimating the Mach number using the integrated spectral index. 4. On the other hand, the synchrotron emissivity scales as $j_\nu \propto B^{(q-1)/2}$ and the momentum diffusion coefficient due to Alfv\u00e9n modes as $D_{pp,A} \propto B^2$. This means that the decay of $B$ fields significantly impacts both the surface brightness, $I_\nu(d)$, and the spectral index, $\alpha^{\nu_2}_{\nu_1}(d)$ (Figures 7 and 8). 5. Columns 8-10 of Table 1 indicate that, in most models except the MInDp0 model (no TA and constant $B$), the integrated Mach number, $M^{\nu_2}_{\nu_1}$, estimated using the integrated spectral index, $\alpha^{\nu_2}_{\nu_1}$, between two frequencies $\nu_1$ and $\nu_2$, tends to be higher than the actual shock Mach number.
This highlights the critical importance of incorporating accurate models for turbulent acceleration arising from postshock turbulence and the impact of decaying magnetic fields when interpreting observations of radio relics. In particular, the shock Mach number estimated using the integrated spectral index may tend to be larger than the actual Mach number. Therefore, a thorough consideration of these factors is essential for a more precise interpretation of radio relic observations. Acknowledgments The author thanks the anonymous referee for constructive feedback. This work was supported by a 2-Year Research Grant of Pusan National University." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03121v1.json b/abs_9K/test_abstract_short_2405.03121v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c8642bb44d6109fc95e5850760a39bc7befe570a --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03121v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03121v1", + "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding", + "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. 
AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.", + "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. 
AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.", + "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. 
Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework's contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech. We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-to-end rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds.
AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding, 2024.
The two-stage type [3, 11, 13, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires a smaller model and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-to-motion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models.
Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. 
We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) training a motion representation that can capture universal face dynamics, and (2) based on the well-trained motion encoder from the previous step, generating or manipulating the motion representation using a user-controlled driving signal to produce the synthesized talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image I_s and a target image I_t (I ∈ ℝ^{H×W×3}), to serve distinct functions: I_s provides identity information, whereas I_t delivers motion details. The primary aim is to reconstruct I_t. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image's encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning.
Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the generation of robust identity information. This technique employs a strategy involving pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network's ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor (a) and selects a positive sample (p) and a negative sample (n), corresponding to faces of different identities, with the goal of reducing the distance (d) between the anchor and the positive sample while increasing the distance between the anchor and the negative samples. This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal for this process is as follows: L_ML = max(0, d(a, p) − d(a, n) + margin). Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model's ability to distinguish between different identities. Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information within the motion encoder.
Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation and then generating and manipulating this representation through a sequence model. Specifically, the first part aims to learn a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and a Hierarchical Aggregation Layer (HAL). Subsequently, this motion representation can be used for further generation and manipulation.
To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate MI estimation among random variables. Specifically, we use CLUB [4], which estimates an upper bound for MI.
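The margin-based metric-learning objective above can be sketched numerically (a minimal illustration using Euclidean distance on toy embeddings and an illustrative margin of 0.2; the paper's stronger AAM-Softmax variant is angular and is not reproduced here):

```python
import math

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # L_ML = max(0, d(a, p) - d(a, n) + margin), with Euclidean d.
    def d(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# A well-separated triplet incurs no loss...
loss_good = triplet_margin_loss([0.0, 0.0], [0.1, 0.0], [5.0, 0.0])
# ...while a triplet whose negative sits closer than its positive is penalized.
loss_bad = triplet_margin_loss([0.0, 0.0], [2.0, 0.0], [0.5, 0.0])
```

Minimizing this loss pulls same-identity embeddings together while pushing different identities at least a margin apart.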
Assuming the output of the identity encoder is the identity latent z_id and the motion encoder's output is the motion latent z_m, our goal is to optimize the mutual information I(E(z_id); E(z_m)), where E denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This optimization ensures that the motion encoder primarily captures motion, thereby preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c). In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model's capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder's capability to understand motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). This layer aims to integrate information from various stages of the image encoder, each providing different receptive fields [24]. HAL processes inputs from all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to effectively merge information from these diverse layers. This soft fusion approach enables the motion encoder to capture and depict movements across a broad range of scales. Such a strategy allows our representations to adapt to faces of different sizes without the need for prior face alignment or normalization. Specifically, the features following the AvgPool layer are denoted as [m_1, m_2, ..., m_n], representing the set of averaged features, with [w_1, w_2, ..., w_n] as the corresponding set of weights, where n symbolizes the number of intermediate layers in the image encoder. These weights undergo normalization through the softmax function to guarantee a cumulative weight of 1. The weighted sum that forms the layer's output is m = Σ_{i=1}^{n} w_i · m_i, where the softmax normalization w_i = e^{W_i} / Σ_{j=1}^{n} e^{W_j} ensures a proportional distribution of weights across the various layers. Subsequently, m is fed into the motion encoder for further encoding. Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images: the source and the target within the current identity index. Several loss functions are utilized during the training process, including reconstruction loss L_recon, perceptual loss L_percep, adversarial loss L_adv, mutual information loss L_MI, and identity metric learning loss L_ML.
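The HAL fusion described above amounts to a softmax over one learnable scalar per encoder layer, followed by a weighted sum of the pooled features. A minimal sketch (toy two-layer feature vectors; the real layer operates on average-pooled CNN feature maps):

```python
import math

def hal_fuse(pooled_feats, layer_logits):
    # w_i = exp(W_i) / sum_j exp(W_j); output m = sum_i w_i * m_i.
    exps = [math.exp(w) for w in layer_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(pooled_feats[0])
    return [sum(w * f[k] for w, f in zip(weights, pooled_feats))
            for k in range(dim)]

# Equal logits give uniform weights, so the fusion reduces to a plain average.
fused = hal_fuse([[1.0, 0.0], [3.0, 2.0]], [0.0, 0.0])
```

Because the weights are learned, training can emphasize whichever encoder depths best capture motion at a given scale.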
The total loss is formulated as follows: L_motion = L_recon + λ1·L_percep + λ2·L_adv + λ3·L_MI + λ4·L_ML. 3.3 Motion Generation Once the motion encoder and image renderer are trained, at the second stage, we can freeze these models. The motion encoder is used to extract motion representations, video-driven or speech-driven methods are then employed to produce motion, and finally, the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to as face reenactment, leverages a driven speaker's video sequence I^d = [I^d_1, I^d_2, ..., I^d_T] to animate a source image I_s, resulting in a video that accurately replicates the driven poses and facial expressions. In this process, the video sequence I^d is input into the motion encoder, previously trained in the first phase, to extract the motion latent. This latent, along with I_s, is then directly fed, frame by frame, into the image renderer for rendering. No additional training is required. The detailed inference process, where the orange lines represent the data flow during video-driven inference, is depicted in Figure 2 (e). 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image I_s. Specifically, we utilize a combination of diffusion and variance adapters: the former learns a better distribution of motion data, while the latter mainly introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. During training, we incorporate the training process of diffusion, which includes both adding noise and denoising steps. The noising process gradually converts the clean motion latent M into Gaussian noise M_T, where T represents the number of total denoising steps in the diffusion process. Conversely, the denoising process systematically eliminates noise from the Gaussian noise, resulting in clean motion latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During the training phase, we adhere to the methodology described in [19] for the DDPM's training stage, applying the simplified loss objective in Equation 1, where t represents a specific time step and C represents the control signal, which refers to either speech or speech perturbed by a Variance Adapter (to be discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we select the Denoising Diffusion Implicit Model (DDIM) [42], an alternate non-Markovian noising process, as the solver to quicken the sampling process. L_diff = E_{t, M, ε}[ ‖ε − ε̂_t(M_t, t, C)‖² ]   (1) Variance Adapter. The Variance Adapter [38] is a residual branch connected to audio features, allowing optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS) tasks, its architecture includes a predictor and an encoder that use speech signals to predict attribute representations.
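The simplified objective of Equation 1 can be sketched as a single noise-prediction training step (a dependency-free toy on a scalar latent; `predict_eps` stands in for the Conformer denoiser, and the schedule value `alpha_bar_t` is illustrative, not taken from the paper):

```python
import math
import random

def ddpm_training_step(m0, alpha_bar_t, predict_eps, rng):
    # Sample target noise, form the noised latent M_t, and score the
    # denoiser with the simplified loss ||eps - eps_hat||^2.
    eps = rng.gauss(0.0, 1.0)
    m_t = math.sqrt(alpha_bar_t) * m0 + math.sqrt(1 - alpha_bar_t) * eps
    eps_hat = predict_eps(m_t)
    return (eps - eps_hat) ** 2

rng = random.Random(0)
m0, abar = 0.7, 0.5
# An oracle that inverts the noising recovers eps, driving the loss to zero.
oracle = lambda m_t: (m_t - math.sqrt(abar) * m0) / math.sqrt(1 - abar)
loss = ddpm_training_step(m0, abar, oracle, rng)
```

In the paper's setting, the predictor is additionally conditioned on the time step t and the control signal C.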
A residual connection is then applied between the encoder output and the speech signals. During the Training Stage, the encoder processes speech features in collaboration with the predictor to minimize the L2 loss against a ground truth control signal. This includes incorporating an attribute extractor for targeting specific attributes, such as employing a pose extractor (yaw, pitch, roll) to control head posture during the audio generation process. In the Inference Stage, the trained encoder and predictor can flexibly synthesize speech with controlled attributes or operate based on speech-driven inputs. The detailed structure is depicted in Figure 3.
Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where N represents the number of attributes.
Our approach extends previous works [11, 18] by incorporating LSTM [15] for improved temporal modeling and introducing additional cues such as head position and head scale, which we refer to as camera parameters. Learning Objective. The total loss comprises the diffusion loss and the variance adapter loss, where K represents the number of attributes: L_gen = L_diff + λ Σ_{k=1}^{K} L_var,k. 4 EXPERIMENTS 4.1 Experimental Settings We utilize three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Due to different processing approaches across these datasets, we re-downloaded the original videos and processed them in a unified way.
Specifically, our processing pipeline included filtering out blurred faces and faces at extreme angles. It is noted that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as labels for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting. We evaluate methods under two scenarios: video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2. Additionally, each scenario is divided into two types: self-driven, where the source and target share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 × 256. Implementation Details. In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training uses CLUB¹ for the mutual information loss, in conjunction with AAM-Softmax [46], a robust metric learning method that utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer Conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the Conformer structure [16] and relative positional encoding [6]. A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric. For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM)² measures facial similarity using a pretrained face recognition model. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS), with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods.
Method | Self-Reenactment: PSNR↑ / SSIM↑ / LPIPS↓ / CSIM↑ | Cross-Reenactment: SSIM↑ / LPIPS↓ / CSIM↑
FOMM [41] | 23.944 / 0.775 / 0.178 / 0.830 | 0.411 / 0.423 / 0.494
DPE [31] | 27.239 / 0.861 / 0.151 / 0.912 | 0.445 / 0.410 / 0.567
MTIA [44] | 28.435 / 0.870 / 0.122 / 0.929 | 0.393 / 0.456 / 0.448
Vid2Vid [48] | 27.659 / 0.870 / 0.115 / 0.924 | 0.410 / 0.401 / 0.553
LIA [49] | 25.854 / 0.831 / 0.137 / 0.916 | 0.421 / 0.406 / 0.522
FADM [54] | 26.169 / 0.849 / 0.147 / 0.916 | 0.445 / 0.399 / 0.574
AniTalker | 29.071 / 0.905 / 0.079 / 0.927 | 0.494 / 0.347 / 0.586
Quantitative Results. We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence of frame-by-frame ground truth in Cross-Reenactment (using another person's video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person's video). In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model's identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets.
¹ https://github.com/Linear95/CLUB/
² https://github.com/dc3ea9f/vico_challenge_baseline
This highlights our algorithm's outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results. To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait's identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach accurately replicated the action of opening the eyes in the source portrait, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of our decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speech-driven methods. The subjective evaluation is the mean opinion score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ).
Method | MOS-F↑ | MOS-LS↑ | MOS-N↑ | MOS-MJ↑ | SSIM↑ | CSIM↑ | Sync-D↓
MakeItTalk [62] | 3.434 | 1.922 | 2.823 | 3.129 | 0.580 | 0.719 | 8.933
PC-AVS [61] | 3.322 | 3.785 | 2.582 | 2.573 | 0.305 | 0.703 | 7.597
Audio2Head [47] | 3.127 | 3.650 | 2.891 | 2.467 | 0.597 | 0.719 | 8.197
SadTalker [57] | 3.772 | 3.963 | 2.733 | 3.883 | 0.504 | 0.723 | 7.967
AniTalker | 3.832 | 3.978 | 3.832 | 3.976 | 0.671 | 0.725 | 8.298
(The first four columns are the subjective evaluation; the last three are the objective evaluation in the self-driven setting.)
We compare our method against existing state-of-the-art speech-driven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio.
Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm's ability to separate motion and appearance. Starting from the third column, each column represents the output from a different algorithm (FOMM, DPE, MTIA, Vid2Vid, LIA, FADM, AniTalker). The results highlight our method's superior ability to preserve fidelity in both motion transfer and appearance retention.
Figure 5: Visual comparison of the speech-driven method in self- and cross-driven scenarios (MakeItTalk, Audio2Head, SadTalker, AniTalker). Phonetic sounds are highlighted in red (examples: I /aɪ/, State /ˈsteɪt/, Believe /bɪˈliːv/, Climate /ˈklaɪmət/).
Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent the long-term information that is more crucial for the comprehensibility of generated videos. This is also corroborated by the qualitative results shown in Figure 5, highlighting our model's ability to produce lip movements convincingly synchronized to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in the Self-Reenactment setting.
Method | ML | MID | HAL | PSNR↑ | SSIM↑ | CSIM↑
Baseline | | | | 25.854 | 0.849 | 0.916
Triplet [10] | ✓ | | | 26.455 | 0.860 | 0.911
AAM-Softmax [46] | ✓ | | | 27.922 | 0.894 | 0.923
AAM-Softmax + CLUB [4] | ✓ | ✓ | | 28.728 | 0.900 | 0.924
AniTalker | ✓ | ✓ | ✓ | 29.071 | 0.905 | 0.927
4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting.
\u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension.

Method        Face Repr.     Dim.   PSNR\u2191   SSIM\u2191   CSIM\u2191
EMOCA [7]     3DMM           50     20.911    0.670    0.768
PIPNet [22]   Landmark       136    22.360    0.725    0.830
AniTalker     Motion Latent  20     29.071    0.905    0.927

4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. (2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability.

Figure 6: The weights of motion representation from different layers of the Image Encoder (x-axis: Image Encoder layers 1\u20138; y-axis: weights, 0\u20130.5).

4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3.
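The aggregation performed by HAL can be illustrated with a pure-Python sketch: softmax-normalized scalar weights, one per Image Encoder layer, blend the per-layer features (8 layers in our experiments). The toy feature vectors, logits, and function names below are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def hal_aggregate(layer_feats, layer_logits):
    """Blend per-layer features with softmax-normalized scalar weights,
    one learnable logit per encoder layer."""
    w = softmax(layer_logits)
    dim = len(layer_feats[0])
    fused = [sum(w[l] * layer_feats[l][d] for l in range(len(layer_feats)))
             for d in range(dim)]
    return fused, w

# 8 layers of 4-dim toy features; a large last-layer logit dominates the blend,
# mirroring the observation that the final layer contributes most.
feats = [[float(l)] * 4 for l in range(8)]
logits = [0.0] * 7 + [5.0]
fused, weights = hal_aggregate(feats, logits)
```

With a dominant final-layer logit, the fused feature tracks the last layer's features, which is the kind of weight profile discussed below.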
To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder, as shown in Figure 6. It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering that the receptive field of this layer\u2019s patch is similar to the size of the eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms behind the model\u2019s ability to capture facial movements of different scales.

Figure 7: Motion Manifold of the continuous motion space (panels: Turn Head Left, Eye Closed, Diversity Perturbation, Speak with Homophones).

5 DISCUSSION

Discussion on Universal Motion Representation. Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions.
Additionally, by integrating perturbations through diffusion noise, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability. Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03133v1.json b/abs_9K/test_abstract_short_2405.03133v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a2b4e0ac793460e9078e0e039c552833d56a4aa7 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03133v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03133v1", + "title": "Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training", + "abstract": "Mixture-of-experts (MoE) models facilitate efficient scaling; however,\ntraining the router network introduces the challenge of optimizing a\nnon-differentiable, discrete objective. Recently, a fully-differentiable MoE\narchitecture, SMEAR, was proposed (Muqeeth et al., 2023), which softly merges\nexperts in the parameter space; nevertheless, its effectiveness was only\ndemonstrated in downstream fine-tuning on classification tasks.
In this paper,\nwe present Lory, the first approach that scales such architectures to\nautoregressive language model pre-training. Lory introduces two key techniques:\n(1) a causal segment routing strategy that achieves high efficiency for expert\nmerging operations while preserving the autoregressive nature of language\nmodels; (2) a similarity-based data batching method that encourages expert\nspecialization by grouping similar documents in training instances. We\npre-train a series of Lory models on 150B tokens from scratch, with up to 32\nexperts and 30B (1.5B active) parameters. Experimental results show significant\nperformance gains over parameter-matched dense models on both perplexity\n(+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level\nrouting, Lory models achieve competitive performance compared to\nstate-of-the-art MoE models with token-level routing. We further demonstrate\nthat the trained experts in Lory capture domain-level specialization without\nsupervision. Our work highlights the potential of fully-differentiable MoE\narchitectures for language model pre-training and advocates future research in\nthis area.", + "authors": "Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Mixture-of-experts (MoE) models facilitate efficient scaling; however,\ntraining the router network introduces the challenge of optimizing a\nnon-differentiable, discrete objective. Recently, a fully-differentiable MoE\narchitecture, SMEAR, was proposed (Muqeeth et al., 2023), which softly merges\nexperts in the parameter space; nevertheless, its effectiveness was only\ndemonstrated in downstream fine-tuning on classification tasks. 
In this paper,\nwe present Lory, the first approach that scales such architectures to\nautoregressive language model pre-training. Lory introduces two key techniques:\n(1) a causal segment routing strategy that achieves high efficiency for expert\nmerging operations while preserving the autoregressive nature of language\nmodels; (2) a similarity-based data batching method that encourages expert\nspecialization by grouping similar documents in training instances. We\npre-train a series of Lory models on 150B tokens from scratch, with up to 32\nexperts and 30B (1.5B active) parameters. Experimental results show significant\nperformance gains over parameter-matched dense models on both perplexity\n(+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level\nrouting, Lory models achieve competitive performance compared to\nstate-of-the-art MoE models with token-level routing. We further demonstrate\nthat the trained experts in Lory capture domain-level specialization without\nsupervision. Our work highlights the potential of fully-differentiable MoE\narchitectures for language model pre-training and advocates future research in\nthis area.", + "main_content": "Introduction Mixture-of-experts (MoE) architectures with sparse activation enable the scaling of model sizes while maintaining high training and inference efficiency (Lepikhin et al., 2021; Fedus et al., 2022; Du et al., 2022; Zoph et al., 2022; Lewis et al., 2021; Zhou et al., 2022; Jiang et al., 2024; Xue et al., 2024; Shen et al., 2024). However, training the routing network in MoE architectures introduces the challenge of optimizing a non-differentiable, discrete objective (Shazeer et al., 2017; Zoph et al., 2022). 
Various techniques\u2014such as switch routing (Fedus et al., 2022), top-k expert-choice routing (Zhou et al., 2022), and linear programming (Lewis et al., 2021)\u2014have been developed to address this challenge, often requiring carefully designed load balancing objectives (Fedus et al., 2022) or introducing additional complexity in assignment algorithms (Lewis et al., 2021; Roller et al., 2021). Recent research has started to explore fully-differentiable MoE architectures as an alternative to overcome this training difficulty. Notably, SMEAR (Muqeeth et al., 2023) is an approach that softly merges experts as a weighted average of all the experts\u2019 parameters, as opposed to activating the top-k experts. However, the effectiveness of SMEAR has only been demonstrated in small-scale fine-tuning experiments on downstream classification tasks (Wang et al., 2018). In this work, we propose Lory (the name refers to a tribe of parrots with rainbow-like colors, which resembles the spirit of \u2018soft\u2019 MoE), the first approach that scales such fully-differentiable MoE architectures to autoregressive language model pre-training.

Figure 1: We propose Lory, a fully differentiable MoE architecture designed for autoregressive language models based on expert merging (Section 2.2). The schematic shows causal segment routing (segments of T \u00d7 d hidden states, a Router producing a Merged FFN from FFN 1\u20134, with a stop gradient on the first segment) and similarity-based data batching (training instances built from similar docs).
We introduce two key techniques to train Lory: First, we propose the causal segment routing strategy, which conducts expert merging at the segment level and preserves the autoregressive property of language models. Second, we use the similarity-based data batching method to construct training instances, which steers the experts toward specializing in specific domains or topics.

Unlike text classification tasks, which only require routing each input sequence to different experts, language modeling makes predictions for each input token, and performing token-level routing is prohibitively expensive, as the computational cost of merging operations scales linearly with the number of experts. Lory is based on two key techniques (Figure 1). We first propose causal segment routing. For a sequence of input tokens, we split them into multiple segments with a fixed length, and use the previous segment to determine the router\u2019s weights and calculate the merged expert for the subsequent segment. During inference, we can simply use the prompt to make a single routing decision throughout the generation. This segment-level routing strategy preserves the autoregressive nature of language models, while keeping the merging operations efficient. However, since the text data for pre-training language models usually concatenates random sets of documents, we find that such routing can lead to scenarios in which experts are not sufficiently specialized. Hence, we propose our second technique\u2014similarity-based data batching for MoE training, which groups semantically similar documents to form consecutive segments. This idea has been recently proposed to train LMs to better reason across document boundaries (Shi et al., 2024), while we find that it leads to more effective training of expert routing.
We pre-train a series of Lory models from scratch under a training budget of 150B tokens, with 0.3B and 1.5B active parameters, and 8, 16, or 32 experts (up to 6.8B and 29.5B full parameters; see Table 3). Experimental results show that our Lory models significantly outperform equal-sized dense models trained with the same amount of data, achieving performance gains on both perplexity (+13.9%) and a wide range of downstream tasks, including commonsense reasoning (+3.7%), reading comprehension (+3.3%), closed-book QA (+1.5%), and text classification (+11.1%). Interestingly, even though Lory uses segment-level routing, we find it achieves competitive performance compared to state-of-the-art MoE models with token-level, non-differentiable discrete routing (Zhou et al., 2022). Our analysis further shows that the trained experts in Lory capture domain-level specialization without any supervision, making it distinct from previous MoE LMs with token-level routing, which only exhibit local patterns uniformly distributed across different domains (Xue et al., 2024; Jiang et al., 2024). Together, we present the first fully-differentiable MoE model that is suitable for language model pre-training, and demonstrate its effectiveness at scale. We hope our work sheds light on the potential of fully differentiable MoE architectures in cultivating specialized experts, and we seek to encourage continued exploration in this research field.

2 Preliminaries

2.1 Sparsely-activated MoE

Transformer-based MoE language models typically substitute feed-forward network (FFN) layers with sparsely-activated MoE layers (Shazeer et al., 2017; Fedus et al., 2022; Zoph et al., 2022). Assume an MoE layer consists of E expert FFNs, parameterized as FFN(\u00b7; \u03b8_1), . . . , FFN(\u00b7; \u03b8_E), where the function FFN : R^d \u2192 R^d defines a single expert module.
For each token x in a sequence, an MoE layer takes the hidden representation h_x \u2208 R^d as the input and computes its output o_x \u2208 R^d by sparsely activating k experts in this layer and aggregating the outputs through a weighted sum:

o_x = \u2211_{i=1}^{E} e_i \u00b7 FFN(h_x; \u03b8_i), where e_i = Top-k(Softmax(R(h_x)))_i.  (1)

The routing weight e_i for the i-th expert is measured by a routing network, or router, R, which takes h_x as input and calculates the weight for each expert. In practice, to achieve sparsity and computational efficiency, only one (Fedus et al., 2022) or the top-k (Lepikhin et al., 2021) experts with the highest routing weights are activated at each MoE layer. The weights of the remaining experts are set to 0 (i.e., e_i = 0), eliminating the need to compute FFN(h_x; \u03b8_i) and effectively deactivating the i-th expert.

2.2 Fully Differentiable MoE Architectures via Expert Merging

The primary challenges in training sparsely activated MoE models arise from the difficulty in training discrete routers. A promising direction is to design fully differentiable MoE architectures that do not depend on extra loss formulations for stabilized training. A recent model architecture (Muqeeth et al., 2023) demonstrates the feasibility by computing a weighted average of all expert FFNs in the parameter space (Matena & Raffel, 2022; Wortsman et al., 2022), thereby creating a \"merged FFN\". Given an input x and its corresponding routing weights e_i, the output o_x of a merged FFN is computed as:

o_x = FFN(h_x; \u2211_{i=1}^{E} e_i \u00b7 \u03b8_i), where e_i = Softmax(R(h_x))_i.  (2)

However, naively extending this to autoregressive language models, which would require computing the merged FFN for each token in a sequence, would be infeasible, as the computational cost of the merging operations scales linearly with the number of experts.
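As a toy illustration of the difference between Eq. (1) and Eq. (2), the sketch below uses scalar "experts" (FFN(h; \u03b8_i) = \u03b8_i \u00b7 h) and precomputed router logits; for such linear experts, merging parameters first (Eq. 2) matches a full weighted sum of expert outputs, while top-k routing (Eq. 1) zeroes out the low-weight experts. All values and names here are illustrative assumptions, not the paper's code:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scalar experts: FFN(h; theta_i) = theta_i * h.
THETAS = [0.5, 2.0, -1.0, 3.0]
LOGITS = [0.1, 1.2, -0.5, 0.4]  # assumed router outputs R(h_x)

def sparse_moe(h, thetas, logits, k=2):
    """Eq. (1): keep the top-k softmax weights, drop the rest."""
    probs = softmax(logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    return sum(probs[i] * thetas[i] * h for i in top)

def merged_moe(h, thetas, logits):
    """Eq. (2): merge parameters with full softmax weights, apply once."""
    probs = softmax(logits)
    theta_bar = sum(p * t for p, t in zip(probs, thetas))
    return theta_bar * h

out = merged_moe(2.0, THETAS, LOGITS)
```

The merging cost in Eq. (2) is the weighted parameter sum, which is why repeating it per token would scale linearly with the number of experts.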
SMEAR (Muqeeth et al., 2023) has only been evaluated for downstream fine-tuning on text classification tasks, where it makes routing decisions based on a pooled representation of the entire input sequence, i.e., e_i = Softmax(R((1/L) \u2211_{j=1}^{L} h_{x_j}))_i. Such operations disrupt the autoregressive property in language model pre-training. In this work, we address these challenges by developing a fully differentiable MoE architecture suitable for autoregressive language modeling, and pre-train such models at scale.

3 Our Approach: Lory

In this section, we present Lory, an approach for pre-training fully differentiable MoE language models (Figure 1). The core technique that enables Lory to be fully differentiable is expert merging (Muqeeth et al., 2023; see details in Section 2.2). To make it computationally feasible, we propose a causal segment routing method that only merges experts once for each segment, effectively reducing the number of merging operations (Section 3.1). We also propose a data batching strategy of grouping semantically similar texts, which is crucial for effective training of the segment-level router (Section 3.2).

Notations. We denote an input sequence of L tokens as X = (x_1, x_2, . . . , x_L). Given a segment size T, we divide the input sequence into N = \u2308L/T\u2309 segments, denoted as S_1, S_2, . . . , S_N. We use R to denote the routing network (parameterized as a linear layer) that computes the weights for expert merging. Let h_x represent the hidden representation of the token x. The parameters of the i-th expert FFN are denoted by \u03b8_i.

3.1 Efficient Expert Merging via Causal Segment Routing

Challenges. An intuitive way of reducing the computational cost is to use segment-level routing instead of token-level routing, which reduces the number of merging operations from L to N. However, simply using the current segment to compute the routing weights can cause information leakage.

Training design.
We propose causal segment routing to effectively route information across segments in an autoregressive manner.^2 It merges FFNs in an MoE layer based on the previous segment\u2019s information, and uses the merged FFN to process the current segment. Specifically, given a training instance X that consists of L tokens (e.g., L = 4096), we split the training instance into N segments, each of which contains T (e.g., T = 256) consecutive tokens. For the k-th segment S_k with k > 1, we compute the average of the hidden representations of its preceding segment S_{k\u22121}, denoted as \u00af h_{k\u22121}. Using the average hidden representation allows the model to adapt to prompts of varying lengths during inference. \u00af h_{k\u22121} is then utilized to determine the routing weights, resulting in a merged expert \u00af \u03b8:

\u00af h_{k\u22121} = (1/T) \u2211_{x \u2208 S_{k\u22121}} h_x,  e_i = Softmax(R(\u00af h_{k\u22121}))_i,  \u00af \u03b8 = \u2211_{i} e_i \u00b7 \u03b8_i.  (3)

We then use the merged expert \u00af \u03b8 to process all the tokens in the current segment S_k, i.e., o_x = FFN(h_x; \u00af \u03b8), \u2200x \u2208 S_k. This approach guarantees that the routing decisions made by the model are based exclusively on data from preceding positions. For the first segment S_1, the representation of the segment itself is used to compute the merging weights for its own FFN. To prevent information leakage, we implement a stop-gradient operation on R(\u00af h_1). As demonstrated in Appendix B, merging experts at the segment level incurs minimal overhead compared to the training of dense models.

Prompt-only routing during inference. During inference, we begin with a given prompt and make a single routing decision per layer based on the average hidden representations of the prompt. This routing decision determines a merged FFN, which is used consistently throughout the entire generation process.
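The routing rule of Eq. (3) can be sketched in pure Python with scalar hidden states and toy scalar experts (FFN(h; \u03b8_i) = \u03b8_i \u00b7 h); the router function and all names are illustrative assumptions, and the stop-gradient on the first segment is only noted in a comment:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def causal_segment_routing(hidden, seg_len, thetas, router):
    """Route each segment S_k (k > 1) with the mean hidden state of S_{k-1};
    S_1 is routed on itself (with a stop-gradient in real training).
    `hidden` holds scalar hidden states; experts are toy scalars theta_i."""
    segments = [hidden[i:i + seg_len] for i in range(0, len(hidden), seg_len)]
    outputs = []
    for k, seg in enumerate(segments):
        ref = segments[k - 1] if k > 0 else seg  # previous segment, or self for S_1
        h_bar = sum(ref) / len(ref)              # average hidden representation
        e = softmax(router(h_bar))               # routing weights e_i
        theta_bar = sum(ei * t for ei, t in zip(e, thetas))  # merged expert
        outputs.extend(theta_bar * h for h in seg)           # FFN(h; theta_bar)
    return outputs

# Two segments of length 4; a toy linear router over two experts.
outs = causal_segment_routing([1.0] * 4 + [2.0] * 4, 4, [1.0, 3.0],
                              lambda h: [h, -h])
```

At inference time, a prompt-only variant would compute `h_bar` once from the prompt and reuse the resulting merged expert for the whole generation.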
It is important to note that this inference process is as simple and computationally efficient as that of dense models.^3

3.2 Similarity-based Data Batching

The standard practice of pre-training LMs is to randomly concatenate documents to construct training instances with a fixed length. This could lead to under-specialized experts, because tokens within adjacent segments may come from very different and irrelevant documents. To mitigate this issue, we employ a similarity-based data batching technique inspired by Shi et al. (2024), which sequentially concatenates similar documents to construct training instances. This encourages high similarity between adjacent segments, enabling the experts to specialize in different domains or topics. We measure document similarity using Contriever (Izacard et al., 2022) and concatenate similar documents based on a greedy search algorithm (see Appendix C). Although we employ a data batching technique similar to Shi et al. (2024), our motivation differs from theirs. While their work aims to improve language models\u2019 reasoning across document boundaries, we find this technique effective in encouraging expert specialization when training MoE models.

^2 A piece of pseudocode of the causal segment routing strategy can be found in Appendix A.
^3 In Appendix G.3, we compare the prompt-only routing strategy to using the causal segment routing strategy that faithfully follows the training design. We find that these two strategies do not lead to significant differences in performance on downstream tasks. Given the efficiency advantage of making a single routing decision, we adopt the prompt-only strategy as the default approach. In Appendix H.2, we discuss the potential of converting Lory to sparsely activated MoE models for memory-efficient inference, which we leave as future work.

4 Experiments

In this section, we evaluate Lory by training a series of language models from scratch.
We first describe the experimental setups (Section 4.1) and then present the results (Section 4.2).

4.1 Setups

Models. We evaluate our approach by training decoder-only Transformer models with 0.3B and 1.5B active parameters.^4 We replace each FFN layer in the Transformer model with an MoE layer with E \u2208 {8, 16, 32} experts, each with exactly the same architecture.^5 Appendix D shows the configuration of model architectures as well as the total parameter count. We follow LLaMA (Touvron et al., 2023a) and use SwiGLU (Shazeer, 2020) as the activation function in FFNs. We use the same tokenizer as the LLaMA models (Touvron et al., 2023a;b). All models are trained with a 4096-token context window. In the causal segment routing strategy, we set the length of each segment to T = 256.

Training details. We employ the AdamW optimizer (Loshchilov & Hutter, 2019) with \u03b21 = 0.9 and \u03b22 = 0.95, and use a learning rate of 2e-4 with a cosine learning rate scheduler. All models are trained with a batch size of 1 million tokens. We employ data parallelism with the ZeRO optimization (Rajbhandari et al., 2020) for distributed training.^6 At the beginning of training, we train a parameter-matched dense model and duplicate the FFN layers as the initialization of the MoE model. In our experiments, we use the first 5% of training steps as a warmup to initialize the MoE weights. We find that without warmup training, more experts may be under-utilized (see Appendix G.4 for an ablation study). We also apply a linear warmup to the learning rate scheduler for the first 5% of training steps. We train our models with up to 64 A100 GPUs.

Training datasets. We randomly sample a subset of the CommonCrawl dataset (Wenzek et al., 2019) as the training data. The full training dataset consists of 150 billion tokens in total. We apply the similarity-based data batching method on this subset to construct all the training instances, following Shi et al. (2024).
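The similarity-based batching can be approximated with a small sketch: embed each document (Contriever in our setup; fixed toy vectors here), then greedily chain the most similar unused document onto the sequence before cutting it into fixed-length instances. The ordering step below is a simplification with illustrative names, not the exact procedure:

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def greedy_similarity_order(embs):
    """Chain documents greedily: start from doc 0, then repeatedly append
    the unused document most similar to the last one in the chain."""
    order = [0]
    unused = set(range(1, len(embs)))
    while unused:
        last = embs[order[-1]]
        nxt = max(unused, key=lambda j: cosine(last, embs[j]))
        order.append(nxt)
        unused.remove(nxt)
    return order

# Two rough "topics": docs 0 and 2 point one way, docs 1 and 3 the other.
embs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
order = greedy_similarity_order(embs)
```

Chaining keeps same-topic documents adjacent, so consecutive segments in a training instance tend to be semantically similar.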
See Appendix C for details of the data batching method.

Evaluation datasets. We evaluate all the models on language modeling tasks by measuring the perplexity of trained models on held-out evaluation datasets sampled from arXiv, Books, Wikipedia, C4 (Raffel et al., 2020), and Python code (a Python subset of GitHub). Each evaluation dataset contains 1K samples, each of which consists of 4096 tokens. We also evaluate models on downstream tasks with in-context learning (Brown et al., 2020), including commonsense reasoning: BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2020); reading comprehension: RACE (Lai et al., 2017), ARC (Clark et al., 2018); closed-book QA: Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017); and text classification: AGNews (Zhang et al., 2015), SST-2 (Socher et al., 2013), Amazon and Yelp (Zhang et al., 2015), FEVER (Thorne et al., 2018), MRPC (Dolan & Brockett, 2005). For text classification tasks, we follow the evaluation setup of Min et al. (2022); for the rest of the tasks, we follow the same setup as Touvron et al. (2023b).

^4 Here, \u201cactive parameters\u201d refers to the size of the model after merging at each MoE layer.
^5 In Appendix E, we additionally conduct experiments on a 7B dense model and a 7B/4E MoE model without using similarity-based data batching. Due to limited computing resources, we are not able to train 7B models on the similarity-batched dataset.
^6 In Appendix H.1, we discuss parallelism strategies when scaling up model sizes (e.g., > 100B).
Figure 2: Left: training curves (log perplexity over billions of tokens) of models with different sizes and numbers of experts (0.3B and 1.5B, dense vs. Lory with 8/16/32 experts). Right: Perplexity of trained models on different evaluation sets (arXiv, Books, Wikipedia, C4, and Python). We include the detailed model configurations and sizes in Appendix D.

Model      arXiv  Books  Wiki  C4    Python
0.3B       8.4    18.0   10.3  13.8  15.2
0.3B/8E    7.4    16.0   9.2   13.3  12.5
0.3B/16E   7.2    15.7   9.1   13.1  12.2
0.3B/32E   7.2    15.5   8.9   13.0  11.7
1.5B       6.6    13.6   7.8   10.7  10.4
1.5B/8E    6.2    12.8   7.6   10.6  10.1
1.5B/16E   6.0    12.4   7.1   10.6  8.9
1.5B/32E   5.8    12.3   7.1   10.4  8.7

(Table 1, part 1: Commonsense Reasoning and Reading Comprehension)

Model      PIQA  SIQA  BoolQ  HellaSwag  WinoGrande  RACE-m  RACE-h  ARC-e  ARC-c
0.3B       65.8  42.7  44.6   34.6       51.2        41.7    30.9    51.5   21.3
0.3B/8E    67.5  41.2  41.2   34.8       54.4        43.1    31.4    52.4   22.1
0.3B/16E   67.2  44.1  56.6   34.9       54.1        43.9    31.1    54.8   24.9
0.3B/32E   68.2  43.0  58.0   34.7       53.4        42.7    32.0    57.4   26.3
1.5B       71.2  45.0  54.0   43.9       60.9        50.1    36.7    65.0   31.0
1.5B/8E    72.1  45.2  62.0   43.6       63.7        51.2    36.5    66.3   32.5
1.5B/16E   71.3  45.0  56.0   43.7       61.5        51.7    37.3    66.3   32.7
1.5B/32E   72.1  47.1  59.9   43.8       61.9        51.5    32.4    66.7   32.7

(Table 1, part 2: Closed-book QA, Text Classification, and Average)

Model      NQ   TQA   AGNews  Amazon  SST-2  Yelp  Fever  MRPC  Avg
0.3B       4.7  8.8   30.3    53.6    54.6   66.0  47.6   62.0  41.8
0.3B/8E    5.3  9.0   38.4    52.3    54.6   62.6  56.6   59.0  42.7
0.3B/16E   6.0  10.2  36.3    75.6    53.3   64.0  57.0   65.0  45.8
0.3B/32E   5.3  10.2  47.3    64.0    55.3   73.3  55.7   56.0  46.0
1.5B       7.6  23.8  64.0    65.3    80.0   58.6  59.0   66.7  51.9
1.5B/8E    7.3  24.2  65.0    94.0    80.0   88.3  57.0   64.0  56.1
1.5B/16E   7.3  25.6  61.6    78.3    84.6   93.6  57.3   63.6  55.1
1.5B/32E   7.0  25.4  62.3    94.7    85.0   95.3  56.3   66.7  56.5

Table 1: We compare the Lory MoE models with the parameter-matched dense models on downstream tasks, including commonsense reasoning, reading comprehension, closed-book QA,
and text classification.

4.2 Main Results

Training efficiency and convergence. Figure 2 (left) shows the training loss curves of the dense model and our MoE models with different model sizes. First, we find that with the same amount of training tokens, our models clearly achieve better training loss compared to the dense model baseline. For the 0.3B and 1.5B models, our models with 32 experts achieve the same level of loss with fewer than half of the training tokens. This indicates that our approach achieves much better performance with the same training compute (see the analysis of additional FLOPs from MoE layers in Appendix B). We also observe that when using more experts, we are able to gain more improvement.

Language modeling. We evaluate trained models on language modeling evaluation sets. As shown in Figure 2 (right), our MoE models outperform the dense baseline in all domains, significantly reducing perplexity. For example, our 0.3B/32E model achieves a relative improvement of 13.9% on Books compared to the 0.3B dense model. We observe that the improvement is especially large in test domains that are markedly different from the domains of the training dataset (e.g., Python). We consider this a strong indication of expert specialization in specific domains (we further study expert specialization in Section 5.4).

Downstream tasks. Table 1 shows the model performance on downstream tasks. We observe significant performance improvements across all tasks. For example, our 0.3B/32E model achieves an average performance improvement of +3.7% in commonsense reasoning, +3.3% in reading comprehension, +1.5% in closed-book QA, and +11.1% in text classification.

5 Analysis and Ablation Studies

In this section, we conduct ablation studies and analysis to understand the essence of each component of our approach.

5.1 Importance of Causal Segment Routing

We compare our causal segment routing strategy with an alternative prefix routing strategy for training.
In prefix routing, expert merging is performed only once for each sequence, based on the first segment. The merged FFN is then used to process the rest of the sequence without further updates. Figure 3 shows that using only a prefix for routing leads to much worse performance compared to causal segment routing. These results highlight the importance of using every segment to provide strong training signals for routers.

Figure 3: Training curves (log perplexity over billions of tokens) of causal segment routing and prefix routing for 0.3B (dense) and 0.3B/8E models. The latter is a straightforward segment-level routing strategy that uses the first segment to route the entire input.

Figure 4: Left: Training curves of similarity-based data batching (sim batch) and the standard random batching (rand batch) for 0.3B and 0.3B/8E models. Right: Training loss difference between Lory and a dense model when using different batching strategies. Lory leads to a larger loss improvement over the dense model when using similarity-based data batching.

5.2 Importance of Similarity-based Data Batching

To investigate the importance of similarity-based data batching, we compare the performance improvement of MoE models over dense models with and without this batching method. Figure 4 (left) shows the training loss of dense (0.3B) and MoE models with eight experts (0.3B/8E) using similarity-batched (sim batch) and randomly-batched (rand batch) data. MoE models consistently outperform dense models in both setups.
However, the loss improvement (i.e., the difference in loss between dense and MoE models) is much larger with similarity-based batching, and this effect is amplified with more training data (Figure 4, right). These results strongly support the importance of similarity-based batching for effectively training our MoE model.

5.3 Comparison with Existing MoE Models

We compare our approach with Expert Choice (EC) (Zhou et al., 2022), a state-of-the-art MoE method that ensures balanced load during training by having each expert select the top-k inputs according to the routing weights. We consider two variants of EC MoE models, both with a capacity factor of 1 to match the computation of our MoE models. First, we train a sparse EC MoE model using our segment routing strategy, where each expert selects top segments and processes all tokens within those segments. This variant allows us to directly compare our expert-merging strategy with the expert choice method while using the same segment-level routing approach.

Figure 5: Comparison with the state-of-the-art MoE training technique Expert Choice (EC) with segment-level or token-level routing (log perplexity over billions of tokens, 0.3B/8E models). For both EC models, we use a capacity factor of 1 with the same amount of FLOPs as our training method for a fair comparison.

Second, we consider the original EC setting with token-level routing to provide an end-to-end comparison with state-of-the-art MoE models using the same amount of training computation. Figure 5 shows the training loss curves. We observe that Lory (blue curve) significantly outperforms segment-level EC (orange curve) with the same routing setting, suggesting that a fully differentiable architecture is more effective than a sparse MoE when using the same routing strategy.
Comparing Lory with the token-level EC model (green curve), we find that Lory achieves competitive results despite using segment-level routing and not requiring any advanced training techniques. These results highlight the significant potential of Lory. In Appendix G.1, we compare Lory and EC on held-out evaluation sets. We find that Lory achieves much better perplexity than the token-level EC model, while performing similarly on other domains (arXiv, Books, Wiki, C4). Our analysis in Section 5.4 demonstrates that Lory learns experts specialized in specific domains (e.g., Python code), potentially improving performance in less frequent domains. 5.4 Expert Utilization and Specialization Utilization: How many experts are actively utilized? One potential issue of training MoE models is that the models may collapse into dense models because most experts are under-utilized (e.g., some experts are never activated). In Appendix G.2, we show that, even without using any auxiliary load-balancing loss, Lory is able to achieve high expert utilization, preventing the MoE models from collapsing into dense models. Specialization: What do experts learn? To study expert specialization, we investigate the averaged routing weights at different layers of the 0.3B/8E model on different domains (Books, arXiv, Python, and Wikipedia). Figure 6 shows the routing weights at layers 0, 11, and 23 (the first, middle, and last layer) of the 0.3B/8E model.7 First, we find that there exists clear domain-level expert specialization in our trained MoE models, even though no additional domain-level supervision is used during training. For instance, expert 7 at layer 11 is specialized to process inputs in the arXiv domain. We also observe that the routing weights on arXiv and Python code are more similar to each other than to those on Books and Wikipedia, likely because LaTeX code and Python code are both dissimilar to natural language.
Second, experts at the middle or higher layers are more specialized in specific domains, while the routing weights at lower layers are similar and flat across domains. Figure 6: Averaged routing weights at layers {0, 11, 23} of the 0.3B/8E model on different domains (Books, arXiv, Python, Wikipedia). We observe that the experts in our MoE models learn domain-level specialization, especially at middle and higher layers. 7In Appendix F, we show the averaged routing weights at all layers of the 0.3B/8E model. It is worth noting that our learned experts behave differently from those of prior token-level MoE models, where shallow token-level specialization is observed. For example, some experts are specialized for a specific type of word (e.g., punctuation, articles), and few deep semantic features are captured by the learned routers (Jiang et al., 2024; Lewis et al., 2021; Zoph et al., 2022; Shazeer et al., 2017; Xue et al., 2024). Our models learn domain-level specialization, which we attribute to the segment-level routing strategy used during training. This strategy allows routers to capture global semantic features beyond the token level. The complementary nature of the features captured by segment/sentence-level and token-level routing strategies suggests the possibility of combining them to build even stronger models, and we leave this for future work. 5.5 More Analysis and Discussion In Appendix G, we further show that (1) during inference on downstream tasks, routing the entire input prompt once or routing each segment separately does not make a substantial difference on the tasks we evaluate; (2) warmup training is crucial for achieving high expert utilization, especially when training MoE models with a large number of experts.
In addition, we discuss training parallelism strategies for further scaling up model sizes in Appendix H.1, and discuss the potential of converting Lory to sparse models for more efficient inference in Appendix H.2. 6 Related Work Mixture of Experts. Sparsely activated MoE models (Shazeer et al., 2017) were proposed to demonstrate the potential of massively scaling up model sizes. GShard (Lepikhin et al., 2021) adapts the sparse MoE architecture to Transformer models and achieves strong results on machine translation. Recent work has extended it to general language models (Fedus et al., 2022; Zoph et al., 2022; Jiang et al., 2024; Dai et al., 2024; Zhou et al., 2022; Du et al., 2022; Artetxe et al., 2021; Xue et al., 2024). Traditional MoE models are trained to route given inputs to one or a few specialized expert modules, which introduces a non-differentiable, discrete decision-learning problem. These existing models are trained with a top-1 or top-2 routing strategy and a carefully designed load-balancing objective (Lepikhin et al., 2021; Fedus et al., 2022; Zoph et al., 2022), or employ complicated assignment algorithms to distribute inputs (Lewis et al., 2021; Roller et al., 2021; Zhou et al., 2022). Training MoE models has been shown to be difficult, facing issues of training instability, expert under-specialization, and poor training efficiency (Zoph et al., 2022). Our approach enables end-to-end gradient back-propagation by employing a fully differentiable MoE architecture. SMEAR (Muqeeth et al., 2023) proposes softly merging experts by taking a weighted average in parameter space. However, SMEAR has only been applied to text classification tasks with an encoder backbone. Although Lory shares a similar expert-merging technique, it is the first approach to scale such an architecture to autoregressive language model pre-training.
Soft MoE (Puigcerver et al., 2024) is another fully differentiable MoE architecture that enables end-to-end gradient back-propagation. However, it has only been evaluated on vision tasks and does not apply to autoregressive language model pre-training either. We leave extending Soft MoE to decoder-only language models as future work. Similarity-based data batching. Prior work has applied similar data batching methods during training. In-context pre-training (Shi et al., 2024) groups relevant documents together to encourage language models to leverage long-range contexts, improving the results of in-context learning and retrieval augmentation. Zhong et al. (2022) batch documents with high lexical similarity to collect more positive pairs in a contrastive learning framework and provide stronger training signals. Despite sharing a similar idea, the goal of our data batching method is to avoid routing irrelevant documents together, which may hurt expert specialization. 7" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03150v1.json b/abs_9K/test_abstract_short_2405.03150v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d60dd6bafd59dd49c0fa97c881b537a345e574e1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03150v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03150v1", + "title": "Video Diffusion Models: A Survey", + "abstract": "Diffusion generative models have recently become a robust technique for\nproducing and modifying coherent, high-quality video. This survey offers a\nsystematic overview of critical elements of diffusion models for video\ngeneration, covering applications, architectural choices, and the modeling of\ntemporal dynamics. Recent advancements in the field are summarized and grouped\ninto development trends. The survey concludes with an overview of remaining\nchallenges and an outlook on the future of the field. 
Website:\nhttps://github.com/ndrwmlnk/Awesome-Video-Diffusion-Models", + "authors": "Andrew Melnik, Michal Ljubljanac, Cong Lu, Qi Yan, Weiming Ren, Helge Ritter", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion generative models have recently become a robust technique for\nproducing and modifying coherent, high-quality video. This survey offers a\nsystematic overview of critical elements of diffusion models for video\ngeneration, covering applications, architectural choices, and the modeling of\ntemporal dynamics. Recent advancements in the field are summarized and grouped\ninto development trends. The survey concludes with an overview of remaining\nchallenges and an outlook on the future of the field. Website:\nhttps://github.com/ndrwmlnk/Awesome-Video-Diffusion-Models", + "main_content": "Introduction Diffusion generative models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021; Ruiz et al., 2024) have already demonstrated a remarkable ability for learning heterogeneous visual concepts and creating high-quality images conditioned on text descriptions (Rombach et al., 2022; Ramesh et al., 2022). Figure 1: Overview of the key aspects of video diffusion models that we cover in this survey. arXiv:2405.03150v1 [cs.CV] 6 May 2024 Recent developments have also extended diffusion models to video (Ho et al., 2022c), with the potential to revolutionize the generation of content for entertainment or simulating the world for intelligent decision-making (Yang et al., 2023a). For example, the text-to-video SORA (Brooks et al., 2024) model has been able to generate high-quality videos up to a minute long conditioned on a user's prompt.
Adapting diffusion models to video generation poses unique challenges that have yet to be fully overcome, including maintaining temporal consistency, generating long videos, and managing computational costs. In this survey, we provide an overview of key aspects of video diffusion models, including possible applications, the choice of architecture, mechanisms for modeling temporal dynamics, and training modes (see Fig. 1 for an overview). We then provide brief summaries of notable papers in order to outline developments in the field to date. Finally, we conclude with a discussion of ongoing challenges and identify potential areas for future improvements. 2 Taxonomy of Applications The possible applications of video diffusion models can be roughly categorized according to input modalities. These include text prompts, images, videos, and auditory signals. Many models also accept inputs that combine several of these modalities. Fig. 2 visualizes the different applications. We summarize notable papers in each application domain starting from Sec. 7.1.3. For this, we have categorized each model according to one main task. In our taxonomy, text-conditioned generation (Sec. 7.1.3) refers to the task of generating videos purely based on text descriptions. Different models show varying degrees of success in how well they can model object-specific motion. We thus categorize models into two types: those capable of producing simple movements such as a slight camera pan or flowing hair, and those that can represent more intricate motion over time, such as those incorporating Physical Reasoning (Melnik et al., 2023). In image-conditioned video generation (Sec. 7.4) tasks, an existing reference image is animated. Sometimes, a text prompt or other guidance information is provided. Image-conditioned video generation has been extensively studied recently, due to the high degree of control it offers over the generated video content.
For models introduced in other sections, we mention their capability for image-to-video generation where applicable. We treat video completion (Sec. 8) models, which take an existing video and extend it in the temporal domain, as a distinct group, even though they intersect with the previous applications. Video diffusion models typically have a fixed number of input and output frames due to architectural and hardware limitations. To extend such models to generate videos of arbitrary length, both auto-regressive and hierarchical approaches have been explored. Audio-conditioned models (Sec. 9) accept sound clips as input, sometimes in combination with other modalities such as text or images. They can then synthesize videos that are congruent with the sound source. Typical applications include the generation of talking faces, music videos, as well as more general scenes. Video editing models (Sec. 10) use an existing video as a baseline from which a new video is generated. Typical tasks include style editing (changing the look of the video while maintaining the identity of objects), object / background replacement, deep fakes, and restoration of old video footage (including tasks such as denoising, colorization, or extension of the aspect ratio). Finally, we consider the application of video diffusion models to intelligent decision-making (Sec. 11). Video diffusion models can be used as simulators of the real world, conditioned on the current state of an agent or a high-level text description of the task. This could enable planning in a simulated world, as well as fully training reinforcement learning policies within a generative world model. Figure 2: Applications of video diffusion models. Bounding boxes are clickable links to relevant chapters. Example images taken from the following papers (top to bottom): Blattmann et al. (2023b), Ho et al. (2022a), Singer et al. (2022), Lu et al. (2023b), Yin et al. (2023), Lee et al. (2023b), Stypułkowski et al. (2023), Wu et al. (2022b), Xing et al. (2023a), Ma et al. (2023), Liu et al. (2023a). 3 Mathematical Formulation We first review the mathematical formulation of diffusion generative models, which learn to model a target distribution p(x_0), for example, of natural videos. A diffusion model generates samples via a chain of denoising steps that start from an initial noise vector sampled from a Gaussian distribution of uncorrelated white noise. Each denoising step is performed by a neural network that has been trained to guide a noisy input toward the target distribution. After a pre-determined number of such denoising steps, the vector will approximate a noise-free sample of the target domain. The key for this mechanism to succeed is training a suitable denoising network. This is achieved by an objective that learns to reverse the forward noising process at pre-specified noise levels. In the following, we summarize the formalization of the unconditioned denoising diffusion probabilistic model (DDPM) process from Ho et al. (2020): The forward diffusion process follows a Markov chain that iteratively adds sampled noise to an initial input video x_0 over T time steps. The Markov property ensures that the degraded video x_t at time step t only depends on the video x_{t−1} in the immediately preceding step t − 1. The distribution q(x_t | x_{t−1}) of x_t in a forward step can be described by the Gaussian q(x_t | x_{t−1}) := N(x_t; √(1 − β_t) x_{t−1}, β_t I), (1) where the mean and standard deviation are determined by a variance-preserving noise schedule β_1, ..., β_T and I is the identity matrix. Different schedules can be used, such as a linear or cosine schedule, influencing how quickly information in the original video is destroyed. Due to the Markov property that each state only depends on the preceding state, the overall forward process is described by q(x_{1:T} | x_0) := ∏_{t=1}^{T} q(x_t | x_{t−1}). (2) Ho et al.
(2020) show that the distribution at an arbitrary time step t can be computed directly as q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I), (3) where ᾱ_t := ∏_{s=1}^{t} α_s and α_t := 1 − β_t. In the denoising phase, we try to reverse this process, starting from the final time step T. The reverse process is again a Markov chain, this time with Gaussian transition probabilities that need to be learned by our model. A single denoising step is described by p_θ(x_{t−1} | x_t) := N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)), (4) where θ are the parameters of our denoising model. The full reverse process is described by p_θ(x_{0:T}) := p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t), (5) where p(x_T) := N(x_T; 0, I). To train the model, we minimize the variational lower bound on the negative log-likelihood: E[−log p_θ(x_0)] ≤ E_q[−log p(x_T) − Σ_{t≥1} log (p_θ(x_{t−1} | x_t) / q(x_t | x_{t−1}))]. (6) This loss function can be rewritten as a sum of Kullback-Leibler divergences between the distributions of the forward and backward steps: L := E_q[D_KL(q(x_T | x_0) ∥ p(x_T)) + Σ_{t>1} D_KL(q(x_{t−1} | x_t, x_0) ∥ p_θ(x_{t−1} | x_t)) − log p_θ(x_0 | x_1)]. (7) This formulation has the advantage that we can calculate closed-form solutions for the Kullback-Leibler terms. Note that the forward posteriors are now also conditioned on the initial video x_0. Using Bayes' theorem, it can be shown that q(x_{t−1} | x_t, x_0) = N(x_{t−1}; μ̃_t(x_t, x_0), β̃_t I), (8) where μ̃_t(x_t, x_0) := (√ᾱ_{t−1} β_t / (1 − ᾱ_t)) x_0 + (√α_t (1 − ᾱ_{t−1}) / (1 − ᾱ_t)) x_t and β̃_t := ((1 − ᾱ_{t−1}) / (1 − ᾱ_t)) β_t. Ho et al.
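Equation (3) is what makes training practical: x_t can be sampled directly from x_0 without simulating the whole chain. A minimal sketch in plain Python (the linear schedule endpoints are illustrative defaults, not values from the survey):

```python
import math
import random

def make_linear_schedule(T, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule beta_1..beta_T and the cumulative products
    # alpha_bar_t = prod_{s<=t} (1 - beta_s) used in the closed form, Eq. (3).
    betas = [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]
    alpha_bars = []
    prod = 1.0
    for b in betas:
        prod *= 1.0 - b
        alpha_bars.append(prod)
    return betas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    # Closed-form forward process, Eq. (3):
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).
    ab = alpha_bars[t]
    return [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for x in x0]
```

Since ᾱ_t shrinks toward zero, large t yields an almost pure-noise sample, matching the intuition that the schedule controls how quickly information in the original video is destroyed.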
(2020) showed that predicting the added noise ε_θ(x_t, t) rather than the mean μ̃_θ(x_t, t) of each forward step leads to a simplified loss function L_simple := E_{t, x_0, ε}[∥ε − ε_θ(x_t, t)∥_2^2], (9) that performs better in practice. This original DDPM formulation of the generation process in the form of a reverse Markov chain has more recently been complemented by a non-Markovian alternative denoted as denoising diffusion implicit models (DDIM, Song et al. 2020), which offers a deterministic and more efficient generation process. Here, a backward denoising step can be computed with x_{t−1} = √ᾱ_{t−1} · (x_t − √(1 − ᾱ_t) ε_θ(x_t, t)) / √ᾱ_t + √(1 − ᾱ_{t−1}) ε_θ(x_t, t). (10) One distinct advantage of this formulation of the denoising process is that it allows for accurate reconstruction of the original input video x_0 from the noise at time step T. This technique, called DDIM inversion, can be utilized for applications such as image and video editing (see Section 10). 4 Architecture Next, we review popular architectures used for video diffusion models, including UNets and transformers. We first introduce image-based variants before discussing how they may be suitably adapted for video generation in Sec. 5. We also discuss common variations on these, including latent diffusion models and cascaded diffusion models. Figure 3: The denoising UNet architecture typically used in text-to-image diffusion models. The model iteratively predicts a denoised version of the noisy input image. The image is processed through a number of encoding layers and the same number of decoding layers that are linked through residual connections. Each layer consists of ResNet blocks implementing convolutions, as well as Vision Transformer self-attention and cross-attention blocks.
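The deterministic DDIM update in Eq. (10) can be sketched directly. The example below is a per-element toy, with `eps` standing in for the network prediction ε_θ(x_t, t); when `eps` equals the true noise used to corrupt x_0, jumping to ᾱ_prev = 1 recovers x_0 exactly:

```python
import math

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    # Deterministic DDIM update, Eq. (10): first form the implied clean-sample
    # estimate x0_pred, then re-noise it to the previous (less noisy) level.
    x0_pred = [(x - math.sqrt(1.0 - alpha_bar_t) * e) / math.sqrt(alpha_bar_t)
               for x, e in zip(x_t, eps)]
    return [math.sqrt(alpha_bar_prev) * x0 + math.sqrt(1.0 - alpha_bar_prev) * e
            for x0, e in zip(x0_pred, eps)]
```

Because the update is deterministic, running it in reverse (DDIM inversion) maps a given x_0 to a unique noise vector, which is what makes the editing applications mentioned above possible.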
Self-attention shares information across image patches, while cross-attention conditions the denoising process on text prompts. 4.1 UNet The UNet (Ronneberger et al., 2015) is currently the most popular architectural choice for the denoiser in visual diffusion models (see Fig. 3). Originally developed for medical image segmentation, it has more recently been successfully adapted for generative tasks in the image, video, and audio domains. A UNet transforms an input image into an output image of the same size and shape. The input is first encoded into latent representations of increasingly lower spatial resolution, while the number of feature channels increases, by progressing through a fixed number of encoding layers. Then, the resulting 'middle' latent representation is upsampled back to its original size through the same number of decoding layers. While the original UNet (Ronneberger et al., 2015) only used ResNet blocks, most diffusion models interleave them with Vision Transformer blocks in each layer. The ResNet blocks mainly utilize 2D convolutions, while the Vision Transformer blocks implement spatial self-attention as well as cross-attention. This allows conditioning of the generative process on additional information such as text prompts and the current timestep. Layers of the same resolution in the encoder and decoder part of the UNet are connected through residual connections. The UNet can be trained by the process outlined in Sec. 3. 4.2 Vision Transformer The Vision Transformer (ViT, Dosovitskiy et al. (2020)) is an important building block of generative diffusion models, based on the transformer architecture developed for natural language processing (Vaswani et al., 2017). Figure 4: Architectural choices for increasing the output resolution of image diffusion models. a) Cascaded Diffusion Models (CDM) chain denoising UNets of increasing resolution to generate high-fidelity images. b) Latent Diffusion Models (LDM) use a pre-trained variational auto-encoder (VAE) to operate in a lower-dimensional space, thus preserving computational resources. Therefore, the ViT similarly combines multi-head attention layers, normalization layers, residual connections, as well as a linear projection layer to transform a vector of input tokens into a vector of output tokens. In the image case, the input tokens are obtained by dividing the input image into regular patches and using an image encoder to compute a patch embedding for each patch, supplemented with learnable position embeddings. Within the attention layer, the patch embeddings are projected through trainable projection matrices, producing so-called Query, Key, and Value matrices. The first two matrices are used to compute a learnable affinity matrix A between different image token positions, which is calculated according to the scaled dot-product attention formula: A(Q, K) = softmax(QKᵀ / √d_k). Here, Q and K are d × d_k dimensional and refer to the query and key matrix, d is the number of input tokens, d_k the dimensionality of the d query and key vectors making up the rows of Q and K, and the matrix Z of output embeddings is obtained as Z = AV, i.e. the attention-weighted superposition of the rows of the value matrix V (with one row for each input token embedding). In the simplest case, there is a single (d × d dimensional) affinity matrix, resulting from a single set of projection matrices. In multi-head attention, a stack of such projections with separate query, key, and value matrices is used. Their outputs are concatenated and transformed through a linear output layer to form a single set of d new patch embeddings. The attention heads can be computed in parallel and allow the model to focus on multiple aspects of the image. Depending on the task, ViTs can output an image embedding or be equipped with a classification head.
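The scaled dot-product attention formula above, A(Q, K) = softmax(QKᵀ/√d_k) with Z = AV, can be written out in a few lines of plain Python (single head, no learned projection matrices; matrices are lists of rows):

```python
import math

def matmul(A, B):
    # Naive matrix product; zip(*B) iterates over the columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)  # subtract the row max for numerical stability
        es = [math.exp(x - m) for x in row]
        s = sum(es)
        out.append([e / s for e in es])
    return out

def attention(Q, K, V):
    # A(Q, K) = softmax(Q K^T / sqrt(d_k)); Z = A V
    d_k = len(K[0])
    K_T = [list(col) for col in zip(*K)]
    scores = matmul(Q, K_T)
    scaled = [[s / math.sqrt(d_k) for s in row] for row in scores]
    A = softmax_rows(scaled)
    return matmul(A, V)
```

With identical keys the attention weights are uniform, so each output row is simply the mean of the value rows; the affinity matrix A always has rows that sum to 1.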
In diffusion models, ViT blocks serve two purposes: On the one hand, they implement spatial self-attention where Q, K, and V refer to image patches. This allows information to be shared across the whole image, or even an entire video sequence. On the other hand, they are used for cross-attention that conditions the denoising process on additional guiding information such as text prompts. Here, Q is an image patch and K and V are based on text tokens that have been encoded into an image-like representation using a CLIP encoder (Radford et al., 2021). Purely Vision Transformer-based diffusion models have been proposed as an alternative to the standard UNet (Peebles & Xie, 2022; Lu et al., 2023b; Ma et al., 2024; Chen et al., 2023c;b; Gupta et al., 2023). Rather than utilizing convolutions, the whole model consists of a series of transformer blocks only. This approach has distinct advantages, such as flexibility in regard to the length of the generated videos. While UNet-based models typically generate output sequences of a fixed length, transformer models can autoregressively predict tokens in sequences of relatively arbitrary length. 4.3 Cascaded Diffusion Models Cascaded Diffusion Models (CDM, Ho et al. 2022b) consist of multiple UNet models that operate at increasing image resolutions. By upsampling the low-resolution output image of one model and passing it as input to the next model, a high-fidelity version of the image can be generated. At training time, various forms of data augmentation are applied to the outputs of one denoising UNet model before they are passed as input to the next model in the cascade. These include Gaussian blurring, as well as premature stopping of the denoising process (Ho et al., 2022b). The use of CDMs has largely vanished since the adoption of Latent Diffusion Models (Rombach et al., 2022), which allow for native generation of high-fidelity images with fewer resources.
4.4 Latent Diffusion Models Latent Diffusion Models (LDM, Rombach et al. (2022)) have been an important development of the base UNet architecture and now form the de-facto standard for image and video generation tasks. Instead of operating in RGB space, the input image is first encoded into a latent representation with lower spatial resolution and more feature channels using a pre-trained vector-quantized variational auto-encoder (VQ-VAE, Van Den Oord et al. (2017)). This low-resolution representation is then passed to the UNet, and the whole diffusion and denoising process takes place in the latent space of the VQ-VAE encoder. The denoised latent is then decoded back to the original pixel space using the decoder part of the VQ-VAE. By operating in a lower-dimensional latent space, LDMs can save significant computational resources, thus allowing them to generate higher-resolution images compared to previous diffusion models. Stable Diffusion 1 is a canonical open-source implementation of the LDM architecture. Further improvements of the LDM architecture have been introduced by Chen et al. (2020), who addressed specific concerns about how to adjust the architecture for high-resolution images, and Podell et al. (2023), who used a second refiner network for improving the sample quality of generated images. 5 Temporal Dynamics Text-to-image models such as Stable Diffusion can produce realistic images, but extending them for video generation tasks is not trivial (Ho et al., 2022c). If we try to naively generate individual video frames from a text prompt, the resulting sequence has no spatial or temporal coherence (see Fig. 5a). For video editing tasks, we can extract spatial cues from the original video sequence and use them to condition the diffusion process. In this way, we can produce fluid motion of objects, but temporal coherence still suffers due to changes in the finer texture of objects (see Fig. 5b).
In order to achieve spatio-temporal consistency, video diffusion models need to share information across video frames. The most obvious way to achieve this is to add a third, temporal dimension to the denoising model. ResNet blocks then implement 3D convolutions, while self-attention blocks are turned into full cross-frame attention blocks (see Fig. 6). This type of full 3D architecture is, however, associated with very high computational costs. To lower the computational demands of video UNet models, different approaches have been proposed (see Fig. 7): 3D convolution and attention blocks can be factorized into spatial 2D and temporal 1D blocks. The temporal 1D modules are often inserted into a pre-trained text-to-image model. Additionally, temporal upsampling techniques are often used to increase motion consistency. In video-to-video tasks, pre-processed video features such as depth estimates are often used to guide the denoising process. Finally, the type of training data and the training strategy have a profound impact on a model's ability to generate consistent motion. 5.1 Spatio-Temporal Attention Mechanisms In order to achieve spatial and temporal consistency across video frames, most video diffusion models modify the self-attention layers in the UNet model. These layers consist of a vision transformer that computes the affinity between a query patch of an image and all other patches in that same image. This basic mechanism can be extended in several ways (see Wang et al. 2023b for a discussion): In temporal attention (Hong et al., 2022; Singer et al., 2022), the query patch attends to patches at the same location in other video frames. 1https://github.com/Stability-AI/stablediffusion In full spatio-temporal attention (Zhang & Agrawala, 2023; Bar-Tal et al., 2024), it attends to all patches in all video frames. In causal attention, it only attends to patches in all previous video frames.
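The attention variants listed above differ chiefly in which frames a query may attend to. A schematic sketch (frame indices only; the spatial dimension, which is what distinguishes temporal from full spatio-temporal attention, is omitted, and the `window` parameter is a simplification of "a limited number of previous frames"):

```python
def attended_frames(query_frame, num_frames, mode, window=1):
    # Which frames a query patch may attend to under each variant.
    if mode == "temporal":
        # Same spatial location, all frames (spatial restriction not modeled here).
        return list(range(num_frames))
    if mode == "full":
        # All patches in all frames.
        return list(range(num_frames))
    if mode == "causal":
        # All previous frames, plus the query frame itself.
        return list(range(query_frame + 1))
    if mode == "sparse_causal":
        # First frame plus a small window of immediately preceding frames.
        prev = list(range(max(0, query_frame - window), query_frame + 1))
        return sorted(set([0] + prev))
    raise ValueError(f"unknown mode: {mode}")
```

The size of the returned set tracks the cost ordering discussed above: sparse causal attention touches a constant number of frames per query, while full spatio-temporal attention scales with the whole clip.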
In sparse causal attention (Wu et al., 2022b), it only attends to patches in a limited number of previous frames, typically the first and the immediately preceding one. The different forms of spatio-temporal attention differ in how computationally demanding they are and how well they can capture motion. Additionally, the quality of the produced motion greatly depends on the training strategy and data set used. 5.2 Temporal Upsampling Generating long video sequences in a single batch often exceeds the capacity of current hardware. While different techniques have been explored to reduce the computational burden (such as sparse causal attention, Wu et al. 2022b), most models are still limited to generating video sequences that are no longer than a few seconds even on high-end GPUs. To get around this limitation, many works have adopted a hierarchical upsampling technique whereby they first generate spaced-out key frames. The intermediate frames can then be filled in by either interpolating between neighboring key frames, or using additional passes of the diffusion model conditioned on two key frames each. As an alternative to temporal upsampling, the generated video sequence can also be extended in an autoregressive manner (Blattmann et al., 2023b). Hereby, the last generated video frame(s) of the previous batch are used as conditioning for the first frame(s) of the next batch. While it is in principle possible to arbitrarily extend a video in this way, the results often suffer from repetition and quality degradation over time. 5.3 Structure Preservation Video-to-video translation tasks typically strive for two opposing objectives: maintaining the coarse structure of the source video on the one hand, while introducing desired changes on the other. Adhering to the source video too much can hamper a model's ability to perform edits, while straying too far from the layout of the source video allows for more creative results but negatively impacts spatial and temporal coherence. Figure 5: Limitations of text-to-video diffusion models for generating consistent videos. (Top) When using only a text prompt ("Michael Jordan running"), both the appearance and position of objects change wildly between video frames. (Bottom) Conditioning on spatial information from a reference video can produce consistent movement, but the appearance of objects and the background still fluctuate between video frames. A common approach for preserving the coarse structure of the input video is to replace the initial noise in the denoising model with (a latent representation of) the input video frames (Wu et al., 2022b). By varying the amount of noise added to each input frame, the user can control how closely the output video should resemble the input, or how much freedom should be granted while editing it. In practice, this method by itself is not sufficient for preserving the more fine-grained structure of the input video and is therefore usually augmented with other techniques. For one, the outlines of objects are not sufficiently preserved when adding higher amounts of noise. This can lead to unwanted object warping across the video. Furthermore, finer details can shift over time if information is not shared across frames during the denoising process. These shortcomings can be mitigated to some degree by conditioning the denoising process on additional spatial cues extracted from the original video. For instance, specialized diffusion models have been trained to take depth estimates into account.2 ControlNet (Zhang & Agrawala, 2023) is a more general extension for Stable Diffusion that enables conditioning on various kinds of information, such as depth maps, OpenPose skeletons, or lineart. A ControlNet model is a fine-tuned copy of the encoder portion of the Stable Diffusion denoising UNet that can be interfaced with a pre-trained Stable Diffusion model.
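The noise-replacement idea described above (starting the reverse process from a noised latent of the input frame rather than from pure noise) can be sketched as follows. The function and its strength-to-timestep mapping are illustrative, not any specific model's API; the noising itself follows the closed-form forward process of Eq. (3):

```python
import math
import random

def noised_start_latent(frame_latent, strength, alpha_bars, rng):
    # Structure preservation: pick a starting timestep from a user-chosen
    # `strength` in [0, 1] -- higher strength = later timestep = more noise
    # = more editing freedom, lower strength = closer to the source frame.
    T = len(alpha_bars)
    t_start = min(T - 1, int(strength * T))
    ab = alpha_bars[t_start]
    noised = [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
              for x in frame_latent]
    return t_start, noised
```

Denoising then runs only from `t_start` down to 0, so at strength 0 the output is essentially the input frame, while at strength 1 the source layout is almost entirely discarded.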
Image features are extracted using a preprocessor, encoded through a specialized encoder, passed through the ControlNet model, and concatenated with the image latents to condition the denoising process. Multiple ControlNets can be combined in an arbitrary fashion. Several video diffusion models have also implemented video editing that is conditioned on extracted frame features such as depth (Ceylan et al. 2023; Esser et al. 2023; Xing et al. 2023a, see Sec. 10.2) or pose estimates (Ma et al. 2023; Zhao et al. 2023, see Sec. 10.3).

² https://huggingface.co/stabilityai/stable-diffusion-2-depth

Figure 6: Three-dimensional extension of the UNet architecture for video generation. Topmost: temporally adjacent UNet 2D-layer outputs are stacked to provide 3D input at each new resolution (yellow) in the UNet layer chain. Below: processing inside the layer group starts with 3D operations, followed by cross-attention to accommodate text input, followed by flattening back to purely spatial ResNet and upsampling stages.

Figure 7: Attention mechanisms for modeling temporal dynamics.

6 Training & Evaluation

Video diffusion models can differ greatly in how they are trained. Some models are trained from scratch (e.g. Ho et al. 2022c, Singer et al. 2022, Ho et al. 2022a), while others are built on top of a pretrained image model (e.g. Zhou et al. 2022, Khachatryan et al. 2023, Blattmann et al. 2023b). It is possible to train a model entirely on labeled video data, whereby it learns associations between text prompts and video contents as well as temporal correspondence across video frames (e.g. Ho et al. 2022c). However, data sets of labeled videos (e.g. Bain et al. 2021, Xue et al. 2022, see Section 6.1) tend to be smaller than pure image data sets and may include only a limited range of content. Additionally, a single text label per video may fail to describe the changing image content across all frames.
At a minimum, automatically collected videos need to be divided into chunks of suitable length that can be described with a single text annotation and that are free of unwanted scene transitions, which raises the barrier for uncurated or weakly curated data collection. For that reason, training is often augmented with readily available data sets of labeled images (e.g. Russakovsky et al. 2015, Schuhmann et al. 2022, see Section 6.2). This allows a model to learn a broader range of relationships between text and visual concepts. Meanwhile, the spatial and temporal coherence across frames can be trained independently on video data that can even be unlabeled (Zhou et al., 2022). In contrast to models that are trained from scratch (e.g. Ho et al. 2022c, Singer et al. 2022, Ho et al. 2022a), recent video diffusion approaches (e.g. Zhou et al. 2022, Khachatryan et al. 2023, Blattmann et al. 2023b) often rely on a pre-trained image generation model such as Stable Diffusion (Rombach et al. 2022). These models show impressive results in the text-to-image (Rombach et al., 2022; Ramesh et al., 2022) and image editing domains (Brooks et al., 2023; Zhang & Agrawala, 2023), but are not built with video generation in mind. For this reason, they have to be adjusted in order to yield results that are spatially and temporally coherent. One possibility is to add new attention blocks, or to tweak existing ones, so that they model the spatio-temporal correspondence across frames. Depending on the implementation, these attention blocks either re-use parameters from the pre-trained model, are fine-tuned on a training data set consisting of many videos, or are fine-tuned on only a single input video in the case of video-to-video translation tasks. During fine-tuning, the rest of the pre-trained model's parameters are usually frozen in place. The different training methods are shown in Fig. 8.
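The freeze-and-extend recipe described above boils down to partitioning parameters into a frozen pre-trained set and a trainable temporal set. A schematic sketch (the layer names and the flat-dictionary model representation are illustrative assumptions, not any particular framework's API):

```python
# Pre-trained image model: spatial and cross-attention layers only
# (string values stand in for real weight tensors).
pretrained = {
    "down1.spatial_attn": "W_s1",
    "down2.spatial_attn": "W_s2",
    "mid.cross_attn": "W_x",
}

# Extend the model with newly initialized temporal attention blocks
# inserted alongside each pre-trained block.
video_model = dict(pretrained)
video_model.update({
    "down1.temporal_attn": "W_t1_new",
    "down2.temporal_attn": "W_t2_new",
    "mid.temporal_attn": "W_t3_new",
})

# During fine-tuning on video data, only the temporal parameters
# receive gradient updates; the pre-trained weights stay frozen.
trainable = [name for name in video_model if "temporal" in name]
frozen = [name for name in video_model if "temporal" not in name]

assert set(frozen) == set(pretrained)
assert len(trainable) == 3
```

In a real framework this partition corresponds to setting `requires_grad` only on the temporal modules and passing just those parameters to the optimizer.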
6.1 Video Data Sets

Table 1 offers an overview of commonly used video data sets for training and evaluation of video diffusion models.

Figure 8: Training approaches for video diffusion models: a) Training on videos. b) Simultaneous training on images and videos. c) Pre-training on images and fine-tuning on videos.

Table 1: Commonly used video data sets for diffusion model training.

| Data Set | Resolution | Source | Labels | # Clips | Clip Length | Total Length (h.) |
|---|---|---|---|---|---|---|
| WebVid-10M (2021) | | Web | Alt-Text | 10.7 mio. | 18 sec. | 52k |
| HD-Villa-100M (2022) | 1280×720 | Youtube | Transcription (auto.) | 100 mio. | 13.4 sec. | 371k |
| Kinetics-600 (2018) | | Youtube | Action Class | 500,000 | 10 sec. | 1.4k |
| UCF101 (2012) | 320×240 | Youtube | Action Class | 13k | 7 sec. | 27 |
| MSR-VTT (2016) | | Web | Annotation (human) | 10k | 10-30 sec. | 41.2 |
| Sky Time-lapse (2018) | 640×360 | Youtube | | 35k | 32 frames | 10 |
| Tai-Chi-HD (2019) | 256×256 | Youtube | | 3k | 128-1024 frames | |
| TikTok (2022) | 640×1080 | TikTok | Depth | 340 | 10-15 sec. | 0.86 |

WebVid-10M (Bain et al., 2021) is a large data set of text-video pairs scraped from the internet that covers a wide range of content. It consists of 10.7 million video clips with a total length of about 52,000 hours. It is an expanded version of the WebVid-2M data set, which includes 2.5 million videos with an average length of 18 seconds and a total play time of 13,000 hours. Each video is annotated with an HTML Alt-text, which normally serves the purpose of making it accessible to vision-impaired users. The videos and their Alt-texts have been selected based on a filtering pipeline similar to that proposed in Sharma et al. (2018). This ensures that the videos have a sufficiently high resolution and a normal aspect ratio and are free of profanity. Additionally, only well-formed Alt-text that is aligned with the video content is selected (as judged by a classifier).
WebVid-10M is only distributed in the form of links to the original video sources, so individual videos that have been taken down by their owners may no longer be accessible.

HD-Villa-100M (Xue et al., 2022) contains over 100 million short video clips extracted from about 3.3 million videos found on YouTube. The average length of a clip is 13.4 seconds, with a total run time of about 371.5 thousand hours. All videos have a high-definition resolution of 1280×720 pixels and are paired with automatic text transcriptions. Along with WebVid-10M, HD-Villa-100M is one of the most popular training data sets for generative video models.

Kinetics-600 (Carreira et al., 2018) contains short YouTube videos of 600 distinct human actions with their associated class labels. Each action class is represented by more than 600 video clips that last around 10 seconds. In total, the data set contains around 500,000 clips. The data set expands upon the previous Kinetics-400 (Kay et al., 2017) data set.

UCF101 (Soomro et al., 2012) is a data set of videos showing human actions. It contains over 13,000 YouTube video clips with a total duration of about 27 hours and an average length of 7 seconds. It expands upon the previous UCF50 (Reddy & Shah, 2013) data set, which includes only roughly half as many video clips and action classes. The clips have a resolution of 320×240 pixels. Each video has been annotated with a class label that identifies it as showing one of 101 possible actions. The 101 action classes are more broadly categorized into 5 action types: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports. While UCF101 was mainly intended for training and evaluating action classifiers, it has also been adopted as a benchmark for generative models. For this, the class labels are often used as text prompts. The generated videos are then usually evaluated using IS, FID, and FVD metrics.

Table 2: Commonly used image data sets for video diffusion model training.

| Data Set | # Images | Annotation | Labels | # Classes |
|---|---|---|---|---|
| ImageNet-21k (2015) | 14 mio. | Human | Class | 20,000 |
| ImageNet-1k (2015) | 1.28 mio. | Human | Class | 1,000 |
| MS-COCO (2014) | 328k | Human | Class | 91 |
| LAION-5B (2022) | 5.58 bn. | Automated | Text | |

MSR-VTT (Xu et al., 2016) includes about 10,000 short video clips from over 7,000 videos with a total run time of about 41 hours. The videos were retrieved based on popular video search queries and filtered according to quality criteria such as resolution and length. Each clip was annotated by 20 different humans with a short text description, yielding 200,000 video-text pairs. The data set was originally intended as a benchmark for automatic video annotation but has been used for evaluating text-to-video models as well. For this, CLIP text-similarity, FID, and FVD scores are usually reported.

Sky Time-lapse (Xiong et al., 2018) is a collection of unlabeled short clips that contain time-lapse shots of the sky. The videos have been taken from YouTube and divided into smaller non-overlapping segments. Each clip consists of 32 frames of continuous video at a resolution of 640×360 pixels. The clips show the sky at different times of day, under different weather conditions, and with different scenery in the background. The data set can serve as a benchmark for unconditional video generation or video prediction. In particular, it allows one to assess how well a given generative video model is able to replicate complex motion patterns of clouds and stars.

Tai-Chi-HD (Siarohin et al., 2019) contains over 3,000 unlabeled clips from 280 tai chi Youtube videos. The videos have been split into smaller chunks that range from 128 to 1024 frames and have a resolution of 256×256 pixels. Similar to Sky Time-lapse, Tai-Chi-HD can be used for training and evaluating unconditional generation or video prediction.
TikTok Dataset (Jafarian & Park, 2022) is a popular benchmark for the generation of human dancing videos. It comprises approximately 350 dance videos, each lasting between 10 and 15 seconds. The curated videos capture a single individual performing dance moves from monthly TikTok dance challenge compilations, showcasing moderate movements without significant motion blur. For each video, RGB images are extracted at a rate of 30 frames per second, resulting in a total of more than 100,000 images. Human poses, depth maps, segmentation masks, and UV coordinates are also provided for each video.

6.2 Image Data Sets

Video models are sometimes jointly trained on image and video data. Alternatively, they may extend a pretrained image generation model with temporal components that are fine-tuned on videos. Table 2 provides a brief overview of commonly used labeled image data sets.

ImageNet (Russakovsky et al., 2015) is a data set developed for the ImageNet Large Scale Visual Recognition Challenge that was held annually between 2010 and 2017. Since 2012, the same data set has been used for the main image classification task. ImageNet-21k is a large collection of over 14 million images that have each been annotated by humans with one object category label. Overall, there are 20,000 different object classes present in the data set, hierarchically organized according to the WordNet (Fellbaum, 1998) structure. The subset of this data set used for the ImageNet competition itself is often called ImageNet-1k. It contains over 1 million images that have each been annotated by humans with one object category label and a corresponding bounding box, covering only 1,000 object categories.

MS-COCO (Lin et al., 2014) was originally developed as a benchmark data set for object localization models. It contains over 300,000 images covering 91 different categories of everyday objects. Every instance of an object is labeled with a segmentation mask and a corresponding class label. Overall, there are about 2.5 million object instances in this data set.

LAION-5B (Schuhmann et al., 2022) is a very large public collection of 5.58 billion text-image pairs that can be found on the internet. Access is provided in the form of a list of links. To ensure a minimal level of correspondence between the images and their associated alt-texts, the pairs have been filtered by the following method: images and texts have both been encoded with a pre-trained CLIP (Radford et al., 2021) model, and pairs with a low cosine CLIP similarity have been excluded. To train image or video models, often only the subset of LAION-5B that contains English captions is used. It contains 2.32 billion text-image pairs and is referred to as LAION-2B. Additionally, labels for not safe for work (NSFW), watermarked, or toxic content are provided based on automated classification. The LAION-5B data set offers a relatively low level of curation, but its sheer size has proven very valuable for training large image and video models.

6.3 Evaluation Metrics

Figure 9: Commonly used algorithmic video evaluation metrics.

Human ratings are the most important evaluation method for video models, since the ultimate goal is to produce results that appeal to our aesthetic standards. To demonstrate the quality of a new model, subjects typically rate its output in comparison to an existing baseline: they are presented with pairs of generated clips from two different video models and asked to indicate which of the two examples they prefer with regard to a specific evaluation criterion. Depending on the study, the ratings can either purely reflect the subject's personal preference, or they can refer to specific aspects of the video such as temporal consistency and adherence to the prompt.
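Pairwise ratings of this kind are typically aggregated into a win rate over the baseline, and a simple binomial (sign) test indicates whether the observed preference is unlikely under chance. A minimal sketch with made-up vote counts (the study numbers are purely illustrative):

```python
from math import comb

def binomial_two_sided_p(wins: int, n: int) -> float:
    """Exact two-sided binomial test against the null of no preference (p = 0.5)."""
    pmf = [comb(n, k) * 0.5**n for k in range(n + 1)]
    observed = pmf[wins]
    # Sum the probabilities of all outcomes at most as likely as the observed one.
    return min(1.0, sum(p for p in pmf if p <= observed + 1e-12))

# Hypothetical study: 100 raters compare clips from model A against baseline B.
wins_a = 64           # times raters preferred model A
n_comparisons = 100
win_rate = wins_a / n_comparisons
p_value = binomial_two_sided_p(wins_a, n_comparisons)

assert win_rate == 0.64
assert p_value < 0.05  # a 64% win rate over 100 comparisons is unlikely under chance
```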
Humans are very good at judging what “looks natural” and at identifying small temporal inconsistencies. The downsides of human ratings include the effort and time needed to collect large enough samples, as well as the limited comparability across studies. For this reason, it is desirable to also report automated evaluation metrics. Human studies can in turn be used to measure how well the automated metrics align with human preferences, i.e., whether human judgments agree with metric results when assessing similar videos (Unterthiner et al., 2018; Huang et al., 2024; Liu et al., 2024).

Common automated evaluation metrics can be categorized into two types: 1) set-to-set comparison metrics and 2) unary metrics, as shown in Figure 9. The first category measures the difference between the generated set of data and a reference data set, typically using statistical measures such as the Fréchet distance (Dowson & Landau, 1982). The second category, unary metrics, does not require a reference set. This makes them suitable for applications like video generation in the wild or video editing, where a gold-standard reference is absent.

6.3.1 Set-to-set Comparison Metrics

Fréchet Inception Distance (FID, Heusel et al. 2017) measures the similarity between the output distribution of a generative image model and its training data. Rather than comparing the images directly, they are first encoded by a pre-trained inception network (Szegedy et al., 2016). The FID score is calculated as the squared Wasserstein distance between the image embeddings in the real and synthetic data. FID can be applied to individual frames in a video sequence to study the image quality of generative video models, but it fails to properly measure temporal coherence.

Fréchet Video Distance (FVD, Unterthiner et al. 2018) has been proposed as an extension of FID for the video domain.
Its inception net consists of a 3D Convnet pre-trained on action recognition tasks in YouTube videos (I3D, Carreira & Zisserman 2017). The authors demonstrate that the FVD measure is not only sensitive to spatial degradation (different kinds of noise), but also to temporal aberrations such as swapping of video frames. FVD is a commonly used metric for assessing the quality of unconditional or text-conditioned video generation.

Kernel Video Distance (KVD, Unterthiner et al. 2018) is an alternative to FVD. It is computed in an analogous manner, except that a polynomial kernel is applied to the features of the inception net. The authors found that FVD aligns better with human judgments than KVD. Nevertheless, both are commonly reported as benchmark metrics for unconditional video generation.

Fréchet Video Motion Distance (FVMD, Liu et al. 2024) is a metric focused on temporal consistency, measuring the similarity between motion features of generated and reference videos using the Fréchet distance. It begins by tracking keypoints using the pre-trained PIPs++ model (Zheng et al., 2023), then calculates the velocity and acceleration fields for each frame. The metric aggregates these features into statistical histograms and measures their differences using the Fréchet distance. FVMD assesses motion consistency by analyzing speed and acceleration patterns, on the assumption that smooth motion should follow physical laws and avoid abrupt changes.

In addition to video-based metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) (Wang et al., 2004) are commonly used image-level metrics for video quality assessment. Specifically, SSIM characterizes the brightness, contrast, and structural attributes of the reference and generated videos, while PSNR quantifies the ratio of the peak signal to the Mean Squared Error (MSE).
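Both metric families discussed here reduce to short computations once frame features are available: Fréchet-style metrics compare Gaussian statistics of embeddings, while PSNR compares pixels directly. A minimal numpy sketch (the embeddings are random stand-ins, and the Fréchet distance is simplified to diagonal covariances; real FID/FVD implementations use full covariance matrices and a matrix square root):

```python
import numpy as np

def frechet_distance_diag(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Squared Frechet distance between Gaussians fitted to two feature sets.

    Simplified to diagonal covariances for illustration; full FID/FVD uses
    complete covariance matrices and a matrix square root of their product.
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    var_a, var_b = feats_a.var(0), feats_b.var(0)
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.sum(var_a + var_b - 2.0 * np.sqrt(var_a * var_b)))

def psnr(reference: np.ndarray, generated: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 64))      # stand-in "real" video embeddings
fake_good = rng.normal(0.0, 1.0, size=(1000, 64)) # same distribution
fake_bad = rng.normal(0.5, 1.5, size=(1000, 64))  # shifted and overdispersed

# A matching distribution scores a much smaller distance than a mismatched one.
assert frechet_distance_diag(real, fake_good) < frechet_distance_diag(real, fake_bad)

frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
assert psnr(frame, frame) == float("inf")  # identical frames: zero error
```

For video, PSNR is computed per frame and averaged over the clip, and FVD replaces the per-image inception features with I3D video features.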
Originally proposed for imaging tasks such as super-resolution and in-painting, these metrics are nonetheless repurposed for video evaluation. Unlike the aforementioned methods, PSNR and SSIM do not require pre-trained models.

6.3.2 Unary Metrics

Inception Score (IS, Salimans et al. 2016) is applicable to generative models trained on data sets with categorical labels. An Inception Net (Szegedy et al., 2016) classifier pre-trained on the ImageNet data set (Deng et al., 2009) is used to predict the class labels of each generated image. The IS score is then expressed as the Kullback-Leibler divergence between the conditional class probability distribution p(y|G(x)) and the marginal class distribution p(y) of the generated samples. While IS aligns well with human ratings and possesses good discriminative power (Borji, 2019), it is susceptible to noise, as shown by Heusel et al. (2017). It should be noted that IS only assesses the quality of individual images. When applied to video data, it can therefore not take into account aspects such as temporal coherence between video frames. Saito et al. (2020) generalize the IS to the video domain, specifically for the UCF101 dataset (Soomro et al., 2012), where a pre-trained action recognition classifier (C3D, Tran et al. 2015) is used for score computation. However, this metric is highly specific to the UCF101 dataset and is hardly applicable to videos in the wild due to the difficulty of classification.

CLIP cosine similarity is often used to measure text-prompt and frame consistency, where a reference video data set is not needed. CLIP (Radford et al., 2021) is a family of contrastively trained vision-language models that project image and text data into a shared embedding space. During training, the distance between embedded images and their associated text labels is minimized. Thereby, visual concepts are represented close to words that describe them.
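The CLIP-based consistency scores built on such embeddings reduce to averaged cosine similarities. A minimal sketch with random stand-in vectors (no actual CLIP model is loaded; in practice the embeddings would come from CLIP image and text encoders):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def prompt_consistency(frame_embs: np.ndarray, text_emb: np.ndarray) -> float:
    """Mean CLIP similarity between each video frame and the text prompt."""
    return float(np.mean([cosine_sim(f, text_emb) for f in frame_embs]))

def frame_consistency(frame_embs: np.ndarray) -> float:
    """Mean CLIP similarity between temporally adjacent frames."""
    return float(np.mean([cosine_sim(a, b)
                          for a, b in zip(frame_embs[:-1], frame_embs[1:])]))

rng = np.random.default_rng(0)
text = rng.normal(size=512)                       # stand-in CLIP text embedding
frames = text + 0.1 * rng.normal(size=(16, 512))  # 16 frame embeddings near the prompt

# Frames that stay close to the prompt embedding score high on both measures.
assert prompt_consistency(frames, text) > 0.9
assert frame_consistency(frames) > 0.9
```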
The similarity between CLIP embeddings is typically measured through their cosine similarity: a value of 1 describes identical concepts, while a value of 0 implies completely unrelated concepts. In order to determine how well a video sequence adheres to the text prompt used to generate or edit it, the average similarity between each video frame and the text prompt is calculated (prompt consistency, Esser et al. 2023). In a similar fashion, it is also possible to get a rough measure of temporal coherence by computing the mean CLIP similarity between adjacent video frames in a sequence (frame consistency, Esser et al. 2023). In video editing tasks, the percentage of frames with a higher prompt consistency score in the edited over the original video is also sometimes reported (frame accuracy, Qi et al. 2023).

VBench (Huang et al., 2024) proposes a comprehensive set of fine-grained video evaluation metrics to assess temporal and frame-wise video quality, as well as video-text consistency in terms of semantics and style. It employs a number of pre-trained models, e.g., RAFT (Teed & Deng, 2020) for dynamic degree and MUSIQ (Ke et al., 2021) for imaging quality, along with heuristics-inspired algorithms, e.g., visual smoothness and temporal flickering, based on inter-frame interpolation and reconstruction error. The overall score is determined by a weighted sum of a number of fine-grained metrics, and the authors also conduct human studies to validate the effectiveness of these metrics.

Figure 10: Limitations of Automated Evaluation Metrics: We selected five video samples ranked by human assessment from worst to best. We compare quantitative evaluation results provided by the FVD (Unterthiner et al., 2018), FID (Heusel et al., 2017), VBench (Huang et al., 2024), and FVMD (Liu et al., 2024) metrics to demonstrate the limitations of algorithmic evaluation procedures. For example, video sample (a) is of the poorest quality but cannot be effectively distinguished from samples (b), (c), and (d) based on the FID or VBench metrics. However, the FVMD metric is better aligned with the video quality and motion consistency. The video samples are reproduced results collected from various models trained on the TikTok dataset (Jafarian & Park, 2022): (a) is from Magic Animate (Xu et al., 2023); (b), (c), and (e) from Animate Anyone (Hu et al., 2023b); and (d) from DisCo (Wang et al., 2023a).

6.4 Benchmarks

Commonly used evaluation datasets for video generation include UCF-101 (Soomro et al., 2012), MSR-VTT (Xu et al., 2016), Tai-Chi-HD (Siarohin et al., 2019), and Sky Time-lapse (Xiong et al., 2018). All four benchmarks can be calculated on samples that have been generated in an unconditional manner. For UCF-101, a second benchmark is sometimes reported on conditional generation, where the 101 class labels are used for guiding the generative process. In this case, IS can be used as an evaluation metric. For MSR-VTT, conditional generation using the 200,000 human video annotations as text prompts can also be evaluated. Here, CLIP text-similarity is often reported as a measure of text-video alignment. Most often, the benchmarked models are either directly trained on the train split of the evaluation data set, or they are pre-trained on a separate large video data set (such as WebVid-10M or HD-Villa-100M) and later fine-tuned on the evaluation data set. However, some papers evaluate their model in a zero-shot setting, where the model has not been trained on the evaluation data set at all. These discrepancies between evaluation setups mean that a direct comparison of benchmark results across studies should be taken with a grain of salt.

Table 3: Overview of video diffusion models and their applications.

| Paper | Model | Application | Max. Resolution | Methodology | Shots |
|---|---|---|---|---|---|
| Ho et al. (2022c) | VDM | T V L | 128×128×64 | FA ↑S ↑T AR | Many |
| Singer et al. (2022) | Make-a-Video | T I V | 768×768×76 | FA ↑S ↑T | Many |
| Ho et al. (2022a) | ImagenVideo | T | 1280×768×128 | FA ↑S ↑T | Many |
| Zhou et al. (2022) | MagicVideo | T I V | 1024×1024×61 | P L FA ↑S ↑T | Many |
| Blattmann et al. (2023b) | VideoLDM | T V | 2048×1280×90000 | P L FA ↑S ↑T AR | Many |
| Khachatryan et al. (2023) | Text2Video-Zero | T V | 512×512×8+ | P L | Many |
| Guo et al. (2023) | AnimateDiff | T | 512×512×16 | P L FA | Many |
| Chen et al. (2023d) | MCDiff | I | 256×256×10 | L AR | Many |
| Chen et al. (2023e) | SEINE | I | 512×320×16 | P L AR | Many |
| Yin et al. (2023) | Nuwa-XL | T L | NaN×NaN×1024 | P L FA ↑T | Many |
| He et al. (2022b) | LVDM | T L | 256×256×1024 | P L FA ↑T AR | Many |
| Harvey et al. (2022) | FDM | V L | 128×128×15000 | P FA ↑T AR | Many |
| Lu et al. (2023b) | VDT | I V | 256×256×30 | L FA ↑T | Many |
| Wang et al. (2023) | Gen-L-Video | T V L | 512×512×hundreds | P L 3D | NaN |
| Zhu et al. (2023) | MovieFactory | T L | 3072×1280×NaN | P L FA ↑S | Many |
| Sun et al. (2023) | GLOBER | T L | 256×256×128 | P L 3D FA | Many |
| Luo et al. (2023) | VideoFusion | T L | 128×128×512 | P ↑S AR | Many |
| Hu et al. (2023a) | GAIA-1 | T I L | 288×512×minutes | FA ↑T AR | Many |
| Lee et al. (2023a) | Soundini | A | 256×256×NaN | P | One |
| Lee et al. (2023b) | AADiff | I A | 512×512×150 | P L | Zero |
| Liu et al. (2023d) | Generative Disco | A | 512×512×NaN | P L | Zero |
| Tang et al. (2023) | Composable Diffusion | T I V A | 512×512×16 | P L FA | Many |
| Stypułkowski et al. (2023) | Diffused Heads | A | 128×128×8-9s | AR | Many |
| Zhua et al. (2023) | (Audio Heads) | A | 1024×1024×NaN | P ↑S AR | Many |
| Casademunt et al. (2023) | Laughing Matters | A | 128×128×50 | FA AR | Many |
| Molad et al. (2023) | Dreamix | I V | 1280×768×128 | P FA ↑S ↑T | One |
| Wu et al. (2022b) | Tune-A-Video | V | 512×512×100 | P L FA AR | One |
| Qi et al. (2023) | FateZero | V | 512×512×100 | P L FA AR | One |
| Liu et al. (2023b) | Video-P2P | V | 512×512×100 | P L FA AR | One |
| Ceylan et al. (2023) | Pix2Video | V | 512×512×NaN | P L FA AR | Zero |
| Esser et al. (2023) | Runway Gen-2 | T I V | 448×256×8 | P L FA | Many |
| Xing et al. (2023a) | Make-Your-Video | V | 256×256×64 | P L FA AR | Many |
| Ma et al. (2023) | Follow Your Pose | V | 512×512×100 | P L FA AR | Many |
| Zhao et al. (2023) | Make-A-Protagonist | V | 768×768×8 | P L FA | One |
| Bai et al. (2024) | UniEdit | V | 512×320×16 | P L | Zero |
| Ku et al. (2024) | AnyV2V | V | 512×512×16 | P L | Zero |
| Zhang et al. (2023c) | ControlVideo | V | 512×512×100 | P L 3D ↑T | Zero |
| Wang et al. (2023b) | vid2vid-zero | V | 512×512×8 | P L AR | Zero |
| Huang et al. (2023) | Style-A-Video | V | 512×256×NaN | P L | Zero |
| Yang et al. (2023b) | Rerender A Video | V | 512×512×NaN | P L ↑T AR | Zero |
| Liu et al. (2023a) | ColorDiffuser | V | 256×256×NaN | P L FA | Many |

Application: T: txt2vid, I: img2vid, V: vid2vid, A: aud2vid, L: long vid. Methodology: P: pre-trained model, L: latent space, 3D: full 3D attn./conv., FA: factorized attn./conv., ↑S: spatial upsampling, ↑T: temporal upsampling, AR: auto-regressive.

The benchmark results for video generation are summarized in Table 4. Make-A-Video (Singer et al., 2022), one of the early diffusion-based video models, still holds state-of-the-art FVD and IS scores on the UCF-101 conditional generation benchmark. It not only outperforms all GAN and auto-regressive models, but also the newer diffusion-based models. It is pre-trained on both the WebVid-10M and HD-Villa-100M data sets, which gives it an advantage in terms of the quantity of the training data over most other models. Make-A-Video also holds the best CLIP-similarity and FID scores in the zero-shot text-conditioned MSR-VTT benchmark.
Make-A-Video is outperformed by Make-Your-Video (Xing et al., 2023a) when it comes to zero-shot performance on UCF-101, although the latter uses depth maps as additional conditioning; the two models are therefore not directly comparable. VideoFusion (Luo et al., 2023) has achieved the best FVD score on the unconditional Tai-Chi-HD and Sky Time-lapse benchmarks, as well as the best KVD score on Tai-Chi-HD. It is outperformed by LVDM (He et al., 2022b) when it comes to KVD on the Sky Time-lapse benchmark. MagicVideo (Zhou et al., 2022) has the best FID score on UCF-101 and the best FVD score on MSR-VTT, although only a few other models in our comparison compete in those categories.

Table 4: Video generation benchmarks. Columns: Model, Resolution, Zero-Shot, Conditioning, Training; UCF-101: FID↓, FVD↓, IS↑; MSR-VTT: CLIP-Sim↑, FID↓, FVD↓; Tai-Chi-HD: FVD↓, KVD↓; Sky Time-lapse: FVD↓, KVD↓.

GAN Models
MoCoGAN (2018) 64×64 No 26998 12.42
TGAN-v2 (2020) 64×64 No 3431 26.6
TGAN-v2 (2020) 128×128 No 3497 28.87
TGAN-F (2020) 64×64 No 8942 13.62
TGAN-F (2020) 128×128 No 7817 22.91
DVD-GAN (2019) 128×128 No Class 32.97
MoCoGAN-HD (2021) 256×256 No UCF-101: test split incl. 700 34 144.7 25.4 183.6 13.9
DIGAN (2022) 128×128 No 655 29.7 128.1 20.6 114.6 6.8
DIGAN (2022) 128×128 No UCF-101: test split incl. 577 32.7 128.1 20.6 114.6 6.8

Transformer Models
VideoGPT (2021) 128×128 No 24.69
NUWA (2022a) 128×128 No VATEX 0.2439 47.7
TATS-base (2022) 128×128 No 420 57.6 94.6 8.8 132.6 5.7
TATS-base (2022) 128×128 No Class 332 79.3
CogVideo (Chinese) (2022) 480×480 Yes Text internal data 185 751.3 23.6 0.2614 24.8
CogVideo (English) (2022) 480×480 Yes Text internal data 179 701.6 25.3 0.2631 23.6
CogVideo (English) (2022) 160×160 Yes Text internal data 626 50.5 49 1294

Diffusion Models
VDM (2022c) 64×64 No 295 57
Make-a-Video (2022) 256×256 Yes Text WebVid10M, HD-VILA-10M 367.2 33 0.3049 13.2
Make-a-Video (2022) 256×256 No Text WebVid10M, HD-VILA-10M 81.3 82.6
MagicVideo (2022) 256×256 Yes Text WebVid10M, HD-VILA-100M 145 665 36.5 998
LVDM (2022b) 256×256 No UCF-101: test split incl. 372 99 15.3 95.2 3.9
VideoLDM (SD 1.4) (2023b) 1280×2048 Yes Text WebVid-10M 656.5 29.5 0.2848
VideoLDM (SD 2.1) (2023b) 1280×2048 Yes Text WebVid-10M 550.6 33.5 0.2929
PYoCo (2023) 1024×1024 Yes Text internal data 355.19 47.76 22.14
Make-Your-Video (2023a) 256×256 Yes Text + Depth WebVid-10M 330.5
VDT (2023b) 64×64 No 225.7
VideoFusion (2023) 128×128 No 220 72.2 56.4 6.9 47 5.3
VideoFusion (2023) 128×128 No Text 173 80
GLOBER (2023) 128×128 No 239.5 124.2
GLOBER (2023) 128×128 No Text 151.5
GLOBER (2023) 256×256 No 252.7 78.1
GLOBER (2023) 256×256 No Text 168.9
LaVie (2023c) 512×320 Yes Text Vimeo25M 526.3 0.2949

7 Video Generation

7.1 Unconditional Generation & Text-to-Video

Unconditional video generation and text-conditioned video generation are common benchmarks for generative video models. Prior to diffusion models, Generative Adversarial Networks (GANs, Goodfellow et al. 2014, Melnik et al. 2024) and auto-regressive transformer models (Vaswani et al., 2017) were popular choices for generative video tasks.
In the following, we provide a short overview of a few representative GAN and auto-regressive transformer video models. We then introduce a selection of competing diffusion models starting in Sec. 7.1.3.

7.1.1 GAN Video Models

TGAN (Saito et al., 2017) employs two generator networks: the temporal generator creates latent features that represent the motion trajectory of a video. This feature vector can be fed into an image generator that creates a fixed number of video frames in pixel space. TGAN-v2 (Saito et al., 2020) uses a cascade of generator modules to create videos at various temporal resolutions, making the process more efficient. TGAN-F (Kahembwe & Ramamoorthy, 2020) is another improved version that relies on lower-dimensional kernels in the discriminator network.

MoCoGAN (Tulyakov et al., 2018) decomposes the latent space into motion- and content-specific parts by employing two separate discriminators for individual frames and video sequences. At inference time, the content vector is kept fixed while the next motion vector for each frame is predicted in an auto-regressive manner using a neural network. MoCoGAN was evaluated on unconditional video generation on the UCF-101 and Tai-Chi-HD datasets and achieved higher IS scores than the preceding TGAN and VGAN models.

DVD-GAN (Clark et al., 2019) uses a dual discriminator setup similar to MoCoGAN. The main difference is that DVD-GAN does not use auto-regressive prediction but instead generates all video frames in parallel. It outperformed previous methods such as TGAN-v2 and MoCoGAN on the UCF-101 dataset in terms of IS score, although DVD-GAN conditioned its generation on class labels, whereas the other approaches were unconditional.

MoCoGAN-HD (Tian et al., 2021) disentangles content and motion in a different way from the previous approaches. A motion generator is trained to predict a latent motion trajectory, which can then be passed as input to a fixed image generator.
It outperformed previous approaches on unconditional generation in the UCF-101, Tai-Chi-HD, and Sky Time-lapse benchmarks.

DIGAN (Yu et al., 2022) introduces an implicit neural representation-based video GAN architecture that can efficiently represent long video sequences. It follows a similar content-motion split as discussed above. The motion discriminator judges temporal dynamics based on pairs of video frames rather than the whole sequence. These improvements enable the model to generate longer video sequences of 128 frames. DIGAN achieved state-of-the-art results on UCF-101 in terms of IS and FVD score, as well as on Sky Time-lapse and Tai-Chi-HD in terms of FVD and KVD scores.

7.1.2 Auto-Regressive Transformer Video Models

VideoGPT (Yan et al., 2021) uses a 3D VQ-VAE (Van Den Oord et al., 2017) to learn a compact video representation. An auto-regressive transformer model is then trained to predict the latent code of the next frame based on the preceding frames. While VideoGPT did not outperform the best GAN-based models at the time, namely TGAN-v2 and DVD-GAN, it achieved a respectable IS score on the UCF-101 benchmark considering its simple architecture.

NÜWA (Wu et al., 2022a) also uses a 3D VQ-VAE with an auto-regressive transformer generator. It is pre-trained on a variety of data sets that enable it to perform various generation and editing tasks in the video and image domains. Its text-conditioned video generation capability was evaluated on the MSR-VTT data set.

TATS (Ge et al., 2022) introduces several improvements that address the issue of quality degradation that auto-regressive transformer models face when generating long video sequences. It beat previous methods on almost all metrics for UCF-101 (unconditional and class-conditioned), Tai-Chi-HD, and Sky Time-lapse. Only DIGAN maintained a higher FVD score on the Sky Time-lapse benchmark.

CogVideo (Hong et al., 2022) is a text-conditioned transformer model.
It is based on the pre-trained text-to-image model CogView2 (Ding et al., 2022), which is expanded with spatio-temporal attention layers. The GPT-like transformer generates key frames in a latent VQ-VAE space, and a second upsampling model interpolates them to a higher framerate. The model was trained on an internal data set of 5.4 million annotated videos with a resolution of 160 × 160. It was then evaluated on the UCF-101 data set in a zero-shot setting by using the 101 class labels as text prompts. It beat most other models in terms of FVD and IS score except for TATS.

7.1.3 Diffusion Models

Producing realistic videos based on only a text prompt is one of the most challenging tasks for video diffusion models. A key problem lies in the relative lack of suitable training data. Publicly available video data sets are usually unlabeled, and human-annotated labels may not even accurately describe the complex relationship between spatial and temporal information. Many authors therefore supplement the training of their models with large data sets of labeled images or build on top of a pre-trained text-to-image model. The first video diffusion models (Ho et al., 2022c) had very high computational demands paired with relatively low visual fidelity. Both aspects have been significantly improved through architectural advancements, such as moving the denoising process to the latent space of a variational auto-encoder (He et al., 2022b; Zhou et al., 2022; Chen et al., 2023a; 2024; Blattmann et al., 2023a; Zhang et al., 2023a) and using upsampling techniques such as CDMs (Ho et al. 2022a; Wang et al. 2023c, see Section 4.3).

Ho et al. (2022c) present an early diffusion-based video generation model called VDM. It builds on the 3D UNet architecture proposed by Çiçek et al. (2016), extending it with factorized spatio-temporal attention blocks. This produces videos that are 16 frames long and 64 × 64 pixels in size.
These low-resolution videos can then be extended to 128 × 128 pixels and 64 frames using a larger upsampling model. The models are trained on a relatively large data set of labeled videos as well as single frames from those videos, which enables text-guided video generation at inference time. However, this also poses a limitation of the approach, since labeled video data is relatively difficult to come by.

Singer et al.'s (2022) Make-a-Video addresses this issue by combining supervised training of their model on labeled images with unsupervised training on unlabeled videos. This allows them to access a wider and more diverse pool of training data. They also split the convolution layers in their UNet model into 2D spatial convolutions and 1D temporal convolutions, thereby alleviating some of the computational burden associated with a full 3D UNet. Finally, they train a masked spatio-temporal decoder on temporal upsampling or video prediction tasks. This enables the generation of longer videos of up to 76 frames. Make-a-Video was evaluated on the UCF-101 and MSR-VTT benchmarks, where it outperformed all previous GAN and auto-regressive transformer models.

Ho et al. (2022a) use a cascaded diffusion process (Ho et al. 2022b, see Fig. 4) to generate high-resolution videos in their model called ImagenVideo. They start with a base model that synthesizes videos with 40 × 24 pixels and 16 frames, and upsample its output over six additional diffusion models to a final resolution of 1280 × 768 pixels and 128 frames. The low-resolution base model uses factorized space-time convolutions and attention. To preserve computational resources, the upsampling models rely only on convolutions. ImagenVideo is trained on a large proprietary data set of labeled videos and images in parallel, enabling it to emulate a variety of visual styles. The model also demonstrates the ability to generate animations of text, which most other models struggle with.
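Factorized spatio-temporal attention, used by both VDM and ImagenVideo, replaces full attention over all T·H·W positions with spatial attention within each frame plus temporal attention across frames at each spatial location. The following is an illustrative cost model of that saving in terms of query-key pairs, not either model's actual implementation:

```python
def attention_pairs_joint(t, h, w):
    """Query-key pairs for full 3D spatio-temporal attention."""
    n = t * h * w
    return n * n

def attention_pairs_factorized(t, h, w):
    """Spatial attention within each frame plus temporal attention
    across frames at each spatial location."""
    spatial = t * (h * w) ** 2   # each of t frames attends within itself
    temporal = (h * w) * t ** 2  # each of h*w locations attends across frames
    return spatial + temporal

# 16 frames at 64x64 resolution, as in the VDM base model:
print(attention_pairs_joint(16, 64, 64))       # -> 4294967296
print(attention_pairs_factorized(16, 64, 64))  # -> 269484032
```

For these dimensions the factorization cuts the number of attention pairs by roughly a factor of 16, and the gap widens further as the number of frames grows.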
Zhou et al.'s (2022) MagicVideo adapts the Latent Diffusion Models (Rombach et al. 2022, see Fig. 4) architecture for video generation tasks. In contrast to the previous models that operate in pixel space, their diffusion process takes place in a low-dimensional latent embedding space defined by a pre-trained variational auto-encoder (VAE). This significantly improves the efficiency of the video generation process. The VAE is trained on video data and can thereby reduce motion artifacts compared to the VAEs used in text-to-image models. The authors use a pre-trained text-to-image model as the backbone of their video model with added causal attention blocks. The model is fine-tuned on data sets of labeled and unlabeled videos. It produces videos of 256 × 256 pixels and 16 frames that can be upsampled to 1024 × 1024 pixels and 61 frames using separate spatial and temporal super-resolution models. In addition to text-to-video generation, the authors also demonstrate video editing and image animation capabilities of their model.

Blattmann et al. (2023b) present another adaptation of the Latent Diffusion Models (Rombach et al., 2022) architecture to text-to-video generation tasks called VideoLDM. Similar to Zhou et al. (2022), they add temporal attention layers to a pre-trained text-to-image diffusion model and fine-tune them on labeled video data. They demonstrate that, in addition to text-to-video synthesis, their model is capable of generating long driving-car video sequences in an auto-regressive manner, as well as of producing videos of personalized characters using Dreambooth (Ruiz et al., 2023).

Figure 11: Image conditioning methods for image-to-video generation models.
a) Input images can be conditioned on in the attention layers of the video generation model. b) Input images can be provided as extra input channels to the diffusion model. c) Input images can be conditioned on jointly with other modalities, such as optical flow.

7.2 Training-Free Models

Khachatryan et al.'s (2023) Text2Video-Zero completely eschews the need for video training data, relying instead only on a pre-trained text-to-image diffusion model that is augmented with cross-frame attention blocks. Motion is simulated by applying a warping function to latent frames, although the resulting movement lacks realism compared to models trained on video data. Spatio-temporal consistency is improved by masking foreground objects with a trained object-detector network and smoothing the background across frames. Similar to Zhou et al. (2022), the diffusion process takes place in latent space.

7.3 Personalized Generation

Personalized generation allows a user to adjust a pre-trained text-to-image model such that it learns concepts from a small set of personal images. Popular methods for this are model fine-tuning (e.g. Dreambooth, Ruiz et al. 2023, or LoRA, Hu et al. 2021) as well as textual inversion (Gal et al., 2022).

Guo et al. (2023) offer a text-to-video model developed with personalized image generation in mind. Their AnimateDiff extends a pre-trained Stable Diffusion model with a temporal adapter module containing only self-attention blocks trained on video data. In this way, simple movement can be induced. The authors demonstrate that their approach is compatible with personalized image generation techniques such as Dreambooth (Ruiz et al., 2023) and LoRA (Hu et al., 2021).

7.4 Image-Conditioned Generation

An important limitation of text-to-video models is their lack of controllability, as the video content can only be determined by an input text prompt.
To mitigate this issue, recent research has focused on introducing additional image conditioning signals into the video generation process. Image conditioning can be achieved by injecting semantic image embeddings (e.g. CLIP (Radford et al., 2021) image embeddings) (Wang et al., 2024a; Zhang et al., 2023b; Xing et al., 2023b; Chen et al., 2023a) or image VAE latents (Ren et al., 2024; Zhang et al., 2024) into attention layers (Figure 11 a), by adding extra input channels that represent the conditioning image (Figure 11 b) (Girdhar et al., 2023; Chen et al., 2023e; Zeng et al., 2023; Blattmann et al., 2023a), or by joint conditioning with other modalities such as optical flow (Figure 11 c) (Chen et al., 2023d; Shi et al., 2024). Image-conditioned generation also enables a wide range of applications, such as auto-regressive long video generation (Zeng et al., 2023; Chen et al., 2023e; Ren et al., 2024), looping video generation (Xing et al., 2023b), generative frame interpolation (Zeng et al., 2023; Xing et al., 2023b), and visual storytelling (Zeng et al., 2023; Xing et al., 2023b).

Chen et al. (2023d) focus on the task of animating images in accordance with motion cues. Their Motion-Conditioned Diffusion Model (MCDiff) accepts an input image and lets the user indicate the desired motion by drawing strokes on top of it. The model then produces a short video sequence in which objects move in accordance with the motion cues. It can distinguish between foreground motion (e.g. actor movement) and background motion (i.e. camera movement), depending on the context. The authors use an auto-regressive approach to generate each video frame conditioned on the previous frame and the predicted motion flow. For this, the input motion strokes are decomposed into smaller segments and passed to a UNet flow completion model that predicts the motion in the following frame. A denoising diffusion model receives this information and uses it to synthesize the next frame.
The flow completion model and the denoising model are first trained separately but later fine-tuned jointly on unannotated videos.

Chen et al.'s (2023e) SEINE trains an image-conditioned video generation model by concatenating the VAE latent of the image along the channel dimension of the input noise and adding an extra mask channel indicating which frames need to be predicted. This enables flexible image conditioning, such that the model can generate videos given any frames as conditional signals. SEINE is initialized from the text-to-video model LaVie (Wang et al., 2023c) and trained on WebVid-10M (Bain et al., 2021) along with internal private data. During inference, the model is able to perform auto-regressive long video generation (by reusing the last frame of a previous video clip as the first frame of the next one), to generate transitions between different scenes (by using two frames from different scenes as the conditioning first and last frames and generating the intermediate frames), and to perform image animation (by conditioning the video generation process on the input first frame).

8 Video Completion & Long Video Generation

Most video diffusion models can only generate a fixed number of video frames per sequence. In order to circumvent this limitation, auto-regressive extension and temporal upsampling methods have been proposed (see Section 5.2). Models adopting these methods often adjust and combine them in unique ways that benefit computational speed or consistency. A common problem with these approaches is that they tend to generate videos that suffer from repetitive content. Some models have therefore explored ways to generate videos with changing scenes by varying the text prompts over time.

8.1 Temporal Upsampling & Video Prediction

Yin et al.'s (2023) NUWA-XL model uses an iterative hierarchical approach to generate long video sequences of several minutes.
It first generates evenly spaced key frames from separate text prompts that form a rough outline of the video. The frames in-between are then filled in with a local diffusion model conditioned on two key frames. This process is applied iteratively to increase the temporal resolution with each pass. Since this can be parallelized, the model achieves much faster computation times than auto-regressive approaches for long video generation. The authors train the model on a new training data set consisting of annotated Flintstones cartoons. Simple temporal convolution and attention blocks are inserted into the pre-trained text-to-image model to learn temporal dynamics.

He et al. (2022b) tackle the task of generating long videos with over 1,000 frames with their Long Video Diffusion Model (LVDM). It combines auto-regressive and hierarchical approaches for first generating long sequences of key frames and then filling in missing frames. In order to reduce quality degradation induced by auto-regressive sampling, the authors use classifier-free guidance and conditional latent perturbation, which conditions the denoising process on noisy latents of reference frames. The model utilizes a dedicated video encoder and combines 2D spatial with 1D temporal self-attention. It can be used for unconditional video generation or text-to-video tasks.

Harvey et al. (2022) similarly explore methods for generating long video sequences with video models that have a fixed number of output frames. Their Flexible Diffusion Model (FDM) accepts an arbitrary number of conditioning frames to synthesize new frames, thereby allowing it to either extend the video in an auto-regressive manner or to use a hierarchical approach (similar to NUWA-XL, Yin et al. 2023). The authors explore variations of these sampling techniques and suggest an automated optimization routine that finds the best one for a given training data set.

Lu et al.
(2023b) propose the Video Diffusion Transformer (VDT), a diffusion-based video model that uses a vision transformer architecture (Peebles & Xie, 2022). The reported advantages of this type of architecture over the commonly used UNet include the ability to capture long-range temporal dynamics, the ability to accept conditioning inputs of varying lengths, and the scalability of the model. VDT was trained on narrower data sets of unlabeled videos and accomplished tasks such as video prediction, temporal interpolation, and image animation in those restricted domains.

8.2 Alternative Approaches

Wang et al.'s (2023) Gen-L-Video generates long video sequences by denoising overlapping shorter video segments in parallel. A video diffusion model predicts the denoised latent in each video segment individually. The noise prediction for a given frame is then aggregated through interpolation across all segments in which it appears. This leads to greater coherence across the long video sequence. The authors apply this new method to existing frameworks in the text-to-video (LVDM, He et al. 2022b), tuning-free video-to-video (Pix2Video, Ceylan et al. 2023), and one-shot tuning video-to-video (Tune-A-Video, Wu et al. 2022b) domains.

Zhu et al. (2023) follow a unique approach to generating long video sequences in their MovieFactory model. Rather than extending a single video clip, they generate a movie-like sequence of separate related clips from a single text prompt. ChatGPT is used to turn the brief text prompt into ten detailed scene descriptions. Each scene description is then passed as a prompt to the video diffusion model to generate a segment of the video sequence. Finally, audio clips matching each video scene are retrieved from a sound database. The pre-trained text-to-image model (Stable Diffusion 2.0) is first expanded with additional ResNet and attention blocks that are trained to produce wide-screen images.
In a second training step, 1D temporal convolution and attention blocks are added to learn temporal dynamics.

Sun et al.'s (2023) GLOBER is a model for generating videos of arbitrary length that does not rely on auto-regressive or hierarchical approaches. Instead, it first uses a video KL-VAE auto-encoder to extract global 2D features from key frames. It then provides these global features along with arbitrary frame indices to a UNet diffusion model that can directly generate frames at those positions. To ensure the temporal coherence and realism of the generated frames, a novel adversarial loss is introduced. During training, an adversarial discriminator model receives pairs of video frames at random positions along with their indices and has to predict whether both frames originated from the input video, or whether one or both were generated by the diffusion model. To enable inference, a generator model based on the Diffusion Transformer architecture (Peebles & Xie, 2022) is trained to produce global features that mimic those of the video encoder given text prompts. GLOBER surpasses several competing models in terms of FVD score, but its main advantage is a much faster computation time compared to auto-regressive methods.

Luo et al. (2023) improve the temporal coherence of their VideoFusion model by decomposing the noise added during the forward diffusion process. A base noise component is shared across all frames and characterizes the content of the entire video, while a residual component is specific to each frame and partially related to the motion of objects. This approach saves computational resources, since a smaller residual-generator denoising model can be used to estimate the residual noise for each frame, whereas the base noise has to be estimated only once for the entire video using a pre-trained image model. The pre-trained base generator is fine-tuned jointly with the residual generator.
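VideoFusion's noise decomposition can be sketched as mixing a base noise vector shared by all frames with a per-frame residual, weighted so the combined noise stays unit-variance. The sketch below is ours (variable names like `lam` for the mixing ratio are assumptions); the actual model estimates these components with neural networks rather than sampling them directly:

```python
import random

def decomposed_noise(num_frames, dim, lam=0.5, seed=0):
    """VideoFusion-style noise split (simplified sketch): each frame's
    noise mixes a base component shared across all frames with a
    per-frame residual component. `lam` controls how much of the
    noise is shared; the sqrt weights keep unit variance overall."""
    rng = random.Random(seed)
    base = [rng.gauss(0, 1) for _ in range(dim)]          # shared per video
    frames = []
    for _ in range(num_frames):
        residual = [rng.gauss(0, 1) for _ in range(dim)]  # per frame
        frames.append([(lam ** 0.5) * b + ((1 - lam) ** 0.5) * r
                       for b, r in zip(base, residual)])
    return base, frames

base, frames = decomposed_noise(num_frames=4, dim=8)
# All frames are correlated through the shared base component,
# which is what encourages temporal coherence across the clip.
```

Setting `lam` close to 1 makes all frames nearly identical noise (maximally coherent but static), while `lam` close to 0 recovers independent per-frame noise.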
Hu et al.'s (2023a) GAIA-1 is a hybrid transformer-diffusion model that can generate driving-car videos conditioned on images, text, or action tokens that represent speed and movement trajectories. During training, it first uses a VQ-GAN to transform input video frames into discrete tokens. An auto-regressive transformer world model is used to predict the next token in the sequence based on all preceding tokens using causal masking. A diffusion-based video decoder then translates the tokens back to pixel space by denoising random noise patterns conditioned on the generated token sequence. The decoder is trained to enable flexible applications such as auto-regressive video generation and frame interpolation.

9 Audio-conditioned Synthesis

Multimodal synthesis might be the most challenging task for video diffusion models. A key problem lies in how associations between different modalities can be learned. Similar to how CLIP models (Radford et al., 2021) encode text and images in a shared embedding space, many models learn a shared semantic space for audio, text, and/or video through techniques such as contrastive learning (Chen et al., 2020).

9.1 Audio-conditioned Generation & Editing

Lee et al.'s (2023a) Soundini model enables local editing of scenic videos based on sound clips. A binary mask can be specified to indicate a video region that is intended to be made visually consistent with the auditory contents of the sound clip. To this end, a sliding-window selection of the sound clip's mel spectrogram is encoded into a shared audio-image semantic space. During training, two loss functions are minimized to condition the denoising process on the embedded sound clips: the cosine similarity between the encoded audio clip and the image latent influences the generated video content, whereas the cosine similarity between the image and audio gradients is responsible for synchronizing the video with the audio signal.
In contrast to other models, Soundini does not extend its denoising UNet to the video domain, generating single frames in isolation instead. To improve temporal consistency, bidirectional optical flow guidance is used to warp neighboring frames towards each other.

Lee et al. (2023b) generate scenic videos from text prompts and audio clips with their Audio-Aligned Diffusion Framework (AADiff). An audio clip is used to identify a target token from the provided text tokens, based on the highest similarity of the audio clip embedding with one of the text token embeddings. For instance, a crackling sound might select the word "burning". While generating video frames, the influence of the selected target token on the output frame is modulated through attention map control (similar to Prompt-to-Prompt, Hertz et al. 2022) in proportion to the sound magnitude. This leads to changes in relevant video elements that are synchronized with the sound clip. The authors also demonstrate that their model can be used to animate a single image and that several sound clips can be inserted in parallel. The model uses a pre-trained text-to-image model to generate each video frame without additional fine-tuning on videos or explicit modeling of temporal dynamics.

Liu et al.'s (2023d) Generative Disco provides an interactive interface to support the creation of music visualizations. These are implemented as visual transitions between image pairs created with a diffusion model from user-specified text prompts. The interval between the two images is filled according to the beat of the music, using a form of interpolation that employs design patterns to cause shifts in color, subject, or style, or to set a transient video focus on subjects. A large language model can further assist the user with choosing suitable prompts.
While the model is restricted to simple image transitions and therefore not able to produce realistic movement, it highlights the creative potential of video diffusion models for music visualization.

Tang et al. (2023) present a model called Composable Diffusion that can generate any combination of output modalities based on any combination of input modalities, including text, images, videos, and sound. Encoders for the different modalities are aligned in a shared embedding space through contrastive learning. The diffusion process can then be flexibly conditioned on any combination of input modalities by linearly interpolating between their embeddings. A separate denoising diffusion model is trained for each of the output modalities, and information between the modality-specific models is shared through cross-attention blocks. The video model uses simple temporal attention as well as the temporal shift method from An et al. (2023) to ensure consistency between frames.

9.2 Talking Head Generation

Stypułkowski et al. (2023) have developed the first diffusion model for generating videos of talking heads. Their model Diffused Heads takes a reference image of the intended speaker as well as a speech audio clip as input. The audio clip is divided into short chunks that are individually embedded through a pre-trained audio encoder. During inference, the reference image as well as the last two generated video frames are concatenated with the noisy version of the current video frame and passed through a 2D UNet. Additionally, the denoising process is conditioned on a sliding-window selection of the audio embeddings. The generated talking faces move their lips in sync with the audio and display realistic facial expressions.

Zhua et al. (2023) follow a similar approach, but instead of using a reference image, their model accepts a reference video that is transformed to align with the desired audio clip.
Face landmarks are first extracted from the video, and then encoded into eye blink embeddings and mouth movement embeddings. The mouth movements are aligned with the audio clip using contrastive learning. Head positions and eye blinks are encoded with a VAE, concatenated together with the synchronized mouth movement embeddings, and passed as conditioning information to the denoising UNet.

Casademunt et al. (2023) focus on the unique task of laughing head generation. Similar to Diffused Heads (Stypułkowski et al., 2023), their model takes a reference image and an audio clip of laughter to generate a matching video sequence. The model combines 2D spatial convolutions and attention blocks with 1D temporal convolutions and attention. This saves computational resources over a fully 3D architecture and allows it to process 16 video frames in parallel. Longer videos can be generated in an auto-regressive manner. The authors demonstrate the importance of using a specialized audio encoder for embedding the laughter clips in order to generate realistic results.

10 Video Editing

Editing can mean a potentially wide range of operations such as adjusting the lighting, style, or background, changing, replacing, re-arranging, or removing objects or persons, modifying movements or entire actions, and more. To avoid having to make cumbersome specifications for possibly a large number of video frames, a convenient interface is required. To achieve this, most approaches rely on textual prompts that offer a flexible way to specify desired edit operations at a convenient level of abstraction and generality. However, completely unconstrained edit requests may be in conflict with desirable temporal properties of a video, leading to a major challenge of how to balance temporal consistency and editability (see Section 5.3). To this end, many authors have experimented with conditioning the denoising process on preprocessed features of the input video.
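A recurring consistency mechanism in the editing models discussed in this section is cross-frame attention, where the queries of the current frame attend to keys and values taken from a reference frame (often the first) rather than from the frame itself. A toy single-head version in plain Python (learned Q/K/V projections, multi-head logic, and batching omitted; frames are simply lists of feature vectors):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_frame_attention(query_frame, anchor_frame):
    """Toy single-head attention: queries come from the current frame,
    but keys and values come from an anchor frame, so the output is
    re-expressed in terms of the anchor's features. This coupling to a
    shared reference is what promotes temporal consistency."""
    d = len(anchor_frame[0])
    out = []
    for q in query_frame:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in anchor_frame]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, anchor_frame))
                    for j in range(d)])
    return out

anchor = [[1.0, 0.0], [0.0, 1.0]]   # features of the reference frame
current = [[1.0, 0.0]]              # one query position in a later frame
print(cross_frame_attention(current, anchor))
```

Each output vector is a convex combination of the anchor frame's features, weighted toward the anchor positions most similar to the query; replacing `anchor_frame` with `query_frame` recovers ordinary per-frame self-attention.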
One-shot tuning methods first fine-tune their parameters on the ground-truth video. This ensures that the video content and structure can be reconstructed with good quality. On the other hand, tuning-free methods are not fine-tuned on the ground-truth video, which makes the editing computationally more efficient.

10.1 One-Shot Tuning Methods

Molad et al. (2023) present a diffusion video editing model called Dreamix based on the ImagenVideo (Ho et al., 2022a) architecture. It first downsamples an input video, adds Gaussian noise to the low-resolution version, and then applies a denoising process conditioned on a text prompt. The model is fine-tuned on each input video and follows the joint training objective of preserving the appearance of both the entire video and individual frames. The authors demonstrate that the model can edit the appearance of objects as well as their actions. It is also able to take either a single input image or a collection of images depicting the same object and animate it. Like ImagenVideo (Ho et al., 2022a), Dreamix operates in pixel space rather than latent space. Together with the need to fine-tune the model on each video, this makes it computationally expensive.

Wu et al. (2022b) base their Tune-A-Video on a pre-trained text-to-image diffusion model. Rather than fine-tuning the entire model on video data, only the projection matrices in the attention layers are trained on a given input video. The spatial self-attention layer is replaced with a spatio-temporal layer attending to previous video frames, while a new 1D temporal attention layer is also added. The structure of the original frames is roughly preserved by using latents obtained with DDIM inversion as the input for the generation process.
The advantages of this approach are that fine-tuning the model on individual videos is relatively quick and that extensions developed for text-to-image tasks, such as ControlNet (Zhang & Agrawala, 2023) or Dreambooth (Ruiz et al., 2023), can be utilized. Several models have subsequently built upon the Tune-A-Video approach and improved it in different ways:

Qi et al. (2023) employ an attention blending method inspired by Prompt-to-Prompt (Hertz et al., 2022) in their FateZero model. They first obtain a synthetic text description of the middle frame of the original video through BLIP (Li et al., 2022) that can be edited by the user. While generating a new image from the latent obtained through DDIM inversion, they blend the self- and cross-attention masks of unedited words with the original ones obtained during the inversion phase. In addition to this, they employ a masking operation that limits the edits to regions affected by the edited words in the prompt. This method improves the consistency of generated videos while allowing for greater editability compared to Tune-A-Video.

Liu et al. (2023b) also base their Video-P2P model on Tune-A-Video and, similar to FateZero, they incorporate an attention tuning method inspired by Prompt-to-Prompt. Additionally, they augment the DDIM inversion of the original video by using Null-text inversion (Mokady et al., 2023), thereby improving its reconstruction ability.

10.2 Depth-conditioned Editing

Ceylan et al.'s (2023) Pix2Video continues the trend of using a pre-trained text-to-image model as the backbone for video editing tasks. In contrast to the previous approaches, however, it eliminates the need for fine-tuning the model on each individual video. In order to preserve the coarse spatial structure of the input, the authors use DDIM inversion and condition the denoising process on depth maps extracted from the original video.
Temporal consistency is ensured by injecting latent features from previous frames into self-attention blocks in the decoder portion of the UNet. The projection matrices of the stock text-to-image model are not altered. Despite using a comparatively lightweight architecture, the authors demonstrate good editability and consistency in their results.

Esser et al.'s (2023) Runway Gen-1 enables video style editing while preserving the content and structure of the original video. This is achieved on the one hand by conditioning the diffusion process on CLIP embeddings extracted from a reference video frame (in addition to the editing text prompt), and on the other hand by concatenating extracted depth estimates to the latent video input. The model uses 2D spatial and 1D temporal convolutions as well as 2D + 1D attention blocks. It is trained on video and image data in parallel. Predictions of both modes are combined in a way inspired by classifier-free guidance (Ho & Salimans, 2022), allowing for fine-grained control over the trade-off between temporal consistency and editability. The successor model Runway Gen-2 (unpublished) also adds image-to-video and text-to-video capabilities.

Xing et al. (2023a) extend a pre-trained text-to-image model conditioned on depth maps to video editing tasks in their Make-Your-Video model, similar to Pix2Video (Ceylan et al., 2023). They add 2D spatial convolution and 1D temporal convolution layers, as well as cross-frame attention layers, to their UNet. A causal attention mask limits the number of reference frames to the four immediately preceding ones, as the authors note that this offers the best trade-off between image quality and coherence. The temporal modules are trained on a large unlabeled video data set (WebVid-10M, Bain et al. 2021).

10.3 Pose-conditioned Editing

Ma et al.'s (2023) Follow Your Pose conditions the denoising process in Tune-A-Video on pose features extracted from an input video.
The pose features are encoded and downsampled using convolutional layers and passed to the denoising UNet through residual connections. The pose encoder is trained on image data, whereas the spatio-temporal attention layers (same as in Tune-A-Video) are trained on video data. The model generates output that is less bound by the source video while retaining relatively natural movement of subjects. Zhao et al.\u2019s (2023) Make-A-Protagonist combines several expert models to perform subject replacement and style editing tasks. Their pipeline is able to detect and isolate the main subject (i.e. the \u201cprotagonist\u201d) of a video through a combination of Blip-2 (Li et al., 2023b) interrogation, Grounding DINO (Liu et al., 2023c) object detection, Segment Anything (Kirillov et al., 2023) object segmentation, and XMem (Cheng & Schwing, 2022) mask tracking across the video. The subject can then be replaced with that from a reference image through Stable Diffusion inpainting with ControlNet depth map guidance. Additionally, the background can be changed based on a text prompt. The pre-trained Stable Diffusion UNet model is extended by cross-frame attention and fine-tuned on frames from the input video. 10.4 Leveraging Pre-trained Video Generation Model for Video Editing Instead of adapting a pre-trained image generation model for video editing, Bai et al.\u2019s (2024) UniEdit investigates the approach of leveraging a pre-trained text-to-video generation model for zero-shot video editing. Specifically, they propose to use the LaVie (Wang et al., 2023c) T2V model and employ feature injection mechanisms to condition the T2V generation process on the input video. This is achieved by introducing the auxiliary reconstruction branch and motion-reference branch during video denoising.
The video features from these auxiliary branches are extracted and injected into the spatial and temporal self-attention layers of the main editing path to ensure the output video contains the same spatial structure and motion as the source video. A concurrent approach to UniEdit is Ku et al.\u2019s (2024) AnyV2V, which employs pre-trained image-to-video (I2V) generation models for zero-shot video editing tasks. AnyV2V breaks video editing into two stages. In the first stage, an image editing method is used to modify the first frame of the video into an edited frame. In the second stage, the edited frame and the DDIM inverted latent of the source video are passed into the I2V generation model to render the edited video. AnyV2V also adopts feature injection mechanisms similar to PnP (Tumanyan et al., 2023) to preserve the structure and motion of the source video. Because of the proposed two-stage editing strategy, AnyV2V is compatible with any off-the-shelf image editing models and can be employed in a broad spectrum of video editing tasks, such as prompt-based video editing, reference-based style transfer, identity manipulation and subject-driven video editing. The framework also supports different I2V models, such as I2VGen-XL (Zhang et al., 2023b), ConsistI2V (Ren et al., 2024) and SEINE (Chen et al., 2023e). 10.5 Multi-conditional Editing Zhang et al.\u2019s (2023c) ControlVideo model extends ControlNet (Zhang & Agrawala, 2023) to video generation tasks. ControlNet encodes preprocessed image features using an auto-encoder and passes them through a fine-tuned copy of the first half of the Stable Diffusion UNet. The resulting latents at each layer are then concatenated with the corresponding latents from the original Stable Diffusion model during the decoder portion of the UNet to control the structure of the generated images.
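The per-layer injection of control-branch features into the frozen UNet's decoder, as in the ControlNet mechanism described above, can be sketched as follows. This minimal version uses an additive combination with a per-layer scale, which is an assumption for illustration; descriptions vary between addition and channel-wise concatenation:

```python
import numpy as np

def inject_control_features(decoder_feats, control_feats, scale=1.0):
    """Combine control-branch features with the frozen UNet's decoder
    features, layer by layer (additive variant; some implementations
    concatenate along the channel axis instead)."""
    assert len(decoder_feats) == len(control_feats)
    return [d + scale * c for d, c in zip(decoder_feats, control_feats)]
```

Setting `scale = 0` recovers the unconditioned backbone, which is one reason this design leaves the pre-trained model's weights untouched.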
In order to improve the spatio-temporal coherence between video frames, ControlVideo adds full cross-frame attention to the self-attention blocks of the denoising UNet. Furthermore, it mitigates flickering of small details by interpolating between alternating frames. Longer videos can be synthesized by first generating a sequence of key frames and then generating the missing frames in several batches conditioned on two key frames each. In contrast to other video-to-video models that rely on a specific kind of preprocessed feature, ControlVideo is compatible with all ControlNet models, such as Canny or OpenPose. The pre-trained Stable Diffusion and ControlNet models also do not require any fine-tuning. 10.6 Other Approaches Wang et al. (2023b) also adapt a pre-trained text-to-image model to video editing tasks without fine-tuning. Similar to Tune-A-Video and Pix2Video, their vid2vid-zero model replaces self-attention blocks with cross-frame attention without changing the transformation matrices. While the cross-frame attention in those previous models is limited to the first and immediately preceding frame, Wang et al. extend attention to the entire video sequence. Vid2vid-zero is not conditioned on structural depth maps, instead using a traditional DDIM inversion approach. To achieve better alignment between the input video and user-provided prompt, it optimizes the null-text embedding used for classifier-free guidance. Huang et al. (2023) present Style-A-Video, a model aimed at editing the style of a video based on a text prompt while preserving its content. It utilizes a form of classifier-free guidance that balances three separate guidance conditions: CLIP embeddings of the original frame preserve semantic information, CLIP embeddings of the text prompt introduce stylistic changes, while CLIP embeddings of thresholded affinity matrices from self-attention layers in the denoising UNet encode the spatial structure of the image.
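Style-A-Video's three-condition balancing just described is a multi-term extension of classifier-free guidance; a minimal sketch follows (the exact weighting used in the paper may differ, so the weights here are assumptions):

```python
import numpy as np

def multi_condition_guidance(eps_null, cond_preds, weights):
    """Multi-term classifier-free guidance: start from the unconditional
    prediction and add one weighted correction per guidance condition
    (e.g. content, style and spatial-structure terms)."""
    out = eps_null.copy()
    for eps_c, w in zip(cond_preds, weights):
        out += w * (eps_c - eps_null)
    return out
```

With a single condition this reduces to standard classifier-free guidance; with all weights at zero it returns the unconditional prediction, so each condition's influence can be tuned independently.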
Flickering is reduced through a flow-based regularization network. The model operates on each individual frame without any form of cross-frame attention or fine-tuning of the text-to-image backbone. This makes it one of the lightest models in this comparison. Yang et al. (2023b) also use ControlNet for spatial guidance in their Rerender A Video model. Similar to previous models, sparse causal cross-frame attention blocks are used to attend to an anchor frame and the immediately preceding frame during each denoising step. During early denoising steps, frame latents are additionally interpolated with those from the anchor frame for rough shape guidance. Furthermore, the anchor frame and previous frame are warped in pixel space to align with the current frame, encoded, and then interpolated in latent space. To reduce artifacts associated with repeated encoding, the authors estimate the encoding loss and shift the encoded latent along the negative gradient of the loss function to counteract the degradation. A form of color correction is finally applied to ensure color coherence across frames. This pipeline is used to generate key frames that are then filled in using patch-based propagation. The model produces videos that look fairly consistent when showing slow moving scenes but struggles with faster movements due to the various interpolation methods used. 10.7 Video Restoration Liu et al. (2023a) present ColorDiffuser, a model specialized in the colorization of grayscale video footage. It utilizes a pre-trained text-to-image model and specifically trained adapter modules to colorize short video sequences in accordance with a text prompt. Color Propagation Attention computes affinities between the current grayscale frame as Query, the reference grayscale frame as Key, and the (noisy) colorized reference frame latent as Value.
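The Color Propagation Attention just described maps naturally onto standard attention, with affinities computed between grayscale frames and color carried by the value. A simplified single-head sketch, with token and feature shapes assumed for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def color_propagation_attention(cur_gray, ref_gray, ref_color):
    """Query = current grayscale tokens, Key = reference grayscale tokens,
    Value = (noisy) colorized reference tokens: the grayscale affinity
    decides where color is copied from."""
    d = cur_gray.shape[-1]
    affinity = softmax(cur_gray @ ref_gray.T / np.sqrt(d))  # (N_cur, N_ref)
    return affinity @ ref_color                             # (N_cur, C)
```

Because each affinity row sums to one, every output token is a convex combination of reference colors, which is what lets color propagate without inventing values outside the reference palette.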
The resulting frame is concatenated with the current grayscale frame and fed into a Coordinator Module that follows the same architecture as the Stable Diffusion UNet. Feature maps from the Coordinator module are then injected into the corresponding layers of the denoising UNet to guide the diffusion process (similar to ControlNet). During inference, an alternating sampling strategy is employed, whereby the previous and following frame are in turn used as reference. In this way, color information can propagate through the video in both temporal directions. Temporal consistency and color accuracy are further improved by using a specifically trained vector-quantized variational auto-encoder (VQVAE) that decodes the entire denoised latent video sequence. 11 Video Diffusion Models for Intelligent Decision Making Capable generative models are beginning to see widespread usage in control and intelligent decision-making (Yang et al., 2023a), including for downstream representation learning, world modeling, and generative data augmentation. So far, use cases have primarily focused on image-based and low-dimensional diffusion models, but we elucidate where these may naturally be extended to video. 11.1 Representation Learning Representation learning (Bengio et al., 2013) is a popular way to transfer useful features learned from large-scale training to downstream tasks. Recent work has shown that diffusion models are an effective way to do so, particularly for image and video-based tasks. A large family of methods has considered extracting representations from text-to-image diffusion models like Stable Diffusion (Rombach et al., 2022). For example, Tian et al. (2023); Wang et al. (2023) extract segmentation masks for computer vision based on intermediate attention maps from the UNet. This is possible since the diffusion model has already internalized the concept of objects. On the other hand, Yang & Wang (2023); Gupta et al.
(2024) propose to extract representations from the diffusion model from intermediate layers of the network to be used for classification or robotic control tasks; this is depicted in Fig. 12. These vision-language representations often significantly outperform related methods such as CLIP. Due to the similarity in architecture for image and video UNets, these methods could readily be adapted to the video domain. On the other hand, Sariyildiz et al. (2023) pre-train a visual representation learner directly with synthetic diffusion data targeted to ImageNet classification labels.
Figure 12: Depiction of how vision-language representations can be extracted from pre-trained diffusion UNets. Given an image-text prompt, one may encode and noise the image and feed it into the UNet together with the language prompt. Features may then be aggregated from multiple levels of the downsampling process. Similar techniques may be extended to video diffusion UNets. Reproduced with permission from Gupta et al. (2024).
Another way pre-trained diffusion models can be used for downstream classification tasks is through likelihood-based methods. Diffusion Classifier (Li et al., 2023a) exploits the fact that diffusion models can act as conditional density models, and classify images by adding noise to them and then selecting the class label that best predicts the added noise. 11.2 World Models An exciting application of more realistic video diffusion models is the ability to accurately simulate the real world. As posited by LeCun (2022), learning an accurate world model is a crucial step in the path towards autonomous intelligence, enabling an agent to robustly plan and reason about the outcome of its actions. Diffusion models have already been used as trajectory world models (Janner et al., 2022; Ajay et al., 2023) in receding horizon control style setups for low-dimensional environments.
In these settings, trajectories of any arbitrary quality can be biased towards high return through classifier-guided or classifier-free guidance. Further advances in video world modelling (Yang et al., 2024; Wang et al., 2024b; Hu et al., 2023a) could lead to similar techniques being scaled towards real-world settings. A notable example of this is GENIE (Bruce et al., 2024), a video world model (albeit not diffusion-based) trained from YouTube videos that learns to plan under latent actions. Crucially, this enables agents to be trained from synthetic environments based on the vast amounts of unlabeled video on the internet. The remaining challenges with current methods include improving the frame-by-frame consistency of generated trajectories as control policies often are very sensitive to the quality of observations, and speed of generation so that such models are useable in real-time. 11.3 Synthetic Training Data Finally, as we begin to exhaust the available supply of real labeled images and video, synthetic generative data has emerged as a powerful method to augment existing training datasets for downstream tasks. In supervised learning, diffusion models have been used to generate additional class-conditional data for classification (He et al., 2022a; Azizi et al., 2023) resulting in significant boosts in performance. This enables the distillation of internet-scale knowledge into these models. With more realistic video generation, we could similarly generate data for video classification or captioning tasks. In control, there is often a lack of readily available robotics data, and as such diffusion models are a particularly powerful method to generate policy training data for reinforcement learning agents. This could be done by simply naively upsampling existing datasets (Lu et al., 2023a) or in a guided fashion (Jackson et al., 2024) which generates training data that is on-policy with the current agent being optimized. 
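Both the return-biased world-model sampling and the guided data generation mentioned above rely on the same classifier-guidance pattern: shift the noise prediction along the gradient of a separately learned objective model. A toy sketch, with the scaling convention assumed:

```python
import numpy as np

def guided_eps(x_t, eps_pred, objective_grad, alpha_bar_t, scale=1.0):
    """Classifier-guided step: nudge the predicted noise against the
    gradient of an objective J (e.g. trajectory return), so the implied
    denoised sample drifts toward higher J."""
    return eps_pred - scale * np.sqrt(1 - alpha_bar_t) * objective_grad(x_t)
```

The guided prediction can be dropped into any DDPM/DDIM update in place of the raw noise estimate; larger `scale` trades sample fidelity for objective value.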
These methods vastly improve the sample efficiency of trained agents. In the visual setting, ROSIE (Yu et al., 2023) and GenAug (Chen et al., 2023f) have considered using image diffusion models to synthesize datapoints with novel backgrounds and items in order to boost the generalization performance of learned policies. Video diffusion models represent a significant improvement to single-timestep data augmentation and would enable an agent to fully simulate the outcome of a long sequence of actions. 12 Outlook and Challenges Video diffusion models have already demonstrated impressive results in a variety of use cases. However, there are still several challenges that need to be overcome before we arrive at models capable of producing longer video sequences with good temporal consistency. One issue is the relative lack of suitable training data. While there are large data sets of labeled images that have been scraped from the internet (Sec. 6.2), the available labeled video data are much smaller in size (Sec. 6.1). Many authors have therefore resorted to training their models jointly on labeled images and unlabeled videos or fine-tuning a pre-trained text-to-image model on unlabeled video data. While this compromise allows for the learning of diverse visual concepts, it may not be ideal for capturing object-specific motion. One possible solution is to manually annotate video sequences (Yin et al., 2023), although it seems unlikely that this can be done on the scale required for training generalized video models. It is to be hoped that automated annotation methods will be developed in the future that allow for the generation of accurate video descriptions (Zare & Yazdi, 2022). An even more fundamental problem is that simple text labels are often inadequate for describing the temporally evolving content of videos. This hampers the ability of current video models to generate more complex sequences of events.
For this reason, it might be beneficial to examine alternative ways to describe video contents that represent different aspects more explicitly, such as the actors, their actions, the setting, camera angle, lighting, scene transitions, and so on. A different challenge lies in the modeling of (long-term) temporal dependencies. Due to the memory limitations of current graphics cards, video models can typically only process a fixed number of video frames at a time. To generate longer video sequences, the model is extended either in an auto-regressive or hierarchical fashion, but this usually introduces artifacts or leads to degraded image quality over time. Possible improvements could be made on an architectural level. Most video diffusion models build on the standard UNet architecture of text-to-image models. To capture temporal dynamics, the model is extended by introducing cross-frame convolutions and/or attention. Using full 3D spatio-temporal convolutions and attention blocks is, however, prohibitively expensive. Many models have therefore adopted a factorized pseudo-3D architecture, whereby a 2D spatial block is followed by a 1D temporal block. While this compromise seems necessary in the face of current hardware limitations, it stands to reason that full 3D architectures might be better able to capture complex spatio-temporal dynamics once the hardware allows it. In the meantime, other methods for reducing the computational burden of video generation will hopefully be explored. This could also enable new applications of video diffusion, such as real-time video-to-video translation.
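The cost argument for the factorized pseudo-3D design discussed above can be made concrete by comparing convolution parameter counts; a back-of-the-envelope sketch (real blocks also contain norms, biases and attention):

```python
def conv_param_count_3d(c_in, c_out, k):
    """Weights in a full 3D convolution with a k x k x k kernel."""
    return c_in * c_out * k ** 3

def conv_param_count_pseudo3d(c_in, c_out, k):
    """Weights in a factorized block: a k x k 2D spatial convolution
    followed by a length-k 1D temporal convolution."""
    return c_in * c_out * k ** 2 + c_out * c_out * k
```

For a typical 320-channel layer with k = 3, the full 3D kernel needs 320 * 320 * 27 weights while the factorized pair needs 320 * 320 * 12, a 2.25x reduction before even counting the quadratic savings in attention.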
13" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03188v1.json b/abs_9K/test_abstract_short_2405.03188v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c83e3ed32c7edec0e5623204e7bd02a518fe6353 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03188v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03188v1", + "title": "Hyperbolic Geometric Latent Diffusion Model for Graph Generation", + "abstract": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. 
Extensive experimental results\ndemonstrate the superior effectiveness of HypDiff for graph generation with\nvarious topologies.", + "authors": "Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, Jianxin Li, Xianxian Li", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. 
Extensive experimental results\ndemonstrate the superior effectiveness of HypDiff for graph generation with\nvarious topologies.", "main_content": "Introduction Graphs in the real world contain a variety of important topologies, and these topological properties often reflect physical laws and growth patterns, such as rich-clubs, small-worlds, hierarchies, fractal structures, etc.
1Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, China 2Institute of Artificial Intelligence, Beihang University, Beijing, China 3School of Software, Beihang University, Beijing, China 4Beijing Advanced Innovation Center for Big Data and Brain Computing, School of Computer Science and Engineering, Beihang University, Beijing, China. Correspondence to: Xingcheng Fu, Jianxin Li, Xianxian Li. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).
Traditional random graph models based on graph theory, such as Erdos-Renyi (Erd\u0151s et al., 1960), Watts-Strogatz (Watts & Strogatz, 1998) and Barabasi-Albert (Barab\u00e1si & Albert, 1999), etc., need artificial heuristics built for single topology types and lack the flexibility to model various complex graphs. Therefore, many deep learning models have been developed for graph generation, such as Variational Graph Auto-Encoder (VGAE) (Kipf & Welling, 2016), Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), and other technologies. Recently, the Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020) has demonstrated great power and potential in image generation, attracting huge attention from the community of graph learning. For graph generation, a straightforward idea involves designing discretized diffusion methods for the graph structural information
(Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022), and the other way is to develop advanced graph encoders to preserve structural information throughout the diffusion process within a continuous potential space (Xu et al., 2021; 2023). However, because of the irregular and non-Euclidean structure of graph data, the realization of the diffusion model for graphs still has two main limitations: (1) High computational complexity. The core of graph generation is handling the discreteness, sparsity and other topological properties of the non-Euclidean structure. Since the Gaussian noise perturbation used in the vanilla diffusion model is not suitable for discrete data, the discrete graph diffusion model usually has high time and space complexity due to the problem of structural sparsity. Moreover, the discrete graph diffusion model relies on a continuous Gaussian noise process to create fully connected, noisy graphs (Zhang et al., 2023; Ingraham et al., 2019), which loses structural information and underlying topological properties. (2) Anisotropy of non-Euclidean structure. Different from the regular structure data (e.g. pixel matrix or grid structure), the \u201cirregular\u201d non-Euclidean structure embeddings of graph data are anisotropic in continuous latent space (Elhag et al., 2022). As shown in Figure 1(b), the node embeddings of a graph in Euclidean space exhibit significant anisotropy in several specific directions. Recently, some studies (Yang et al., 2023) have shown that isotropic
Figure 1. (a) Original structure. (b) Euclidean latent space. (c) Hyperbolic latent space.
Visualization of node embeddings by singular value decomposition (SVD); (a) Original structure visualization of the NCAA football graph, where different colors indicate different labels (teams); (b) Visualization of node embeddings in 2D Euclidean space and planar projection; (c) Visualization of node embeddings in 2D hyperbolic space and Poincar\u00e9 disk projection.
diffusion of the node embedding of the graph in the latent space will treat the anisotropic structural information as noise, and this useful structural information will be lost in the denoising process. Hyperbolic geometric space is widely recognized as an ideal continuous manifold for representing discrete tree-like or hierarchical structures (Cannon et al., 1997; Ungar, 1999; Krioukov et al., 2010; Sun et al., 2024b), and has been widely studied and applied to various graph learning tasks (Sun et al., 2021; Tifrea et al., 2019; Nickel & Kiela, 2017; Sala et al., 2018; Chami et al., 2019; Sun et al., 2024a). Inspired by these studies, we find that hyperbolic geometry has great potential to address non-Euclidean structural anisotropy in graph latent diffusion processes. As shown in Figure 1(c), in hyperbolic space, we can observe that the distribution of node embeddings tends to be isotropic globally, while anisotropy is preserved locally. In addition, hyperbolic geometry unifies angular and radial measures of polar coordinates as shown in Figure 2(a), and can provide geometric measures with physical semantics and interpretability (Papadopoulos et al., 2012). It is exciting that hyperbolic geometry can provide a geometrically latent space with graph geometric priors, which can help deal with the anisotropy of graph structures through special geometric measures.
However, there are two primary challenges: (1) the additivity of continuous Gaussian distributions is undefined in hyperbolic latent space; (2) devising an effective anisotropic diffusion process for non-Euclidean structures. Contributions. To address the challenges, we propose a novel Hyperbolic Geometric Latent Diffusion (HypDiff) model for graph generation. For the additivity issue of continuous Gaussian distributions in hyperbolic space, we propose an approximate diffusion process based on radial measures. Then, an angular constraint is utilized to constrain the anisotropic noise to preserve more structural prior, guiding the diffusion model to finer details of the graph structure. Our contributions are summarized as:
\u2022 We are the first to study the anisotropy of non-Euclidean structures for graph latent diffusion models from a geometric perspective, and propose a novel hyperbolic geometric latent diffusion model, HypDiff.
\u2022 We propose a novel geometrically latent diffusion process based on radial and angular geometric constraints in hyperbolic space, which addresses the additivity of continuous Gaussian distributions and the issue of anisotropic noise addition in hyperbolic space.
\u2022 Extensive experiments on synthetic and real-world datasets demonstrate a significant and consistent improvement of HypDiff and provide insightful analysis for graph generation.
2. Related Works 2.1. Graph Generative Diffusion Model Different from models that learn to generate samples in a single step, such as GAN (Goodfellow et al., 2014; Wang et al., 2018; Dai et al., 2018), VGAE (Yu et al., 2018; Xu & Durrett, 2018; Grattarola et al., 2019) or GraphRNN (You et al., 2018), the diffusion model (Ho et al., 2020) aims to gradually convert the sample into pure noise by a parameterized Markov chain process.
Some recent works (Xu et al., 2021; 2023) employ advanced graph encoders to effectively preserve the inherent structural information throughout the diffusion process within a continuous potential space. Gaussian noise is added to the distribution of nodes and edges of the graph (Vignac et al., 2022), and Gaussian processes are performed on the neighborhood or spectral domain of the graph (Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022). However, existing discrete diffusion models have many challenges in capturing the non-Euclidean structure and preserving underlying topological properties.
Figure 2. (a) Geometric interpretation of the hyperbolic geometry, which unifies the radius and angle measurements in polar coordinates and interprets them as popularity and similarity, respectively; (b) Hyperbolic latent diffusion processing with isotropic/anisotropic noise.
2.2. Hyperbolic Graph Learning Hyperbolic geometric space was introduced into complex networks earlier to represent small-world and scale-free complex networks (Krioukov et al., 2010; Papadopoulos et al., 2012). With high capacity and hierarchical-structure-preserving ability, hyperbolic geometry is also used in NLP (Nickel & Kiela, 2017; Tifrea et al., 2019) to learn word representations with hypernym structure. Hyperbolic space has also recently been introduced into graph neural networks (Liu et al., 2019; Chami et al., 2019; Sun et al., 2021; 2022). P-VAE (Mathieu et al., 2019) and Hyper-ANE (Liu et al., 2018) extend VAE and GAN into hyperbolic versions to learn hierarchical representations. To sum up, hyperbolic geometry provides an intuitive and efficient way of understanding the underlying structural properties of the graph. 3.
Methodology In this section, we present our Hyperbolic geometric latent Diffusion model (HypDiff) for addressing the two main challenges. The key insight is that we leverage hyperbolic geometry to abstract the implicit hierarchy of nodes in the graph and introduce two geometric constraints to preserve important topological properties, such as scale-freeness, navigability, and modularity. Considering the successful experiences of graph latent diffusion models (Xu et al., 2023), we adopt a two-stage training strategy framework in our practice. We first train the hyperbolic autoencoder to obtain the pre-trained node embeddings, and then train the hyperbolic geometric latent diffusion process. The architecture is shown in Figure 3. 3.1. Hyperbolic Geometric Autoencoding We first need to embed the graph data G = (X, A) into a low-dimensional hyperbolic geometric space to improve the graph latent diffusion process. Hyperbolic Encoder and Decoder. We consider a hyperbolic variant of the auto-encoder, consisting of the hyperbolic geometric encoder and the Fermi-Dirac decoder. The hyperbolic geometric encoder encodes the graph G = (X, A) into the hyperbolic geometric space to obtain a suitable hyperbolic representation, and the Fermi-Dirac decoder decodes the hyperbolic representation back into the graph data domain. The hyperbolic manifold $H^d$ and the tangent space $T_x$ can be mapped to each other via the exponential map and logarithmic map (Ganea et al., 2018b). Then, we can leverage Multi-Layer Perceptrons (MLPs) or Graph Neural Networks (GNNs) via exponential and logarithmic mapping as hyperbolic geometric encoders. In this paper, we use Hyperbolic Graph Convolutional Neural Networks (HGCN) (Chami et al., 2019) as the hyperbolic geometric encoder. Optimization of Autoencoding. Due to the failure of additivity of the Gaussian distribution in hyperbolic space, we cannot directly use the Riemannian normal distribution or wrapped normal distribution.
Instead of hyperbolic diffusion embedding (Lin et al.) using the product space of multiple manifolds, we propose a new diffusion process in hyperbolic space, which will be described in detail in Section 3.2. Following P-VAE (Mathieu et al., 2019), for compute efficiency, the Gaussian distribution of hyperbolic space is approximated by the Gaussian distribution of the tangent plane $T_\mu$.
Figure 3. An illustration of HypDiff architecture.
The optimization of hyperbolic geometric autoencoding is as follows: $\mathcal{L}_{HAE} = -\mathbb{E}_{q_\phi(z_x|x)} \log p_\xi\left(x \mid \mathrm{logmap}^c_o(z_x)\right)$, (1) where $\mathrm{logmap}^c_o$ is the logarithmic mapping at the north pole (origin) $o$ of hyperbolic space, used to simplify the computation. 3.2. Hyperbolic Geometric Latent Diffusion Process Unlike the linear addition in Euclidean space, hyperbolic space utilizes M\u00f6bius addition, posing challenges for diffusion over a hyperbolic manifold. Furthermore, the isotropic noise leads to a rapid reduction of the signal-to-noise ratio, making it difficult to preserve topological information; for the detailed results and analysis please refer to Appendix B. In light of these issues, we propose a novel diffusion process to address both of them. Hyperbolic Anisotropic Diffusion. The anisotropy of the graph in the latent space contains an inductive bias of the graph structure, where the most critical challenge is how to determine the dominant directions of the anisotropic features. Additionally, on hyperbolic manifolds, neither the wrapped normal distribution of the isotropic setup nor that of the anisotropic setup satisfies this property: $\eta \not\sim \eta_1 \oplus_c \eta_2$, $\eta \sim \mathcal{N}^c_H\left(0, (\sigma_1^2 + \sigma_2^2)I\right)$, $\eta_1 \sim \mathcal{N}^c_H\left(0, \sigma_1^2 I\right)$, $\eta_2 \sim \mathcal{N}^c_H\left(0, \sigma_2^2 I\right)$. (2) where $c$ is the hyperbolic curvature and $\mathcal{N}^c_H$ is the wrapped Gaussian distribution.
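The exponential and logarithmic maps used throughout this section can be sketched in their standard closed forms at the origin of the Poincar\u00e9 ball of curvature -c; this is a generic textbook sketch, not the authors' code:

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball of curvature -c:
    sends a tangent vector v to a point on the manifold."""
    sqc = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-12)
    return np.tanh(sqc * norm) * v / (sqc * norm)

def logmap0(y, c=1.0):
    """Logarithmic map at the origin: inverse of expmap0, sends a point
    on the ball back to the tangent space at the origin."""
    sqc = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(y, axis=-1, keepdims=True), 1e-12)
    return np.arctanh(np.minimum(sqc * norm, 1 - 1e-7)) * y / (sqc * norm)
```

A Euclidean MLP or GNN layer can be applied in the tangent space between these two maps, which is the usual construction behind hyperbolic encoders such as HGCN.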
We propose a hyperbolic anisotropic diffusion framework to solve both challenges. The detailed proof process can be found in Appendix C.1. The core idea is to select the main diffusion directions (i.e., angles) based on the similarity clustering of nodes, which is equivalent to dividing the hyperbolic latent space into multiple sectors. We then project the nodes of each cluster onto the tangent plane of its center for diffusion. Let h denote the embedding of the graph in hyperbolic space and h_i denote the i-th node in it. If h_i belongs to the k-th cluster with clustering center coordinates μ_k, then the node h_i is represented in the tangent space of μ_k as x_{0i}: x_{0i} = logmap^c_{μ_k}(h_i), (3) where μ_k is the central point of cluster k obtained by the Hyperbolic-Kmeans (h-kmeans) algorithm (Hajri et al., 2019). Note that the clusters can be obtained by any similarity-based clustering algorithm in the pre-processing stage. Moreover, the hyperbolic clustering parameter k has the following property: Theorem 3.1. Given the hyperbolic clustering parameter k ∈ [1, n], which represents the number of sectors dividing the hyperbolic space (disk), the hyperbolic anisotropic diffusion is equivalent to directional diffusion in the Klein model K^n_c with multiple curvatures {c_i}, i ∈ {1, …, k}, which approximately corresponds to projecting onto the set of tangent planes {T_{o_i}} of the centroids {o_i}. The proof is in Appendix C.2. This property elegantly establishes the relationship between our approximation algorithm and the Klein model with multiple curvatures. Depending on the value of k, our algorithm allows for a more flexible and nuanced representation of anisotropy based on the underlying hyperbolic geometry, enabling improved accuracy and efficiency in subsequent noise addition and training. Geometric Constraints.
Hyperbolic geometry can naturally and geometrically describe the connection pattern of nodes during graph growth (Papadopoulos et al., 2012). As shown in Figure 2(a), the popularity of a node can be abstracted by its radial coordinate, and similarity can be expressed by angular coordinate distances in hyperbolic space; more details can be found in Appendix D. Our goal is to model a diffusion process with geometric radial growth, where this radial growth is consistent with hyperbolic properties. To maintain this hyperbolic growth tendency in the tangent plane, we use the following formula: x_t = √α_t x_0 + √(1 − α_t) ε + δ tanh[√c λ^c_o t / T_0] x_0, (4) where ε is Gaussian noise, δ is the radial popularity coefficient that controls the diffusion strength of each node in hyperbolic space, T_0 is a constant controlling the radial growth rate, and λ^c_x = 2 / (1 + c‖x‖²). We then restrict the discussion to a single cluster tangent plane. The main reason the general diffusion model does not perform well on graphs is the rapid decline of the signal-to-noise ratio. Inspired by the directional diffusion model (Yang et al., 2023), we designate the direction of the geodesic between each cluster's center point and the north pole o as the target diffusion direction, while imposing constraints on the forward diffusion process. Specifically, the angular similarity constraint for each node i can be obtained by: z = sgn(logmap^c_o(h_{μ_i})) ∗ ε, ε ∼ N(0, I), (5) where z represents the angle-constrained noise, ε is the Gaussian noise, and h_{μ_i} is the clustering center corresponding to the i-th node.
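A minimal NumPy sketch of one geometrically constrained forward step, combining the radial term of Eq. (4) with the angular constraint of Eq. (5). The noise schedule value, the use of the conformal factor at the origin (λ^c_o = 2), and all shapes are illustrative assumptions, not the released implementation:

```python
import numpy as np

def constrained_forward_step(x0, center_tangent, alpha_t, t, c=1.0,
                             delta=0.5, T0=1000, rng=None):
    """One forward diffusion step in a cluster's tangent plane.

    x0:             (d,) node embedding projected onto the tangent plane
    center_tangent: (d,) log-map of the node's cluster center at the origin;
                    its signs fix the target diffusion direction (Eq. (5))
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(x0.shape)
    z = np.sign(center_tangent) * eps            # angle-constrained noise
    lam_o = 2.0                                  # conformal factor at origin
    radial = delta * np.tanh(np.sqrt(c) * lam_o * t / T0) * x0
    return np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * z + radial
```

Note that the radial term saturates at δ·x_0 as t grows (tanh → 1), which is what keeps the diffused point anchored to its hyperbolic radial position instead of drifting to pure noise.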
Combining the radial and angular constraints, our geometric diffusion process can be described as: x_t = √α_t x_0 + √(1 − α_t) z + δ tanh[√c λ^c_o t / T_0] x_0. (6) Theorem 3.2. Let x_t denote the node x at step t of the forward diffusion process in Eq. (6). As t → ∞, the low-dimensional latent representation x_t of node x satisfies: lim_{t→∞} x_t ∼ N_f(δx_0, I), (7) where N_f is an approximate folded normal distribution. More details and the proof can be found in Appendix E. Figure 2(b) illustrates examples of the diffusion process with/without geometric constraints in hyperbolic space. We can observe that when isotropic noise is added in the hyperbolic latent diffusion process, the final diffusion result is completely random noise. In contrast, the hyperbolic latent diffusion process with geometric constraints significantly preserves the anisotropy of the graph. In other words, after graph diffusion, the result still preserves the important inductive bias of the graph rather than becoming completely random noise, which directly affects the performance and generation quality of the denoising process. Training and Generation. We then follow the standard denoising process (Ho et al., 2020; Yang et al., 2023) and train a denoising network to simulate the reverse diffusion process. We use a UNet-based DDM denoising network architecture trained to predict x_0, as follows: L_HDM = E‖f_θ(X_t, A, t) − X_0‖². (8) Algorithm 1 Training of HypDiff. Input: graph G = {X, A}; number of training epochs E. Parameter: θ initialization. Output: predicted raw embedding x̂_H. Encode nodes into hyperbolic space: x_H ← Eq. (1); compute k clusters by h-Kmeans; project the embeddings onto each T_{o_i}, i ∈ {1, …, k}. For e = 1 to E do: get the embeddings x_{H,t} at step t via Eq. (6); predict the raw embeddings x̂_H; compute the loss L = L_HDM ← Eq.
(8); update θ ← θ − η∇_θL; end for. Note that the loss function of our geometric diffusion model remains consistent with DDPM (Ho et al., 2020), based on Theorem 3.2; the proof is in Appendix F. Regarding generation, we propose an efficient sampling method based on Theorem 3.1. Furthermore, we demonstrate that it is possible to sample once in the same tangent space, instead of sampling in the different cluster-center tangent spaces, to improve efficiency. For the denoising process, we adopt a procedure applicable to generalized diffusion models (Yang et al., 2023), in which a recovery operator and a noise-addition operator are abstracted for use in various diffusion methods. All the specifics of each stage of the diffusion process, along with the theoretical derivation, are documented in Appendix F. Similar to other hyperbolic learning models (Krioukov et al., 2010; Chami et al., 2019; Ganea et al., 2018a), we utilize the Fermi-Dirac decoder (Krioukov et al., 2010; Nickel & Kiela, 2017) to compute connection probabilities. The diffusion and reverse processes are summarized in Algorithm 1 and Algorithm 2. Complexity Analysis. Let G = (X, E) be one of the graphs in the set G_s, where X contains the n-dimensional node feature vectors, E is the m × m adjacency matrix of the graph, and s is the number of graphs in G_s. Time complexity: the time complexity of hyperbolic graph encoding is O((1(t) + k)md). For the forward diffusion process, the complexity is O(md). The training of the denoising network is essentially the same as in other diffusion models and requires no additional computing time beyond O(md) ∗ 1(t). Overall, the total time complexity of the diffusion process is O(1(t) ∗ 2md) + O((k + 2)md) per epoch.
Space complexity: In our approach, since we embed the graphs in hyperbolic space, each graph is represented by an m × d-dimensional representation, which means that our diffusion scale is O(smd). For a more detailed complexity analysis, please refer to Appendix G. 4. Experiment In this section, we conduct comprehensive experiments to demonstrate the effectiveness and adaptability of HypDiff on various datasets and tasks. We first present the experimental settings and then showcase the results. 4.1. Datasets We evaluate the capabilities of HypDiff on various downstream tasks, conducting experiments on both synthetic and real-world datasets. In addition, we construct and use node-level and graph-level datasets for the node classification and graph generation tasks, respectively. Statistics of the real-world datasets can be found in Table H in Appendix H. We elaborate on the details as follows. Synthetic Datasets. We first use two well-accepted graph-theoretic models, the Stochastic Block Model (SBM) and Barabási-Albert (BA), to generate node-level synthetic datasets with 1000 nodes each for node classification. (1) SBM portrays five equally partitioned communities with intra-community edge-creation probability p = 0.21 and inter-community probability q = 0.025. (2) BA is grown by attaching new nodes, each with a random number of edges between 1 and 10. We then generate four generic graph-level datasets with different node scales |V| for graph generation tasks. (3) Community contains 500 two-community small graphs with 12 ≤ |V| ≤ 20. Each graph is generated by the Erdős-Rényi model with edge-creation probability p = 0.3, and 0.05|V| inter-community edges are added with uniform probability. (4) Ego comprises 1050 3-hop ego-networks extracted from the PubMed network with |V| ≤ 20.
Nodes indicate documents and edges represent their citation relationships. (5) Barabási-Albert (G) is a graph-level dataset generated by the Barabási-Albert model (denoted BA-G to distinguish it from the node-level BA) with 500 graphs, where the degree of each node is greater than four. (6) Grid describes 100 standard 2D grid graphs in which each node is connected to its four nearest neighbors. Real-world Datasets. We also carry out experiments on several real-world datasets. For the node classification task, we utilize (1) two citation networks of academic papers, Cora and Citeseer, where nodes express documents and edges represent citation links, and (2) the Polblogs dataset of political blogs, a larger dataset we use. For the graph generation task, we use four datasets from different fields: (3) MUTAG is a molecular dataset in which each graph denotes a nitro compound molecule; (4) IMDB-B is a social network symbolizing the co-starring of actors; (5) PROTEINS is a protein dataset in which nodes represent amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart; (6) COLLAB is a scientific collaboration dataset reflecting the collaboration of scientists. (The code is available at https://github.com/RingBDStack/HypDiff.) 4.2. Experimental Setup Baselines. To evaluate the proposed HypDiff, we compare it with well-known or state-of-the-art graph learning methods, including: (1) Euclidean graph representation methods: VGAE (Kipf & Welling, 2016) designs a variational autoencoder for graph representation learning; ANE (Dai et al., 2018) trains a discriminator to align the embedding distribution with a predetermined fixed prior; GraphGAN (Wang et al., 2018) learns the sampling distribution for negative node sampling from the graph.
(2) Hyperbolic graph representation learning: P-VAE (Mathieu et al., 2019) is a variational autoencoder utilizing the Poincaré ball model in hyperbolic geometric space; Hype-ANE (Liu et al., 2018) is a hyperbolic adversarial network embedding model that extends ANE into hyperbolic geometric space. (3) Deep graph generative models: VGAE (Kipf & Welling, 2016) can be used for graph generation by treating each graph as a batch; GraphRNN (You et al., 2018) is a deep auto-regressive generative model that focuses on graph representations under different node orderings. (4) Graph diffusion generative models: GDSS (Jo et al., 2022) simultaneously diffuses node features and adjacency matrices and learns the corresponding score functions with neural networks; DiGress (Vignac et al., 2022) is a discrete denoising diffusion model that progressively recovers graph properties by manipulating edges; GraphGDP (Huang et al., 2022) is a position-enhanced score-based diffusion model for graph generation; EDGE (Chen et al., 2023) is a discrete diffusion process for large graph generation. Settings. For fairness, the baselines use the default parameters from their original papers, with appropriate adjustments when training on new datasets. For HypDiff, the encoder is a 2-layer HGCN with representation dimension 256, the edge-dropping probability is 2%, the learning rate is 0.001, and the hyperbolic curvature is c = 1. Additionally, the diffusion process uses diffusion strength δ = 0.5, and the six latent layers of the denoising network have widths 64, 128, 256, 128, 256, 128. We use Adam as the optimizer and set the L2 regularization strength to 1e-5. For metrics, we use the F1 scores for the node classification task, and the maximum mean discrepancy (MMD) scores of Degree, Cluster, and Spectre together with the F1 scores of precision-recall and density-coverage (F1 pr and F1 dc) to evaluate graph generation results.
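The MMD scores above compare statistics (e.g., degree distributions) of generated versus reference graph sets. A self-contained sketch of one common recipe, a squared MMD between degree histograms under an RBF kernel (the kernel choice, bandwidth, and binning here are our illustrative assumptions, not necessarily the exact evaluation code):

```python
import numpy as np

def degree_histogram(adj, bins):
    """Normalized degree histogram of a graph given its adjacency matrix."""
    degrees = adj.sum(axis=1)
    hist, _ = np.histogram(degrees, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

def mmd_squared(X, Y, sigma=1.0):
    """Squared MMD between two sets of histogram vectors under an RBF kernel
    k(a, b) = exp(-||a - b||^2 / (2 sigma^2)); 0 iff the sets match (biased form)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Identical histogram sets give MMD^2 = 0; disjoint ones give a positive score.
X = np.array([[0.5, 0.5], [0.4, 0.6]])
Y = np.array([[0.9, 0.1]])
assert abs(mmd_squared(X, X)) < 1e-12 and mmd_squared(X, Y) > 0
```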
The richer experimental results under the other indicators are shown in Appendix J. All experiments adopt the implementations from the PyTorch Geometric Library and Deep Graph Library.

Table 1. Summary of node classification Micro-F1 and Macro-F1 scores (%) based on the average of five runs on synthetic and real-world datasets. (Result: average score ± standard deviation; Bold: best; Underline: runner-up.)

Method   | SBM Mi-F1/Ma-F1     | BA Mi-F1/Ma-F1      | Cora Mi-F1/Ma-F1    | Citeseer Mi-F1/Ma-F1 | Polblogs Mi-F1/Ma-F1 | Avg. R.
VGAE     | 20.5±2.1 / 15.4±1.1 | 37.4±1.7 / 15.9±2.3 | 79.7±0.4 / 78.1±0.2 | 63.8±1.4 / 55.5±1.3  | 79.4±0.8 / 79.4±0.8  | 4.6
ANE      | 39.9±1.1 / 33.9±1.8 | 46.0±3.0 / 19.3±2.7 | 69.3±0.1 / 66.4±0.1 | 50.2±0.1 / 49.5±0.6  | 80.8±0.1 / 80.7±0.1  | 4.3
GraphGAN | 38.6±0.5 / 38.9±0.3 | 43.6±0.6 / 24.6±0.5 | 71.7±0.1 / 69.8±0.1 | 49.8±1.0 / 45.7±0.1  | 77.5±0.6 / 76.9±0.4  | 4.8
P-VAE    | 57.9±1.3 / 53.0±1.5 | 38.4±1.4 / 20.0±0.3 | 79.6±2.2 / 77.5±2.5 | 67.9±1.7 / 60.2±1.9  | 79.4±0.1 / 79.4±0.1  | 3.2
Hype-ANE | 18.8±0.3 / 11.9±0.1 | 56.9±2.4 / 31.6±1.2 | 80.7±0.1 / 79.2±0.3 | 64.4±0.3 / 58.7±0.0  | 83.6±0.4 / 83.6±0.4  | 3.0
HypDiff  | 70.5±0.1 / 69.4±0.1 | 58.3±0.1 / 40.0±0.1 | 82.4±0.1 / 81.2±0.1 | 67.8±0.2 / 60.4±0.3  | 85.7±0.1 / 85.4±0.1  | 1.1

Table 2. Generation results: MMD distance between the original and generated graphs. (Result: scores; Bold: best; Underline: runner-up.)

Method   | Community Deg/Clus/Spec | BA-G Deg/Clus/Spec    | MUTAG Deg/Clus/Spec   | PROTEINS Deg/Clus/Spec
VGAE     | 0.365 / 0.025 / 0.507   | 0.775 / 1.214 / 0.398 | 0.255 / 2.000 / 0.744 | 0.705 / 0.979 / 0.700
GraphRNN | 0.002 / 0.027 / 0.004   | 0.122 / 0.262 / 0.007 | 0.537 / 0.013 / 0.476 | 0.009 / 0.071 / 0.017
GDSS     | 0.094 / 0.031 / 0.052   | 0.978 / 0.468 / 0.917 | 0.074 / 0.021 / 0.003 | 1.463 / 0.168 / 0.013
DiGress  | 0.226 / 0.158 / 0.194   | 0.654 / 1.171 / 0.268 | 0.100 / 0.351 / 0.082 | 0.108 / 0.062 / 0.079
GraphGDP | 0.046 / 0.016 / 0.042   | 0.698 / 0.188 / 0.053 | 0.127 / 0.057 / 0.050 | 0.103 / 0.240 / 0.088
EDGE     | 0.021 / 0.013 / 0.040   | 0.282 / 0.010 / 0.090 | 0.024 / 0.597 / 0.468 | 0.033 / 0.523 / 0.024
HypDiff  | 0.002 / 0.010 / 0.028   | 0.216 / 0.021 / 0.004 | 0.048 / 0.001 / 0.040 | 0.133 / 0.004 / 0.012

The reported results are the average scores and standard deviations over 5 runs. All models were trained and tested on a single Nvidia A100 40GB GPU. 4.3. Performance Evaluation We show the F1 scores of the node classification task in Table 1, and the MMD distances and F1 scores between the original and generated graphs for the graph generation task in Table 2 and Table C.4. A higher F1 score indicates more accurate prediction of the nodes and higher fidelity of the generated graphs, while a smaller MMD distance suggests better generative capability from the perspective of graph topological properties. Node classification. HypDiff demonstrates superior performance, outperforming nearly all baseline models, achieving the highest ranking, and revealing excellent generalization. This implies that HypDiff can preserve essential properties within complex structures, enabling better discrimination and use of the dependencies between nodes across hierarchical levels in hyperbolic space. Graph Generation. We then validated the graph generation capability of HypDiff. Using the finer-grained metrics, we consistently observed our approach's outstanding performance.
More results are shown in Table C.3. We are further concerned with the fidelity and diversity of the generated results, which yield conclusions consistent with the previous ones and are reported in Table C.4. Specifically, HypDiff shows superior overall performance compared to the state-of-the-art autoregressive model GraphRNN and the discrete diffusion method DiGress. Furthermore, our model can effectively capture local structure through similarity constraints and achieves competitive performance on highly connected graph data (Community). 4.4. Analysis of HypDiff In this subsection, we present experimental results to intuitively convey our findings and initiate a series of discussions and analyses. (Figure 4. Ablation study results. Figure 5. Sensitivity analysis of geometric constraints. Figure 6. Efficiency analysis on IMDB-B for graph generation; per 1000 timesteps: HypDiff 2519MB GPU / 11.2 s, GDSS 3501MB / 12.5 s, DiGress 5800MB / 12.1 s, GraphGDP 5902MB / 13.6 s, EDGE 6205MB / 11.8 s.) Ablation Study. This study highlights the roles of the radial popularity and angular similarity diffusion constraints of HypDiff. We conducted experiments on three real-world datasets to validate node classification performance, removing the radial popularity (HypDiff (w/o P)), angular similarity (HypDiff (w/o S)), and total geometric prior (HypDiff (w/o PS)) components as variant models. We show the results in Figure 4. The radial popularity clearly facilitates the hyperbolic diffusion process, showcasing the advantage of hyperbolic geometry in capturing the underlying graph topology. Furthermore, the angular
similarity also significantly preserves the local structure of the graph, compensating for the limitations of hyperbolic space in capturing local connectivity patterns. In summary, the hyperbolic geometric prior plays a crucial role in capturing non-Euclidean structures. Sensitivity Analysis of Geometric Constraints. To investigate the impact of the number of clusters k and the geometric prior coefficient δ on model performance, we conducted sensitivity analyses on real-world and synthetic graph datasets, respectively. The number of clusters k can be understood as the strength of the angular constraint; results on three datasets with different structures are shown in Figure 5 (left). Specifically, Cora has a real-world connectivity structure, SBM has a complex community structure, and Fractal has self-similarity and hierarchy properties. It can be observed that k has different sensitivities on differently structured datasets, indicating that different graph structures admit different approximation accuracies for anisotropy capture. Correspondingly, the geometric prior coefficient δ can be understood as the strength of the radial constraint; results on three real-world datasets are shown in Figure 5 (right). The stronger the constraint, the smaller the diffusion step in the radial direction of the hyperbolic space. It can be observed that datasets with tree-like structures require weaker radial constraints, while highly connected graphs require stronger radial constraints. For the experimental setup and a more detailed analysis of the results, please refer to Appendix I. Diffusion Efficiency Analysis. We report the training time of HypDiff and the other graph diffusion baselines under the same configuration on IMDB-B, using the hardware and software setup listed in Section 4.2. We comprehensively report the time and space costs of the diffusion process.
As shown in Figure 6, our HypDiff comprehensively outperforms the other baselines in diffusion time and GPU memory cost. Compared with discrete graph diffusion models, our model directly diffuses each node of the graph in a structure-preserving manner based on the latent diffusion model, so the space complexity is much lower than that of directly diffusing discrete and sparse structural information (e.g., adjacency/Laplacian matrices). The per-dataset performance is reported in Appendix K. Visualization. We compare the contributions of two diffusion generation models, HypDiff and GDSS, to graph generation tasks by visualizing networks generated by five well-accepted graph theoretical models. We discuss and show the visualization as Figure C.3 in Appendix J.3. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03251v1.json b/abs_9K/test_abstract_short_2405.03251v1.json new file mode 100644 index 0000000000000000000000000000000000000000..769d6d676f68823918eecd135e3c836c4d029357 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03251v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03251v1", + "title": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond", + "abstract": "The softmax activation function plays a crucial role in the success of large\nlanguage models (LLMs), particularly in the self-attention mechanism of the\nwidely adopted Transformer architecture. However, the underlying learning\ndynamics that contribute to the effectiveness of softmax remain largely\nunexplored. As a step towards better understanding, this paper provides a\ntheoretical study of the optimization and generalization properties of\ntwo-layer softmax neural networks, providing theoretical insights into their\nsuperior performance as other activation functions, such as ReLU and\nexponential.
Leveraging the Neural Tangent Kernel (NTK) framework, our analysis\nreveals that the normalization effect of the softmax function leads to a good\nperturbation property of the induced NTK matrix, resulting in a good convex\nregion of the loss landscape. Consequently, softmax neural networks can learn\nthe target function in the over-parametrization regime. To demonstrate the\nbroad applicability of our theoretical findings, we apply them to the task of\nlearning score estimation functions in diffusion models, a promising approach\nfor generative modeling. Our analysis shows that gradient-based algorithms can\nlearn the score function with a provable accuracy. Our work provides a deeper\nunderstanding of the effectiveness of softmax neural networks and their\npotential in various domains, paving the way for further advancements in\nnatural language processing and beyond.", + "authors": "Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The softmax activation function plays a crucial role in the success of large\nlanguage models (LLMs), particularly in the self-attention mechanism of the\nwidely adopted Transformer architecture. However, the underlying learning\ndynamics that contribute to the effectiveness of softmax remain largely\nunexplored. As a step towards better understanding, this paper provides a\ntheoretical study of the optimization and generalization properties of\ntwo-layer softmax neural networks, providing theoretical insights into their\nsuperior performance as other activation functions, such as ReLU and\nexponential. 
Leveraging the Neural Tangent Kernel (NTK) framework, our analysis\nreveals that the normalization effect of the softmax function leads to a good\nperturbation property of the induced NTK matrix, resulting in a good convex\nregion of the loss landscape. Consequently, softmax neural networks can learn\nthe target function in the over-parametrization regime. To demonstrate the\nbroad applicability of our theoretical findings, we apply them to the task of\nlearning score estimation functions in diffusion models, a promising approach\nfor generative modeling. Our analysis shows that gradient-based algorithms can\nlearn the score function with a provable accuracy. Our work provides a deeper\nunderstanding of the effectiveness of softmax neural networks and their\npotential in various domains, paving the way for further advancements in\nnatural language processing and beyond.", + "main_content": "Contents: 1 Introduction; 2 Related Work; 3 Preliminary (3.1 Neural Tangent Kernel); 4 Main Results; 5 Proof Sketch; 6 Application in Diffusion (6.1 Preliminary of Diffusion; 6.2 Main Result of Diffusion); 7 Discussion and Future Work; 8 Conclusion; A Definition; B Basic Concentration (B.1 Some Concentration Basic Tools; B.2 Kernel Perturbation); C Induction (C.1 Main Result; C.2 Induction Part 1. For Weights; C.3 Induction Part 2. For Loss; C.4 Induction Part 3. For Gradient); D Induction Part 1: For Weights (D.1 Bounding the Gradient at any Time; D.2 Bounding the Initialization Loss); E Induction Part 2: For Loss (E.1 Decomposition for ‖vec(F(τ + 1) − Y)‖_2^2; E.2 Choice of Parameters; E.3 Bounding C0; E.4 Bounding C1; E.5 Bounding C2; E.6 Bounding ‖F(τ + 1) − F(τ)‖_F^2); F NTK Regression (F.1 Equivalence between Trained Net and Kernel Regression); G Diffusion (G.1 Main Result of Diffusion; G.2 Tools From Previous Works). 1 Introduction Large Language Models (LLMs) like GPT4 [AAA+23] from OpenAI and Claude 3 [Ant24] from Anthropic have widely and profoundly changed the world. Some researchers believe they split human history into two parts: the Pre-LLM Era and the LLM Era. LLMs have been widely used in human activities such as education [KSK+23], law [Sun23], finance [LWDC23], bio-informatics [TTE+23], coding [HZL+24], and even top AI conference reviews such as ICML, ICLR, and NeurIPS [LIZ+24].
One of the cores of LLMs' success is the Transformer architecture [VSP+17], which has many advantages, including parallelized inference that is faster than the sequential inference of RNNs [HS97], and easy scaling of model capacity to support the scaling laws of neural language models [KMH+20]: since the input and output dimensions of each Transformer block are the same, we can stack an arbitrary number of layers. The core design of the Transformer block is the self-attention layer, where each block has many attention heads and each head has its own three parameter matrices for the key, query, and value operations. Many papers believe that the self-attention operation is the critical reason for emergent abilities [WTB+22], including in-context learning [OEN+22, Red24] and the compositional ability to solve complex tasks [DLS+24, LPC+24]. The Transformer has been so successful and so widely validated that this architecture has been adopted in many other modalities such as tabular data and image/video generation, e.g., the video diffusion model SORA [Ope24] from OpenAI uses a Transformer [PX23] as its backbone. When we delve into the self-attention mechanism, we find that the softmax function plays a crucial role [VSP+17]. It enables the model to focus on the most relevant information among input sequences, giving higher attention scores to the positions most relevant to the current position's representation and capturing dependencies between positions. [CLJ20] find that softmax attention is more expressive and performs better than any convolutional layer. [DSZ23] exhibits that softmax attention outperforms linear attention in most scenarios. Although softmax code is executed every second on thousands of servers, there is limited understanding of the following question: (∗) What is the learning mechanism that makes softmax so powerful?
To demystify the black box, in this paper we analyze the Gradient Descent (GD) training dynamics of two-layer Neural Networks (NNs) with the softmax activation function for multi-dimensional regression, i.e., F(W, x, a) ∈ R^d with F(W, x, a)_ℓ := m⟨a_ℓ, exp(W⊤x)⟩ · ⟨exp(W⊤x), 1_m⟩^{−1} for all ℓ ∈ {1, …, d}, where m is the number of hidden neurons, exp(·) is the element-wise exponential function, W and a_ℓ are the first- and second-layer weights respectively, and x is the input data. Note that self-attention can be written as F(W^K X, W^Q X, W^V X) ∈ R^{d×n′}, where W^K, W^Q, W^V ∈ R^{d×d} denote the key, query, and value matrices and X ∈ R^{d×n′} is a sequence of n′ tokens. Thus, studying the two-layer softmax network is a prerequisite to understanding self-attention. See more discussion in Section 7. There is a rich line of work studying the learning trajectory of two-layer NNs under the ReLU activation function ([LL18, DZPS19, AZLS19a, ADH+19a, SY19, MMM19, SYZ21, BPSW21, MOSW22, CB20, ZGJ21, LLWA21, CCBG22] and many more) or, in the latest work [GMS23], the exponential activation function. As far as we know, our work is the first to theoretically study the optimization and generalization of the two-layer softmax network, and it is a first step towards understanding the power of softmax.
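The two-layer softmax network above is straightforward to write down. A NumPy sketch of the forward pass (shapes and names are our own; this illustrates the definition, not the paper's experimental code):

```python
import numpy as np

def two_layer_softmax(W, a, x):
    """F(W, x, a) in R^d with F_l = m * <a_l, exp(W^T x)> / <exp(W^T x), 1_m>.

    W: (d0, m) first-layer weights; a: (d, m) second-layer weights
    (row l is a_l); x: (d0,) input."""
    m = W.shape[1]
    h = np.exp(W.T @ x)      # (m,) element-wise exponential of hidden pre-activations
    p = h / h.sum()          # softmax weights over hidden neurons, sum to 1
    return m * (a @ p)       # (d,) output

# Sanity check: if every entry of a is the constant v, then
# F_l = m * v * sum(p) = m * v for every l, since the softmax weights sum to 1.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(4)
F = two_layer_softmax(W, np.full((3, 8), 0.25), x)
assert np.allclose(F, 8 * 0.25)
```

The normalization by h.sum() is exactly the denominator whose perturbation property the analysis exploits.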
Here, n is the number of training samples and λ is the smallest eigenvalue of the neural tangent kernel matrix, where n > 1 and λ < 1. The two-layer NN with softmax activation requires almost the same number of neurons and training steps to converge as its ReLU and exponential counterparts. For details, see Theorem 3.6 in [MOSW22] for ReLU, Theorem 1.1 in [GMS23] for exp, and Corollary 4.3 in our paper for softmax. One popular method for analyzing over-parameterized NNs is the Neural Tangent Kernel (NTK) [JGH18]: over-parameterized networks are approximately linear models around their initialization, so network training is almost convex. To answer question (∗) above, we adopt this powerful NTK analysis paradigm. Our analysis shows that, because of the normalizing effect of the denominator, the Neural Tangent Kernel induced by the softmax has a good perturbation property (Lemma 5.1), meaning the loss landscape of the softmax network has a large convex region. Thus, the softmax NN requires almost the same number of neurons and training steps to fit the data and converge as a ReLU or exponential NN, as Table 1 illustrates (Theorem 4.2). To demonstrate the broad applicability of our theoretical findings, we apply our analysis in a practical case study on the generalization ability of softmax NNs: learning score estimation functions in diffusion models with noisy labels, a promising approach to generative modeling, which we can transfer to a multi-dimensional regression task (Theorem 6.5). We thereby show that gradient-based algorithms can learn the score function to provable accuracy. Our paper's contributions are summarized as follows: • Softmax NTK: We build the first NTK analysis framework for a two-layer NN with softmax activation function.
Furthermore, our multi-dimensional regression setting is more general than previous work [MOSW22, GMS23] (ReLU and exp) and can be degenerated to the linear regression setting. \u2022 Di\ufb00usion Models Case Study: We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. 2 Related Work Softmax and Attention in LLMs. Recently, signi\ufb01cant advances have been achieved in language modeling, particularly with the introduction of Transformer architectures and attention mechanisms [VSP+17]. Self-attention to capture long-range dependencies in text, revolutionizing the \ufb01eld of NLP, e.g., BERT [DCLT19], PaLM [CND+22], LLaMA [TLI+23], LLaMA 2 [TMS+23], ChatGPT [Ope22], GPT4 [AAA+23], Claude 3 [Ant24] and so on. Many works demonstrate the softmax is beyond other activation functions such as ReLU attention or linear attention in di\ufb00erent aspects, e.g, approximation power [DSZ23, SHT24, NLL+24, GLL+24a], prompt tuning [ORST23], in-context learning ability [GSX23, SWXL23, CPM+24, CSWY24], compositional ability[XSL24]. Many works study to generalize the softmax into high order attention [AS24b] or 4 \fto accelerate softmax computation [WLK+20, CLD+20, SZZ+21, QSD+21, AS23, BSZ24, AS24a, HJK+24, HLSL24, DSY24, SYZ24, GSY23, GSYZ23, KMZ23, GLL+24b]. Another line of work analyzes a one-layer softmax network trained on the linear regression task [LSX+23, DLMS23, DLS23, CSY24, GSWY23, SCWZ24], while our work studies a two-layer softmax setting. Neural Tangent Kernel. Recently many studies show that the analysis of optimization and generalization for deep learning should be interwoven together. One line of work uses the \ufb01rst-order Tyler expansion to study su\ufb03ciently over-parameterized neural networks around its initialization like NTK, e.g. 
[MRH+18, ZCZG18, JGH18, LL18, AZLS19b, ZG19, OS19, LXS+19, NXL+19, Yan19, SY19, DLL+19, AZLS19a, COB19, OFLS19, ADH+19a, CG19, JT19, AZLL19, OS20, CFW+20, ZCZG20, GSJW20, BPSW21, MZ22, MOSW22, GMS23, QSS23, QMS+23, QSY23, SY23, GQSW24, SZZ24] and more. Thus, the neural network optimization can be a convex problem. The NTK method has been widely used in di\ufb00erent scenarios, such as preprocessing analysis [SYZ21, HSWZ22, ALS+23, SCL+23, SSLL23, SSL24, GQSW24], federated learning [LSY23], LoRA adaptation [HWAZ+21, XSW+24, SMF+23] of LLMs [MWY+23], and learning score estimation functions in di\ufb00usion models [HRX24]. Di\ufb00usion Model. Score-based generative di\ufb00usion models can generate high-quality image samples comparable to GANs which requires adversarial optimization [HJA20, SSDK+21, KLL+24]. Based on the U-Net [RFB15], stable di\ufb00usion can successfully generate business-used images. Based on the softmax-based self-attention [PX23], OpenAI released a video di\ufb00usion model, SORA [Ope24], with a surprising performance. Another line of work studying how to train the di\ufb00usion models to have a better theoretical guarantee [SE19, SE20, SK21, SGSE20, SDME21, LLT22, KFL22, SDCS23, LKB+23, CLL23, CDD23, CHZW23, SCK23, YFZ+23, BDD23, GKL24, CCL+24, GLB+24, WCL+24, CKS24]. In this work, we adapt our analysis in di\ufb00usion models. 3 Preliminary We \ufb01rst introduce some notations. Then, we will introduce our problem setup. Notations. We use N(\u00b5, \u03a3) to denote the Gaussian distribution with \u00b5 and covariance \u03a3. For any positive integer n, we use [n] to denote set {1, 2, \u00b7 \u00b7 \u00b7 , n}. Let a vector z \u2208Rn. We denote the \u21132 norm as \u2225z\u22252 := (Pn i=1 z2 i )1/2, the \u21131 norm as \u2225z\u22251 := Pn i=1 |zi|, \u2225z\u22250 as the number of non-zero entries in z, \u2225z\u2225\u221eas maxi\u2208[n] |zi|. We use z\u22a4to denote the transpose of a z. 
We use \u27e8\u00b7, \u00b7\u27e9to denote the inner product. Let A \u2208Rn\u00d7d, we use vec(A) to denote a length nd vector. We denote the Frobenius norm as \u2225A\u2225F := (P i\u2208[n],j\u2208[d] A2 i,j)1/2. For a function f(x), we say f is L-Lipschitz if \u2225f(x)\u2212f(y)\u22252 \u2264L\u00b7\u2225x\u2212y\u22252. Let D denote a distribution. We use x \u223cD to denote that we sample a random variable x from distribution D. We use E[] to denote expectation and Pr[] to denote probability. We use p.s.d. to denote the positive-semide\ufb01nite matrix. As we have multiple index, to avoid confusion, we usually use i, j \u2208[n] to index the training data, \u2113\u2208[d] to index the output dimension, r \u2208[m] to index neuron number. Models. We consider a two-layer softmax neural network. The hidden layer has m neurons, and we use the softmax function as the activation function, F(W, \u00b7, a) : Rd1 \u2192Rd2 and F(W, x, a)\u2113:= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121 \u2200\u2113\u2208[d2], (1) where exp(\u00b7) is element-wise exponential function. We use m as a normalization factor. Note that we can reduce the d2 to 1 for the linear regression setting. To simplify the proof we let d1 = d2. 5 \fNote that our proof can generalize to di\ufb00erent d1, d2 easily. We only optimizing W and not both W and a simultaneously as many previous works to simplify optimization, e.g., [DZPS19, SY19, MOSW22], where x \u2208Rd represents the input, w1, \u00b7 \u00b7 \u00b7 , wm \u2208Rd are weight vectors in the \ufb01rst layer, i.e., W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m, and a1, \u00b7 \u00b7 \u00b7 , ad \u2208Rm are weights in the second layer. We can simplify the notation as F(W, x) when the context is clear. Data. We have n training data points Dn = {(xi, yi)}n i=1, where x \u2208Rd and y \u2208Rd.1 We denote X = [x1, . . . , xn] \u2208Rd\u00d7n and Y = [y1, . . . , yn] \u2208Rd\u00d7n. 
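The two-layer softmax network defined in Eq. (1) can be transcribed directly for concreteness; this is an illustrative NumPy sketch (the shift by the max is only for numerical stability and cancels in the ratio).

```python
import numpy as np

def two_layer_softmax(W, x, a):
    # F(W, x, a)_l = m * <a_l, exp(W^T x)> / <exp(W^T x), 1_m>   (Eq. (1))
    # W: (d, m) first-layer weights (columns w_1..w_m); x: (d,); a: (d, m) rows a_1..a_d.
    m = W.shape[1]
    z = W.T @ x                 # (m,) pre-activations w_r^T x
    e = np.exp(z - z.max())     # the common shift cancels in the ratio below
    S = e / e.sum()             # softmax over the m hidden neurons
    return m * (a @ S)          # F_l = m * <a_l, S>

rng = np.random.default_rng(0)
d, m = 3, 6
W = rng.normal(size=(d, m))
x = rng.normal(size=d)
a = rng.choice([-1.0, 1.0], size=(d, m))
F = two_layer_softmax(W, x, a)
```

Note that, unlike a ReLU or exp network, the activations S here are coupled across neurons through the shared denominator; this coupling is exactly what the NTK perturbation analysis below has to handle.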
We assume that \u2225xi\u22252 \u22641 and \u2225yi\u22252 \u22641, \u2200i \u2208[n]. We have the softmax function S \u2208Rm\u00d7n, where Si \u2208Rm denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W \u22a4xi) and Si,r \u2208R denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n]. For simplicity, we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9, expi as exp(W \u22a4xi) and expi,r as exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n], when the context is clear. Gradient Descent. We use er to denote a vector where the r-th coordinate is 1 and everywhere else is 0. \u2200r \u2208[m], \u2200\u2113\u2208[d], we have \u2202F (W,x,a)\u2113 \u2202wr \u2208Rd can be written as \u2202F(W, x, a)\u2113 \u2202wr = + m\u27e8a\u2113\u25e6er, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121x \u2212m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22122 \u00b7 \u27e8exp(W \u22a4x), er \u25e61m\u27e9x = + m\u27e8a\u2113\u25e6er, S\u27e9\u00b7 x \u2212m\u27e8a\u2113, S\u27e9\u00b7 \u27e8S, er \u25e61m\u27e9x. (2) We use W(\u03c4) to denote the weights of the \ufb01rst layer on the timestamp \u03c4 and similar for S(\u03c4) and F(\u03c4) when the context is clear. Now, we introduce some necessary de\ufb01nition used. De\ufb01nition 3.1 (F(\u03c4), dynamic prediction). We de\ufb01ne Fi(\u03c4) \u2208Rd, for any timestamp \u03c4, as F\u2113,i(\u03c4) := m\u27e8a\u2113, exp(W(\u03c4)\u22a4xi)\u27e9\u00b7 \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121. Here xi \u2208Rd. It can be rewritten as F\u2113,i(\u03c4) = m\u27e8a\u2113, Si(\u03c4)\u27e9. We consider d-dimensional MSE loss. De\ufb01nition 3.2 (Loss function over time). We de\ufb01ne the objective function L as below: L(W(\u03c4)) := 1 2 X i\u2208[n] X \u2113\u2208[d] (F\u2113,i(\u03c4) \u2212y\u2113,i)2. Thus, we de\ufb01ne the gradient of w. De\ufb01nition 3.3 (\u2206wr(\u03c4)). 
For any r \u2208[m], we de\ufb01ne \u2206wr(\u03c4) \u2208Rd as below: \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113\u25e6er, Si(\u03c4)\u27e9\u2212\u27e8a\u2113, Si(\u03c4)\u27e9\u00b7 \u27e8Si(\u03c4), er \u25e61m\u27e9 \u0011 \u00b7 xi where Si(\u03c4) = \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W(\u03c4)\u22a4xi) \u2208Rm. Note that we can simplify the gradient calculation by the fact 1 = \u27e81m, Si(\u03c4)\u27e9. Thus, we have the following claim. Claim 3.4. \u2206wr(\u03c4) := m Pn i=1 Pd \u2113=1(F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi. 1Our analysis can extend to xi \u2208Rd1 and yi \u2208Rd2 easily. 6 \fWe use the gradient descent (GD) algorithm with the learning rate \u03b7 to train the network. As we only train the hidden layer W and \ufb01x a, we have the following gradient update rule. De\ufb01nition 3.5 (Gradient descent). The gradient descent algorithm for optimizing the weight matrix W is de\ufb01ned as: W(\u03c4 + 1) = W(\u03c4) \u2212\u03b7\u2206W(\u03c4). where \u2206W(\u03c4) \u2208Rd\u00d7m and \u2206wr(\u03c4) \u2208Rd is the r-th column of \u2206W(\u03c4) de\ufb01ned in De\ufb01nition 3.3. 3.1 Neural Tangent Kernel Now, we are ready to introduce our key tools, Neural Tangent Kernel induced by the softmax. We de\ufb01ne the kernel with respect to timestamp \u03c4. De\ufb01nition 3.6 (Kernel function). For simplicity, we denote S(W \u22a4xi) as Si \u2208Rm \u22650 and v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm. We de\ufb01ne the function (Gram matrix) H : Rd\u00d7m \u2192Rnd\u00d7nd as following H(W) := \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 H1,1 H1,2 \u00b7 \u00b7 \u00b7 H1,d H2,1 H2,2 \u00b7 \u00b7 \u00b7 H2,d . . . . . . ... . . . 
Hd,1 Hd,2 \u00b7 \u00b7 \u00b7 Hd,d \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb, and for each \u21131, \u21132 \u2208[d], we have H\u21131,\u21132 \u2208Rn\u00d7n is de\ufb01ned as [H\u21131,\u21132]i,j(W) := 1 mx\u22a4 i xj m X r=1 \u27e8v\u21131,r, Si\u27e9\u00b7 mSi,r \u00b7 \u27e8v\u21132,r, Sj\u27e9\u00b7 mSj,r. For any timestamp \u03c4, for simplicity, we denote H(\u03c4) := H(W(\u03c4)) and denote H(0) as H\u2217. Note that H\u2217is a positive semi-de\ufb01nite matrix, and we denote its minimum eigenvalue as \u03bb := \u03bbmin(H\u2217). Initialization. We use symmetric initialization, which is widely used in previous works [DM20, DLS22, MOSW22, SWL22, SWL24]. De\ufb01nition 3.7 (Symmetric initialization). For each r \u2208[m/2], we initialize weights as below \u2022 We draw w2r\u22121 from N(0, \u03c32Id) and uniformly draw a2r\u22121 from {\u22121, +1}d. \u2022 We assign a2r = \u2212a2r\u22121 and w2r\u22121 = w2r. Due to symmetric initialization, we can easily see that F(W(0), x) = 0, \u2200x \u2208Rd. 4 Main Results We \ufb01rst de\ufb01ne a constant we used. De\ufb01nition 4.1. Let C > 10 denote a su\ufb03ciently large constant. We de\ufb01ne parameter B as follows B := max{C\u03c3 p log(nd/\u03b4), 1}. Now, we are ready to present our main result, whose complete proof is in Appendix C.1. 7 \fTheorem 4.2 (Main result). Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)), \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)). For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. If we \ufb01x \u03b4 and \u03c3 in B de\ufb01ned in the De\ufb01nition 4.1, since exp(\u0398(B)) = (nd)o(1), we can simplify the m = \u2126(\u03bb\u22122(nd)2+o(1)) and b T = \u2126(\u03bb\u22122(nd)2+o(1)). 
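The consequence of symmetric initialization noted above, F(W(0), x) = 0, is easy to check numerically: paired neurons share the same softmax value but carry opposite second-layer signs, so their contributions cancel. A small illustrative sketch (all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, sigma = 4, 10, 0.5        # m must be even so neurons can be paired

# Symmetric initialization (Definition 3.7): w_{2r} = w_{2r-1}, a_{2r} = -a_{2r-1}.
W = np.empty((d, m))
a = np.empty((d, m))
for r in range(m // 2):
    w = rng.normal(scale=sigma, size=d)
    s = rng.choice([-1.0, 1.0], size=d)
    W[:, 2 * r] = W[:, 2 * r + 1] = w
    a[:, 2 * r], a[:, 2 * r + 1] = s, -s

def forward(W, x, a):
    z = W.T @ x
    e = np.exp(z - z.max())
    return W.shape[1] * (a @ (e / e.sum()))

# Paired neurons have identical softmax values but opposite signs in a,
# so every contribution cancels and F(W(0), x) = 0 for any input x.
x = rng.normal(size=d)
F0 = forward(W, x, a)
```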
The Theorem 4.2 means that as we have poly(nd) number of neurons and training steps, the softmax NN can \ufb01t any training datasets with n number of d-dim training samples on d-dim regression task. Corollary 4.3. Consider the 1-dimension linear regression setting, i.e., d1 = d and d2 = 1. Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2 exp(18B) log2(n/\u03b4)), \u03b7 = 0.1\u03bb/(mn2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(n/\u01eb)) = \u2126(\u03bb\u22122n2 exp(16B) \u00b7 log(n/\u01eb)). For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T ) \u2212Y \u22252 2 \u2264\u01eb. Proof. Directly follow Theorem 4.2. As shown in Table 1, our two-layer softmax network needs the same number of training steps b T and number of neurons m as two-layer ReLU networks or two-layer exponential networks. 5 Proof Sketch We \ufb01rst show a key Lemma below, showing that the weight w perturbation will not change the Neural Tangent Kernel too much. Lemma 5.1 (Weight value perturbation \u21d2kernel value perturbation). Let R \u2208(0, 0.01). If the following conditions hold \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd exp(10B). Please see Appendix B.2 for the proof of Lemma 5.1. We can see that the kernel matrix has a small perturbation when the weights w perturb. Note that in Lemma 4.2 [MOSW22], they have \u2225H(W)\u2212H(f W )\u2225F \u22642Rn for the ReLU activation function and in Lemma 6.7 [GMS23], they have \u2225H(W)\u2212H(f W)\u2225F \u22643Rn1+o(1) for the exp activation function. 
When we consider the 1-dimension linear regression task, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rn1+o(1), which is almost the same as the other two cases. Remark 5.2. In the proof of Lemma B.2, we do not use concentration bound as previous work [SY19, MOSW22, GMS23]. The reason is that we consider the worst case. In general, E[H(W)\u2212H(f W)] \u0338= 0nd\u00d7nd. Thus, using the concentration bound may not gain any bene\ufb01ts. 8 \fBased on Lemma 5.1, we can use math induction to \ufb01nish the proof of our main Theorem. We show the induction statement below. Lemma 5.3 (Induction). Let \u03c4 be a \ufb01xed integer. Assume the same condition as Theorem 4.2. Let D be de\ufb01ned as De\ufb01nition A.2 and D < R. If the following conditions hold \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R, \u2200i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 for all r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2022 Weights Induction. \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. \u2022 Loss Induction. \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . \u2022 Gradient Induction. \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m]. Please refer to Appendix C.2, Appendix C.3 and Appendix C.4 for the proof of weights, loss, gradient induction in Lemma 5.3 respectively. Lemma 5.3 means that, at a \ufb01xed timestamp \u03c4, if the weights w(\u03c4) is close to its initialization, the loss is decreasing and the gradient is also small, then we can conclude at timestamp \u03c4 + 1, these conditions still hold as local convexity proved by Lemma 5.1. Thus, after checking the initial condition, we can conclude Theorem 4.2. 
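To make the convergence argument concrete, one can implement the gradient of Claim 3.4 and run the gradient descent update of Definition 3.5 on a toy problem; on a tiny random dataset the loss decreases as the induction in Lemma 5.3 predicts. The sketch below uses arbitrary toy sizes and a conservative learning rate, and is an illustration rather than the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, eta = 4, 3, 64, 1e-5

X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)   # enforce ||x_i||_2 <= 1
Y = rng.normal(size=(d, n)); Y /= np.linalg.norm(Y, axis=0)   # enforce ||y_i||_2 <= 1
a = rng.choice([-1.0, 1.0], size=(d, m))                      # second layer fixed, as in the paper

def softmax_cols(Z):
    E = np.exp(Z - Z.max(axis=0, keepdims=True))
    return E / E.sum(axis=0, keepdims=True)

def loss_and_grad(W):
    S = softmax_cols(W.T @ X)                  # (m, n); column i is S_i
    F = m * (a @ S)                            # F_{l,i} = m <a_l, S_i>  (Definition 3.1)
    R = F - Y                                  # residuals
    L = 0.5 * (R ** 2).sum()                   # Definition 3.2
    # Claim 3.4, vectorized: Delta w_r = m sum_i S_{i,r} (sum_l R_{l,i}(a_{l,r} - <a_l, S_i>)) x_i
    inner = a.T @ R - (R * (a @ S)).sum(axis=0)
    return L, X @ (m * S * inner).T            # Delta W, shape (d, m)

W = rng.normal(scale=0.5, size=(d, m))
losses = []
for _ in range(300):
    L, dW = loss_and_grad(W)
    losses.append(L)
    W -= eta * dW                              # GD update (Definition 3.5)
```

A finite-difference check against `loss_and_grad` is a quick way to confirm the vectorization of Claim 3.4 is faithful.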
6 Application in Di\ufb00usion Now, we apply our results in learning score estimation functions in di\ufb00usion models with noisy labels. We introduce problem setup in Section 6.1 and show our results in Section 6.2. 6.1 Preliminary of Di\ufb00usion In this section, we brie\ufb02y introduce the di\ufb00usion model proposed in [SSDK+21]. Forward Process. During the forward process, we progressively inject the noise into the original data distribution, which can be characterized by the following Stochastic Di\ufb00erential Equation (SDE) [SE20, HJA20]: dx(t) = \u22121 2g(t)x(t) dt + p g(t)dBt, x(0) \u223cp0, (3) where x(t) is the data at the di\ufb00usion process time t, g(t) > 0 is a deterministic weighting function; and (Bt)t\u22650 is a standard d-dimensional Brownian motion/noise. The p0 represents the original/target data distribution that we learn, and we only have few number of accesses to it, i.e., n times. We denote pt as the distribution of x(t) at di\ufb00usion process time t. Then, we can write the explicit solution to Eq. (3) as x(t) = e\u2212 R t 0 1 2g(s)dsx(0) + e\u2212 R t 0 1 2g(s)ds Z t 0 e R s 0 1 2g(u)dup g(s)dBs. 9 \fBackward Process. We denote y(t) = x(T \u2212t) to reverse the forward process in time [HP86, F\u00a8 ol05, CCGL21] that transforms noise into samples from the target distribution. We have a backward process associated to Eq. (3) as: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)\u2207log pT\u2212t(y(t)))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cq0. (4) where ( \u00af Bt)t\u22650 is another d-dim Brownian motion/noise. Following the literature, we call \u2207log pt(\u00b7) as \u201cscore function\u201d [SSDK+21]. We have q0 is the initial distribution of the backward process and the score function \u2207log pt(\u00b7) as the gradient of log density of x(t). However, In practice, Eq.(4) cannot be directly used as both the score function and the distribution pT are unknown. 
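The forward SDE in Eq. (3) can be simulated with a simple Euler-Maruyama discretization; with g(t) = 1 it is an Ornstein-Uhlenbeck process whose stationary law is the standard Gaussian, which is exactly the behavior the backward process inverts. This is a hedged sketch: the horizon, step size, and choice of g are arbitrary.

```python
import numpy as np

def forward_sde(x0, g, T=8.0, n_steps=800, rng=None):
    # Euler-Maruyama discretization of dx = -0.5 g(t) x dt + sqrt(g(t)) dB_t  (Eq. (3)).
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    x = x0.astype(float).copy()
    for k in range(n_steps):
        t = k * dt
        x += -0.5 * g(t) * x * dt + np.sqrt(g(t) * dt) * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=20000)      # a bounded stand-in for the data distribution p_0
xT = forward_sde(x0, g=lambda t: 1.0, rng=rng)
# For g = 1 this is an Ornstein-Uhlenbeck process, so x(T) approaches N(0, 1)
# regardless of p_0, which is why pure noise can initialize the backward process.
```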
To solve the problem, we (1) randomly select a noise distribution as the initial distribution of the backward process pT ; (2) replace the ground-truth score function \u2207log pt(x(t)) by an estimator s\u03b8(x(t), t). The parameterized estimator s\u03b8 is learned by a neural network such as U-Net [HJA20, RBL+22] and Transformer [PX23]. Thus, we obtain a practically implementable approximation of the backward SDE: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)s\u03b8(y(t), t))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cN(0, Id), which can be used for sampling/data generation [SE20, CHZW23, CCL+23] Score Matching. When estimate the score function, usually we use L2 loss between the estimated and actual score: min \u03b8 1 T Z T 0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt(x(t))\u22252 2]dt, (5) where \u03bb(t) is the weighting function that captures time inhomogeneity. As the hardness of estimate \u2207log pt term in Eq. (5), equivalently, we minimize the following denoising score matching [Vin11]: min \u03b8 1 T \u2212T0 Z T T0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt|0(x(t) | x(0))\u22252 2]dt. (6) In practice, the estimator of the score function is parameterized by a neural network and we have the following sampling procedure for any i \u2208[n], x(0)i \u223cp0, ti \u223cUnif(0, T), x(ti)i \u223cpti|0(\u00b7|x(0)i), and we get the training dataset {x(0)i, (ti, x(ti)i)}n i=1, where x(0)i \u2208Rd and (ti, x(ti)i) \u2208Rd+1. We denote x(0) as the noisy label and E[x(0)|x(t)] as the true label. For simplicity, we denote x(0)i as yi \u2208Rd and (ti, x(ti)i) as xi \u2208Rd+1 and the training dataset as Dn = {(xi, yi)}n i=1. Here, y denotes the image from a dataset and x denotes the noised image with its di\ufb00usion process time t. Neural Network Parameterization. Recall that we consider a two-layer network with softmax activation function as the di\ufb00usion model in Eq. 
(1), satisfying \u2200\u2113\u2208[d], F(W, x, a)\u2113= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121. Note that, we do not train the top-layer weights a, so we can denote it as Fnn(W, x). Then, similar as [HJA20, HRX24], our loss function Eq. (6) can be rewrite as min W L(W) := 1 2 N X j=1 \u2225Fnn(W, xj) \u2212yj\u22252 2. We denote the target function as F\u2217(t, x(t)) := E[y | (t, x(t))]. Let H be the reproducing Hilbert space (RKHS) induced by the NTK [CDVTU10, JGH18] and let FH in the RKHS H such that \u2225FH\u22252 H \u2264RH. 10 \f6.2 Main Result of Di\ufb00usion We \ufb01rst introduce some natural assumptions we used. Assumption 6.1. Based on normalization, we assume \u2225yi\u22252 \u22641, \u2225xi\u22252 \u22641, \u2200i \u2208[n]. Assumption 6.2. Assume \u03bb = \u03bbmin(H\u2217) > 0. Assumption 6.3. The function g is almost everywhere continuous and bounded on [0, \u221e). Assumption 6.4. For all (t, x(t)) \u2208(0, \u221e) \u00d7 Rd, the function F\u2217(t, x(t)) is \u03b2x-Lipschitz in x, i.e., \u2225F\u2217(t, x(t)) \u2212F\u2217(t, x\u2032(t))\u22252 \u2264\u03b2x\u2225x(t) \u2212x\u2032(t)\u22252. We denote A(RH) := c1\u039b( \u221aRH \u039b )\u22122 d log( \u221aRH \u039b ) and \u039b = O( \u221a d) and \u0393\u03b4 := 2d2A(RH) \u03bb log3/2(e(dn)3/2A(RH) \u03bb ) + 1 \u221an !2 + d2A2(RH) \u03bb2 (log(1/\u03b4) + log(log n)). Now, we are ready to present our main Theorem for di\ufb00usion. Theorem 6.5 (Main results of score estimation and generalization). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). 
Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Please refer to Appendix G.1 for the complete proof. Here we provide a proof sketch. Proof sketch of Theorem 6.5. In Theorem F.2, we show the \u201cequivalence\u201d between softmax NN learning and corresponding neural tangent kernel regression, i.e., the gap between them is always small. Then, we can borrow the generalization ability of kernel regression to the generalization ability of two-layer softmax NN. On the other hand, by Claim G.1, we can decompose the loss into a coupling gap, a label mismatch gap, an early stopping gap, and an approximation gap. By using our Theorem 4.2, Theorem F.2 with some tools from [HRX24], we \ufb01nish the proof. From Theorem 6.5, we know that, under some natural assumptions, the GD algorithm trained two-layer softmax NN can learn a provable accuracy on the score estimation functions in the di\ufb00usion model with noisy labels. We use this practical case study to demonstrate the broad applicability of our theoretical \ufb01ndings. 7 Discussion and Future Work Self-attention Learning. The self-attention can be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, (7) where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix respectively and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. As our work is a \ufb01rst step to understanding softmax, it is natural to consider 11 \fhow to extend our results to self-attention. It is well-known that using two reformulation tricks: tensor-trick and SVM-trick [GSWY23, GSX23, AS24a], any analysis for softmax function can be naturally generalized to attention function F(W KX, W QX, W V X). 
Therefore, we conjecture that we can borrow the idea from [GSWY23, GSX23, AS24a] to decouple Eq (7) into the value term and the softmax term. And, we can alternatively optimize the weights for the softmax term (W k, W Q) and the value term (W V ). We leave this valuable direction as a future work. Feature Learning. Recently, there is a line of work showing that feature learning may be beyond NTK on sample complexity or time complexity, e.g., [AZL19, WLLM19, HN19, AZLL19, DM20, CBL+20, YH20, HY20, LMZ20, GMMM20, RGKZ21, MKAS21, LXMZ21, DLS22, SWL22, SWL24] and many more. It is worth studying the feature learning ability of two-layer softmax NN to \ufb01gure out what feature pattern the softmax prefers to learn and how it happens. We leave this valuable direction as a future work. 8" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03280v1.json b/abs_9K/test_abstract_short_2405.03280v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2650f038865507d51326c4794dfd275fb578a7ea --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03280v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03280v1", + "title": "Animate Your Thoughts: Decoupled Reconstruction of Dynamic Natural Vision from Slow Brain Activity", + "abstract": "Reconstructing human dynamic vision from brain activity is a challenging task\nwith great scientific significance. The difficulty stems from two primary\nissues: (1) vision-processing mechanisms in the brain are highly intricate and\nnot fully revealed, making it challenging to directly learn a mapping between\nfMRI and video; (2) the temporal resolution of fMRI is significantly lower than\nthat of natural videos. To overcome these issues, this paper propose a\ntwo-stage model named Mind-Animator, which achieves state-of-the-art\nperformance on three public datasets. 
Specifically, during the fMRI-to-feature\nstage, we decouple semantic, structural, and motion features from fMRI through\nfMRI-vision-language tri-modal contrastive learning and sparse causal\nattention. In the feature-to-video stage, these features are merged to videos\nby an inflated Stable Diffusion. We substantiate that the reconstructed video\ndynamics are indeed derived from fMRI, rather than hallucinations of the\ngenerative model, through permutation tests. Additionally, the visualization of\nvoxel-wise and ROI-wise importance maps confirms the neurobiological\ninterpretability of our model.", + "authors": "Yizhuo Lu, Changde Du, Chong Wang, Xuanliu Zhu, Liuyun Jiang, Huiguang He", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Reconstructing human dynamic vision from brain activity is a challenging task\nwith great scientific significance. The difficulty stems from two primary\nissues: (1) vision-processing mechanisms in the brain are highly intricate and\nnot fully revealed, making it challenging to directly learn a mapping between\nfMRI and video; (2) the temporal resolution of fMRI is significantly lower than\nthat of natural videos. To overcome these issues, this paper propose a\ntwo-stage model named Mind-Animator, which achieves state-of-the-art\nperformance on three public datasets. Specifically, during the fMRI-to-feature\nstage, we decouple semantic, structural, and motion features from fMRI through\nfMRI-vision-language tri-modal contrastive learning and sparse causal\nattention. In the feature-to-video stage, these features are merged to videos\nby an inflated Stable Diffusion. We substantiate that the reconstructed video\ndynamics are indeed derived from fMRI, rather than hallucinations of the\ngenerative model, through permutation tests. 
Additionally, the visualization of\nvoxel-wise and ROI-wise importance maps confirms the neurobiological\ninterpretability of our model.", + "main_content": "Introduction Researchers in computational neuroscience and the field of artificial intelligence have long sought to decipher and simulate the brain\u2019s visual information processing mechanisms to advance the development of brain-inspired models [1\u20133]. In recent years, functional magnetic resonance imaging (fMRI) has emerged as a reliable tool for measuring brain activity due to its high spatial resolution as a non-invasive brain signal recording technique [4]. fMRI-based neural decoders, which map brain signals to visual stimuli, facilitate a deeper understanding of the human visual perception system. Neural decoding can be categorized into classification, identification, and reconstruction, with this study focusing on the most challenging aspect: reconstruction. Prior research has made significant strides in the classification [3, 5\u20138] and identification [4, 9\u201311] of static stimulus images, and remarkably, some researchers have advanced to the point where they can reconstruct [12\u201322] images from brain signals that closely resemble the original stimulus images. \u2217Equal contributions \u2020Huiguang He is the corresponding author. Preprint. Under review. arXiv:2405.03280v1 [cs.CV] 6 May 2024 \fIn reality, the majority of visual stimuli we encounter in daily life are continuous and dynamic. As depicted in Figure 1, when a subject views dynamic scenes, the primary visual cortex firstly processes low-level structural information like location, shape, size, and color [23], leading to the preliminary recognition of a black silhouette at the edge of a yellow background. Subsequently, motion information of the object is perceived [24], noting that the silhouette is moving from right to left. 
Lastly, in the higher visual cortex, the interpretation of category and description gives rise to high-level semantic understanding [25]: the scene is comprehended as a soldier walking from right to left in a desert. (Figure 1 panel annotations: What is this scenario? (high-level semantic information): a soldier is walking in a desert. Where is the object, and what are its size, color, and shape? (low-level structure information): a black figure on the edge of a yellow background. How do the objects in the scene move? (motion information): the figure moves from right to left.) Figure 1: The human brain's comprehension of dynamic visual scenes. When receiving dynamic visual information, the human brain gradually comprehends low-level structural details such as position, shape, and color in the primary visual cortex, discerns motion information, and ultimately constructs high-level semantic information in the higher visual cortex, such as an overall description of the scene. Due to the inherent nature of fMRI, which relies on the slow blood oxygenation level dependent (BOLD) [26, 27] signal, the sampling frequency is restricted to around 0.5Hz. This is notably lower than the typical 30Hz frame rate of most videos, so a significant discrepancy exists between the temporal resolution of fMRI and that of natural video: each fMRI sample integrates information from approximately 60 video frames. This disparity makes reconstructing video from fMRI signals an exceedingly complex challenge. To address it, Nishimoto [28] transforms the video reconstruction task into an identification problem, employing the Motion-Energy model [29] and Bayesian inference to reconstruct videos from a predefined video library. Subsequently, Han [30] and Wen [31] et al. map brain responses to the feature spaces of deep neural networks (DNNs) to reconstruct down-sampled (frame rate reduced to 1Hz) video stimuli. Wang et al. [32]
develop an f-CVGAN that learns temporal and spatial information in fMRI through separate discriminators [33]. To mitigate the scarcity of video-fMRI data, Kupershmidt [34] et al. utilize self-supervised learning [35] to incorporate a large amount of unpaired video data. These efforts have validated the feasibility of video reconstruction from fMRI, albeit with a lack of explicit semantic information in the results. Chen [36] et al. utilize contrastive learning to map fMRI to the Contrastive Language-Image Pre-Training (CLIP) [37] representation space and fine-tune an inflated Stable Diffusion [38, 39] on a video-text dataset as a video generation model, successfully reconstructing coherent videos with clear semantic information for the first time. However, this work does not consider structure information such as color and position, and it is uncertain whether the motion information in the reconstructed videos originates from the fMRI or the video generation model.
Method | Semantic | Structure | Motion | Frame rate | Resolution
Nishimoto [28] (Current Biology 2011) | \u00d7 | \u00d7 | \u2713 | \u2014\u2014 | \u2014\u2014
Wen [31] (Cerebral Cortex 2017) | \u00d7 | \u2713 | \u00d7 | \u2014\u2014 | 64x64
Han [30] (NeuroImage 2019) | \u00d7 | \u2713 | \u00d7 | \u2014\u2014 | 128x128
Kupershmidt [34] | \u00d7 | \u2713 | (\u2713) | 4Hz | 112x112
Wang [32] (Cerebral Cortex 2022) | \u00d7 | \u2713 | \u2713 | 4Hz | 64x64
Chen [36] (NeurIPS 2023 Oral) | \u2713 | \u00d7 | (\u2713) | 3Hz | 256x256
Ours | \u2713 | \u2713 | \u2713 | 4Hz | 512x512
Table 1: Comparison of modal information used in Mind-Animator and related works. Parentheses indicate the utilization of external video data in the decoding of this feature.
In summary, current video reconstruction models face two challenges: (1) As shown in Table 1, they fail to simultaneously capture semantic, structure, and motion information within the reconstructed videos. Moreover, the resolution of the reconstructed videos is low.
(2) The reliance on external video datasets and video generation models introduces uncertainty regarding the true source of motion information, leading to the possibility that the reconstructed videos may represent a \"hallucination\" of the video generation model rather than an accurate dynamic decoding from the fMRI data. To address the aforementioned issues, we introduce Mind-Animator, a video reconstruction model that decouples semantic, structure, and motion information from fMRI, as illustrated in Figure 2. Specifically, we map fMRI to the CLIP representation space and the Vector Quantized-Variational Autoencoder (VQ-VAE) [40] latent space to capture semantic and structure information. We design a Transformer-based [41] motion decoder to extract motion information frame by frame from fMRI through a next-frame-prediction task. Finally, the decoded semantic, structure, and motion information is fed into an inflated Stable Diffusion [38, 39] without any fine-tuning to generate each frame of the video, ensuring that all information is derived solely from the fMRI data. The contributions are summarized as follows: (1) We propose Mind-Animator, which for the first time successfully decouples semantic, structure, and motion information from fMRI to enable video reconstruction. To extract the motion and spatial information from fMRI, we propose temporal and spatial attention modules respectively, which decode subtle but significant motion information. (2) We validate through a permutation test that the motion information in our reconstructed videos indeed originates from the fMRI, rather than being a \"hallucination\" generated by the video generation model. (3) We introduce seven evaluation metrics that comprehensively assess the reconstruction results of our model and all previous models across three dimensions\u2014semantic, structure, and spatiotemporal consistency\u2014on three publicly available high-quality video-fMRI datasets.
Our model achieves state-of-the-art (SOTA) performance in five of these metrics and secures second place in the remaining two, with a notable 76% improvement in Structural Similarity Index (SSIM) over the previous SOTA. This establishes our work as the first comprehensive and unbiased benchmark for subsequent researchers. The code and data have been anonymously released at: https://github.com/Zuskd/Mind-Animator. 2 Methodology 2.1 Overview Figure 2 presents the overall architecture of the proposed Mind-Animator, a video reconstruction model based on fMRI. The model consists of two stages: fMRI-to-feature and feature-to-video. In the fMRI-to-feature stage, as depicted in Figure 1, we begin by emulating the human visual system\u2019s approach to interpreting dynamic visual stimuli. This process involves the decomposition of video stimuli into high-level semantic feature, low-level structural feature, and motion feature. Then three separate decoders are trained to decode these features from fMRI: (a) for decoding semantic feature, we employ a contrastive learning loss to map fMRI into the visual-linguistic embedding space of CLIP[37], (b) we utilize the frame token extracted by VQ-VAE[40] as the video\u2019s structural feature[42], followed by a simple Multi-Layer Perceptron (MLP) to fit it, and (c) we design a Transformer-based Consistency Motion Generator for decoding motion information. After training with a next-frame-prediction task, this module sequentially generates each subsequent frame token based on the first frame token decoded in section (b). In the feature-to-video stage, depicted in Figure 2 (d), the decoded features are input into an inflated Text-to-Image (T2I) model, facilitating the reconstruction of the stimulus video without the interference of external training videos. 2.2 Problem Statement We aim to decode videos from brain activity recorded with fMRI when healthy participants watch a sequence of natural videos. 
Let X and Y denote the voxel space and pixel space, respectively. Let X_i \u2208 R^{1\u00d7n} be the fMRI signal when a video V_{i,j} \u2208 R^{1\u00d73\u00d7512\u00d7512} is presented to the participant, where n is the number of fMRI voxels, j is the frame ID of video i and i \u2208 [1, N], j \u2208 [1, 8], with N the total number of videos. Figure 2: The overall architecture of Mind-Animator, a two-stage video reconstruction model based on fMRI. As illustrated in subfigures (a), (b), and (c), three decoders are trained during the fMRI-to-feature stage to disentangle semantic, structural, and motion information from fMRI, respectively. Subfigure (d) demonstrates that, in the feature-to-video stage, the decoded information is input into an inflated Text-to-Image (T2I) model for video reconstruction. Let Z(k) denote the feature space, k \u2208 {semantic, structure, motion}.
The goal of the fMRI-to-feature stage is to train decoders D(k) : X \u2192 Z(k), and the goal of the feature-to-video stage is to construct a video generation model G : Z(semantic) \u00d7 Z(structure) \u00d7 Z(motion) \u2192 Y, without introducing motion information from external video data. 2.3 fMRI-to-feature Stage Semantic Decoder Due to the low signal-to-noise ratio of the fMRI signal X_i and the substantial dimension discrepancy with the text condition c_i \u2208 R^{1\u00d720\u00d7768}, directly learning a mapping between them is prone to overfitting. Considering the robust semantic information embedded in the latent space of CLIP[43], and given that CLIP has been shown to outperform various single-modal DNNs in explaining cortical activity[44, 45], we employ a bidirectional InfoNCE loss to align the fMRI with the latent space of CLIP (ViT-B/32) \u2208 R^{512}, followed by a two-layer MLP to map it to the text condition c_i,

L_{BiInfoNCE} = -\frac{1}{B} \sum_{i=1}^{B} \left( \log \frac{\exp(s(\hat{z}_i, z_i)/\tau)}{\sum_{j=1}^{B} \exp(s(\hat{z}_i, z_j)/\tau)} + \log \frac{\exp(s(\hat{z}_i, z_i)/\tau)}{\sum_{k=1}^{B} \exp(s(\hat{z}_k, z_i)/\tau)} \right), (1)

where s is the cosine similarity, z_i and \hat{z}_i are the latent representations from the two modalities, B is the batch size, and \tau is a learned temperature parameter. Then, given f \u2208 R^{B\u00d7512}, v \u2208 R^{B\u00d7512}, and t \u2208 R^{B\u00d7512} as the respective representations of fMRI, video, and text embeddings, the fMRI-vision-language trimodal loss is:

L_{Semantic} = \alpha \cdot L_{BiInfoNCE}(f, t) + (1 - \alpha) \cdot L_{BiInfoNCE}(f, v). (2)

Subsequently, to map the fMRI embedding f_i to the text condition c_i for the purpose of conditioning generative image models, a projection loss is utilized,

L_{Projection} = \frac{1}{B} \sum_{i=1}^{B} \| MLP(f_i) - c_i \|_2^2. (3)

Finally, we combine the semantic and projection losses using tuned hyperparameters \lambda_1, \lambda_2,

L_{Combined} = \lambda_1 \cdot L_{Semantic} + \lambda_2 \cdot L_{Projection}. (4)

Structure Decoder For a short video clip, it can be assumed that the low-level information (size, shape, and color) contained in each frame remains largely consistent with that of the first frame. Consequently, we utilize the token extracted from the first frame by VQ-VAE as structural information and train the structural decoder using the standard mean squared error (MSE) loss function. Let \Phi denote the encoder of VQ-VAE; the structure loss is defined as:

L_{Structure} = \frac{1}{B} \sum_{i=1}^{B} \| D_{Structure}(f_i) - \Phi(V_{i,1}) \|_2^2. (5)

Consistency Motion Generator Inspired by natural language processing, we treat each video frame token as a word embedding, and develop an L-layer Transformer-based Consistency Motion Generator. For a more detailed introduction, please refer to Appendix B.1. Figure 3: The detailed architectural diagram of the Consistency Motion Generator. In the Temporal Module, visible video frame tokens \Phi(V_i) \u2208 R^{m\u00d7d_{token}} and positional encoding E_{pos} \u2208 R^{m\u00d7d_{token}} are jointly input into a Sparse Causal Self-Attention (SCSA) layer to learn inter-frame temporal information. This attention layer incorporates a specially designed Sparse Causal Mask to ensure sparsity between frames and accelerate training. As illustrated in Figure 3, the mask is divided into fixed and random components. The fixed mask ensures that each frame cannot access information from subsequent frames, while the random mask maintains sparsity among visible frames, preventing the model from taking shortcuts[52]. During inference, we eliminate the random mask. As shown in Eq.
6, the model also applies residual connections and layer normalization (LN) to the variable z_l,

z_0 = [\Phi(V_{i,1}), \Phi(V_{i,2}), \ldots, \Phi(V_{i,m})] + E_{pos},
z_l = LN(SCSA(z_{l-1})) + z_{l-1}, \quad l = 1, 2, \ldots, L. (6)

As shown in Eq. 7, in the Spatial Module, the embedding of the visible frames z_l serves as the Query, while the fMRI signal f, after passing through an embedding layer, serves as the Key and Value in the cross-attention block. Following residual connections and layer normalization, z_l is input into the Feed Forward Network (FFN) to predict the subsequent unseen frame tokens \hat{\Phi}(V_{i,j}), j \u2208 [m + 1, 8]:

z_l = CrossAttention(Q, K, V), \quad l = 1, 2, \ldots, L, (7)

Q = W_Q^l \cdot z_l, \quad K = W_K^l \cdot Emb(f), \quad V = W_V^l \cdot Emb(f),
z_l = FFN(LN(z_l) + z_{l-1}), \quad l = 1, 2, \ldots, L. (8)

Then, the final motion consistency loss is defined as:

L_{Consistency} = \frac{1}{B} \sum_{i=1}^{B} \sum_{j=m+1}^{8} \| \hat{\Phi}(V_{i,j}) - \Phi(V_{i,j}) \|_2^2. (9)

2.4 Feature-to-video Stage Inflated Stable Diffusion for Video Reconstruction Despite the rapid development of video generation models capable of producing vivid videos from text conditions, it is crucial to emphasize that the objective of our project is to disentangle semantic, structural, and motion information from fMRI to fully reconstruct the stimulus video. Utilizing pre-trained video generation models could obscure whether the motion information in the reconstructed video originates from the fMRI or external video data. To address this issue, we employ the network inflation[39, 46, 47] technique to implement an inflated Stable Diffusion, which is used to reconstruct each frame of the video without introducing additional motion information. For further details, please refer to Appendix B.2. 3 Experiment 3.1 Datasets In this study, we utilize three publicly available video-fMRI datasets, which encompass paired stimulus videos and their corresponding fMRI responses.
As depicted in Table 2, these datasets collectively comprise brain signals recorded from multiple healthy subjects while viewing the videos. The video stimuli are diverse, covering animals, humans, and natural scenery. For detailed information on the datasets and preprocessing steps, please refer to Appendix C.
Dataset | Adopted participants | TR | Train samples | Test samples
CC2017[31] | 3 | 2s | 4320 | 1200
HCP[48] | 3 | 1s | 2736 | 304
Algonauts2021[49] | 10 | 1.75s | 900 | 100
Table 2: Characteristics of the video-fMRI datasets used in our experiments.
3.2 Evaluation Metrics To comprehensively and fairly evaluate the performance of our model, we propose the following evaluation metrics. Semantic-level metrics Following prior studies[19, 36], we use the N-way top-K accuracy classification test and the VIFI-score as the semantic-level metrics. For the classification test, we compare the ground truth (GT) against the predicted video (PV) classifications using a classifier. A trial is successful if the GT class ranks within the top-K probabilities from the PV\u2019s classification among N randomly selected classes. Additionally, we implement two modes: image-based (2-way-I) and video-based (2-way-V). We describe this evaluation method in Algorithm 2. For the VIFI-score, we utilize ViFi-CLIP[53]\u2014a CLIP model fine-tuned on a video dataset\u2014to extract features from both the GT and the PV, followed by the calculation of cosine similarity. Pixel-level metrics We employ the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and hue-based Pearson correlation coefficient (Hue-pcc) as pixel-level metrics. Spatiotemporal (ST)-level metric We adopt the CLIP-pcc, a common metric in the field of video editing, which involves computing CLIP image embeddings on each frame of the predicted videos and reporting the average cosine similarity between all pairs of adjacent video frames.
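The N-way top-K test above can be sketched as follows. This is a minimal illustration of the protocol, not the paper's Algorithm 2; the classifier is assumed to return one probability per class for the predicted video.

```python
import random

def n_way_top_k_trial(gt_class, pv_probs, n=2, k=1, rng=random):
    """One trial: sample N candidate classes (including the ground-truth
    class), rank them by the predicted video's class probabilities, and
    succeed if the GT class lands in the top K."""
    other = [c for c in range(len(pv_probs)) if c != gt_class]
    candidates = rng.sample(other, n - 1) + [gt_class]
    ranked = sorted(candidates, key=lambda c: pv_probs[c], reverse=True)
    return gt_class in ranked[:k]

def n_way_top_k_accuracy(gt_class, pv_probs, n=2, k=1, repeats=100):
    """Average success rate over repeated random draws of the N classes."""
    rng = random.Random(0)  # fixed seed for reproducible evaluation
    return sum(n_way_top_k_trial(gt_class, pv_probs, n, k, rng)
               for _ in range(repeats)) / repeats
```

With n=2 and k=1 this reduces to the 2-way test reported in the tables: the GT class must outrank one randomly drawn distractor class.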
4 Results 4.1 Comparative Experimental Results We compare our model with all previous video reconstruction models on the CC2017 dataset. Visual comparisons are presented in Figure 4, while quantitative comparisons are detailed in Table 3. In the computation of quantitative metrics, the results of Wen et al.[31] pertain to the first segment of the test set, whereas the results of other researchers are derived from the whole test set. The findings on the HCP and Algonauts2021 datasets are elaborated in Appendices E.2 and E.3, respectively. (Footnote 3: It should be noted that when replicating the results of Nishimoto et al.[28] on the CC2017 dataset, we utilized videos from the training sets of both CC2017 and HCP as the natural movie prior.) Figure 4: Reconstruction results on the CC2017 dataset (rows: GT; Ours; Chen (NeurIPS 2023 Oral); Kupershmidt, 2022; Wang (Cerebral Cortex 2022); Wen (Cerebral Cortex 2017) and Nishimoto, 2011). Our reconstructed results are highlighted with red boxes, while those of Wen and Nishimoto are delineated by blue and green boxes, respectively.
Method | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc
(2-way-I, 2-way-V, and VIFI-score are semantic-level metrics; SSIM, PSNR, and Hue-pcc are pixel-level; CLIP-pcc is ST-level; higher is better for all.)
Nishimoto[28] | 0.727\u00b10.04 | \u2014\u2014 | \u2014\u2014 | 0.116\u00b10.09 | 8.012\u00b12.31 | 0.753\u00b10.12 | \u2014\u2014
Wen[31] | 0.758\u00b10.03 | \u2014\u2014 | \u2014\u2014 | 0.114\u00b10.15 | 7.646\u00b13.48 | 0.647\u00b10.11 | \u2014\u2014
Wang[32] | 0.713\u00b10.04 | 0.773\u00b10.03 | 0.596\u00b10.07 | 0.118\u00b10.08 | 11.432\u00b12.42 | 0.589\u00b10.18 | 0.402\u00b10.41
Kupershmidt[34] | 0.764\u00b10.03 | 0.771\u00b10.03 | 0.585\u00b10.08 | 0.135\u00b10.08 | 8.761\u00b12.22 | 0.606\u00b10.14 | 0.386\u00b10.47
Chen[36] | 0.792\u00b10.03 | 0.853\u00b10.03 | 0.587\u00b10.08 | 0.171\u00b10.08 | 8.662\u00b11.52 | 0.760\u00b10.10 | 0.408\u00b10.46
Ours(sub1) | 0.809\u00b10.03 | 0.837\u00b10.02 | 0.602\u00b10.07 | 0.301\u00b10.09 | 9.134\u00b11.48 | 0.768\u00b10.12 | 0.425\u00b10.42
Ours(sub2) | 0.804\u00b10.29 | 0.832\u00b10.03 | 0.604\u00b10.08 | 0.287\u00b10.11 | 9.049\u00b11.45 | 0.795\u00b10.12 | 0.426\u00b10.42
Ours(sub3) | 0.792\u00b10.03 | 0.833\u00b10.03 | 0.600\u00b10.08 | 0.349\u00b10.11 | 9.306\u00b11.54 | 0.791\u00b10.12 | 0.415\u00b10.39
Table 3: Quantitative comparison of reconstruction results. All metrics indicate superior performance with larger values, with the best results highlighted in bold and the second-best results underlined.
Table 3 indicates that our model achieves SOTA performance in 5 out of 7 metrics, securing second place in the remaining two. Specifically, our model outperforms the previous SOTA model by 76% in terms of SSIM, which underscores the benefits of incorporating structural information. Moreover, as depicted in Figure 4, our reconstruction results contain richer semantic information compared to earlier models, such as a girl and a yellow dog being held in someone\u2019s arms. In contrast to Mind-video by Chen et al.[36], our results are more consistent with the ground truth in terms of fine-grained structural and motion information.
For instance, the reconstructed girl\u2019s clothing color, the dog\u2019s fur color, and the positioning of the forest along the coastline are closer to the stimulus videos. Regarding motion information, our results depict the dog being petted and a noticeable camera movement in the coral reef scene. Additional reconstruction results on other subjects, as well as instances of reconstruction failure, are presented in Appendix E.1. 4.2 Ablation Study In this subsection, we conduct a detailed ablation study to assess the effectiveness of the three decoders we proposed and to evaluate the impact of various hyperparameters on video reconstruction (see Appendix E.4). First, we present the results obtained using the full model. Then, on the basis of the full model, we separately eliminate the semantic decoder (w/o Semantic) and the structure decoder (w/o Structure) by replacing their outputs with random noise. For the consistency motion generator, we replace it with 8 simple MLPs to model each frame individually (w/o Motion). Table 4 demonstrates that the removal of any decoder results in a significant decline in the model\u2019s performance across nearly all metrics, which shows the efficacy of our proposed decoders.
Method | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc
w/o Semantic | 0.679\u00b10.04 | 0.766\u00b10.04 | 0.523\u00b10.07 | 0.097\u00b10.09 | 8.005\u00b11.57 | 0.737\u00b10.11 | 0.123\u00b10.31
w/o Structure | 0.789\u00b10.03 | 0.814\u00b10.03 | 0.555\u00b10.08 | 0.184\u00b10.08 | 8.712\u00b11.37 | 0.791\u00b10.11 | 0.260\u00b10.41
w/o Motion | 0.674\u00b10.04 | 0.789\u00b10.03 | 0.585\u00b10.08 | 0.136\u00b10.13 | 8.611\u00b12.43 | 0.715\u00b10.14 | 0.376\u00b10.42
Full Model | 0.809\u00b10.03 | 0.837\u00b10.02 | 0.602\u00b10.07 | 0.301\u00b10.10 | 9.134\u00b11.51 | 0.768\u00b10.11 | 0.425\u00b10.41
Table 4: Ablation study on our proposed decoders.
100 repetitions are conducted on the metrics 2-way-I and 2-way-V, while 5 trials are performed on the other metrics, with the results averaged across all samples in the test set and trials. Colors reflect statistical significance (paired t-test) compared to the Full Model: p < 0.0001 (purple); p < 0.01 (pink); p < 0.05 (yellow); p > 0.05 (green). 5 Interpretability Analysis 5.1 Have we truly decoded motion information from fMRI? This work focuses on video reconstruction from fMRI, aiming for motion consistency between the reconstructed and stimulus videos. We specifically design a Consistency Motion Generator (CMG) to decode motion information from fMRI. Following the work of Wang et al. [32], we perform a permutation test on 3 subjects from the CC2017 dataset to ascertain whether this module decodes the correct motion information from fMRI. Specifically, for each 8-frame reconstructed video clip from each subject, we randomly shuffle the frame order 100 times and compute pixel-level and spatiotemporal-level metrics between the actual and shuffled frames. Subsequently, we estimate the P-value by the following formula: P = \sum_{i=1}^{100} \delta_i / 100, where \delta_i = 1 if the ith permutation outperforms the reconstruction result in the original order based on the metrics; otherwise, \delta_i = 0. A lower P-value signifies a closer alignment between the sequential order of the reconstructed video and the ground truth. We repeat the permutation test 5 times under conditions with and without the CMG, as illustrated in Figure 5. It can be observed that the P-value significantly increases across nearly all metrics for all subjects when the CMG is removed, suggesting that we truly decode motion information from fMRI. (a) sub01 (b) sub02 (c) sub03. Figure 5: The result of the permutation test on the CC2017 dataset.
The experiment is repeated 5 times on 3 subjects, with the mean and std presented in subplots (a), (b), and (c), respectively. Paired t-tests are performed, with significance denoted as p < 0.001 (***), p < 0.01 (**), p < 0.05 (*), and p > 0.05 (NS) for non-significant results. 5.2 Which brain regions are responsible for decoding different features? To investigate which brain regions contain the voxels responsible for decoding the different features (semantic, structure, motion) during the fMRI-to-feature stage, we compute voxel-wise importance maps in the visual cortex. Specifically, for a trained decoder, we multiply the weight matrices of its linear layers, then average the result across the feature dimension, and normalize it to estimate the importance weight for each voxel. A higher weight indicates that the voxel plays a more significant role in feature decoding. We project the importance maps of subject 1\u2019s voxels from the CC2017 dataset onto the visual cortex, as depicted in Figure 6. To obtain ROI-wise importance maps, we calculate the average of the importance weights of voxels contained within each Region of Interest (ROI), with the results presented in Figure 7. The results from other subjects are presented in Appendix E.5. (a) Semantic (b) Structure (c) Motion. Figure 6: Voxel-wise importance maps projected onto the visual cortex of subject 1. The lighter the color, the greater the weight of the voxel in the interpretation of the feature.
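The voxel-wise importance estimate described above (composing the decoder's linear-layer weight matrices, averaging over the feature dimension, and normalizing) can be sketched as follows. The use of absolute weights and the list-of-rows matrix representation are illustrative assumptions, not the paper's exact implementation.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def voxel_importance(weight_matrices):
    """Compose a decoder's linear layers (listed input-side first) into one
    features-by-voxels map, average absolute weights over the feature
    dimension, and min-max normalize to one score per voxel."""
    w = weight_matrices[0]               # first layer: hidden x voxels
    for m in weight_matrices[1:]:
        w = matmul(m, w)                 # chained result: features x voxels
    n_feat = len(w)
    per_voxel = [sum(abs(w[f][v]) for f in range(n_feat)) / n_feat
                 for v in range(len(w[0]))]
    lo, hi = min(per_voxel), max(per_voxel)
    if hi == lo:                         # degenerate case: uniform weights
        return [0.0 for _ in per_voxel]
    return [(x - lo) / (hi - lo) for x in per_voxel]
```

The normalized scores can then be painted onto a cortical surface, one value per voxel, to produce maps like Figure 6.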
Figure 6 (a) indicates that the high-level visual cortex (HVC, such as MT, MST and TPOJ) contributes more significantly to the decoding of the semantic feature, with a calculated weight of 2.588, accounting for 60.5% of the total, as shown in Figure 7 (a). In contrast, the low-level visual cortex (LVC, such as V1, V2, V3) has a weight of 1.685, representing 39.5%. Although it is not immediately apparent from Figure 6 (b) which ROI contributes most to the decoding of the structural feature, Figure 7 (b) reveals that V1 and V2 have the greatest weight, with HVC having a weight of 1.279 (36.03%) and LVC weighing 2.271 (63.97%). Considering the aforementioned findings, our results are plausible from the perspective of cognitive neuroscience. It is generally believed that the LVC is predominantly responsible for processing low-level information of visual stimuli [4, 54, 55], such as orientation and contour. Meanwhile, V4 is involved in color processing [56, 57]. In contrast, the HVC is responsible for processing high-level semantic information of objects [58], including category. (a) Semantic (b) Structure (c) Motion. Figure 7: ROI-wise importance maps in the visual cortex of subject 1. Figure 6 (c) indicates that both LVC and HVC contribute to the decoding of motion information, with significant weight attributed to V1 and MT. As derived from Figure 7 (c), the weight distribution between LVC and HVC is comparable, accounting for 42.4% and 57.6%, respectively. This observation is consistent with previous work [59], which validates the function of MT in visual motion processing. Furthermore, our findings affirm that the spatial and temporal modules designed in the CMG effectively capture spatiotemporal information across both LVC and HVC.
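The ROI-wise aggregation behind Figure 7 (averaging voxel weights within each ROI, then comparing LVC/HVC shares) can be sketched as follows; the ROI labels and group membership sets are illustrative assumptions.

```python
def roi_importance(voxel_weights, voxel_to_roi):
    """Average the voxel-wise importance weights within each ROI."""
    sums, counts = {}, {}
    for w, roi in zip(voxel_weights, voxel_to_roi):
        sums[roi] = sums.get(roi, 0.0) + w
        counts[roi] = counts.get(roi, 0) + 1
    return {roi: sums[roi] / counts[roi] for roi in sums}

def group_share(roi_weights, group):
    """Fraction of total ROI weight carried by one group of ROIs,
    e.g. LVC = {'V1', 'V2', 'V3'} vs. HVC = {'MT', 'MST', 'TPOJ'}."""
    total = sum(roi_weights.values())
    return sum(w for r, w in roi_weights.items() if r in group) / total
```

Applied to the voxel-wise maps, `group_share` yields percentage splits of the kind reported above (e.g. 60.5% HVC vs. 39.5% LVC for the semantic decoder).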
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03485v1.json b/abs_9K/test_abstract_short_2405.03485v1.json new file mode 100644 index 0000000000000000000000000000000000000000..892f14d3366264ebd8ee0d71e361b7fb910bfdd1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03485v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03485v1", + "title": "LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model", + "abstract": "In this paper, we introduce LGTM, a novel Local-to-Global pipeline for\nText-to-Motion generation. LGTM utilizes a diffusion-based architecture and\naims to address the challenge of accurately translating textual descriptions\ninto semantically coherent human motion in computer animation. Specifically,\ntraditional methods often struggle with semantic discrepancies, particularly in\naligning specific motions to the correct body parts. To address this issue, we\npropose a two-stage pipeline to overcome this challenge: it first employs large\nlanguage models (LLMs) to decompose global motion descriptions into\npart-specific narratives, which are then processed by independent body-part\nmotion encoders to ensure precise local semantic alignment. Finally, an\nattention-based full-body optimizer refines the motion generation results and\nguarantees the overall coherence. 
Our experiments demonstrate that LGTM gains\nsignificant improvements in generating locally accurate, semantically-aligned\nhuman motion, marking a notable advancement in text-to-motion applications.\nCode and data for this paper are available at https://github.com/L-Sun/LGTM", + "authors": "Haowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, Ruizhen Hu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In this paper, we introduce LGTM, a novel Local-to-Global pipeline for\nText-to-Motion generation. LGTM utilizes a diffusion-based architecture and\naims to address the challenge of accurately translating textual descriptions\ninto semantically coherent human motion in computer animation. Specifically,\ntraditional methods often struggle with semantic discrepancies, particularly in\naligning specific motions to the correct body parts. To address this issue, we\npropose a two-stage pipeline to overcome this challenge: it first employs large\nlanguage models (LLMs) to decompose global motion descriptions into\npart-specific narratives, which are then processed by independent body-part\nmotion encoders to ensure precise local semantic alignment. Finally, an\nattention-based full-body optimizer refines the motion generation results and\nguarantees the overall coherence. Our experiments demonstrate that LGTM gains\nsignificant improvements in generating locally accurate, semantically-aligned\nhuman motion, marking a notable advancement in text-to-motion applications.\nCode and data for this paper are available at https://github.com/L-Sun/LGTM", + "main_content": "INTRODUCTION In this paper, we address the problem of text-to-motion, i.e., given a textual description of movements for a character, we aim to automatically generate plausible and realistic 3D human motions. 
The successful automation of this process holds significant potential for a variety of downstream applications, including the creation of content for augmented and virtual reality environments, advancements in robotics, and enhancements in human-machine interactions [Chen et al. 2021; Lan et al. 2023; Scanlon et al. 2023; Zhao et al. 2022]. As a longstanding challenge at the confluence of natural language processing, machine learning, and computer graphics, text-to-motion generation has garnered significant attention in recent research [Jiang et al. 2023; Petrovich et al. 2022; Tevet et al. 2022a]. The advent of diffusion models, as highlighted in various studies [Alexanderson et al. 2023; Poole et al. 2022; Rombach et al. 2022], has propelled notable advancements in this field [Tevet et al. 2022b]. Despite these strides, the task of generating motions that are both locally semantically accurate and globally coherent from textual descriptions remains a formidable hurdle. Current methods often face difficulties in effectively capturing the nuanced local semantics embedded in motion descriptions and in producing motions that align accurately with these semantic cues. In particular, existing approaches in text-to-motion synthesis often encounter issues such as local semantic leakage and missing elements [Chen et al. 2023a; Tevet et al. 2022b]. For instance, when prompted with a description like \u201ca man kicks something with his left leg\u201d, these methods might erroneously generate a motion that corresponds to a \u201cright kick\u201d. Similarly, prompts involving complex actions requiring coordination of multiple body parts frequently result in motions with certain parts omitted. Our observations reveal two primary shortcomings in these methods. Firstly, most existing techniques utilize a single global text descriptor for all local body motions.
This approach requires the network to learn the association between local motion semantics and respective body parts from a unified global text source. This process proves challenging, especially when the textual content bears similarity across different body parts, leading to difficulties in differentiating specific actions for each part. Secondly, the text encoders used in these methods exhibit limited effectiveness in encoding motion-related text. This limitation is apparent in the high feature similarity observed among different motion texts, as detailed in recent studies [Petrovich et al. 2023]. This homogeneity in encoded text features further exacerbates the network\u2019s struggle to discern and accurately represent subtle variations in local textual semantics. Towards this end, we present a novel diffusion-based text-to-motion generation architecture, LGTM, adept at producing motions that are both in alignment with textual descriptions and precise in local semantic accuracy. LGTM operates through a local-to-global approach, structured in two main stages. The first stage implements an efficient strategy to tackle the issue of local semantic accuracy. Here, we introduce a partition module that employs large language models (LLMs) to dissect global motion descriptions into narratives specific to each body part. Subsequently, dedicated body-part motion encoders independently process these part-specific narratives. This focused approach effectively circumvents local semantic inaccuracies by reducing redundant information and preventing semantic leakage, thus maintaining a sharp focus on relevant local semantics. However, as each body-part motion encoder functions independently, without awareness of other parts\u2019 movements, it is imperative to synchronize these individual motions to avoid full-body coordination issues. To address this, the second stage of LGTM introduces an attention-based full-body optimizer. 
This component is specifically designed to facilitate the integration of information among different body parts, ensuring that the overall motion is not only locally precise but also globally coherent and fluid. To evaluate the effectiveness of LGTM, we further conduct experiments on text-driven motion generation and provide both quantitative and qualitative results. Our experiments show that our proposed LGTM can generate faithful motions that better align with the input text both locally and globally, and outperform state-of-the-art methods. To summarize, our contributions are as follows: \u2022 We present LGTM, a novel diffusion-based architecture that translates textual descriptions into accurate and coherent human motions, marking a significant improvement over previous text-to-motion approaches. \u2022 LGTM introduces a unique partition module that utilizes LLMs to decompose complex motion descriptions into part-specific narratives. This significantly enhances local semantic accuracy in motion generation. \u2022 Our experiments demonstrate the effective integration of independent body-part motion encoders with an attention-based full-body optimizer, ensuring both local precision and global coherence in generated motions, providing a promising improvement for text-to-motion generation. 2 RELATED WORK The generation of motion sequences is a longstanding challenge within the domain of computer graphics, where the objective is to produce a series of motion frames guided by conditional control signals. Given that our approach is centered on body-partition-based text-to-motion synthesis, we explore relevant literature across two primary aspects: body partition modeling and text-to-motion generation. Part-based motion modeling. Partitioning the human body into distinct segments facilitates the control of motion synthesis at a more granular level, allowing for localized adjustments. 
Several studies have explored the concept of combining motions of individual body parts to synthesize novel motions. [Hecker et al. 2008] introduced a retargeting algorithm that composes motions at the level of individual body parts to generate diverse character animations. [Jang et al. 2008] divided motions into upper and lower body segments, merging them through an algorithm to augment their motion database. [Soga et al. 2016] synthesized dance motions from existing datasets by focusing on body partitions. [Jang et al. 2022] performed style transfer at the part level, utilizing a graph convolutional network to assemble different body part motions into new, coherent sequences, preserving local styles while transferring them to specific body parts without compromising the integrity of other parts or the entire body. However, these methods rely on pre-existing motion data, and hence are more accurately described as synthesis rather than generation. For more detailed local control, [Starke et al. 2020] proposed a local phase model based on body partitions used to generate basketball player movements, achieving higher local fidelity compared to global phase approaches [Starke et al. 2019; Zhang et al. 2018]. [Starke et al. 2021] introduced a neural animation layering technique that combines trajectories of different body parts produced by control modules, providing animators with more granular control and enabling the creation of high-quality motion. [Lee et al. 2022] developed an algorithm for reassembling physically-based part motions, allowing the combination of partial movements from characters with varying skeletal structures. By operating in a physically simulated virtual environment, they employed part-wise timewarping and optimization-based assembly to ensure improved spatial and temporal alignment. [Bae et al. 2023] utilized part-wise motion discriminators to enhance motion variety and a global control policy to maintain the physical realism of the movements. 
Text-to-motion generation. Text provides a user-friendly interface for directing motion generation due to its ease of use and editing capabilities. However, a significant challenge arises from the difficulty in precisely controlling the outcome of the generated motion through text. In this subsection, we examine text-to-motion generation techniques and identify their limitations. Figure 2: Overview of our LGTM framework, which consists of three major components: (1) the partition module utilizes ChatGPT to deconstruct a motion description T into body-part-level texts T_part, and decomposes the full-body motion M into body-part motions M_part; (2) the part motion encoders encode part-level motions with the corresponding part-level texts and a diffusion time step n independently; (3) the full-body motion optimizer utilizes an attention module to optimize the fused body-part motion with full-body text semantics. Certain text-to-motion approaches are founded on the encoder-decoder architecture and focus on aligning modalities within a unified latent space. [Ahuja and Morency 2019] trained their network by alternating between encoding motions and texts, then decoding them back into motion, thereby implicitly aligning the two modalities. [Ghosh et al. 2021; Petrovich et al. 2022] encoded text and motion concurrently and decoded them into motion, employing additional loss functions to bring the modalities closer within the latent space. These methods struggle with generating motions from lengthy textual descriptions. 
[Athanasiou et al. 2022] tackled long motion generation by producing short motion clips in an auto-regressive fashion, but this requires manual segmentation of long textual descriptions into shorter segments and specification of action duration. To utilize visual priors, [Tevet et al. 2022a] employed a frozen CLIP [Radford et al. 2021] text encoder to encode motion descriptions and aligned the motion latent space with that of CLIP. Nevertheless, the images used for alignment, rendered from random motion frames, can confuse the network when the frames are not representative. Moreover, [Petrovich et al. 2023] observed that motion descriptions tend to cluster closely in the CLIP latent space, as the distribution of motion-related text is narrower than that of the broader text datasets used to train CLIP. Recent developments in neural diffusion models for image generation have inspired text-to-motion methods that leverage these models to achieve superior quality. [Tevet et al. 2022b; Zhang et al. 2022] utilized a Transformer to denoise motion conditioned on text. [Chen et al. 2023b] introduced a U-Net-based DDIM generative model to denoise motion in latent space, resulting in expedited generation. However, these methods lack the ability to control localized motion generation through masking. Additionally, they struggle to learn the correct mapping of the local semantics because all body parts share the same textual information, which potentially leads to semantically mismatched part motions. An alternative approach to motion generation involves processing motion in a discrete space through token prediction [Guo et al. 2022b; Jiang et al. 2023; Yao et al. [n. d.]]. However, the limitation of these works is that the expressive capacity of the codebook can restrict the diversity of the generated motions, potentially causing the text input to be mapped to unintended motions. 
The challenges in controlling local motion semantics stem from: (1) the sharing of textual information across all body parts, and (2) the difficulty networks face in distinguishing text latent codes encoded by CLIP. These factors contribute to the difficulty of achieving precise local semantic control in motion generation, leading to issues such as semantic leakage. Drawing inspiration from the technological advancements and challenges identified in prior research, we propose a novel framework that combines body-part partitioning with independent local motion semantic injection and a global semantic joint optimization strategy. This framework is designed to enhance the fidelity and controllability of text-to-motion synthesis, addressing the need for more nuanced and accurate motion generation. 3 METHOD In this section, we delve into the specifics of LGTM, as illustrated in Figure 2. LGTM is structured as a local-to-global generation framework that initially creates local, part-level motion, followed by a global fusion and optimization process to produce the final full-body motion. At its core, LGTM operates by subdividing the full-body text and motion spaces into body-part-specific subspaces. Such subdivision is adeptly handled by a dedicated Partition Module. For each of these subspaces, we have developed specialized part motion encoders. These encoders are trained to learn independently a series of mappings between part-level motions and part-level text. This strategy effectively mitigates the issues of incorrect local semantic mapping seen in previous methods. Following the localized encoding, LGTM introduces a full-body motion optimizer to establish correlations among the various subspaces and ensure the consistency and coherence of the final full-body motion. Below, we provide a detailed explanation of the functionalities and details of each module in LGTM. 
3.1 Preliminary: Human Motion Diffusion Model Input representation. We define the input pair for our method as (M, T), where M represents full-body motion data and T denotes the raw full-body text description. Specifically, we use the HumanML3D representation proposed by [Guo et al. 2022a] as our motion data representation, which is calculated from the SMPL motion data [Loper et al. 2015] and includes redundant motion features that are helpful for network training. A full-body motion M contains F frames and J = 22 joints. Specifically, we denote M = [r\u0307_root, v_root, h, p, r, v, c], where r\u0307_root \u2208 R^(F\u00d71), v_root \u2208 R^(F\u00d72) and h \u2208 R^(F\u00d71) are the angular velocity around the y-axis, the linear velocity on the x-z plane, and the height of the root joint, p \u2208 R^(F\u00d7(J\u22121)\u00d73) and r \u2208 R^(F\u00d7(J\u22121)\u00d76) are the local positions and 6D rotations [Zhang et al. 2018] of all joints except the root joint, v \u2208 R^(F\u00d7J\u00d73) is the local velocity of all joints, and c \u2208 R^(F\u00d74) is the contact signal of the feet. Diffusion model. Our method is built upon a text-conditional diffusion model. In the training stage, this model adds noise to a clean motion M following the Markov process and trains a network to predict the added noise with an L2 loss. In the sampling stage, this model gradually reduces noise from a purely noised motion M^n with the predicted noise. We use DDIM [Song et al. 2022] as our diffusion model to accelerate the sampling process. More details are provided in the supplementary material. 3.2 Partition Module The Partition Module is designed to inject local semantics into each body part for the Part Motion Encoders. 
In practice, an input pair (M, T) is divided into six parts, including head, left arm, right arm, torso, left leg, and right leg. The motion M is decomposed as follows: M_head = [p_head, r_head, v_head] \u2208 R^(F\u00d724); M_left_arm = [p_left_arm, r_left_arm, v_left_arm] \u2208 R^(F\u00d748); M_right_arm = [p_right_arm, r_right_arm, v_right_arm] \u2208 R^(F\u00d748); M_torso = [p_torso, r_torso, v_torso, r\u0307_root, v_root, h] \u2208 R^(F\u00d743); M_left_leg = [p_left_leg, r_left_leg, v_left_leg, c_left_leg] \u2208 R^(F\u00d750); M_right_leg = [p_right_leg, r_right_leg, v_right_leg, c_right_leg] \u2208 R^(F\u00d750), where the subscript indicates where the feature comes from. For example, p_right_leg includes all local positions of joints from the right leg. For the motion description T, we leverage the knowledge inference capabilities of LLMs to decompose it into six parts: T_head, T_left_arm, T_right_arm, T_torso, T_left_leg and T_right_leg using crafted prompts. The prompt includes three sections: task definition, output requirements, and some output examples. The task definition instructs the LLM to extract principal descriptions for each motion part. The output requirements tell the LLM that we need structured output such as JSON format, body part naming, etc. Then, we employ a few-shot approach to guide the LLM in generating the desired output. More details of our prompts can be found in the supplementary materials. A decomposed description example is shown in Table 1. 
Table 1: An example of decomposing the full-body motion description: \u201ca person waves the right hand and then slightly bends down to the right and takes a few steps forward.\u201d Part name / Part description: head: does nothing; left arm: does nothing; right arm: waves hand; torso: slightly bends down; left leg: takes a few steps forward; right leg: takes a few steps forward. Figure 3: The structure of an attention encoder block. 3.3 Part Motion Encoders The part motion encoders, {E_head, ..., E_right_leg}, aim to learn local semantic mappings from part-level input pairs (M^n_part, T_part) independently. Since each encoder obtains information only from its corresponding part-level input pair and cannot access information from other body parts, the issue of semantic leakage is effectively alleviated. We denote the part-level encoding process as follows: z^n_part = E_part(M^n_part, T_part, n), (1) where each part motion encoder, E_part, consists of three components: a linear layer, a text encoder, and a Conformer [Gulati et al. 2020]. The linear layer aims to align the size of the latent dimension with that of the text encoder. We use six different frozen part-level TMR text encoders [Petrovich et al. 2023], each corresponding to one of the six body parts, which are pretrained on part-level motion-text pairs (M_part, T_part) respectively. Since the TMR model is trained only on motion descriptions and motion data, and not on large visual datasets, the motion-related text embeddings encoded by TMR are easier for the network to distinguish than those by CLIP. The projected motion and text embeddings are then fused and processed by a Conformer [Gulati et al. 2020]. The Conformer incorporates convolution blocks into the Transformer [Vaswani et al. 2017] architecture to better capture temporal local features. 
Moreover, previous work [Alexanderson et al. 2023] shows the success of the Conformer on the music-to-dance task. 3.4 Full-Body Motion Optimizer Since each part\u2019s motion and text are independently encoded to {z^n_head, \u00b7\u00b7\u00b7, z^n_left_leg}, the network will ignore the correlations between the different body parts; therefore, we propose that the full-body motion optimizer G establishes correlations by adjusting the movements of each body part based on full-body text information. Figure 4: Example results generated by our method. Specifically, we first concatenate all body part latent codes into a full-body latent code z^n whose shape is (F, S) = (F, 6 \u00d7 128), and then fuse it with the global text embedding encoded by the frozen full-body-level TMR text encoder. Next, we use an attention encoder [Vaswani et al. 2017] to compute a delta that adjusts each part in the latent code z^n. The attention encoder is where the exchange of spatio-temporal information actually occurs. It consists of several attention encoder blocks, each containing a multi-head attention block and a feed-forward layer, as shown in Figure 3. Since the latent code z^n is processed by a multi-head attention block on the temporal dimension F, and feed-forward layers (FFN) operate on the spatial dimension S, the latent code for each body part can continuously exchange temporal and spatial information. Next, we use a SmoothNet [Zeng et al. 2022] to reduce jitter, which contains a stacked MLP with residual connections and operates on the temporal dimension, acting as a low-pass filter in the latent space. Finally, we project the latent code to the original feature dimension, and get a clean motion M\u0302^0. 
The full-body motion optimizer can be formulated as M\u0302^0 = G(z^n_head, \u00b7\u00b7\u00b7, z^n_left_leg, T) = Linear(SmoothNet(z^n + AttentionEncoder(z_text + z^n))). (2) 4 RESULTS In this section, we present the motions generated by our method and conduct a comparative analysis with other text-driven motion generation methods. Additionally, we perform several ablation studies to highlight the contributions of individual components within our framework. 4.1 Implementation Details The part-level motion descriptions are generated by the ChatGPT (gpt-3.5-turbo-1106) model. Our model is trained with the AdamW optimizer with a learning rate decay strategy of fast warm cosine decay. The initial learning rate is 10^\u22124 and the batch size is 64. The number of diffusion steps is 1K. The training time of our model on the HumanML3D dataset is about 8 hours on 3 NVIDIA RTX 4090 GPUs. 4.2 Qualitative Results Figure 4 shows several example results generated by our method. We can see that our method can generate motion with precise local semantics, such as body part semantic correspondence and action timing order, as our method injects local semantic information into the corresponding parts independently, and the whole-body optimizer builds correct relationships between body parts in both the spatial and temporal domains. For example, the result of \u201ca man leans forward and jumps high\u201d shows that the character does lean and jump in the correct order. The result of \u201ca man lock his hands to his face, and do a dance move net with his legs\u201d shows that the character keeps the correct spatial relationship between hand and face while dancing. The result of \u201ca person doing air kicks with his right feet\u201d shows that the character does kick with the correct body part. We also provide some visual comparisons to two baselines, including MDM [Tevet et al. 2022b] and MLD [Chen et al. 2023b]. 
Figure 5 shows that our method can generate more semantically well-matched motion. In the first row, the character can pick something with both hands in our result, but with just the left hand in MDM. In the second row, the character only jumps on the left foot correctly in our result, but jumps on both feet in MDM and does not jump in MLD. In the third row, the result of MDM contains a weird pose and the MLD result does not contain \u201cclaps\u201d, but our result is more correct. The last row shows that, for more complex text inputs, our method is able to generate more semantically accurate results than those two baselines. Figure 5: Qualitative comparison of results generated by our method with those from MDM [Tevet et al. 2022b] and MLD [Chen et al. 2023b]. 4.3 Quantitative Evaluation Evaluation metrics. To quantitatively evaluate our method, we use the metrics suggested by [Guo et al. 2022a], which include: (1) Fr\u00e9chet Inception Distance (FID) that evaluates the generated motion quality against the real motion distribution; (2) Diversity (DIV) that calculates the variance of generated motion; (3) R Precision that calculates the top-n matching accuracy between generated motion and the corresponding text description; (4) Multi-Modal Distance (MM Dist) that calculates the distance between paired motion and text; (5) Part-level Multi-Modal Similarity (PMM Sim) that calculates the normalized cosine similarity between part-level paired motion and text. These metrics are calculated in the latent space using the text encoder and motion encoder from T2M [Guo et al. 2022a] as in previous works. As our method provides detailed control of generated motions, we also compare our method to baselines in terms of part-level motion quality using Part-level Multi-Modal Similarity (PMM Sim), by training both the part-level text encoder and motion encoder with contrastive learning as in TMR [Petrovich et al. 
2023], which we believe makes motion samples in the latent space more dispersed, allowing dissimilar motions to be distinguished more easily. Specifically, we calculate the PMM Sim in the TMR latent space as follows: s_part = (1/2) (z^M_part \u00b7 z^T_part / (\u2225z^M_part\u2225 \u2225z^T_part\u2225) + 1), (3) where both z^M_part and z^T_part are obtained by encoding part-level motion and text through the TMR encoders. Although we mainly focus on semantically controllable generation, we also evaluate common artifacts in text-to-motion synthesis. We assess the generated motions using three specific metrics: sliding, penetration, and floating, as introduced by [Yuan et al. 2022]. Comparison results. The comparison results for full-body motion are presented in Table 2, and the comparison results for part-level motion are presented in Table 3. The FID and DIV in Table 2 indicate that our method generates more realistic and diverse motion. The R Precision and MM Dist indicate that our method can generate better globally semantically matching motion. Table 3 also shows that our method achieves the best local semantic matching, with performance very close to that of real data. Our local-to-global design injects local semantic information independently into body parts and refines it with global semantics, which provides more accurate and structured semantic information to the network to help generation and thus achieves higher quality. For artifact evaluation, as shown in Table 4, we can see that each method exhibits performance very close to the ground truth (the Real row) at the millimeter scale. The artifacts can be attributed to the dataset\u2019s intrinsic quality variances. 4.4 Ablation Studies We have designed two main experiments to assess the impact of different components of our approach. 
The first experiment investigates the influence of different text encoders on the motion quality. The second experiment evaluates the effect of the full-body motion optimizer on the quality of motions generated by our method. The importance of text encoder. We test our method by replacing our pre-trained text encoder with CLIP as an alternative, demonstrating that the TMR text encoder we use can capture more detailed semantics. Furthermore, we also present the results obtained by MDM using either CLIP or the TMR text encoder for comparison. Table 2: Comparison of the visual quality and degree of semantic matching between input text and output full-body motion. These metrics are computed in the latent space of the T2M model [Guo et al. 2022a]. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 Real 0.000 9.831 0.513 0.708 0.804 2.939 MotionDiffuse[2022] 0.687 8.894 0.318 0.531 0.677 3.118 MDM[2022b] 0.747 9.462 0.390 0.581 0.695 3.635 MLD[2023b] 1.753 8.970 0.383 0.573 0.687 3.682 Ours (LGTM) 0.218 9.638 0.490 0.689 0.788 3.013 Table 3: Comparison of text-to-motion generation using PMM Sim. These metrics are calculated in the latent space of the part-level TMR encoder. Higher values indicate better performance. Method head left arm right arm torso left leg right leg Real 0.803 0.716 0.723 0.759 0.755 0.760 MotionDiffuse[2022] 0.789 0.687 0.712 0.735 0.728 0.739 MDM[2022b] 0.783 0.699 0.691 0.740 0.717 0.723 MLD[2023b] 0.771 0.675 0.702 0.717 0.723 0.726 Ours (LGTM) 0.799 0.719 0.724 0.763 0.755 0.763 Table 4: Comparison of text-to-motion generation using artifact metrics. 
Method sliding (cm/s) \u2193 penetration (cm) \u2193 floating (cm) \u2193 Real 0.743 1.442 0.079 MotionDiffuse[2022] 1.359 1.783 0.051 MDM[2022b] 0.721 1.622 0.102 MLD[2023b] 0.949 2.392 0.064 Ours (LGTM) 0.854 1.247 0.046 Table 5: Comparison of the impact of different text encoders on full-body metrics computed in the latent space of the T2M model [Guo et al. 2022a]. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 MDM + CLIP 0.747 9.462 0.390 0.581 0.695 3.635 MDM + TMR 0.403 9.687 0.455 0.653 0.759 3.266 Ours + CLIP 0.331 9.386 0.391 0.569 0.674 3.699 Ours + TMR 0.218 9.638 0.490 0.689 0.788 3.013 Table 5 and Table 6 evaluate full-body and part-level motion quality, respectively. In general, we observe that using the TMR text encoder consistently produces better results than using CLIP, for both our method and MDM as well as both local and global quality. When comparing our method to MDM using the same text encoder, our method generally performs better, further demonstrating the superiority of our local-to-global design. Table 6: Comparison of the impact of different text encoders on PMM Sim computed using the part-level TMR encoder. The greater the value, the better. Method head left arm right arm torso left leg right leg MDM + CLIP 0.783 0.699 0.691 0.740 0.717 0.723 MDM + TMR 0.803 0.704 0.707 0.756 0.734 0.743 Ours + CLIP 0.795 0.693 0.694 0.752 0.725 0.732 Ours + TMR 0.799 0.719 0.724 0.763 0.755 0.763 Table 7: Comparison of the impact of using Conformer versus Transformer in Part Motion Encoders on global quality. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 Transformer 1.814 8.578 0.373 0.567 0.680 3.688 Conformer 0.218 9.638 0.490 0.689 0.788 3.013 Table 8: Comparison of the impact of using Conformer versus Transformer in Part Motion Encoders on PMM Sim. Higher values indicate better performance. 
Method head left arm right arm torso left leg right leg Transformer 0.784 0.712 0.718 0.750 0.728 0.732 Conformer 0.799 0.719 0.724 0.763 0.755 0.763 The impact of Conformer. The goal of replacing Transformer with Conformer in Part Motion Encoders is to improve the motion quality. To validate the improvement, we compare both configurations on global quality metrics. From Table 7 and Table 8, we observe that LGTM with Conformer can achieve better quality and semantic matching performance than with Transformer. This improvement can be attributed to the convolution blocks of Conformer, which capture local features better than self-attention. The importance of full-body motion optimizer. The goal of our full-body motion optimizer is to establish correlations among different body part movements and improve the coordination of full-body movements. To validate the effect, we compare it to the setting \u201cw/o opt\u201d, where we remove the key component of our full-body optimizer, namely, the attention encoder. From Table 9 and Table 10, we can see that the local motion quality drops, and the full-body motion quality is also much worse without the optimizer; see Figure 6 for one example result. Without the full-body optimizer, the character\u2019s two feet cannot coordinate well to step alternately during movement due to the lack of information exchange. 
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03549v1.json b/abs_9K/test_abstract_short_2405.03549v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5c337d3ccbcc24c19718cf7d9ea98279cb3fb11d --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03549v1.json @@ -0,0 +1,19 @@ +{ + "url": "http://arxiv.org/abs/2405.03549v1", + "title": "Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models", + "abstract": "Generative modeling via stochastic processes has led to remarkable empirical\nresults as well as to recent advances in their theoretical understanding. In\nprinciple, both space and time of the processes can be discrete or continuous.\nIn this work, we study time-continuous Markov jump processes on discrete state\nspaces and investigate their correspondence to state-continuous diffusion\nprocesses given by SDEs. In particular, we revisit the $\\textit{Ehrenfest\nprocess}$, which converges to an Ornstein-Uhlenbeck process in the infinite\nstate space limit. Likewise, we can show that the time-reversal of the\nEhrenfest process converges to the time-reversed Ornstein-Uhlenbeck process.\nThis observation bridges discrete and continuous state spaces and allows to\ncarry over methods from one to the respective other setting. Additionally, we\nsuggest an algorithm for training the time-reversal of Markov jump processes\nwhich relies on conditional expectations and can thus be directly related to\ndenoising score matching. 
We demonstrate our methods in multiple convincing\nnumerical experiments.", + "authors": "Ludwig Winkler, Lorenz Richter, Manfred Opper", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.DS", + "math.PR" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Generative modeling via stochastic processes has led to remarkable empirical\nresults as well as to recent advances in their theoretical understanding. In\nprinciple, both space and time of the processes can be discrete or continuous.\nIn this work, we study time-continuous Markov jump processes on discrete state\nspaces and investigate their correspondence to state-continuous diffusion\nprocesses given by SDEs. In particular, we revisit the $\\textit{Ehrenfest\nprocess}$, which converges to an Ornstein-Uhlenbeck process in the infinite\nstate space limit. Likewise, we can show that the time-reversal of the\nEhrenfest process converges to the time-reversed Ornstein-Uhlenbeck process.\nThis observation bridges discrete and continuous state spaces and allows to\ncarry over methods from one to the respective other setting. Additionally, we\nsuggest an algorithm for training the time-reversal of Markov jump processes\nwhich relies on conditional expectations and can thus be directly related to\ndenoising score matching. We demonstrate our methods in multiple convincing\nnumerical experiments.", + "main_content": "Introduction Generative modeling based on stochastic processes has led to state-of-the-art performance in multiple tasks of interest, all aiming to sample artificial data from a distribution that is only specified by a finite set of training data (Nichol & Dhariwal, 2021). 
The general idea is based on the concept of time-reversal: we let the data points diffuse until they are close to the equilibrium distribution of the process, from which we assume to be able to sample readily, such that the time-reversal then brings us back to the desired target distribution (Sohl-Dickstein et al., 2015). [*Equal contribution (the author order was determined by numpy.random.rand(1)). Affiliations: 1 Technical University of Berlin, 2 Zuse Institute Berlin, 3 dida Datenschmiede GmbH, 4 University of Birmingham, 5 University of Potsdam. Correspondence to: Ludwig Winkler, Lorenz Richter. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).] In this general setup, one can make several choices and take different perspectives. While the original attempt considers discrete-time, continuous-space processes (Ho et al., 2020), one can show that in the small step-size limit the models converge to continuous-time, continuous-space processes given by stochastic differential equations (SDEs) (Song et al., 2021). This continuous-time framework then allows fruitful connections to mathematical tools such as partial differential equations, path space measures and optimal control (Berner et al., 2024). As an alternative, one can consider discrete state spaces in continuous time via Markov jump processes, which have been suggested for generative modeling in Campbell et al. (2022). Those are particularly promising for problems that naturally operate on discrete data, such as, e.g., text, images, graph structures or certain biological data, to name just a few. While discrete in space, an appealing property of those models is that time-discretization is not necessary \u2013 neither during training nor during inference1.
While the connections between Markov jump processes and state-continuous diffusion processes have been studied extensively (see, e.g., Kurtz (1972)), a relationship between their time-reversals has only been looked at recently, where an exact correspondence is still elusive (Santos et al., 2023). In this work, we make this correspondence more precise, thus bridging the gap between discrete-state generative modeling with Markov jump processes and the celebrated continuous-state score-based generative modeling. A key ingredient will be the so-called Ehrenfest process, which can be seen as the discrete-state analog of the Ornstein-Uhlenbeck process that is usually employed in the continuous setting, as well as a new loss function that directly translates learning rate functions of a time-reversed Markov jump process to score functions in the continuous-state analog. [Footnote 1: Note that this is not true for the time- and space-continuous SDE case, where training can be done simulation-free; however, inference relies on a discretization of the reverse stochastic process. However, see Section 4.2 for high-dimensional settings in Markov jump processes.] Our contributions can be summarized as follows:
\u2022 We propose a loss function via conditional expectations for training state-discrete diffusion models, which exhibits advantages compared to previous loss functions.
\u2022 We introduce the Ehrenfest process and derive the jump moments of its time-reversed version.
\u2022 Those jump moments allow an exact correspondence to score-based generative modeling, such that, for the first time, the two methods can now be directly linked to one another.
\u2022 In consequence, the bridge between discrete and continuous state space brings the potential that one setting can benefit from the respective other.
This paper is organized as follows. After listing related work in Section 1.1 and defining notation in Section 1.2, we introduce the time-reversal of Markov jump processes in Section 2 and propose a loss function for learning this reversal in Section 2.1. We define the Ehrenfest process in Section 3 and study its convergence to an SDE in Section 3.1. In Section 3.2 we then establish the connection between the time-reversed Ehrenfest process and score-based generative modeling. Section 4 is devoted to computational aspects and Section 5 provides some numerical experiments that demonstrate our theory. Finally, we conclude in Section 6. 1.1. Related work Starting with a paper by Sohl-Dickstein et al. (2015), a number of works have contributed to the success of diffusion-based generative modeling, all in the continuous-state setting, see, e.g., Ho et al. (2020); Song & Ermon (2020); Kingma et al. (2021); Nichol & Dhariwal (2021); Vahdat et al. (2021). We shall highlight the work by Song et al. (2021), which derives an SDE formulation of score-based generative modeling and thus builds the foundation for further theoretical developments (Berner et al., 2024; Richter & Berner, 2024). We note that the underlying idea of time-reversing a diffusion process dates back to work by Nelson (1967); Anderson (1982). Diffusion models on discrete state spaces have been considered by Hoogeboom et al. (2021) based on appropriate binning operations of continuous models. Song et al. (2020) proposed a method for discrete categorical data; however, they did not perform any experiments. A purely discrete diffusion model, both in time and space, termed Discrete Denoising Diffusion Probabilistic Models (D3PMs), has been introduced in Austin et al. (2021). Continuous-time Markov jump processes on discrete spaces have first been applied to generative modeling in Campbell et al.
(2022), where, however, different forward processes have been considered, for which the forward transition probability is approximated by solving the forward Kolmogorov equation. Sun et al. (2022) introduced the idea of categorical ratio matching for continuous-time Markov Chains by learning the conditional distribution occurring in the transition ratios of the marginals when computing the reverse rates. Recently, in a similar setting, Santos et al. (2023) introduced a pure death process as the forward process, for which one can derive an alternative loss function. Further, they formally investigate the correspondence between Markov jump processes and SDEs, however, in contrast to our work, without identifying a direct relationship between the corresponding learned models. Finally, we refer to the monographs Gardiner et al. (1985); Van Kampen (1992); Br\u00b4 emaud (2013) for a general introduction to Markov jump processes. 1.2. Notation For transition probabilities of a Markov jump process M we write pt|s(x|y) := P (M(t) = x|M(s) = y) for s, t \u2208[0, T] and x, y \u2208\u2126. With pt(x) we denote the (unconditional) probability of the process at time t. We use pdata := p0. With \u03b4x,y we denote the Kronecker delta. For a function f, we say that f(x) \u2208o(g(x)) if limx\u21920 f(x) g(x) = 0. 2. Time-reversed Markov jump processes We consider Markov jump processes M(t) that run on the time interval [0, T] \u2282R and are allowed to take values in a discrete set \u2126\u223c \u2282Zd. Usually, we consider \u2126\u223c = {0, . . . , S}d such that the cardinality of our space is |\u2126| = (S + 1)d. Jumps between the discrete states appear randomly, where the rate of jumping from state y to x at time t is specified by the function rt(x|y). The jump rates determine the jump probability in a time increment \u2206t via the relation pt+\u2206t|t(x|y) = \u03b4x,y + rt(x|y)\u2206t + o(\u2206t), (1) i.e. 
the higher the rate and the longer the time increment, the more likely is a transition between two corresponding states. For a more detailed introduction to Markov jump processes, we refer to Appendix B.1. In order to simulate the process backwards in time, we are interested in the rates of the time-reversed process \u20d7 M(t), which determine the backward transition probability via pt\u2212\u2206t|t(x|y) = \u03b4x,y + \u20d7 rt(x|y)\u2206t + o(\u2206t). (2) The following lemma provides a formula for the rates of the time-reversed process, cf. Campbell et al. (2022). Lemma 2.1. For two states x, y \u2208\u2126, the transition rates of the time-reversed process \u20d7 M(t) are given by \u20d7 rt(y|x) = Ex0\u223cp0|t(x0|x) \u0014pt|0(y|x0) pt|0(x|x0) \u0015 rt(x|y), (3) 2 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models where rt is the rate of the forward process M(t). Proof. See Appendix A. Remark 2.2 (Conditional expectation). We note that the expectation appearing in (3) is a conditional expectation, conditioned on the value M(t) = x. This can be compared to the SDE setting, where the time-reversal via the score function can also be written as a conditional expectation, namely \u2207x log pSDE t (x) = Ex0\u223cpSDE 0|t (x0|x) h \u2207x log pSDE t|0 (x|x0) i , see Lemma A.1 in the appendix for more details. We will elaborate on this correspondence in Section 3.2. While the forward transition probability pt|0 can usually be approximated (e.g. by solving the corresponding master equation, see Appendix B.1), the time-reversed transition function p0|t is typically not tractable, and we therefore must resort to a learning task. One idea is to approximate p0|t \u2248p\u03b8 0|t by a distribution parameterized in \u03b8 \u2208Rp (e.g. via neural networks), see, e.g. Campbell et al. (2022) and Appendix C.2. We suggest an alternative method in the following. 2.1. 
Loss functions via conditional expectations Recalling that any conditional expectation can be written as an L2 projection (see Lemma A.2 in the appendix), we define the loss Ly(\u03c6y) = E[(\u03c6y(x, t) \u2212 pt|0(y|x0)/pt|0(x|x0))^2], (4) where the expectation is over x0 \u223c pdata, t \u223c U(0, T), x \u223c pt|0(x|x0). Assuming a sufficiently rich function class F, it then holds that the minimizer of the loss equals the conditional expectation in Lemma 2.1 for any y \u2208 \u2126, i.e. arg min_{\u03c6y \u2208 F} Ly(\u03c6y) = E_{x0 \u223c p0|t(x0|x)}[pt|0(y|x0)/pt|0(x|x0)]. (5) We can thus directly learn the conditional expectation. In contrast to approximating the reverse transition probability p0|t, this has the advantage that we do not need to model a distribution, but a function, which is less challenging from a numerical perspective. Furthermore, we will see that the conditional expectation can be directly linked to the score function in the SDE setting, such that our approximating functions \u03c6y can be directly linked to the approximated score. We note that the loss has already been derived in a more general version in Meng et al. (2022) and applied to the setting of Markov jump processes in Lou et al. (2023), however, following a different derivation. A potential disadvantage of the loss (4), on the other hand, is that we may need to approximate different functions \u03c6y for different y \u2208 \u2126. This, however, can be coped with in two ways. On the one hand, we may focus on birth-death processes, for which r(y|x) is non-zero only for y = x \u00b1 1, such that we only need to learn 2 instead of S \u2212 1 functions \u03c6y. In the next section we will argue that birth-death processes are in fact favorable for multiple reasons. On the other hand, we can do a Taylor expansion such that for certain processes it suffices to only consider one approximating function, as will be shown in Remark 3.3. 3.
The Ehrenfest process In principle, we are free to choose any forward process M(t) for which we can compute the forward transition probabilities pt|0 and which is close to its stationary distribution after a not too long run time T. In the sequel, we argue that the Ehrenfest process is particularly suitable \u2013 both from a theoretical and practical perspective. For notational convenience, we make the argument in dimension d = 1, noting, however, that a multidimensional extension is straightforward. For computational aspects in high-dimensional spaces we refer to Section 4.1. We define the Ehrenfest process2 as ES(t) := S X i=1 Zi(t), (6) where each Zi is a process on the state space \u2126= {0, 1} with transition rates r(0|1) = r(1|0) = 1 2 (sometimes called telegraph or Kac process). We note that the Ehrenfest process is a birth-death process with values in {0, . . . , S} and transition rates r(x + 1|x) = 1 2(S \u2212x), r(x \u22121|x) = x 2 . (7) We observe that we can readily transform the timeindependent rates in (7) to time-dependent rates rt(x \u00b1 1|x) := \u03bbt r(x \u00b1 1|x) (8) via a time transformation, where \u03bb : [0, T] \u2192R, see Appendix B.2. Without loss of generality, we will focus on the time-independent rates (7) in the sequel. One compelling property of the Ehrenfest process is that we can sample without needing to simulate trajectories. Lemma 3.1. Assuming ES(0) = x0, the Ehrenfest process can be written as ES(t) = E0,S(t) + E1,S(t), (9) 2The Ehrenfest process was introduced by the Russian-Dutch and German physicists Tatiana and Paul Ehrenfest to explain the second law of thermodynamics, see Ehrenfest & EhrenfestAfanassjewa (1907). 3 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models where E0,S(t) \u223cB(S \u2212x0, 1 \u2212f(t)) and E1,S(t) \u223c B(x0, f(t)) are independent binomial random variables and f(t) := 1 2 (1 + e\u2212t). 
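For intuition, the birth-death rates (7) can be plugged into a standard Gillespie simulation; since each of the S telegraph components is stationary-uniform on {0, 1}, the process equilibrates at large times to a Binomial(S, 1/2) law. A small sketch assuming NumPy (the helper name `ehrenfest_gillespie` is ours, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def ehrenfest_gillespie(S, x0, t_end, rng=rng):
    """Simulate the Ehrenfest birth-death process with rates
    r(x+1|x) = (S - x)/2 and r(x-1|x) = x/2, cf. Eq. (7)."""
    t, x = 0.0, x0
    while True:
        up, down = 0.5 * (S - x), 0.5 * x
        total = up + down                      # equals S/2 for every state
        t += rng.exponential(1.0 / total)      # exponential holding time
        if t > t_end:
            return x
        x += 1 if rng.random() < up / total else -1

S = 50
samples = np.array([ehrenfest_gillespie(S, x0=0, t_end=10.0) for _ in range(1000)])
# after a long run time the marginal is close to Binomial(S, 1/2):
print(samples.mean(), samples.var())  # mean near S/2 = 25, variance near S/4 = 12.5
```

Starting from x0 = 0, the drift toward S/2 is clearly visible already after a few units of time, which matches the exponential relaxation of the mean derived from Lemma 3.1.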
Consequently, the forward transition probability is given by the discrete convolution pt|0(x|x0) = X z\u2208\u2126 P (E0,S(t) = z) P (E1,S(t) = x \u2212z) . (10) Proof. See Appendix A. We note that the sum in (10) can usually be numerically evaluated without great effort. 3.1. Convergence properties in the infinite state space limit It is known that certain (appropriately scaled) Markov jump processes converge to state-continuous diffusion processes when the state space size S + 1 tends to infinity (see, e.g., Kurtz (1972); Gardiner et al. (1985)). For the Ehrenfest process, this convergence can be studied quite rigorously. To this end, let us introduce the scaled Ehrenfest process e ES(t) := 2 \u221a S \u0012 ES(t) \u2212S 2 \u0013 (11) with transition rates r \u0012 x \u00b1 2 \u221a S \f \f \f \fx \u0013 = \u221a S 4 ( \u221a S \u2213x), (12) now having values in \u2126= n \u2212 \u221a S, \u2212 \u221a S + 2 \u221a S , . . . , \u221a S o . We are interested in the large state space limit S \u2192\u221e, noting that this implies 2 \u221a S \u21920 for the transition steps, thus leading to a refinement of the state space. The following convergence result is shown in Sumita et al. (2004, Theorem 4.1). Proposition 3.2 (State space limit of Ehrenfest process). In the limit S \u2192\u221e, the scaled Ehrenfest process e ES(t) converges in law to the Ornstein-Uhlenbeck process Xt for any t \u2208[0, T], where Xt is defined via the SDE dXt = \u2212Xt dt + \u221a 2 dWt, (13) with Wt being standard Brownian motion. For an illustration of the convergence we refer to Figure 1. Note that the convergence of the scaled Ehrenfest process to the Ornstein-Uhlenbeck process implies pt|0(x|x0) \u2248pOU t|0 (x|x0) := N(x; \u00b5t(x0), \u03c32 t ) (14) with \u00b5t(x0) = x0e\u2212t and \u03c32 t = (1\u2212e\u22122t). 
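Lemma 3.1 makes forward sampling trivial (two binomial draws, no trajectory simulation), and Proposition 3.2 predicts the moments of the scaled process (11): mean x0 e^{-t} and variance 1 - e^{-2t}, matching the Ornstein-Uhlenbeck transition (14). A quick empirical check, assuming NumPy (function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ehrenfest(S, x0, t, n, rng=rng):
    """Exact samples of E_S(t) via Lemma 3.1: the sum of two independent
    binomials with f(t) = (1 + exp(-t))/2, no path simulation needed."""
    f = 0.5 * (1.0 + np.exp(-t))
    return rng.binomial(S - x0, 1.0 - f, size=n) + rng.binomial(x0, f, size=n)

S, x0, t, n = 10_000, 8_000, 0.7, 500_000
e = sample_ehrenfest(S, x0, t, n)
x_scaled = 2.0 / np.sqrt(S) * (e - S / 2)      # scaled process, Eq. (11)
x0_scaled = 2.0 / np.sqrt(S) * (x0 - S / 2)

# compare against the Ornstein-Uhlenbeck moments of Eq. (14)
print(x_scaled.mean(), x0_scaled * np.exp(-t))      # both near x0_scaled * e^{-t}
print(x_scaled.var(), 1.0 - np.exp(-2 * t))         # both near 1 - e^{-2t}
```

This is exactly the property exploited during training: forward samples at arbitrary times are available in O(1) without integrating the master equation.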
For the quantity in the conditional expectation (3) we can thus compute pt|0 \u0010 x \u00b1 \u03b4 \f \f \fx0 \u0011 pt|0(x|x0) \u2248exp \u0012\u22132(x \u2212\u00b5t(x0))\u03b4 \u2212\u03b42 2\u03c32 t \u0013 (15a) \u2248exp \u0012 \u2212\u03b42 2\u03c32 t \u0013 1 \u2213(x \u2212\u00b5t(x0))\u03b4 \u03c32 + ((x \u2212\u00b5t(x0))\u03b4)2 2\u03c34 ! , (15b) where we used the shorthand \u03b4 := 2 \u221a S . Remark 3.3 (Learning of conditional expectation). Note that the approximation (15b) allows us to define the loss LGau\u00df(\u03c6) := E \"\u0012 \u03c6(x, t) \u2212exp \u0012\u22132(x \u2212\u00b5t(x0))\u03b4 \u2212\u03b42 2\u03c32 t \u0013\u00132# . (16) Further, we can write Ex0 \uf8ee \uf8f0 pt|0 \u0010 x \u00b1 \u03b4 \f \f \fx0 \u0011 pt|0(x|x0) \uf8f9 \uf8fb\u2248exp \u0012 \u2212\u03b42 2\u03c32 t \u0013 \uf8eb \uf8ed1 \u2213(x \u2212Ex0 [\u00b5t(x0)])\u03b4 \u03c32 + Ex0 h ((x \u2212\u00b5t(x0))\u03b4)2i 2\u03c34 \uf8f6 \uf8f8, (17) where x0 \u223cp0|t(x0|x). In consequence, this allows us to consider the loss functions LTaylor(\u03c61) := E h (\u03c61(x, t) \u2212\u00b5t(x0))2i , (18) and LTaylor,2(\u03c62) := E \u0014\u0010 \u03c62(x, t) \u2212((x \u2212\u00b5t(x0))\u03b4)2\u00112\u0015 , (19) where the expectations are over x0 \u223c pdata, t \u223c U(0, T), x \u223cpt|0(x|x0). We can also only consider the first order term in the Taylor expansion (15b), such that we then only have to approximate one instead of two functions. Since the scaled forward Ehrenfest process converges to the Ornstein-Uhlenbeck process, we can expect the timereversed scaled Ehrenfest process to converge to the timereversal of the Ornstein-Uhlenbeck process. We shall study this conjecture in more detail in the sequel. 3.2. 
Connections between time-reversal of Markov jump processes and score-based generative modeling Inspecting Lemma 2.1, which specifies the rate function of a backward Markov jump process, we realize that the time-reversal essentially depends on two things, namely the forward rate function with switched arguments as well as the conditional expectation of the ratio between two forward transition probabilities. To gain some intuition, let us first assume that the state space size S + 1 is large enough and that the transition density pt|0 can be extended to R (which we call pt|0) such that it can be approximated via a Taylor expansion. We can then assume that r(x \u00b1 2/\u221aS | x) \u2248 r(x | x \u2213 2/\u221aS) (20) as well as pt|0(x \u00b1 2/\u221aS | x0) / pt|0(x|x0) \u2248 (pt|0(x|x0) \u00b1 (2/\u221aS) \u2207pt|0(x|x0)) / pt|0(x|x0) (21a) = 1 \u00b1 (2/\u221aS) \u2207log pt|0(x|x0), (21b) where the conditional expectation of \u2207log pt|0(x|x0) is reminiscent of the score function in SDE-based diffusion models (cf. Lemma A.1 in the appendix). This already hints at a close connection between the time-reversal of Markov jump processes and score-based generative modeling. Further, note that (21a) corresponds to (15b) for large enough S and pt|0 \u2248 pOU t|0. We shall make the above observation more precise in the following. To this end, let us study the first and second jump moments of the Markov jump process, given as b(x) = \u03a3_{y \u2208 \u2126, y \u2260 x} (y \u2212 x) r(y|x), (22) D(x) = \u03a3_{y \u2208 \u2126, y \u2260 x} (y \u2212 x)^2 r(y|x), (23) see Appendix B.3. For the scaled Ehrenfest process (11) we can readily compute b(x) = \u2212x, D(x) = 2, (24) which align with the drift and diffusion coefficient (which is the square root of D) of the Ornstein-Uhlenbeck process in Proposition 3.2.
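Plugging the scaled rates (12) into the jump-moment definitions (22)-(23) reproduces (24) exactly (not just asymptotically), which one can confirm in a few lines (assuming NumPy; the helper name is ours):

```python
import numpy as np

def jump_moments(x, S):
    """First and second jump moments, Eqs. (22)-(23), of the scaled
    Ehrenfest process with rates r(x ± 2/√S | x) = (√S/4)(√S ∓ x), Eq. (12)."""
    delta = 2.0 / np.sqrt(S)
    r_up = np.sqrt(S) / 4.0 * (np.sqrt(S) - x)
    r_down = np.sqrt(S) / 4.0 * (np.sqrt(S) + x)
    b = delta * r_up - delta * r_down            # sum of (y - x)   r(y|x)
    D = delta**2 * r_up + delta**2 * r_down      # sum of (y - x)^2 r(y|x)
    return b, D

for S in (16, 256, 65536):
    for x in (-1.0, 0.0, 0.5):
        b, D = jump_moments(x, S)
        print(S, x, b, D)   # b = -x and D = 2 for every S and x, matching Eq. (24)
```

The cancellation is algebraic: the delta factors and the ±x terms in the two rates combine to give b(x) = -x and D(x) = 2 independently of S.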
In particular, we can show the following relation between the jump moments of the forward and the backward Ehrenfest processes, respectively. Proposition 3.4. Let b and D be the first and second jump moments of the scaled Ehrenfest process \u1ebcS. The first and second jump moments of the time-reversed scaled Ehrenfest \u20d7\u1ebcS are then given by \u20d7b(x, t) = \u2212b(x) + D(x) E_{x0 \u223c p0|t(x0|x)}[\u2206S pt|0(x|x0) / pt|0(x|x0)] + o(S^{\u22121/2}), (25) \u20d7D(x) = D(x) + o(S^{\u22121/2}), (26) where \u2206S pt|0(x|x0) := (pt|0(x + 2/\u221aS | x0) \u2212 pt|0(x|x0)) / (2/\u221aS) (27) is a one-step difference and pt|0 and p0|t are the forward and reverse transition probabilities of the scaled Ehrenfest process. Proof. See Appendix A. Remark 3.5 (Convergence of the time-reversed Ehrenfest process). We note that Proposition 3.4 implies that the time-reversed Ehrenfest process is expected to converge in law to the time-reversed Ornstein-Uhlenbeck process. This can be seen as follows. For S \u2192 \u221e, we know via Proposition 3.2 that the forward Ehrenfest process converges to the Ornstein-Uhlenbeck process, i.e. pt|0 converges to pOU t|0, where pOU t|0 (x|x0) is the transition density of the Ornstein-Uhlenbeck process (13) starting at X0 = x0. Together with the fact that the finite difference approximation operator \u2206S converges to the first derivative, this implies that E_{x0 \u223c p0|t(x0|x)}[\u2206S pt|0(x|x0) / pt|0(x|x0)] is expected to converge to E_{x0 \u223c pOU 0|t (x0|x)}[\u2207log pOU t|0 (x|x0)]. Now, Lemma A.1 in the appendix shows that this conditional expectation is the score function of the Ornstein-Uhlenbeck process, i.e. \u2207log pOU t (x) = E_{x0 \u223c pOU 0|t (x0|x)}[\u2207log pOU t|0 (x|x0)]. Finally, we note that the first and second jump moments converge to the drift and the square of the diffusion coefficient of the limiting SDE, respectively (Gardiner et al., 1985).
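The conditional-expectation form of the reverse rates from Lemma 2.1, which underlies Proposition 3.4, can be verified numerically on a tiny chain: there the expectation reduces to a finite sum and must agree with the standard flux-balance reverse rate r(x|y) p_t(y)/p_t(x). A small check assuming NumPy (the 3-state chain and all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

n_states, t = 3, 0.7
R = rng.uniform(0.1, 1.0, (n_states, n_states))   # R[x, y] = rate of jumping y -> x
np.fill_diagonal(R, 0.0)
Q = R - np.diag(R.sum(axis=0))                    # generator; columns sum to zero

# forward transitions P[x, x0] = p_{t|0}(x | x0) via Euler steps of the master equation
N = 200_000
P = np.linalg.matrix_power(np.eye(n_states) + (t / N) * Q, N)

p0 = np.array([0.5, 0.3, 0.2])                    # some initial distribution
pt = P @ p0

x, y = 0, 1
post = P[x, :] * p0 / pt[x]                       # Bayes: p_{0|t}(x0 | x)
cond_exp = np.sum(post * P[y, :] / P[x, :])       # expectation from Lemma 2.1

# both expressions give the same reverse rate (up to discretization error):
print(cond_exp * R[x, y], R[x, y] * pt[y] / pt[x])
```

The agreement reflects the identity E_{x0~p0|t}[p_{t|0}(y|x0)/p_{t|0}(x|x0)] = p_t(y)/p_t(x), which is exactly the quantity that the learned functions approximate.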
Therefore, the scaled time-reversed Ehrenfest process \u20d7 e ES(t) is expected to converge in law to the process Yt given by dYt = \u0000Yt + 2\u2207log pOU T \u2212t(Yt) \u0001 dt + \u221a 2 dWt, (28) which is the time-reversal of the Ornstein-Uhlenbeck process stated in (13). Note that we write (28) as a forward process from t = 0 to t = T, where Wt is a forward Brownian motion, which induces the time-transformation t 7\u2192T \u2212t in the score function. Remark 3.6 (Generalizations). Following the proof of Proposition 3.4, we expect that the formulas for the first two jump moments of the time-reversed Markov jump process, stated 5 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models in (25) and (26), are valid for any (appropriately scaled) birth-death process whose transition rates fulfill 1 S (r(x \u00b1 \u03b4|x) \u2212r(x|x \u2213\u03b4)) = o(S\u22121), (29) where \u03b4 is a jump step size that decreases with the state space size S + 1. Crucially, Remark 3.5 shows that we can directly link approximations in the (scaled) state-discrete setting to standard state-continuous score-based generative modeling via Ex0\u223cp0|t(x0|x) \"pt|0(x \u00b1 2 \u221a S |x0) pt|0(x|x0) # \u22481\u00b1 2 \u221a S \u2207log pOU t (x), (30) see also the proof of Proposition 3.4 in Appendix A. In particular, this allows for transfer learning between the two cases. E.g., we can train a discrete model and use the approximation of the conditional expectation (up to scaling) as the score function in a continuous model. Likewise, we can train a continuous model and approximate the conditional expectation by the score. 
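The limiting reverse SDE (28) can be simulated directly whenever the score is analytically available, e.g. for a Gaussian p_data, in which case the Ornstein-Uhlenbeck marginal stays Gaussian. A minimal Euler-Maruyama sketch assuming NumPy (the Gaussian example and all parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

# If p_data = N(0, s0^2), the forward OU marginal is p_t = N(0, v(t)) with
# v(t) = s0^2 e^{-2t} + (1 - e^{-2t}), hence grad log p_t(x) = -x / v(t).
s0_sq, T, n_steps, n_paths = 0.5, 2.0, 1000, 100_000
var = lambda s: s0_sq * np.exp(-2 * s) + (1.0 - np.exp(-2 * s))

dt = T / n_steps
y = rng.normal(0.0, np.sqrt(var(T)), size=n_paths)   # start from p_T
for k in range(n_steps):
    s = T - k * dt                                   # score evaluated at T - t, cf. (28)
    score = -y / var(s)
    y += (y + 2.0 * score) * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_paths)

print(y.mean(), y.var())  # final law should be close to p_data = N(0, s0^2)
```

Up to Euler discretization and Monte Carlo error, the terminal variance recovers s0^2, illustrating that (28) indeed transports p_T back to p_data.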
We have illustrated the latter approach in Figure 1, where we have used the (analytically available) score function that transports a standard Gaussian to a multimodal Gaussian mixture in a discrete-state Ehrenfest process that starts at a binomial distribution which is designed in such a way that it converges to the standard Gaussian for S \u2192 \u221e. Similar to (4), the correspondence (30) motivates training a state-discrete scaled Ehrenfest model with the loss defined by LOU(\u03c6\u0303) := E[(\u03c6\u0303(x, t) \u2212 \u2207log pOU t|0 (x|x0))^2] (31a) = E[(\u03c6\u0303(x, t) + (x \u2212 \u00b5t(x0))/\u03c3t^2)^2], (31b) where the expectation is over x0 \u223c pdata, t \u223c U(0, T), x \u223c pt|0(x|x0) and where \u00b5t(x0) = x0e^{\u2212t} and \u03c3t^2 = (1 \u2212 e^{\u22122t}), as before. In fact, this loss is completely analogous to the denoising score matching loss in the state-continuous setting. We later set \u03c6 = 1 \u00b1 (2/\u221aS) \u03c6\u0303\u2217, where \u03c6\u0303\u2217 is the minimizer of (31), to get the approximated conditional expectation. Remark 3.7 (Ehrenfest process as discrete-state DDPM). To make the above considerations more precise, note that we can directly link the discrete-space Ehrenfest process to pretrained score models in continuous space, such as, e.g., the celebrated denoising diffusion probabilistic models (DDPM) (Ho et al., 2020). Those models usually transport a standard Gaussian to the target density that is supported on [\u22121, 1]^d. In order to cope with the fact that the scaled Ehrenfest process terminates (approximately) at a standard Gaussian irrespective of the size S + 1, we typically choose
S = 255^2 such that the interval [\u22121, 1] contains 256 states that correspond to the RGB color values of images, recalling that the increments between the states are 2/\u221aS. Further, noting the actual Ornstein-Uhlenbeck process that DDPM is trained on, we employ the time scaling \u03bbt = (1/2)\u03b2(t), where \u03b2 and further details are stated in Appendix D.2, and choose the (time-dependent) rates rt(x \u00b1 2/\u221aS | x) = (\u03b2(t)\u221aS/8)(\u221aS \u2213 x), (32) according to (8) and (12).
Figure 1. We display two time-reversed processes from t = 2 to t = 0 that transport a standard Gaussian (left panels, in green) to a multimodal Gaussian mixture model (left panels, in orange), or a binomial distribution to a binomial mixture, respectively, once using a diffusion process in continuous space (upper panel) and once a time-reversed (scaled) Ehrenfest process in discrete space with S = 100 (lower panel). Crucially, in both cases we use the (state-continuous) score function to employ the time-reversal, which for this problem is known analytically, see Appendix D.1. The plots demonstrate that the distributions of the processes seem indeed very close to one another, implying that the approximation (30) is quite accurate even for a moderate state space size S + 1.
4. Computational aspects In this section, we comment on computational aspects that are necessary for the training and simulation of the time-reversal of our (scaled) Ehrenfest process. For convenience, we refer to Algorithm 1 and Algorithm 2 in Appendix C.1 for the corresponding training and sampling algorithms, respectively. 4.1. Modeling of dimensions In order to make computations feasible in high-dimensional spaces \u2126^d, we typically factorize the forward process, such that each dimension propagates independently, cf. Campbell et al. (2022). Note that this is analogous to the Ornstein-Uhlenbeck process in score-based generative modeling, in which the dimensions also do not interact, see, e.g., (13). We thus consider pt|0(x|y) = \u220f_{i=1}^{d} p(i) t|0 (x(i)|y(i)), (33) where p(i) t|0 is the transition probability for dimension i \u2208 {1, . . . , d} and x(i) is the i-th component of x \u2208 \u2126^d. In Campbell et al. (2022) it is shown that the forward and backward rates then translate to rt(x|y) = \u03a3_{i=1}^{d} r(i) t (x(i)|y(i)) \u0393_{x\u00aci, y\u00aci}, (34) where \u0393_{x\u00aci, y\u00aci} is one if all dimensions except the i-th dimension agree, and \u20d7rt(x|y) = \u03a3_{i=1}^{d} E[pt|0(y(i)|x0(i)) / pt|0(x(i)|x0(i))] r(i) t (x(i)|y(i)) \u0393_{x\u00aci, y\u00aci}, (35) where the expectation is over x0(i) \u223c p0|t(x0(i)|x). Equation (35) illustrates that the time-reversed process does not factorize in the dimensions even though the forward process does. Note with (34) that for a birth-death process a jump appears only in one dimension at a time, which implies that rt(x \u00b1 \u03b4i|x) = r(i) t (x(i) \u00b1 \u03b4i(i)|x(i)), (36) where now \u03b4i = (0, . . . , 0, \u03b4i(i), 0, . . . , 0)\u22a4 with \u03b4i(i) being the jump step size in the i-th dimension. Likewise, (35) becomes \u20d7rt(x \u00b1 \u03b4i|x) = E[pt|0(x(i) \u00b1 \u03b4i(i)|x0(i)) / pt|0(x(i)|x0(i))] r(i) t (x(i)|x(i) \u00b1 \u03b4i(i)), (37) where the expectation is over x0(i) \u223c p0|t(x0(i)|x), which still depends on all dimensions. For each dimension i \u2208 {1, . . . , d} we can therefore approximate the conditional expectation appearing in (37) via the loss function (4) with two functions \u03c6i,b : R^d \u00d7 [0, T] \u2192 R and \u03c6i,d : R^d \u00d7 [0, T] \u2192 R. Alternatively, we can learn just two functions \u03c6b/d : R^d \u00d7 [0, T] \u2192 R^d for the entire space and identify \u03c6i,b/d = \u03c6(i) b/d. 4.2.
\u03c4-leaping The fact that jumps only happen in one dimension at a time implies that the naive implementation of changing component by component (e.g. by using Gillespie\u2019s algorithm, see Gillespie (1976)) would require a very long sampling time. As suggested in Campbell et al. (2022), we can therefore rely on \u03c4-leaping as an approximate simulation method (Gillespie, 2001). The general idea is to not simulate jump by jump, but wait for a time interval of length \u03c4 and apply all jumps at once. One can show that the number of jumps is Poisson distributed with a mean of \u03c4 \u20d7rt(x|y). For further details we refer to Algorithm 2.
Figure 2. We plot histograms of 500,000 samples from the time-reversed scaled Ehrenfest process at different times. The processes have been trained with three different losses.
5. Numerical experiments In this section, we demonstrate our theoretical insights in numerical experiments. If not stated otherwise, we always consider the scaled Ehrenfest process defined in (11). We will compare the different variants of the loss (4), namely LGauss defined in (16), LTaylor defined in (18) and LOU defined in (31). 5.1. Illustrative example Let us first consider an illustrative example, for which the data distribution is tractable. We consider a process in d = 2 with S = 32, where the (S + 1)^d = 33^2 different state combinations in pdata are defined to be proportional to the pixels of an image of the letter \u201cE\u201d. Since the dimensionality is d = 2, we can visually inspect the entire distribution at any time t \u2208 [0, T] by plotting 2-dimensional histograms of the simulated processes. With this experiment we can in particular check that modeling the dimensions of the forward process independently from one another (as explained in Section 4.1) is no restriction for the backward process.
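The \u03c4-leaping idea from Section 4.2 can be sketched on the forward Ehrenfest process, where the exact marginal from Lemma 3.1 provides a ground truth to compare against. A sketch assuming NumPy (the function name and all parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(5)

def tau_leap_ehrenfest(S, x0, t_end, tau, n, rng=rng):
    """τ-leaping for the Ehrenfest process: instead of simulating jump by
    jump, draw Poisson numbers of up/down jumps for each interval of length τ."""
    x = np.full(n, x0)
    for _ in range(int(t_end / tau)):
        up = rng.poisson(tau * 0.5 * (S - x))     # mean = τ * r(x+1|x)
        down = rng.poisson(tau * 0.5 * x)         # mean = τ * r(x-1|x)
        x = np.clip(x + up - down, 0, S)          # keep states inside {0, ..., S}
    return x

S, x0 = 200, 0
x = tau_leap_ehrenfest(S, x0, t_end=1.0, tau=0.01, n=100_000)

# exact mean from Lemma 3.1: S/2 + (x0 - S/2) e^{-t}
print(x.mean(), S / 2 + (x0 - S / 2) * np.exp(-1.0))
```

All n trajectories advance in a single vectorized Poisson draw per leap, which is what makes the scheme practical in high dimensions; the clipping step is a crude guard against the (rare) leaps that would leave the state space.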
Indeed Figure 2 shows that the time-reversed process, which is learned with (versions of) the loss (4), can transport the prior distribution (which is approximately binomial, or, loosely speaking, a binned Gaussian) to the specified target. Again, note that this plot does not display single realizations, but entire distributions, which, in this case, are approximated with 500,000 samples. We realize that in this simple problem LGau\u00df performs slightly better than LOU and LTaylor. As expected, the approximations work sufficiently well even for a moderate state space size S + 1. As argued in Section 3.1, this should get even better with growing S. For further details, we refer to Appendix D.3. 5.2. MNIST For a basic image modeling task, we consider the MNIST dataset, which consists of grayscale pixels and was resized to 32 \u00d7 32 to match the required input size of a U-Net neural network architecture3, such that d = 32 \u00d7 32 = 1024 and S = 255. As before, we train our time-reversed Ehrenfest model by using the variants of the loss introduced in Section 2.1. In Figure 3 we display generated samples from a model trained with LOU. The models with the other losses look equally good, so we omit them. For further details, we refer to Appendix D.4. Figure 3. MNIST samples obtained with the time-reversed scaled Ehrenfest process which was trained with LOU. 5.3. Image modeling with CIFAR-10 As a more challenging task, we consider the CIFAR-10 data set, with dimension d = 3 \u00d7 32 \u00d7 32 = 3072, each taking 256 different values (Krizhevsky et al., 2009). In the experiments we again compare our three different losses, however, realize that LGau\u00df did not produce satisfying results and had convergence issues, which might follow from numerical issues due to the exponential term appearing in (16).
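Regarding the exponential term in (16): for a Gaussian transition density the one-step ratio underlying it (the "+" branch of (15a)) is an exact identity rather than an approximation, which is easy to confirm numerically. A small check using only the Python standard library (all numerical values are illustrative):

```python
from math import exp, sqrt, pi

def gauss(x, mu, s2):
    """Density of N(mu, s2) at x."""
    return exp(-((x - mu) ** 2) / (2 * s2)) / sqrt(2 * pi * s2)

x0, t = 0.8, 0.5
mu, s2 = x0 * exp(-t), 1.0 - exp(-2 * t)   # OU transition moments, cf. (14)
S = 10_000
d = 2.0 / sqrt(S)                          # the step size δ = 2/√S

x = 0.3
exact = gauss(x + d, mu, s2) / gauss(x, mu, s2)
formula = exp((-2 * (x - mu) * d - d**2) / (2 * s2))   # ratio formula, cf. (15a)
print(exact, formula)   # agree up to floating-point round-off
```

Numerical trouble in (16) therefore stems not from the ratio formula itself but from evaluating the exponential over many states and times during training.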
Further, we consider three different scenarios: we train a model from scratch, we take the U-Net model that was pretrained in the state-continuous setting, and we take the same model and further train it with our state-discrete training algorithm (recall Remark 3.7, which describes how to link the Ehrenfest process to DDPM). We display the metrics in Table 1. (Footnote 3: taken from the repository https://github.com/w86763777/pytorch-ddpm.) When using only transfer learning, the different losses indicate different ways of incorporating the pretrained model, see Appendix D.2. We observe that both losses produce comparable results, with small advantages for LOU. Even without having invested much time in finetuning hyperparameters and sampling strategies, we reach competitive performance with respect to the alternative methods \u03c4-LDR (Campbell et al., 2022) and D3PM (Austin et al., 2021). Remarkably, even the attempt with transfer learning returns good results, without having applied any further training. For further details, we refer to Appendix D.5, where we also display more samples in Figures 6-9. Figure 4. CIFAR-10 samples from the Ehrenfest process with a pretrained model, further finetuned with LOU. Figure 5. CIFAR-10 samples from the Ehrenfest process with a pretrained model, further finetuned with LTaylor. Table 1 (IS \u2191 / FID \u2193): Ehrenfest (transfer learning): LOU 8.75 / 11.57, LTaylor 8.68 / 11.72. Ehrenfest (from scratch): LOU 9.50 / 5.08, LTaylor 9.66 / 5.12, LTaylor2 9.40 / 5.44. Ehrenfest (pretrained): LOU 9.14 / 6.63, LTaylor 9.06 / 6.91. Alternative methods: \u03c4-LDR (0) 8.74 / 8.10, \u03c4-LDR (10) 9.49 / 3.74, D3PM Gauss 8.56 / 7.34, D3PM Absorbing 6.78 / 30.97. Table 1. Performance in terms of Inception Score (IS) (Salimans et al., 2016) and Fr\u00e9chet Inception Distance (FID) (Heusel et al., 2017) on CIFAR-10 over 50,000 samples.
We compare two losses and consider three different scenarios: we train a model from scratch, we take the U-Net model that was pretrained in the statecontinuous setting (called \u201ctransfer learning\u201d) or we take the same model and further train it with our state-discrete training algorithm (called \u201cpretraining\u201d). 6." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03606v1.json b/abs_9K/test_abstract_short_2405.03606v1.json new file mode 100644 index 0000000000000000000000000000000000000000..10ebf47aed81912e4e6987496eeb0f5d5edaa4dd --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03606v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.03606v1", + "title": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations", + "abstract": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. 
However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.", + "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. 
We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.", + "main_content": "Introduction Second-order stochastic differential equations (SDEs) are an effective instrument for modeling complex systems showcasing both deterministic and stochastic dynamics, which incorporate the second derivative of a variable the acceleration. These models are extensively applied in many fields, including physics (Rosenblum and Pikovsky, 2003), molecular dynamics (Leimkuhler and Matthews, 2015), ecology (Johnson et al., 2008; Michelot and Blackwell, 2019), paleoclimate research (Ditlevsen et al., 2002), and neuroscience (Ziv et al., 1994; Jansen and Rit, 1995). arXiv:2405.03606v1 [stat.ME] 6 May 2024 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT The general form of a second-order SDE in Langevin form is given as follows: \u00a8 Xt = F(Xt, \u02d9 Xt, \u03b2) + \u03a3\u03bet. (1) Here, Xt \u2208Rd denotes the variable of interest, the dot indicates derivative with respect to time t, drift F represents the deterministic force, and \u03bet is a white noise representing the system\u2019s random perturbations around the deterministic force. We assume that \u03a3 is constant, that is the noise is additive. The main goal of this study is to estimate parameters in second-order SDEs. We first reformulate the d-dimensional second-order SDE (1) into a 2d-dimensional SDE in It\u00f4\u2019s form. We define an auxiliary velocity variable, and express the second-order SDE in terms of its position Xt and velocity Vt: dXt = Vt dt, X0 = x0, dVt = F (Xt, Vt; \u03b2) dt + \u03a3 dWt, V0 = v0, (2) where Wt is a standard Wiener process. We refer to Xt and Vt as the smooth and rough coordinates, respectively. 
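As a concrete illustration of system (2), the sketch below simulates a scalar second-order SDE with the Euler-Maruyama scheme. Note that the noise enters only the velocity equation, which is the source of the hypoellipticity discussed below; the damped linear drift is an illustrative choice, not a model from the paper.

```python
import numpy as np

def simulate_second_order_sde(F, sigma, x0, v0, h, n_steps, rng):
    """Euler-Maruyama simulation of dX = V dt, dV = F(X, V) dt + sigma dW.

    The noise drives only the velocity equation: X is the smooth
    coordinate, V is the rough coordinate.
    """
    x, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
    x[0], v[0] = x0, v0
    for k in range(n_steps):
        x[k + 1] = x[k] + h * v[k]  # smooth coordinate: no noise term
        v[k + 1] = v[k] + h * F(x[k], v[k]) + sigma * np.sqrt(h) * rng.standard_normal()
    return x, v

# Damped harmonic oscillator as a simple concrete drift (illustrative only).
rng = np.random.default_rng(1)
x, v = simulate_second_order_sde(lambda x, v: -0.5 * v - x, 0.3, 1.0, 0.0, 1e-3, 5000, rng)
print(x[-1], v[-1])
```

Simulation with Euler-Maruyama is unproblematic; it is the corresponding estimator that becomes ill-conditioned, as discussed in the text.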
A specific example of model (2) is F(x, v) = \u2212c(x, v)v \u2212\u2207U(x), for some function c(\u00b7) and potential U(\u00b7). Then, model (2) is called a stochastic damping Hamiltonian system. This system describes the motion of a particle subjected to potential, dissipative, and random forces (Wu, 2001). An example of a stochastic damping Hamiltonian system is the Kramers oscillator introduced in Section 2.1. Let Yt = (X\u22a4 t , V\u22a4 t )\u22a4, e F(x, v; \u03b2) = (v\u22a4, F(x, v; \u03b2)\u22a4)\u22a4and e \u03a3 = (0\u22a4, \u03a3\u22a4)\u22a4. Then (2) is formulated as dYt = e F (Yt; \u03b2) dt + e \u03a3 dWt, Y0 = y0. (3) The notation e over an object indicates that it is associated with process Yt. Specifically, the object is of dimension 2d or 2d \u00d7 2d. When it exists, the unique solution of (3) is called a diffusion or diffusion process. System (3) is usually not fully observed since the velocity Vt is not observable. Thus, our primary objective is to estimate the underlying drift parameter \u03b2 and the diffusion parameter \u03a3, based on discrete observations of either Yt (referred to as complete observation case), or only Xt (referred to as partial observation case). Diffusion Yt is said to be hypoelliptic since the matrix e \u03a3e \u03a3\u22a4= \u00140 0 0 \u03a3\u03a3\u22a4 \u0015 (4) is not of full rank, while Yt admits a smooth density. Thus, (2) is a subclass of a larger class of hypoelliptic diffusions. Parametric estimation for hypoelliptic diffusions is an active area of research. Ditlevsen and S\u00f8rensen (2004) studied discretely observed integrated diffusion processes. They proposed to use prediction-based estimating functions, which are suitable for non-Markovian processes and which do not require access to the unobserved component. They proved consistency and asymptotic normality of the estimators for N \u2192\u221e, but without any requirements on the sampling interval h. 
Certain moment conditions are needed to obtain results for fixed h, which are often difficult to fulfill for nonlinear drift functions. The estimator was applied to paleoclimate data in Ditlevsen et al. (2002), similar to the data we analyze in Section 5. Gloter (2006) also focused on parametric estimation for discretely observed integrated diffusion processes, introducing a contrast function using the Euler-Maruyama discretization. He studied the asymptotic properties as the sampling interval h \u21920 and the sample size N \u2192\u221e, under the so-called rapidly increasing experimental design Nh \u2192\u221e and Nh2 \u21920. To address the ill-conditioned contrast from the Euler-Maruyama discretization, he suggested using only the rough equations of the SDE. He proposed to recover the unobserved integrated component through the finite difference approximation (Xtk+1 \u2212Xtk)/h. This approximation makes the estimator biased and requires a correction factor of 3/2 in one of the terms of the contrast function for partial observations. Consequently, the correction increases the asymptotic variance of the estimator of the diffusion parameter. Samson and Thieullen (2012) expanded the ideas of (Gloter, 2006) and proved the results of (Gloter, 2006) in more general models. Similar to (Gloter, 2006), their focus was on contrasts using the Euler-Maruyama discretization limited to only the rough equations. Pokern et al. (2009) proposed an It\u00f4-Taylor expansion, adding a noise term of order h3/2 to the smooth component in the numerical scheme. They argued against the use of finite differences for approximating unobserved components. Instead, he suggested using the It\u00f4-Taylor expansion leading to non-degenerate conditionally Gaussian approximations of the transition density and using Markov Chain Monte Carlo (MCMC) Gibbs samplers for conditionally imputing missing components based on the observations. 
They found out that this approach resulted in a biased estimator of the drift parameter of the rough component. 2 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Ditlevsen and Samson (2019) focused on both filtering and inference methods for complete and partial observations. They proposed a contrast estimator based on the strong order 1.5 scheme (Kloeden and Platen, 1992), which incorporates noise of order h3/2 into the smooth component, similar to (Pokern et al., 2009). Moreover, they retained terms of order h2 in the mean, which removed the bias in the drift parameters noted in (Pokern et al., 2009). They proved consistency and asymptotic normality under complete observations, with the standard rapidly increasing experimental design Nh \u2192\u221eand Nh2 \u21920. They adopted an unconventional approach by using two separate contrast functions, resulting in marginal asymptotic results rather than a joint central limit theorem. The model was limited to a scalar smooth component and a diagonal diffusion coefficient matrix for the rough component. Melnykova (2020) developed a contrast estimator using local linearization (LL) (Ozaki, 1985; Shoji and Ozaki, 1998; Ozaki et al., 2000) and compared it to the least-squares estimator. She employed local linearization of the drift function, providing a non-degenerate conditional Gaussian discretization scheme, enabling the construction of a contrast estimator that achieves asymptotic normality under the standard conditions Nh \u2192\u221eand Nh2 \u21920. She proved a joint central limit theorem, bypassing the need for two separate contrasts as in Ditlevsen and Samson (2019). The models in Ditlevsen and Samson (2019) and Melnykova (2020) allow for parameters in the smooth component of the drift, in contrast to models based on second-order differential equations. 
Recent work by Gloter and Yoshida (2020, 2021) introduced adaptive and non-adaptive methods in hypoelliptic diffusion models, proving asymptotic normality in the complete observation regime. In line with this work, we briefly review their non-adaptive estimator. It is based on a higher-order It\u00f4-Taylor expansion that introduces additional Gaussian noise onto the smooth coordinates, accompanied by an appropriate higher-order mean approximation of the rough coordinates. The resulting estimator was later termed the local Gaussian (LG), which should be differentiated from LL. The LG estimator can be viewed as an extension of the estimator proposed in Ditlevsen and Samson (2019), with fewer restrictions on the class of models. Gloter and Yoshida (2020, 2021) found that using the full SDE to create a contrast reduces the asymptotic variance of the estimator of the diffusion parameter compared to methods using only rough coordinates in the case of complete observations. The most recent contributions are Iguchi et al. (2023a,b); Iguchi and Beskos (2023), building on the foundation of the LG estimator and focusing on high-frequency regimes addressing limitations in earlier methods. Iguchi et al. (2023b) presented a new closed-form contrast estimator for hypoelliptic SDEs (denoted as Hypo-I) based on Edgeworth-type density expansion and Malliavin calculus that achieves asymptotic normality under the less restrictive condition of Nh3 \u21920. Iguchi et al. (2023a) focused on a highly degenerate class of SDEs (denoted as Hypo-II) where smooth coordinates split into further sub-groups and proposed estimators for both complete and partial observation settings. Iguchi and Beskos (2023) further refined the conditions for estimators asymptotic normality for both Hypo-I and Hypo-II under a weak design Nhp \u21920, for p \u22652. The existing methods are generally based on approximations with varying degrees of refinements to correct for possible nonlinearities. 
This implies that they quickly degrade for highly nonlinear models if the step size is increased. In particular, this is the case for Hamiltonian systems. Instead, we propose to use splitting schemes, more precisely the Strang splitting scheme. Splitting schemes are established techniques initially developed for solving ordinary differential equations (ODEs) and have proven to be effective also for SDEs (Ableidinger et al., 2017; Buckwar et al., 2022; Pilipovic et al., 2024). These schemes yield accurate results in many practical applications since they incorporate nonlinearities in their construction. This makes them particularly suitable for second-order SDEs, where they have been widely used. Early work in dissipative particle dynamics (Shardlow, 2003; Serrano et al., 2006), applications to molecular dynamics (Vanden-Eijnden and Ciccotti, 2006; Melchionna, 2007; Leimkuhler and Matthews, 2015) and studies on internal particles (Pavliotis et al., 2009) all highlight the scheme\u2019s versatility. Burrage et al. (2007), Bou-Rabee and Owhadi (2010), and Abdulle et al. (2015) focused on the long-run statistical properties such as invariant measures. Bou-Rabee (2017); Br\u00e9hier and Gouden\u00e8ge (2019) and Adams et al. (2022) used splitting schemes for stochastic partial differential equations (SPDEs). Despite the extensive use of splitting schemes in different areas, statistical applications have been lacking. We have recently proposed statistical estimators for elliptic SDEs (Pilipovic et al., 2024). The straightforward and intuitive schemes lead to robust, easy-to-implement estimators, offering an advantage over more numerically intensive and less user-friendly state-of-the-art methods. We use the Strang splitting scheme to approximate the transition density between two consecutive observations and derive the pseudo-likelihood function since the exact likelihood function is often unknown or intractable. 
Then, to estimate parameters, we employ maximum likelihood estimation (MLE). However, two specific statistical problems arise due to hypoellipticity and partial observations. First, hypoellipticity leads to degenerate Euler-Maruyama transition schemes, which can be addressed by constructing the pseudo-likelihood solely from the rough equations of the SDE, referred to as the rough likelihood hereafter. The Strang splitting technique enables the estimator to incorporate both smooth and rough components (referred to as the full likelihood). It is also possible to construct Strang splitting estimators using only the rough likelihood, raising the question of which estimator performs better. Our results are in line with Gloter and Yoshida (2020, 2021) in the complete observation setting, where we find that using the full likelihood reduces the asymptotic variance of the diffusion estimator. We found the same results in the simulation study for the LL estimator proposed by Melnykova (2020). Second, we suggest treating the unobserved velocity by approximating it using finite difference methods. While Gloter (2006) and Samson and Thieullen (2012) exclusively use forward differences, we also investigate central and backward differences. The forward difference approach leads to a biased estimator unless it is corrected. One of the main contributions of this work is finding suitable corrections of the pseudo-likelihoods for different finite difference approximations such that the Strang estimators are asymptotically unbiased. This also ensures consistency of the diffusion parameter estimator, at the cost of increasing its asymptotic variance. When only partial observations are available, we explore the impact of using the full likelihood versus the rough likelihood, and how different finite difference approximations influence the parametric inference.
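The finite-difference imputation of the unobserved velocity can be sketched as follows; this is a generic illustration, and the paper's estimators additionally require the likelihood corrections discussed above.

```python
import numpy as np

def velocity_from_positions(x, h, mode="forward"):
    """Approximate the unobserved velocity from discretely observed positions.

    forward:  v_k ~ (x_{k+1} - x_k) / h
    backward: v_k ~ (x_k - x_{k-1}) / h
    central:  v_k ~ (x_{k+1} - x_{k-1}) / (2h)
    Forward and backward differences produce the same array of values but
    aligned to different time points, which matters for the bias corrections.
    """
    d = np.diff(x) / h
    if mode == "forward":
        return d  # aligned with x_0, ..., x_{N-1}
    if mode == "backward":
        return d  # aligned with x_1, ..., x_N
    if mode == "central":
        return (x[2:] - x[:-2]) / (2.0 * h)  # aligned with x_1, ..., x_{N-1}
    raise ValueError(mode)

# Accuracy check on a smooth deterministic path x(t) = sin(t): forward
# differences are first-order accurate, central differences second-order.
h = 1e-2
t = np.arange(0.0, 1.0, h)
x = np.sin(t)
err_fwd = np.max(np.abs(velocity_from_positions(x, h, "forward") - np.cos(t[:-1])))
err_cen = np.max(np.abs(velocity_from_positions(x, h, "central") - np.cos(t[1:-1])))
print(err_fwd, err_cen)
```

On noisy SDE paths the picture is subtler than on this smooth test function, which is exactly why the correction factors in the text are needed.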
We find that the choice of likelihood does not affect the asymptotic variance of the estimator. However, our simulation study on the Kramers oscillator suggests that using the full likelihood in finite sample setups introduces more bias than using only the rough marginal likelihood, which is the opposite of the complete observation setting. Finally, we analyze a paleoclimate ice core dataset from Greenland using a second-order SDE. The main contributions of this paper are: 1. We extend the Strang splitting estimator of (Pilipovic et al., 2024) to hypoelliptic models given by second-order SDEs, including appropriate correction factors to obtain consistency. 2. When complete observations are available, we show that the asymptotic variance of the estimator of the diffusion parameter is smaller when maximizing the full likelihood. In contrast, for partial observations, we show that the asymptotic variance remains unchanged regardless of using the full or marginal likelihood of the rough coordinates. 3. We discuss how imputing the unobserved velocity variables with the forward difference approximation, compared to the backward or the central difference, influences the statistical properties of the estimators. 4. We evaluate the performance of the estimators through a simulation study of a second-order SDE, the Kramers oscillator. Additionally, we show numerically in a finite sample study that the marginal likelihood for partial observations is more favorable than the full likelihood. 5. We fit the Kramers oscillator to a paleoclimate ice core dataset from Greenland and estimate the average time needed to pass between two metastable states. The structure of the paper is as follows. In Section 2, we introduce the class of SDE models, define hypoellipticity, introduce the Kramers oscillator, and explain the Strang splitting scheme and its associated estimators. The asymptotic properties of the estimator are established in Section 3.
The theoretical results are illustrated in a simulation study on the Kramers Oscillator in Section 4. Section 5 illustrates our methodology on the Greenland ice core data, while the technical results and the proofs of the main theorems and properties are in Section 6 and Supplementary Material S1, respectively. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th column. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. We denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. For a real-valued function g : Rd \u2192R, \u2202x(i)g(x) denotes the partial derivative with respect to x(i) and \u22022 x(i)x(j)g(x) denotes the second partial derivative with respect to x(i) and x(j). The nabla operator \u2207x denotes the gradient vector of g with respect of x, that is, \u2207xg(x) = [\u2202x(i)g(x)]d i=1. H denotes the Hessian matrix of function g, Hg(x) = [\u2202x(i)x(j)g(x)]d i,j=1. For a vector-valued function F : Rd \u2192Rd, the differential operator Dx denotes the Jacobian matrix DxF(x) = [\u2202x(i)F (j)(x)]d i,j=1. Let R represent a vector (or a matrix) valued function defined on (0, 1) \u00d7 Rd (or (0, 1) \u00d7 Rd\u00d7d), such that, for some constant C, \u2225R(a, x)\u2225< aC(1 + \u2225x\u2225)C for all a, x. When denoted by R, it refers to a scalar function. For an open set A, the bar A indicates closure. We write P \u2212 \u2192for convergence in probability P. 
2 Problem setup Let Y = (Yt)t\u22650 in (3) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration F = (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (3) as follows: dYt = e A(\u03b2)(Yt \u2212 e b(\u03b2)) dt + e N(Yt; \u03b2) dt + e \u03a3 dWt, (5) where e A(\u03b2) = [0d\u00d7d, Id; Ax(\u03b2), Av(\u03b2)], e b(\u03b2) = [b(\u03b2); 0d], e N(x, v; \u03b2) = [0d; N(x, v; \u03b2)]. (6) Function F in (2) is thus split as F(x, v; \u03b2) = Ax(\u03b2)(x \u2212 b(\u03b2)) + Av(\u03b2)v + N(x, v; \u03b2). Let \u0398\u03b2 \u00d7 \u0398\u03a3 = \u0398 denote the closure of the parameter space, with \u0398\u03b2 and \u0398\u03a3 being two convex open bounded subsets of Rr and Rd\u00d7d, respectively. The function N : R2d \u00d7 \u0398\u03b2 \u2192 Rd is assumed locally Lipschitz; functions Ax and Av are defined on \u0398\u03b2 and take values in Rd\u00d7d; and the parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4 is assumed to be positive definite, shaping the variance of the rough coordinates. As any square root of \u03a3\u03a3\u22a4 induces the same distribution, \u03a3 is identifiable only up to equivalence classes. Hence, estimation of the parameter \u03a3 means estimation of \u03a3\u03a3\u22a4. The drift function e F in (3) is divided into a linear part given by the matrix e A and a nonlinear part given by e N. The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208 \u0398.
When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a40 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), F(x; \u03b20) and \u03a30\u03a30\u22a4, respectively. We write Ax, Av, b, N(x), F(x), and \u03a3\u03a3\u22a4 for any parameter \u03b8. 2.1 Example: The Kramers oscillator The abrupt temperature changes during the ice ages, known as the Dansgaard\u2013Oeschger (DO) events, are essential elements for understanding the climate (Dansgaard et al., 1993). These events occurred during the last glacial era, spanning approximately the period from 115,000 to 12,000 years before present, and are characterized by rapid warming phases followed by gradual cooling periods, revealing colder (stadial) and warmer (interstadial) climate states (Rasmussen et al., 2014). To analyze the DO events in Section 5, we propose a stochastic model of the escape dynamics in metastable systems, the Kramers oscillator (Kramers, 1940), originally formulated to model the escape rate of Brownian particles from potential wells. The escape rate is related to the mean first passage time, i.e., the time needed for a particle to exceed the potential\u2019s local maximum for the first time, starting at a neighboring local minimum. This rate depends on variables such as the damping coefficient, noise intensity, temperature, and specific potential features, including the barrier\u2019s height and curvature at the minima and maxima. We apply this framework to quantify the rate of climate transitions between stadial and interstadial periods. This provides an estimate of the probability distribution of the occurrence of DO events, contributing to our understanding of the global climate system. Following Arnold and Imkeller (2000), we introduce the Kramers oscillator as the stochastic Duffing oscillator, an example of a second-order SDE and a stochastic damping Hamiltonian system.
The Duffing oscillator (Duffing, 1918) is a forced nonlinear oscillator featuring a cubic stiffness term. The governing equation is given by: \u00a8xt + \u03b7\u02d9xt + (d/dx)U(xt) = f(t), where U(x) = \u2212ax2/2 + bx4/4, with a, b > 0, \u03b7 \u2265 0. (7) The parameter \u03b7 in (7) indicates the damping level, a regulates the linear stiffness, and b determines the nonlinear component of the restoring force. In the special case where b = 0, the equation simplifies to a damped harmonic oscillator. Function f represents the driving force and is usually set to f(t) = \u03b7 cos(\u03c9t), which introduces deterministic chaos (Korsch and Jodl, 1999). When the driving force is f(t) = \u221a(2\u03b7T) \u03be(t), where \u03be(t) is white noise, equation (7) characterizes the stochastic movement of a particle within a bistable potential well, interpreting T > 0 as the temperature of a heat bath. Setting \u03c3 = \u221a(2\u03b7T), equation (7) can be reformulated as an It\u00f4 SDE for variables Xt and Vt = \u02d9Xt, expressed as: dXt = Vt dt, dVt = (\u2212\u03b7Vt \u2212 (d/dx)U(Xt)) dt + \u03c3 dWt, (8) where Wt denotes a standard Wiener process. The parameter set of SDE (8) is \u03b8 = {\u03b7, a, b, \u03c32}. The existence and uniqueness of the invariant measure \u03bd0(dx, dv) of (8) is proved in Theorem 3 in (Arnold and Imkeller, 2000). The invariant measure \u03bd0 is linked to the invariant density \u03c00 through \u03bd0(dx, dv) = \u03c00(x, v) dx dv. Here we write \u03c00(x, v) instead of \u03c0(x, v; \u03b80), and \u03c0(x, v) instead of \u03c0(x, v; \u03b8). The Fokker-Planck equation for \u03c0 is given by \u2212v (\u2202/\u2202x)\u03c0(x, v) + \u03b7\u03c0(x, v) + \u03b7v (\u2202/\u2202v)\u03c0(x, v) + (d/dx)U(x) (\u2202/\u2202v)\u03c0(x, v) + (\u03c32/2)(\u22022/\u2202v2)\u03c0(x, v) = 0.
(9) The invariant density that solves the Fokker-Planck equation is: \u03c0(x, v) = C exp(\u2212(2\u03b7/\u03c32)U(x)) exp(\u2212(\u03b7/\u03c32)v2), (10) where C is the normalizing constant. The marginal invariant probability of Vt is thus Gaussian with zero mean and variance \u03c32/(2\u03b7). The marginal invariant probability of Xt is bimodal, driven by the potential U(x): \u03c0(x) = C exp(\u2212(2\u03b7/\u03c32)U(x)). (11) At steady state, for a particle moving in any potential U(x) and driven by random Gaussian noise, the position x and the velocity v are independent of each other. This is reflected by the decomposition of the joint density \u03c0(x, v) into \u03c0(x)\u03c0(v). The Fokker-Planck equation (9) can also be used to derive the mean first passage time \u03c4, which is inversely related to Kramers\u2019 escape rate \u03ba (Kramers, 1940): \u03c4 = 1/\u03ba \u2248 (2\u03c0/\u2126)(\u221a(1 + \u03b72/(4\u03c92)) \u2212 \u03b7/(2\u03c9))\u22121 exp(\u2206U/T), where xbarrier = 0 is the local maximum of U(x) and xwell = \u00b1\u221a(a/b) are the local minima, \u03c9 = \u221a(|U\u2032\u2032(xbarrier)|) = \u221aa, \u2126 = \u221a(U\u2032\u2032(xwell)) = \u221a(2a), and \u2206U = U(xbarrier) \u2212 U(xwell) = a2/(4b). The formula is derived assuming strong friction, i.e., an over-damped system (\u03b7 \u226b \u03c9), and a small parameter T/\u2206U \u226a 1, indicating sufficiently deep potential wells. For the potential defined in (7), the mean waiting time \u03c4 is then approximated by \u03c4 \u2248 \u221a2 \u03c0 (\u221a(a + \u03b72/4) \u2212 \u03b7/2)\u22121 exp(a2\u03b7/(2b\u03c32)). (12) 2.2 Hypoellipticity The SDE (5) is said to be hypoelliptic if its quadratic diffusion matrix e \u03a3 e \u03a3\u22a4 is not of full rank, while its solutions admit a smooth transition density with respect to the Lebesgue measure.
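Returning to the Kramers oscillator, the closed-form approximation (12) for the mean waiting time is straightforward to evaluate numerically; the parameter values below are purely illustrative, not estimates from the ice core data.

```python
import numpy as np

def kramers_mean_passage_time(a, b, eta, sigma2):
    """Approximate mean first passage time for the Kramers oscillator, eq. (12):

        tau ~ sqrt(2)*pi / (sqrt(a + eta^2/4) - eta/2) * exp(a^2*eta / (2*b*sigma^2))

    Valid in the over-damped, deep-well regime (eta >> omega, T/DeltaU << 1).
    """
    prefactor = np.sqrt(2.0) * np.pi / (np.sqrt(a + eta**2 / 4.0) - eta / 2.0)
    return prefactor * np.exp(a**2 * eta / (2.0 * b * sigma2))

# Illustrative parameter values only.
tau = kramers_mean_passage_time(a=1.0, b=1.0, eta=2.0, sigma2=1.0)
print(tau)
```

Deepening the wells (larger a, smaller b) or reducing the noise makes the exponential barrier factor, and hence the waiting time, grow rapidly.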
According to H\u00f6rmander\u2019s theorem (Nualart, 2006), this is fulfilled if the SDE in its Stratonovich form satisfies the weak H\u00f6rmander condition. Since \u03a3 does not depend on y, the It\u00f4 and Stratonovich forms coincide. We begin by recalling the concept of Lie brackets: for smooth vector fields f, g : R2d \u2192R2d, the i-th component of the Lie bracket, [f, g](i), is defined as [f, g](i) := D\u22a4 y g(i)(y)f(y) \u2212D\u22a4 y f (i)(y)g(y). We define the set H of vector fields by initially including e \u03a3(i), i = 1, 2, ..., 2d, and then recursively adding Lie brackets H \u2208H \u21d2[e F, H], [e \u03a3(1), H], . . . , [e \u03a3(2d), H] \u2208H. The weak H\u00f6rmander condition is met if the vectors in H span R2d at every point y \u2208R2d. The initial vectors span {(0, v) \u2208R2d | v \u2208Rd}, a d-dimensional subspace. We therefore need to verify the existence of some H \u2208H with a non-zero first element. The first iteration of the system yields [e F, e \u03a3(i)](1) = \u2212\u03a3(i), [e \u03a3(i), e \u03a3(j)](1) = 0, for i, j = 1, 2, ..., 2d. The first equation is non-zero, as are all subsequent iterations. Thus, the second-order SDE defined in (5) is always hypoelliptic. 6 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.3 Assumptions The following assumptions are a generalization of those presented in (Pilipovic et al., 2024). Let T > 0 be the length of the observed time interval. We assume that (5) has a unique strong solution Y = {Yt | t \u2208[0, T]}, adapted to F = {Ft | t \u2208[0, T]}, which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to both y and \u03b8, i.e., N \u2208C2. 
Moreover, it is globally one-sided Lipschitz continuous with respect to y on R2d \u00d7 \u0398\u03b2. That is, there exists a constant C > 0 such that for all y1, y2 \u2208R2d, (y1 \u2212y2)\u22a4(N(y1; \u03b2) \u2212N(y2; \u03b2)) \u2264C\u2225y1 \u2212y2\u22252. (A2) Function N exhibits at most polynomial growth in y, uniformly in \u03b8. Specifically, there exist constants C > 0 and \u03c7 \u22651 such that for all y1, y2 \u2208R2d, \u2225N (y1; \u03b2) \u2212N (y2; \u03b2) \u22252 \u2264C \u00001 + \u2225y1\u22252\u03c7\u22122 + \u2225y2\u22252\u03c7\u22122\u0001 \u2225y1 \u2212y2\u22252. Additionally, its derivatives exhibit polynomial growth in y, uniformly in \u03b8. (A3) The solution Y to SDE (5) has invariant probability \u03bd0(dy). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) \u03b2 is identifiable, that is, if F(y, \u03b21) = F(y, \u03b22) for all y \u2208R2d, then \u03b21 = \u03b22. Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Yt\u22252p] < C(1 + \u2225y0\u22252p), \u2200p \u22651. (13) Assumption (A3) is necessary for the ergodic theorem to ensure convergence in distribution. Assumption (A4) ensures that the model (5) is hypoelliptic. Assumption (A5) ensures the identifiability of the drift parameter. 2.4 Strang splitting scheme Consider the following splitting of (5): dY[1] t = e A(Y[1] t \u2212e b) dt + e \u03a3 dWt, Y[1] 0 = y0, (14) dY[2] t = e N(Y[2] t ) dt, Y[2] 0 = y0. (15) There are no assumptions on the choice of e A and e b, and thus the nonlinear function e N. Indeed, we show that the asymptotic results hold for any choice of e A and e b in both the complete and the partial observation settings. This extends the results in Pilipovic et al. (2024), where it is shown to hold in the elliptic complete observation case, as well. 
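The linear SDE (14) has a Gaussian transition whose covariance over one step is $\tilde{\Omega}_h = \int_0^h e^{\tilde{A}(h-u)} \tilde{\Sigma}\tilde{\Sigma}^\top e^{\tilde{A}^\top(h-u)}\,du$. A minimal sketch of computing this integral with Van Loan's block-matrix method is shown below, using the linear part of the Kramers oscillator; the matrix $\tilde{A}$, the parameter values, and the step size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: OU covariance Omega_h = int_0^h e^{As} Q e^{A^T s} ds via Van Loan's
# block-matrix trick, for an assumed linear part of the Kramers oscillator.
eta, a, sigma2, h = 6.5, 1.0, 0.1, 0.1
A = np.array([[0.0, 1.0], [-2.0 * a, -eta]])
Q = np.array([[0.0, 0.0], [0.0, sigma2]])     # Sigma Sigma^T

d = A.shape[0]
C = np.zeros((2 * d, 2 * d))
C[:d, :d], C[:d, d:], C[d:, d:] = -A, Q, A.T
G = expm(C * h)
Omega_h = G[d:, d:].T @ G[:d, d:]             # = int_0^h e^{As} Q e^{A^T s} ds
print(Omega_h)
```

Note that $\tilde{\Omega}_h$ has full rank even though $Q$ does not, which is the hypoellipticity discussed above.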
While asymptotic results are invariant to the choice of e A and e b, finite sample properties of the scheme and the corresponding estimators are very different, and it is important to choose the splitting wisely. Intuitively, when the process is close to a fixed point of the drift, the linear dynamics are dominating, whereas far from the fixed points, the nonlinearities might be dominating. If the drift has a fixed point y\u22c6, we therefore suggest setting e A = Dye F(y\u22c6) and e b = y\u22c6. This choice is confirmed in simulations (for more details see Pilipovic et al. (2024)). Solution of SDE (14) is an Ornstein\u2013Uhlenbeck (OU) process given by the following h-flow: Y[1] tk = \u03a6[1] h (Y[1] tk\u22121) = e \u00b5h(Y[1] tk\u22121; \u03b2) + e \u03b5h,k, (16) e \u00b5h(y; \u03b2) := e e Ah(y \u2212e b) + e b, (17) e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du, (18) where e \u03b5h,k i.i.d \u223cN2d(0, e \u2126h) for k = 1, . . . , N. It is useful to rewrite e \u2126h in the following block matrix form, e \u2126h = \" \u2126[SS] h \u2126[SR] h \u2126[RS] h \u2126[RR] h # , (19) 7 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where S in the superscript stands for smooth and R stands for rough. The Schur complement of e \u2126h with respect to \u2126[RR] h and the determinant of e \u2126h are given by: \u2126[S|R] h := \u2126[SS] h \u2212\u2126[SR] h (\u2126[RR] h )\u22121\u2126[RS] h , det e \u2126h = det \u2126[RR] h det \u2126[S|R] h . Assumptions (A1)-(A2) ensure the existence and uniqueness of the solution of (15) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function e fh : R2d \u00d7 \u0398\u03b2 \u2192R2d, for h \u22650, such that Y[2] tk = \u03a6[2] h (Y[2] tk\u22121) = e fh(Y[2] tk\u22121; \u03b2). 
(20) For all \u03b2 \u2208\u0398\u03b2, the h-flow e fh fulfills the following semi-group properties: e f0(y; \u03b2) = y, e ft+s(y; \u03b2) = e ft( e fs(y; \u03b2); \u03b2), t, s \u22650. For y = (x\u22a4, v\u22a4)\u22a4, we have: e fh(x, v; \u03b2) = \u0014 x fh(x, v; \u03b2) \u0015 , (21) where fh(x, v; \u03b2) is the solution of the ODE with vector field N(x, v; \u03b2). We introduce another assumption needed to define the pseudo-likelihood based on the splitting scheme. (A6) Inverse function e f \u22121 h (y; \u03b2) is defined asymptotically for all y \u2208R2d and all \u03b2 \u2208\u0398\u03b2, when h \u21920. Then, the inverse of \u02dc fh can be decomposed as: e f \u22121 h (x, v; \u03b2) = \u0014 x f \u22c6\u22121 h (x, v; \u03b2) \u0015 , (22) where f \u22c6\u22121 h (x, v; \u03b2) is the rough part of the inverse of e f \u22121 h . It does not equal f \u22121 h since the inverse does not propagate through coordinates when fh depends on x. We are now ready to define the Strang splitting scheme for model (5). Definition 2.1 (Strang splitting) Let Assumptions (A1)-(A2) hold. The Strang approximation of the solution of (5) is given by: \u03a6[str] h (Y[str] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(Y[str] tk\u22121) = e fh/2(e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k). (23) Remark 1 The order of composition in the splitting schemes is not unique. Changing the order in the Strang splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (23). 2.5 Strang splitting estimators In this section, we introduce four estimators, all based on the Strang splitting scheme. We distinguish between estimators based on complete observations (denoted by C when both X and V are observed) and partial observations (denoted by P when only X is observed). 
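Definition 2.1 composes half a step of the nonlinear flow, one full OU step, and another half nonlinear step. The sketch below simulates this composition for the Kramers oscillator, using the splitting around the stable points $x^\star = \pm\sqrt{a/b}$ suggested later for this model; parameter values, step size, and seed are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the Strang scheme (Definition 2.1) for the Kramers oscillator:
# half nonlinear flow, one OU step, half nonlinear flow. Assumed parameters.
eta, a, b, sigma2, h = 6.5, 1.0, 0.6, 0.1, 0.1
A = np.array([[0.0, 1.0], [-2.0 * a, -eta]])
Q = np.array([[0.0, 0.0], [0.0, sigma2]])

# e^{Ah} and the OU covariance Omega_h (Van Loan block-matrix method)
C = np.block([[-A, Q], [np.zeros((2, 2)), A.T]])
G = expm(C * h)
eAh = expm(A * h)
Omega_h = G[2:, 2:].T @ G[:2, 2:]
L = np.linalg.cholesky(Omega_h)

def f_flow(y, t):
    # exact flow of the nonlinear part: x stays constant, v moves linearly in t
    x, v = y
    x_star = np.copysign(np.sqrt(a / b), x if x != 0 else 1.0)
    return np.array([x, v + t * (a * x - b * x**3 + 2.0 * a * (x - x_star))])

def strang_step(y, rng):
    y = f_flow(y, h / 2.0)
    x_star = np.copysign(np.sqrt(a / b), y[0] if y[0] != 0 else 1.0)
    b_vec = np.array([x_star, 0.0])
    y = b_vec + eAh @ (y - b_vec) + L @ rng.standard_normal(2)  # OU step
    return f_flow(y, h / 2.0)

rng = np.random.default_rng(0)
y = np.array([1.0, 0.0])
xs = []
for _ in range(2000):
    y = strang_step(y, rng)
    xs.append(y[0])
xs = np.asarray(xs)
```

With these (deep-well) parameter values the simulated position fluctuates around one stable equilibrium, consistent with the large mean waiting time implied by (12).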
In applications, we typically only have access to partial observations; however, the complete observation estimator serves as a building block for the partial observation case. Additionally, we distinguish the estimators based on the type of likelihood function employed. These are the full likelihood (denoted by F) and the marginal likelihood of the rough component (denoted by R). We furthermore use the conditional likelihood of the smooth component given the rough part (denoted by S | R) to decompose the full likelihood.
2.5.1 Complete observations
Assume we observe the complete sample $Y_{0:t_N} := (Y_{t_k})_{k=0}^N$ from (5) at time steps $0 = t_0 < t_1 < \ldots < t_N = T$. For notational simplicity, we assume an equidistant step size $h = t_k - t_{k-1}$. The Strang splitting scheme (23) is a nonlinear transformation of the Gaussian random variable $\tilde{\mu}_h(\tilde{f}_{h/2}(Y^{[\mathrm{str}]}_{t_{k-1}})) + \tilde{\varepsilon}_{h,k}$. We define:
$$\tilde{Z}_{k,k-1}(\beta) := \tilde{f}^{-1}_{h/2}(Y_{t_k}; \beta) - \tilde{\mu}_h(\tilde{f}_{h/2}(Y_{t_{k-1}}; \beta); \beta), \quad (24)$$
and apply the change-of-variables formula to get:
$$p(y_{t_k} \mid y_{t_{k-1}}) = p_{\mathcal{N}(0, \tilde{\Omega}_h)}(\tilde{z}_{k,k-1} \mid y_{t_{k-1}}) \, |\det D_y \tilde{f}^{-1}_{h/2}(y_{t_k})|.$$
Using $-\log|\det D_y \tilde{f}^{-1}_{h/2}(y; \beta)| = \log|\det D_y \tilde{f}_{h/2}(y; \beta)|$ and $\det D_y \tilde{f}_{h/2}(y; \beta) = \det D_v f_{h/2}(y; \beta)$, together with the Markov property of $Y_{0:t_N}$, we obtain the following objective function based on the full log-likelihood:
$$\mathcal{L}^{[\mathrm{CF}]}(Y_{0:t_N}; \theta) := \sum_{k=1}^N \left( \log\det\tilde{\Omega}_h(\theta) + \tilde{Z}_{k,k-1}(\beta)^\top \tilde{\Omega}_h(\theta)^{-1} \tilde{Z}_{k,k-1}(\beta) + 2\log|\det D_v f_{h/2}(Y_{t_k}; \beta)| \right).$$
(25) Now, split e Zk,k\u22121 from (24) into the smooth and rough parts e Zk,k\u22121 = ((Z[S] k,k\u22121)\u22a4, (Z[R] k,k\u22121)\u22a4)\u22a4defined as: Z[S] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]d i=1 = Xtk \u2212\u00b5[S] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (26) Z[R] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]2d i=d+1 = f \u22c6\u22121 h/2 (Ytk; \u03b2) \u2212\u00b5[R] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (27) where \u00b5[S] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]d i=1, \u00b5[R] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]2d i=d+1. (28) We also define the following sequence of vectors Z[S|R] k,k\u22121(\u03b2) := Z[S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z[R] k,k\u22121(\u03b2). (29) The formula for jointly normal distributions yields: pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121) = pN (0,\u2126[RR] h )(z[R] k,k\u22121 | ytk\u22121) \u00b7 pN (\u2126[SR] h (\u2126[RR] h )\u22121z[R] k,k\u22121,\u2126[S|R] h )(z[S] k,k\u22121 | z[R] k,k\u22121, ytk\u22121). This leads to dividing the full log-likelihood L[CF] into a sum of the marginal log-likelihood L[CR](Y0:tN ; \u03b8) and the smooth-given-rough log-likelihood L[CS|R](Y0:tN ; \u03b8): L[CF](Y0:tN ; \u03b8) = L[CR](Y0:tN ; \u03b8) + L[CS|R](Y0:tN ; \u03b8), where L[CR] (Y0:tN ; \u03b8) := N X k=1 log det \u2126[RR] h (\u03b8) + Z[R] k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z[R] k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 (Ytk; \u03b2) \f \f ! , (30) L[CS|R] (Y0:tN ; \u03b8) := N X k=1 \u0010 log det \u2126[S|R] h (\u03b8) + Z[S|R] k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z[S|R] k,k\u22121(\u03b2) \u0011 . (31) The terms containing the drift parameter in L[CR] in (30) are of order h1/2, as in the elliptic case, whereas the terms containing the drift parameter in L[CS|R] in (31) are of order h3/2. 
Consequently, under a rapidly increasing experimental design where $Nh \to \infty$ and $Nh^2 \to 0$, the objective function (31) is degenerate for estimating the drift parameter. However, it contributes to the estimation of the diffusion parameter when the full objective function (25) is used. We show in later sections that employing (25) results in a lower asymptotic variance for the diffusion parameter, making it more efficient in complete observation scenarios. The estimators based on complete observations are then defined as:
$$\hat{\theta}^{[\mathrm{obj}]}_N := \arg\min_\theta \mathcal{L}^{[\mathrm{obj}]}(Y_{0:t_N}; \theta), \quad \mathrm{obj} \in \{[\mathrm{CF}], [\mathrm{CR}]\}. \quad (32)$$
Although the full objective function is based on twice as many equations as the marginal likelihood, its implementation complexity, speed, and memory requirements are similar to those of the marginal objective function. Therefore, if complete observations are available, we recommend using the objective function (25) based on the full likelihood.
2.5.2 Partial observations
Assume we only observe the smooth coordinates $X_{0:t_N} := (X_{t_k})_{k=0}^N$. The observed process $X_t$ alone is not a Markov process, although the complete process $Y_t$ is. To approximate $V_{t_k}$, we define the backward difference process:
$$\Delta_h X_{t_k} := \frac{X_{t_k} - X_{t_{k-1}}}{h}. \quad (33)$$
From SDE (2) it follows that
$$\Delta_h X_{t_k} = \frac{1}{h} \int_{t_{k-1}}^{t_k} V_t \, dt. \quad (34)$$
We propose to approximate $V_{t_k}$ using $\Delta_h X_{t_k}$ by any of the following three approaches:
1. Backward difference approximation: $V_{t_k} \approx \Delta_h X_{t_k}$;
2. Forward difference approximation: $V_{t_k} \approx \Delta_h X_{t_{k+1}}$;
3. Central difference approximation: $V_{t_k} \approx \frac{\Delta_h X_{t_k} + \Delta_h X_{t_{k+1}}}{2}$.
The forward difference approximation performs best in our simulation study; it is also the approximation method employed in Gloter (2006) and Samson and Thieullen (2012).
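The three velocity approximations above can be sketched on synthetic data. This is an illustrative example (the trajectory $X_t = \sin(t)$, with true velocity $\cos(t)$, is an assumption and not from the paper); it shows the backward, forward, and central variants built from the same difference process $\Delta_h X$.

```python
import numpy as np

# Sketch of the three finite-difference approximations of the unobserved
# velocity from positions sampled at step h. Synthetic data (assumption):
# X_t = sin(t), so the true velocity is cos(t).
h = 0.01
t = np.linspace(0.0, 1.0, 101)
X = np.sin(t)

dX = np.diff(X) / h                    # Delta_h X_{t_k} = (X_{t_k} - X_{t_{k-1}}) / h
V_backward = dX[:-1]                   # V_{t_k} ~ Delta_h X_{t_k}
V_forward = dX[1:]                     # V_{t_k} ~ Delta_h X_{t_{k+1}}
V_central = 0.5 * (dX[:-1] + dX[1:])   # V_{t_k} ~ (Delta_h X_{t_k} + Delta_h X_{t_{k+1}}) / 2

V_true = np.cos(t[1:-1])
for name, V in [("backward", V_backward), ("forward", V_forward), ("central", V_central)]:
    print(name, np.max(np.abs(V - V_true)))
```

On smooth data, the central variant has the smallest error ($O(h^2)$ versus $O(h)$); as the text explains, its drawback for the diffusion estimator is statistical (increased variance from skipping a data point), not one of deterministic accuracy.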
In the field of numerical approximation of ODEs, backward and forward finite differences have the same order of convergence, whereas the central difference has a higher convergence rate. However, the diffusion parameter estimator based on the central difference $(X_{t_{k+1}} - X_{t_{k-1}})/2h$ is less suitable, because this approximation skips a data point and thus increases the estimator's variance. For further discussion, see Remark 6. Thus, we focus exclusively on forward differences, following Gloter (2006); Samson and Thieullen (2012), and all proofs are carried out for this approximation. Similar results also hold for the backward difference, with some adjustments needed in the conditional moments due to filtration issues. We start by approximating $\tilde{Z}$ in the case of partial observations, denoting the resulting quantity by $\bar{Z}$:
$$\bar{Z}_{k+1,k,k-1}(\beta) := \tilde{f}^{-1}_{h/2}(X_{t_k}, \Delta_h X_{t_{k+1}}; \beta) - \tilde{\mu}_h(\tilde{f}_{h/2}(X_{t_{k-1}}, \Delta_h X_{t_k}; \beta); \beta). \quad (35)$$
The smooth and rough parts of $\bar{Z}$ are thus equal to:
$$\bar{Z}^{[S]}_{k,k-1}(\beta) := X_{t_k} - \mu^{[S]}_h(\tilde{f}_{h/2}(X_{t_{k-1}}, \Delta_h X_{t_k}; \beta); \beta), \quad (36)$$
$$\bar{Z}^{[R]}_{k+1,k,k-1}(\beta) := f^{\star-1}_{h/2}(X_{t_k}, \Delta_h X_{t_{k+1}}; \beta) - \mu^{[R]}_h(\tilde{f}_{h/2}(X_{t_{k-1}}, \Delta_h X_{t_k}; \beta); \beta), \quad (37)$$
and
$$\bar{Z}^{[S|R]}_{k+1,k,k-1}(\beta) := \bar{Z}^{[S]}_{k,k-1}(\beta) - \Omega^{[SR]}_h(\Omega^{[RR]}_h)^{-1}\bar{Z}^{[R]}_{k+1,k,k-1}(\beta). \quad (38)$$
Compared to $Z^{[R]}_{k,k-1}$ in (27), $\bar{Z}^{[R]}_{k+1,k,k-1}$ in (37) depends on three consecutive data points, with the additional point $X_{t_{k+1}}$ entering through $\Delta_h X_{t_{k+1}}$. Furthermore, $X_{t_k}$ enters both $f^{\star-1}_{h/2}$ and $\mu^{[R]}_h$, rendering them coupled. This coupling has a significant influence on the later derivations of the estimator's asymptotic properties, in contrast to the elliptic case, where the derivations simplify.
While it might seem straightforward to incorporate e Z, Z [S] k,k\u22121 and Z [R] k,k\u22121 into the objective functions (25), (30) and (31), it introduces bias in the estimators of the diffusion parameters, as also discussed in (Gloter, 2006; Samson and Thieullen, 2012). The bias arises because Xtk enters in both f \u22c6\u22121 h/2 and e \u00b5[R] h , and the covariances of e Z, Z [S] k,k\u22121, and Z [R] k,k\u22121 differ from their complete observation counterparts. To eliminate this bias, Gloter (2006); Samson and Thieullen (2012) applied a correction of 2/3 multiplied to log det of the covariance term in the objective functions, which is log det \u03a3\u03a3\u22a4in the Euler-Maruyama discretization. We also need appropriate corrections to our objective functions (25), (30) and (31), however, caution is necessary because log det e \u2126h(\u03b8) depends on both drift and diffusion parameters. To counterbalance this, we also incorporate an adjustment to h in \u2126h. Moreover, we add the term 4 log | det Dvfh/2| to objective function (31) to obtain consistency of the drift estimator under partial observations. The detailed derivation of these correction factors will be elaborated in the following sections. 
We thus propose the following objective functions:
$$\mathcal{L}^{[\mathrm{PF}]}(X_{0:t_N}; \theta) := \frac{4}{3}(N-2)\log\det\tilde{\Omega}_{3h/4}(\theta) + \sum_{k=1}^{N-1}\left( \bar{Z}_{k+1,k,k-1}(\beta)^\top \tilde{\Omega}_h(\theta)^{-1} \bar{Z}_{k+1,k,k-1}(\beta) + 6\log|\det D_v f_{h/2}(X_{t_k}, \Delta_h X_{t_{k+1}}; \beta)| \right), \quad (39)$$
$$\mathcal{L}^{[\mathrm{PR}]}(X_{0:t_N}; \theta) := \frac{2}{3}(N-2)\log\det\Omega^{[RR]}_{3h/2}(\theta) + \sum_{k=1}^{N-1}\left( \bar{Z}^{[R]}_{k+1,k,k-1}(\beta)^\top \Omega^{[RR]}_h(\theta)^{-1} \bar{Z}^{[R]}_{k+1,k,k-1}(\beta) + 2\log|\det D_v f_{h/2}(X_{t_k}, \Delta_h X_{t_{k+1}}; \beta)| \right), \quad (40)$$
$$\mathcal{L}^{[\mathrm{PS|R}]}(X_{0:t_N}; \theta) := 2(N-2)\log\det\Omega^{[S|R]}_h(\theta) + \sum_{k=1}^{N-1}\left( \bar{Z}^{[S|R]}_{k+1,k,k-1}(\beta)^\top \Omega^{[S|R]}_h(\theta)^{-1} \bar{Z}^{[S|R]}_{k+1,k,k-1}(\beta) + 4\log|\det D_v f_{h/2}(X_{t_k}, \Delta_h X_{t_{k+1}}; \beta)| \right). \quad (41)$$
Remark 2 Due to the correction factors in the objective functions, we now have
$$\mathcal{L}^{[\mathrm{PF}]}(X_{0:t_N}; \theta) \neq \mathcal{L}^{[\mathrm{PR}]}(X_{0:t_N}; \theta) + \mathcal{L}^{[\mathrm{PS|R}]}(X_{0:t_N}; \theta). \quad (43)$$
However, when the objective functions (39)-(41) are expanded in Taylor series to the lowest necessary order in $h$, their approximations satisfy equality in (43), as shown in Section 6.
Remark 3 Adding the extra term $4\log|\det D_v f_{h/2}|$ in (41) is necessary to preserve the consistency of the drift parameter estimator. However, this term is not initially present in objective function (31), making this correction somewhat artificial. It can potentially move the objective function further from the true log-likelihood.
The estimators based on the partial sample are then defined as:
$$\hat{\theta}^{[\mathrm{obj}]}_N := \arg\min_\theta \mathcal{L}^{[\mathrm{obj}]}(X_{0:t_N}; \theta), \quad \mathrm{obj} \in \{[\mathrm{PF}], [\mathrm{PR}]\}. \quad (44)$$
In the partial observation case, the asymptotic variances of the diffusion estimators are identical whether (39) or (40) is used, in contrast to the complete observation scenario.
This variance is shown to be 9/4 times higher than the variance of the estimator \u02c6 \u03b8[CF] N , and 9/8 times higher than that of the estimator based on the marginal likelihood \u02c6 \u03b8[CR] N . The numerical study in Section 4 shows that the estimator based on the marginal objective function (40) is less biased than the one based on the full objective function (39) in finite sample scenarios with partial observations. A potential reason for this is discussed in Remark 3. Therefore, we recommend using the objective function (40) for partial observations. 3 Main results This section states the two main results \u2013 consistency and asymptotic normality of all four proposed estimators. The key ideas for proofs are presented in Supplementary Materials S1. First, we state the consistency of the estimators in both complete and partial observation cases. Let L[obj] N be one of the objective functions (25), (30), (39) or (40) and b \u03b8[obj] N the corresponding estimator. Thus, obj \u2208{[CF], [CR], [PF], [PR]}. We use superscript [C\u00b7] to refer to any objective function in the complete observation case. Likewise, [\u00b7R] stands for an objective function based on the rough marginal likelihood either in the complete or the partial observation case. Theorem 3.1 (Consistency of the estimators) Assume (A1)-(A6), h \u21920, and Nh \u2192\u221e. Then under the complete observation or partial observation case, it holds: b \u03b2[obj] N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 [obj] N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 11 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Remark 4 We split the full objective function (25) into the sum of the rough marginal likelihood (30) and the conditional smooth-given-rough likelihood (31). Even if (31) cannot identify the drift parameter \u03b2, it is an important intermediate step in understanding the full objective function (25). 
This can be seen in the proof of Theorem 3.1, where we first establish consistency of the diffusion estimator with a convergence rate of \u221a N, which is faster than \u221a Nh, the convergence rate of the drift estimators. Then, under complete observations, we show that 1 Nh(L[CR] N (\u03b2, \u03c30) \u2212L[CR] N (\u03b20, \u03c30)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z (F0(y) \u2212F(y))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(y) \u2212F(y)) d\u03bd0(y). (45) The right-hand side of (45) is non-negative, with a unique zero for F = F0. Conversely, for objective function (31), it holds: 1 Nh(L[CS|R] N (\u03b2, \u03c3) \u2212L[CS|R] N (\u03b20, \u03c3)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0. (46) Hence, (46) does not have a unique minimum, making the drift parameter unidentifiable. Similar conclusions are drawn in the partial observation case. Now, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. For \u02c6 \u03b8[obj] N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HL[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8[obj] N \u2212\u03b80) = \u2212\u2207\u03b8L[obj] N (\u03b80) . 
(47) Define: C[obj] N (\u03b8) := \uf8ee \uf8ef \uf8f0 h 1 Nh\u22022 \u03b2(i1)\u03b2(i2)L[obj] N (\u03b8) ir i1,i2=1 h 1 N \u221a h\u22022 \u03b2(i)\u03c3(j)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u221a h\u22022 \u03c3(j)\u03b2(i)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u22022 \u03c3(j1)\u03c3(j2)L[obj] N (\u03b8) is j1,j2=1 \uf8f9 \uf8fa \uf8fb, (48) s[obj] N := \"\u221a Nh( \u02c6 \u03b2[obj] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[obj] N \u2212\u03c30) # , \u03bb[obj] N := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2207\u03b2L[obj] N (\u03b80) \u22121 \u221a N \u2207\u03c3L[obj] N (\u03b80) \uf8f9 \uf8fa \uf8fb, (49) and D[obj] N := R 1 0 C[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt. Then, (47) is equivalent to D[obj] N s[obj] N = \u03bb[obj] N . Let: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2(i1)F0(y))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2(i2)F0(y)) d\u03bd0(y), 1 \u2264i1, i2 \u2264r, (50) [C\u03c3(\u03b80)]j1,j2 := Tr((\u2202\u03c3(j1)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c3(j2)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. (51) Theorem 3.2 Let assumptions (A1)-(A6) hold, and let h \u21920, Nh \u2192\u221e, and Nh2 \u21920. Then under complete observations, it holds: \"\u221a Nh( \u02c6 \u03b2[CR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[CF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 
If only partial observations are available and the unobserved coordinates are approximated using the forward or backward differences, then \"\u221a Nh( \u02c6 \u03b2[PR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[PF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 12 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Here, we only outline the proof. According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 3.3 and 3.4 below are enough for establishing asymptotic normality of \u02c6 \u03b8N. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 3.3 Let CN(\u03b80) be defined in (48). For h \u21920 and Nh \u2192\u221e, it holds: C[CR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015 , C[PR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2 3C\u03c3(\u03b80) \u0015 , C[CF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015 , C[PF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 8 3C\u03c3(\u03b80) \u0015 . Moreover, let \u03c1N be a sequence such that \u03c1N \u21920, then in all cases it holds: sup \u2225\u03b8\u2225\u2264\u03c1N \u2225C[obj] N (\u03b80 + \u03b8) \u2212C[obj] N (\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 3.4 Let \u03bbN be defined (49). 
For $h \to 0$, $Nh \to \infty$ and $Nh^2 \to 0$, it holds:
$$\lambda^{[\mathrm{CR}]}_N \xrightarrow{d} \mathcal{N}\left(0, \begin{bmatrix} 4C_\beta(\theta_0) & 0_{r\times s} \\ 0_{s\times r} & 2C_\sigma(\theta_0) \end{bmatrix}\right), \quad \lambda^{[\mathrm{PR}]}_N \xrightarrow{d} \mathcal{N}\left(0, \begin{bmatrix} 4C_\beta(\theta_0) & 0_{r\times s} \\ 0_{s\times r} & C_\sigma(\theta_0) \end{bmatrix}\right),$$
$$\lambda^{[\mathrm{CF}]}_N \xrightarrow{d} \mathcal{N}\left(0, \begin{bmatrix} 4C_\beta(\theta_0) & 0_{r\times s} \\ 0_{s\times r} & 4C_\sigma(\theta_0) \end{bmatrix}\right), \quad \lambda^{[\mathrm{PF}]}_N \xrightarrow{d} \mathcal{N}\left(0, \begin{bmatrix} 4C_\beta(\theta_0) & 0_{r\times s} \\ 0_{s\times r} & 16C_\sigma(\theta_0) \end{bmatrix}\right),$$
under $P_{\theta_0}$. The two previous lemmas now suggest $s^{[\mathrm{obj}]}_N = (D^{[\mathrm{obj}]}_N)^{-1}\lambda^{[\mathrm{obj}]}_N \xrightarrow{d} C^{[\mathrm{obj}]}_N(\theta_0)^{-1}\lambda^{[\mathrm{obj}]}_N$. The previous line is not completely formal, but it gives the intuition. For more details on deriving the result formally, see Section 7.4 in Pilipovic et al. (2024) or the proof of Theorem 1 in Sørensen and Uchida (2003).
4 Simulation study
This section presents a simulation study of the Kramers oscillator (8), illustrating the theoretical results and comparing our proposed estimators against estimators based on the EM and LL approximations. We compare against these two because the EM estimator is routinely used in applications, and the LL estimator has been shown to be one of the best state-of-the-art methods; see Pilipovic et al. (2024) for the elliptic case. The true parameters are set to $\eta_0 = 6.5$, $a_0 = 1$, $b_0 = 0.6$ and $\sigma^2_0 = 0.1$. We outline the estimators specifically designed for the Kramers oscillator, explain the simulation procedure, describe the optimization implemented in the R programming language (R Core Team, 2022), and then present and interpret the results.
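As a quick sanity check on these parameter values, the mean waiting time formula (12) can be evaluated numerically. This sketch (the paper itself uses R; the Python translation is ours) plugs the true parameters of the simulation study into (12):

```python
import numpy as np

# Evaluate the Kramers mean first passage time, eq. (12), for the
# double-well potential U(x) = -a x^2/2 + b x^4/4 at the true parameters
# of the simulation study (eta = 6.5, a = 1, b = 0.6, sigma^2 = 0.1).
def kramers_tau(eta, a, b, sigma2):
    prefactor = np.sqrt(2.0) * np.pi / (np.sqrt(a + eta**2 / 4.0) - eta / 2.0)
    return prefactor * np.exp(a**2 * eta / (2.0 * b * sigma2))

tau = kramers_tau(eta=6.5, a=1.0, b=0.6, sigma2=0.1)
print(f"{tau:.3e}")  # on the order of 1e25
```

The implied waiting time is many orders of magnitude larger than the simulated horizon $Nh = 500$, so the simulated trajectories effectively stay within one potential well; this does not affect parameter estimation, which relies on the local dynamics.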
4.1 Estimators used in the study For the Kramers oscillator (8), the EM transition distribution is: \u0014 Xtk Vtk \u0015 | \u0014 Xtk\u22121 Vtk\u22121 \u0015 = \u0014 x v \u0015 \u223cN \u0012\u0014 x + hv v + h \u0000\u2212\u03b7v + ax \u2212bx3\u0001 \u0015 , \u0014 0 0 0 h\u03c32 \u0015\u0013 . The ill-conditioned variance of this discretization restricts us to an estimator that only uses the marginal likelihood of the rough coordinate. The estimator for complete observations directly follows from the Gaussian distribution. The estimator for partial observations is defined as (Samson and Thieullen, 2012): b \u03b8[PR] EM = arg min \u03b8 ( 2 3(N \u22123) log \u03c32 + 1 h\u03c32 N\u22122 X k=1 (\u2206hXtk+1 \u2212\u2206hXtk \u2212h(\u2212\u03b7\u2206hXtk\u22121 + aXtk\u22121 \u2212bX3 tk\u22121))2 ) . To our knowledge, the LL estimator has not previously been applied to partial observations. Given the similar theoretical and computational performance of the Strang and LL discretizations, we suggest (without formal proof) to adjust the LL objective functions with the same correction factors as used in the Strang approach. The numerical evidence indicates 13 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT that the LL estimator has the same asymptotic properties as those proved for the Strang estimator. We omit the definition of the LL estimator due to its complexity (see Melnykova (2020); Pilipovic et al. (2024) and accompanying code). To define S estimators based on the Strang splitting scheme, we first split SDE (8) as follows: d \u0014 Xt Vt \u0015 = \u0014 0 1 \u22122a \u2212\u03b7 \u0015 | {z } A \u0014 Xt Vt \u0015 \u2212 \u0014 x\u22c6 \u00b1 0 \u0015 | {z } b ! dt + \u0014 0 aXt \u2212bX3 t + 2a(Xt \u2212x\u22c6 \u00b1) \u0015 | {z } N(Xt,Vt) dt + \u0014 0 \u03c3 \u0015 dWt, where x\u22c6 \u00b1 = \u00b1 p a/b are the two stable points of the dynamics. 
Since there are two stable points, we suggest splitting with x\u22c6 +, when Xt > 0, and x\u22c6 \u2212, when Xt < 0. This splitting follows the guidelines from (Pilipovic et al., 2024). Note that the nonlinear ODE driven by N(x, v) has a trivial solution where x is a constant. To obtain Strang estimators, we plug in the corresponding components in the objective functions (25), (30), (39) and (40). 4.2 Trajectory simulation We simulate a sample path using the EM discretization with a step size of hsim = 0.0001 to ensure good performance. To reduce discretization errors, we sub-sample from the path at wider intervals to get time step h = 0.1. The path has N = 5000 data points. We repeat the simulations to obtain 250 data sets. 4.3 Optimization in R For optimizing the objective functions, we proceed as in Pilipovic et al. (2024) using the R package torch (Falbel and Luraschi, 2022), which allows automatic differentiation. The optimization employs the resilient backpropagation algorithm, optim_rprop. We use the default hyperparameters and limit the number of optimization iterations to 2000. The convergence criterion is set to a precision of 10\u22125 for the difference between estimators in consecutive iterations. The initial parameter values are set to (\u22120.1, \u22120.1, 0.1, 0.1). 4.4 Results The results of the simulation study are presented in Figure 1. Figure 1A) presents the distributions of the normalized estimators in the complete and partial observation cases. The S and LL estimators exhibit nearly identical performance, particularly in the complete observation scenario. In contrast, the EM method displays significant underperformance and notable bias. The variances of the S and LL rough-likelihood estimators of \u03c32 are higher compared to those derived from the full likelihood, aligning with theoretical expectations. 
Interestingly, in the partial observation scenario, Figure 1A) reveals that estimators employing the full likelihood display greater finite sample bias compared to those based on the rough likelihood. Possible reasons for this bias are discussed in Remark 3. However, it is noteworthy that this bias is eliminated for smaller time steps, e.g. h = 0.0001 (not shown), thus confirming the theoretical asymptotic results. This observation suggests that the rough likelihood is preferable under partial observations due to its lower bias. Backward finite difference approximations of the velocity variables perform similarly to the forward differences and are therefore excluded from the figure for clarity. We closely examine the variances of the S estimators of \u03c32 in Figure 1B). The LL estimators are omitted due to their similarity to the S estimators, and because the computation times for the LL estimators are prohibitive. To align more closely with the asymptotic predictions, we opt for h = 0.02 and conduct 1000 simulations. Additionally, we set \u03c32 0 = 100 to test different noise levels. Atop each empirical distribution, we overlay theoretical normal densities that match the variances as per Theorem 3.2. The theoretical variance is derived from C\u03c32(\u03b80) in (51), which for the Kramers oscillator in (8) is: C\u03c32(\u03b80) = 1 \u03c34 0 . (52) Figure 1 illustrates that the lowest variance of the diffusion estimator is observed when using the full likelihood with complete observations. The second lowest variance is achieved using the rough likelihood with complete observations. The largest variance is observed in the partial observation case; however, it remains independent of whether the full or rough likelihood is used. Once again, we observe that using the full likelihood introduces additional finite sample bias. In Figure 1C), we compare running times calculated using the tictoc package in R. 
Running times are measured from the start of the optimization step until convergence. The figure depicts the median over 250 repetitions to mitigate the influence of outliers. The EM method is notably the fastest; however, the S estimators exhibit only slightly slower performance. The LL estimators are 10-100 times slower than the S estimators, depending on whether complete or partial observations are used and whether the full or rough likelihood is employed. 14 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 1: Parameter estimates in a simulation study for the Kramers oscillator, eq. (8). The color code remains consistent across all three figures. A) Normalized distributions of parameter estimation errors (\u02c6 \u03b8N \u2212\u03b80) \u2298\u03b80 in both complete and partial observation cases, based on 250 simulated data sets with h = 0.1 and N = 5000. Each column corresponds to a different parameter, while the color indicates the type of estimator. Estimators are distinguished by superscripted objective functions (F for full and R for rough). B) Distribution of b \u03c32 N estimators based on 1000 simulations with h = 0.02 and N = 5000 across different observation settings (complete or partial) and likelihood choices (full or rough) using the Strang splitting scheme. The true value of \u03c32 is set to \u03c32 0 = 100. Theoretical normal densities are overlaid for comparison. Theoretical variances are calculated based on C\u03c32(\u03b80), eq. (52). C) Median computing time in seconds for one estimation of various estimators based on 250 simulations with h = 0.1 and N = 5000. Shaded color patterns represent times in the partial observation case, while no color pattern indicates times in the complete observation case. 15 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 2: Ice core data from Greenland. 
Left: Trajectories over time (in kilo years) of the centered negative logarithm of the Ca2+ measurements (top) and forward difference approximations of its rate of change (bottom). The two vertical dark red lines represent the estimated stable equilibria of the double-well potential function. Green points denote up- and down-crossings of level \u00b10.6, conditioned on having crossed the other level. Green vertical lines indicate empirical estimates of occupancy in either of the two metastable states. Right: Empirical densities (black) alongside estimated invariant densities with confidence intervals (dark red), prediction intervals (light red), and the empirical density of a simulated sample from the estimated model (blue). 5 Application to Greenland Ice Core Data During the last glacial period, significant climatic shifts known as Dansgaard-Oeschger (DO) events have been documented in paleoclimatic records (Dansgaard et al., 1993). Proxy data from Greenland ice cores, particularly stable water isotope composition (\u03b418O) and calcium ion concentrations (Ca2+), offer valuable insights into these past climate variations (Boers et al., 2017, 2018; Boers, 2018; Ditlevsen et al., 2002; Lohmann and Ditlevsen, 2019; Hassanibesheli et al., 2020). The \u03b418O ratio, reflecting the relative abundance of 18O and 16O isotopes in ice, serves as a proxy for paleotemperatures during snow deposition. Conversely, calcium ions, originating from dust deposition, exhibit a strong negative correlation with \u03b418O, with higher calcium ion levels indicating colder conditions. Here, we prioritize the Ca2+ time series due to its finer temporal resolution. In Greenland ice core records, the DO events manifest as abrupt transitions from colder climates (stadials) to approximately 10 degrees warmer climates (interstadials) within a few decades. Although the waiting times between state switches last a couple of thousand years, their spacing exhibits significant variability.
The underlying mechanisms driving these changes remain largely elusive, prompting discussions on whether they follow cyclic patterns, result from external forcing, or emerge from noise-induced processes (Boers, 2018; Ditlevsen et al., 2007). We aim to determine if the observed data can be explained by noise-induced transitions of the Kramers oscillator. The measurements were conducted at the summit of the Greenland ice sheet as part of the Greenland Ice Core Project (GRIP) (Anklin et al., 1993; Andersen et al., 2004). Originally, the data were sampled at 5 cm intervals, resulting in a non-equidistant time series due to ice compression at greater depths, where 5 cm of ice core spans longer time periods. For our analysis, we use a version of the data transformed into a uniformly spaced series through 20-year binning and averaging. This transformation simplifies the analysis and highlights significant climatic trends. The dataset is available in the supplementary material of Rasmussen et al. (2014) and Seierstad et al. (2014). To address the large amplitudes and the negative correlation with temperature, we transform the data to minus the logarithm of Ca2+, where higher values of the transformed variable indicate warmer climates at the time of snow deposition. Additionally, we center the transformed measurements around zero. With the 20-year binning, to obtain one point per 20 years, we average across the bins, resulting in a time step of h = 0.02 kyr (1 kyr = 1000 years). Additionally, we addressed a few missing values using the na.approx function from the zoo package. Following the approach of Hassanibesheli et al. (2020), we analyze a subset of the data with a sufficiently good signal-to-noise ratio. Hassanibesheli et al. (2020) examined the data from 30 to 60 kyr before present.
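The preprocessing just described (sign-flipped log transform, 20-year binning and averaging, linear interpolation of gaps, centering) can be sketched as follows. The paper works in R with zoo::na.approx; this is a rough Python equivalent, and the function name and binning-by-truncation detail are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def preprocess_ca(time_kyr, ca, bin_width=0.02):
    """Transform a Ca2+ series as in the paper: minus log, bin-averaging
    onto a uniform grid, linear interpolation of gaps, and centering.

    bin_width = 0.02 kyr corresponds to the 20-year binning.
    """
    x = -np.log(ca)                       # higher values ~ warmer climate
    s = pd.Series(x, index=pd.Index(time_kyr, name="kyr"))
    # Average within bins to obtain an equidistant series
    # (truncation-based bin assignment; a sketch, not the authors' code).
    bins = (s.index / bin_width).astype(int)
    binned = s.groupby(bins).mean()
    binned.index = binned.index * bin_width
    # Linear interpolation of missing values (R analogue: zoo::na.approx).
    binned = binned.interpolate(method="linear")
    return binned - binned.mean()         # center around zero
```

The result is a uniformly spaced, centered series ready for forward-difference velocity approximation.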
Here, we extend the analysis to cover 30 kyr to 80 kyr, resulting in a time interval of T = 50 kyr and a sample size of N = 2500. We approximate the velocity of the transformed Ca2+ by the forward difference method. The trajectories and empirical invariant distributions are illustrated in Figure 2. We fit the Kramers oscillator to the \u2212log Ca2+ time series and estimate parameters using the Strang estimator. Following Theorem 3.2, we compute C_\beta(\theta_0) from (50). Applying the invariant density \pi_0(x, v) from (10), which decouples into \pi_0(x), eq. (11), and a Gaussian density with zero mean and variance \sigma_0^2/(2\eta_0), leads us to: C_\beta(\theta_0) = \begin{pmatrix} \frac{1}{2\eta_0} & 0 & 0 \\ 0 & \frac{1}{\sigma_0^2}\int_{-\infty}^{\infty} x^2 \pi_0(x)\,dx & -\frac{1}{\sigma_0^2}\int_{-\infty}^{\infty} x^4 \pi_0(x)\,dx \\ 0 & -\frac{1}{\sigma_0^2}\int_{-\infty}^{\infty} x^4 \pi_0(x)\,dx & \frac{1}{\sigma_0^2}\int_{-\infty}^{\infty} x^6 \pi_0(x)\,dx \end{pmatrix}. (53) Thus, to obtain 95% confidence intervals (CI) for the estimated parameters, we plug \hat\theta_N into (52) and (53). The estimators and confidence intervals are shown in Table 1. We also calculate the expected waiting time \tau, eq. (12), of crossing from one state to another, and its confidence interval using the Delta Method.

Parameter   Estimate   95% CI
\eta        62.5       59.4 - 65.6
a           296.7      293.6 - 299.8
b           219.1      156.4 - 281.7
\sigma^2    9125       8589 - 9662
\tau        3.97       3.00 - 4.94

Table 1: Estimated parameters of the Kramers oscillator from Greenland ice core data. The model fit is assessed in the right panels of Figure 2. Here, we present the empirical distributions of the two coordinates along with the fitted theoretical invariant distribution and a 95% confidence interval. Additionally, a prediction interval for the distribution is provided by simulating 1000 datasets from the fitted model, matching the size of the empirical data.
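The confidence interval for the waiting time \tau in Table 1 comes from the delta method: if \hat\theta is approximately normal with covariance \Sigma, then a smooth function g(\hat\theta) is approximately normal with variance \nabla g^\top \Sigma \nabla g. A generic numeric sketch (central finite differences for the gradient; the actual g here would be the Kramers waiting-time formula, eq. (12), which we do not reproduce):

```python
import numpy as np

def delta_method_ci(g, theta_hat, cov, level=1.96, eps=1e-6):
    """Approximate 95% CI for g(theta_hat) via the first-order delta method.

    The gradient of g is estimated by central finite differences, then
    var(g) ~ grad' @ cov @ grad.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    cov = np.asarray(cov, dtype=float)
    grad = np.zeros_like(theta_hat)
    for i in range(theta_hat.size):
        e = np.zeros_like(theta_hat)
        e[i] = eps
        grad[i] = (g(theta_hat + e) - g(theta_hat - e)) / (2 * eps)
    se = float(np.sqrt(grad @ cov @ grad))
    center = g(theta_hat)
    return center - level * se, center + level * se

# Illustrative example: g(theta) = theta_0 * theta_1, diagonal covariance.
lo, hi = delta_method_ci(lambda t: t[0] * t[1], [2.0, 3.0], np.diag([0.01, 0.04]))
```

Plugging in \hat\theta_N and the estimated asymptotic covariance from (52)-(53) would reproduce the kind of interval reported for \tau.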
We estimate the empirical distributions for each simulated dataset and construct a 95% prediction interval using the pointwise 2.5th and 97.5th percentiles of these estimates. A single example trace is included in blue. While the fitted distribution for \u2212log Ca2+ appears to fit well, even with this symmetric model, the velocity variables are not adequately captured. This discrepancy is likely due to the presence of extreme values in the data that are not effectively accounted for by additive Gaussian noise. Consequently, the model compensates by estimating a large variance. We estimate the waiting time between metastable states to be approximately 4000 years. However, this approximation relies on certain assumptions, namely 62.5 \approx \eta \gg \sqrt{a} \approx 17.2 and 73 \approx \sigma^2/(2\eta) \ll a^2/(4b) \approx 100. Thus, the approximation may not be highly accurate. Defining the current state of the process is not straightforward. One method involves identifying successive up- and down-crossings of predefined thresholds within the smoothed data. However, the estimated occupancy time in each state depends on the level of smoothing applied and the distance of the crossing thresholds from zero. Using a smoothing technique involving running averages within windows of 11 data points (equivalent to 220 years) and detecting down- and up-crossings of levels \u00b10.6, we find an average occupancy time of 4058 years in stadial states and 3550 years in interstadial states. Nevertheless, the actual occupancy times exhibit significant variability, ranging from 60 to 6900 years, with the central 50% of values falling between 665 and 2115 years. This classification of states is depicted in green in Figure 2. Overall, the estimated mean occupancy time inferred from the Kramers oscillator appears reasonable. 6 Technical results In this section, we present all the necessary technical properties that are used to derive the main results of the paper.
We start by expanding \tilde\Omega_h and its block components \Omega^{[RR]}_h(\theta)^{-1}, \Omega^{[S|R]}_h(\theta)^{-1}, \log\det\Omega^{[RR]}_h(\theta), \log\det\Omega^{[S|R]}_h(\theta) and \log|\det Df_{h/2}(y; \beta)| when h goes to zero. Then, we expand \tilde Z_{k,k-1}(\beta) and \tilde Z_{k+1,k,k-1}(\beta) around Y_{t_{k-1}} when h goes to zero. The main tools used are It\u00f4's lemma, Taylor expansions, and Fubini's theorem. The final result is stated in Propositions 6.6 and 6.7. The approximations depend on the drift function F, the nonlinear part N, and some correlated sequences of Gaussian random variables. Finally, we obtain approximations of the objective functions (25), (30), (31) and (39)-(41). Proofs of all the stated propositions and lemmas in this section are in Supplementary Material S1. 6.1 Covariance matrix \tilde\Omega_h The covariance matrix \tilde\Omega_h is approximated by: \tilde\Omega_h = \int_0^h e^{\tilde A(h-u)} \tilde\Sigma\tilde\Sigma^\top e^{\tilde A^\top(h-u)}\,du = h\tilde\Sigma\tilde\Sigma^\top + \frac{h^2}{2}(\tilde A\tilde\Sigma\tilde\Sigma^\top + \tilde\Sigma\tilde\Sigma^\top\tilde A^\top) + \frac{h^3}{6}(\tilde A^2\tilde\Sigma\tilde\Sigma^\top + 2\tilde A\tilde\Sigma\tilde\Sigma^\top\tilde A^\top + \tilde\Sigma\tilde\Sigma^\top(\tilde A^2)^\top) + \frac{h^4}{24}(\tilde A^3\tilde\Sigma\tilde\Sigma^\top + 3\tilde A^2\tilde\Sigma\tilde\Sigma^\top\tilde A^\top + 3\tilde A\tilde\Sigma\tilde\Sigma^\top(\tilde A^2)^\top + \tilde\Sigma\tilde\Sigma^\top(\tilde A^3)^\top) + R(h^5, y_0). (54) The following lemma approximates each block of \tilde\Omega_h up to the first two leading orders of h. The result follows directly from equations (4), (6), and (54).
Lemma 6.1 The covariance matrix e \u2126h defined in (54)-(19) approximates block-wise as: \u2126[SS] h (\u03b8) = h3 3 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0), \u2126[SR] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (Av(\u03b2)\u03a3\u03a3\u22a4+ 2\u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RS] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (2Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RR] h (\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h3, y0). Building on Lemma 6.1, we calculate products, inverses, and logarithms of the components of e \u2126h in the following lemma. Lemma 6.2 For the covariance matrix e \u2126h defined in (54) it holds: (i) \u2126[RR] h (\u03b8)\u22121 = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121Av(\u03b2) + Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, y0); (ii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121 = h 2 I \u2212h2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h3, y0); (iii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121\u2126[RS] h (\u03b8) = h3 4 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0); (iv) \u2126[S|R] h (\u03b8) = h3 12 \u03a3\u03a3\u22a4+ R(h5, y0); (v) log det \u2126[RR] h (\u03b8) = d log h + log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0); (vi) log det \u2126[S|R] h (\u03b8) = 3d log h + log det \u03a3\u03a3\u22a4+ R(h2, y0); (vii) log det e \u2126h(\u03b8) = 4d log h + 2 log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0). Remark 5 We adjusted the objective functions for partial observations using the term c log det \u2126[\u00b7] h/c, where c is a correction constant. This adjustment keeps the term h Tr Av(\u03b2) in (v)-(vii) constant, not affecting the asymptotic distribution of the drift parameter. 
There is no h4-term in \u2126[S|R] h (\u03b8) which simplifies the approximation of \u2126[S|R] h (\u03b8)\u22121 and log det \u2126[S|R] h (\u03b8). Consequently, this makes (41) a bad choice for estimating the drift parameter. 18 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.2 Nonlinear solution e fh We now state a useful proposition for the nonlinear solution e fh (Section 1.8 in (Hairer et al., 1993)). Proposition 6.3 Let Assumptions (A1), (A2) and (A6) hold. When h \u21920, the h-flow of (15) approximates as: e fh(y) = y + h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y), (55) e f \u22121 h (y) = y \u2212h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y). (56) Applying the previous proposition on (21) and (22), we get: fh(y) = v + hN(y) + h2 2 (DvN(y))N(y) + R(h3, y), (57) f \u22c6\u22121 h (y) = v \u2212hN(y) + h2 2 (DvN(y))N(y) + R(h3, y). (58) The following lemma approximates log | det Dfh/2 (y; \u03b2) | in the objective functions and connects it with Lemma 6.2. Lemma 6.4 Let e fh be the function defined in (21). It holds: 2 log | det Dfh/2 (Ytk; \u03b2) | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121), 2 log | det Dfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001 | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). An immediate consequence of the previous lemma and that DvF(y; \u03b2) = Av(\u03b2) + DvN(y; \u03b2) is log det \u2126[RR] h (\u03b8) + 2 log | det Dfh/2 (Ytk; \u03b2) | = log det h\u03a3\u03a3\u22a4+ h Tr DvF(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). The same equality holds when Ytk is approximated by (Xtk, \u2206hXtk+1). The following lemma expands function \u00b5h( e fh/2(y)) up to the highest necessary order of h. Lemma 6.5 For the functions e fh in (21) and e \u00b5h in (28), it holds \u00b5[S] h ( e fh/2(y)) = x + hv + h2 2 F(y) + R(h3, y), (59) \u00b5[R] h ( e fh/2(y)) = v + h(F(y) \u22121 2N(y)) + R(h2, y). 
(60) 6.3 Random variables e Zk,k\u22121 and e Zk+1,k,k\u22121 To approximate the random variables Z[S] k,k\u22121(\u03b2), Z[R] k,k\u22121(\u03b2), Z [S] k,k\u22121(\u03b2), and Z [R] k+1,k,k\u22121(\u03b2) around Ytk\u22121, we start by defining the following random sequences: \u03b7k\u22121 := 1 h1/2 Z tk tk\u22121 dWt, (61) \u03bek\u22121 := 1 h3/2 Z tk tk\u22121 (t \u2212tk\u22121) dWt, \u03be\u2032 k := 1 h3/2 Z tk+1 tk (tk+1 \u2212t) dWt, (62) \u03b6k\u22121 := 1 h5/2 Z tk tk\u22121 (t \u2212tk\u22121)2 dWt, \u03b6\u2032 k := 1 h5/2 Z tk+1 tk (tk+1 \u2212t)2 dWt. (63) The random variables (61)-(63) are Gaussian with mean zero. Moreover, at time tk they are Ftk+1 measurable and independent of Ftk. The following linear combinations of (61)-(63) appear in the expansions in the partial observation case: Uk,k\u22121 := \u03be\u2032 k + \u03bek\u22121, (64) Qk,k\u22121 := \u03b6\u2032 k + 2\u03b7k\u22121 \u2212\u03b6k\u22121. (65) 19 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT It is not hard to check that \u03be\u2032 k + \u03b7k\u22121 \u2212\u03be\u2032 k\u22121 = Uk,k\u22121. This alternative representation of Uk,k\u22121 will be used later in proofs. The It\u00f4 isometry yields: E\u03b80[\u03b7k\u22121\u03b7\u22a4 k\u22121 | Ftk\u22121] = I, E\u03b80[\u03b7k\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03b7k\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 2I, (66) E\u03b80[\u03bek\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 6I, E\u03b80[\u03bek\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03be\u2032 k\u03be\u2032\u22a4 k | Ftk\u22121] = 1 3I, (67) E\u03b80[Uk,k\u22121U\u22a4 k,k\u22121 | Ftk\u22121] = 2 3I, E\u03b80[Uk,k\u22121(Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4| Ftk\u22121] = I. (68) The covariances of other combinations of the random variables (61)-(63) are not needed for the proofs. 
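The conditional moments in (66)-(68), e.g. E[\eta\xi^\top] = I/2 and E[\xi\xi^\top] = I/3, follow from the It\u00f4 isometry and are easy to confirm numerically by simulating the stochastic integrals on a fine subgrid. A seeded one-dimensional Monte Carlo sketch (not part of the paper's proofs, just a sanity check):

```python
import numpy as np

def simulate_eta_xi(n_paths=50_000, n_sub=100, h=1.0, seed=0):
    """Simulate eta = h^{-1/2} int_0^h dW and xi = h^{-3/2} int_0^h t dW
    via Riemann-Ito sums (left endpoints) on a subgrid of n_sub steps."""
    rng = np.random.default_rng(seed)
    dt = h / n_sub
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_sub))
    t_left = np.arange(n_sub) * dt          # left endpoints (Ito convention)
    eta = dW.sum(axis=1) / np.sqrt(h)
    xi = (t_left * dW).sum(axis=1) / h**1.5
    return eta, xi

eta, xi = simulate_eta_xi()
# Ito isometry predicts E[eta^2] = 1, E[eta*xi] = 1/2, E[xi^2] = 1/3,
# up to Monte Carlo and discretization error.
```

The same construction with weights (t - t_{k-1})^2 yields the \zeta sequences in (63).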
However, to derive asymptotic properties, we need some fourth moments calculated in Supplementary Materials S1. The following two propositions are the last building blocks for approximating the objective functions (30)-(31) and (40)-(41). Proposition 6.6 The random variables e Zk,k\u22121(\u03b2) in (24) and e Zk+1,k,k\u22121(\u03b2) in (35) are approximated by: Z[S] k,k\u22121(\u03b2) = h3/2\u03a30\u03be\u2032 k\u22121 + h2 2 (F0(Ytk\u22121) \u2212F(Ytk\u22121)) + h5/2 2 DvF0(Ytk\u22121)\u03a30\u03b6\u2032 k\u22121 + R(h3, Ytk\u22121), Z[R] k,k\u22121(\u03b2) = h1/2\u03a30\u03b7k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 + h3/2DvF0(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h2, Ytk\u22121), Z [S] k,k\u22121(\u03b2) = \u2212h2 2 F(Ytk\u22121) \u2212h5/2 2 DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h3, Ytk\u22121), Z [R] k+1,k,k\u22121(\u03b2) = h1/2\u03a30Uk,k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h3/2DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + h3/2 2 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h2, Ytk\u22121). Remark 6 Proposition 6.6 yield E\u03b80[Z[R] k,k\u22121(\u03b2)Z[R] k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = \u2126[RR] h + R(h2, Ytk\u22121), E\u03b80[Z [R] k+1,k,k\u22121(\u03b2)Z [R] k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 2 3h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = 2 3\u2126[RR] h + R(h2, Ytk\u22121). Thus, the correction factor 2/3 in (40) compensates for the underestimation of the covariance of Z [R] k+1,k,k\u22121(\u03b2). Similarly, it can be shown that the same underestimation happens when using the backward difference. 
On the other hand, when using the central difference, it can be shown that E\u03b80[Z [R],central k+1,k,k\u22121(\u03b2)Z [R],central k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 5 12h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121), which is a larger deviation from \u2126[RR] h , yielding a larger correcting factor and larger asymptotic variance of the diffusion parameter estimator. Proposition 6.7 Let e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) be defined in (24) and (35), respectively. Then, Z[S|R] k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 \u2212h5/2 2 DvF0(Ytk\u22121)\u03a30(\u03be\u2032 k\u22121 \u2212\u03b6\u2032 k\u22121) + R(h3, Ytk\u22121), Z [S|R] k+1,k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30Uk,k\u22121 \u2212h2 2 F0(Ytk\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30Uk,k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h5/2 4 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h3, Ytk\u22121). 20 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.4 Objective functions Starting with the complete observation case, we approximate objective functions (30) and (31) up to order R(h3/2, Ytk\u22121) to prove the asymptotic properties of the estimators \u02c6 \u03b8[CR] N and \u02c6 \u03b8[CS|R] N . 
After omitting the terms of order R(h, Ytk\u22121) that do not depend on \u03b2, we obtain the following approximations: L[CR] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 (69) + 2 \u221a h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 + h N X k=1 Tr DvF(Ytk; \u03b2), L[CS|R] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ 3 N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) (70) \u22123h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121DvN(Ytk\u22121; \u03b2)\u03a30\u03b7k\u22121 \u2212h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 L[CF] N (Y0:tN ; \u03b8) = L[CR] N (Y0:tN ; \u03b8) + L[CS|R] N (Y0:tN ; \u03b8) . (71) The two last sums in (70) converge to zero because E\u03b80[(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u03b7\u22a4 k\u22121|Ftk\u22121] = 0. Moreover, (70) lacks the quadratic form of F(Ytk\u22121) \u2212F0(Ytk\u22121), that is crucial for the asymptotic variance of the drift estimator. This implies that the objective function L[CS|R] N is not suitable for estimating the drift parameter. 
Conversely, (70) provides a correct and consistent estimator of the diffusion parameter, indicating that the full objective function (the sum of L[CR] N and L[CS|R] N ) consistently estimates \u03b8. Similarly, the approximated objective functions in the partial observation case are: L[PR] N (Y0:tN ; \u03b8) = 2 3(N \u22122) log det \u03a3\u03a3\u22a4+ N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (72) + 2 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N\u22121 X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N\u22121 X k=1 (Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + h N\u22121 X k=1 Tr DvF(Ytk; \u03b2), L[PS|R] N (Y0:tN ; \u03b8) = 2(N \u22122) log det \u03a3\u03a3\u22a4+ 3 N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (73) + 6 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121F(Ytk\u22121; \u03b20) \u22123h N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 DvN(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + 2h N\u22121 X k=1 Tr DvN(Ytk; \u03b2), 21 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT L[PF] N (Y0:tN ; \u03b8) = L[PR] N (Y0:tN ; \u03b8) + L[PS|R] N (Y0:tN ; \u03b8) . (74) This time, the term with Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121 vanishes because Tr(\u03a30Uk,k\u22121U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) = 0 due to the symmetry of the matrices and the trace cyclic property. 
Even though the partial observation objective function L[PR] (X0:tN ; \u03b8) (40) depends only on X0:tN , we could approximate it with L[PR] N (Y0:tN ; \u03b8) (72). This is useful for proving the asymptotic normality of the estimator since its asymptotic distribution will depend on the invariant probability \u03bd0 defined for the solution Y. The absence of the quadratic form F(Ytk\u22121) \u2212F0(Ytk\u22121) in (73) indicates that L[PS|R] N is not suitable for estimating the drift parameter. Additionally, the penultimate term in (73) does not vanish, needing an additional correction term of 2h PN\u22121 k=1 Tr DvN(Ytk; \u03b2) for consistency. This correction is represented as 4 log | det Dvfh/2| in (41). Notably, this term is absent in the complete objective function (31), making this adjustment somewhat artificial and could potentially deviate further from the true log-likelihood. Consequently, the objective function based on the full likelihood (39) inherits this characteristic from (73), suggesting that in the partial observation scenario, using only the rough likelihood (72) may be more appropriate. 7" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03690v2.json b/abs_9K/test_abstract_short_2405.03690v2.json new file mode 100644 index 0000000000000000000000000000000000000000..a8820ee330a4a8eaf4492eda8743a8412584e3c7 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03690v2.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03690v2", + "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs", + "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. 
The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan", + "published": "2024-05-06", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. 
These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "main_content": "Introduction Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. 
Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities arXiv:2405.03690v2 [cs.CV] 8 May 2024 [Table 1: Comparison of CVRR-ES with existing benchmarks for video QA (MSVD-QA [Xu et al., 2017], MSRVTT-QA [Xu et al., 2017], TGIF-QA [Jang et al., 2017], ActivityNet-QA [Yu et al., 2019], VideoChat-GPT [Maaz et al., 2023], MVBench [Li et al., 2023c], SEED-Bench [Li et al., 2023a], and CVRR-ES (ours)) along the dimensions: textual robustness, complex reasoning, in the wild (OOD), contextual dependency, multiple actions, and temporal order & fine-grained understanding. The CVRR-ES benchmark represents an initial effort to assess Video-LMMs in the context of their applicability and suitability in real-world applications.] [Figure 1, left (category shares of the CVRR-ES suite): non-existent actions with non-existent scene depictions 6.0%; multiple actions in a single video 13.25%; fine-grained action understanding 9.58%; partial actions 8.58%; non-existent actions with existent scene depictions 5.75%; interpretation of visual context 11.38%; continuity and object instance count 7.38%; unusual and physically anomalous activities 7.92%; interpretation of social context 11.67%; understanding of emotional context 12.17%; time order understanding 6.33%.]
[Figure 1, right (overall accuracy on CVRR-ES, averaged over the 11 video dimensions): Video-LLaVA 15.92%, MovieChat 16.41%, LLaMA-VID 16.46%, Video-LLaMA-2 21.62%, Video-ChatGPT 24.96%, VideoChat 25.78%, TimeChat 32.89%, Gemini-Pro 53.2%, GPT4V(ision) 70.78%, Human 96.67%.] Figure 1: Left: CVRR-ES comprises 11 diverse complex video evaluation dimensions encompassing a variety of complex, real-world contexts. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions. across diverse real-world contexts becomes essential. Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems. Several attempts in the literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated an MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MVBench [Li et al., 2023c] constructed the Video-LMM benchmark and assembled 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs in real-world contexts, and the robustness of these models to user input text queries; both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas, such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs.
Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1, CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios, such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent (e.g., social, emotional, etc.) categories to ones that often take place in the wild, such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right), including both open-source and closed-source models, on the CVRR-ES benchmark using an LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos, indicating their weak reasoning and lack of robustness to textual user queries (Fig. 2). For instance, the state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across the 11 video dimensions of CVRR-ES. In contrast, closed-source models, including GPT4V(ision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023], exhibit relatively stronger performance but still lag behind human performance. Using the CVRR-ES benchmark, we extensively perform quantitative and qualitative analysis, formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions.
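The LLM-assisted automatic evaluation mentioned above (following Maaz et al., 2023) scores each open-ended model answer against the ground truth with a judge LLM. A hypothetical sketch of the scaffolding only; the prompt wording and the injected `query_judge` callable are illustrative assumptions, not the authors' exact protocol:

```python
def build_judge_prompt(question, correct_answer, model_answer):
    """Assemble an evaluation prompt asking a judge LLM to grade
    a Video-LMM answer for correctness against the ground truth."""
    return (
        "You are evaluating a video question-answering model.\n"
        f"Question: {question}\n"
        f"Ground-truth answer: {correct_answer}\n"
        f"Model answer: {model_answer}\n"
        "Reply with 'yes' if the model answer matches the ground truth "
        "in meaning, otherwise 'no', followed by a score from 0 to 5."
    )

def evaluate_pairs(qa_pairs, query_judge):
    """Accuracy over (question, ground_truth, prediction) triples.

    `query_judge` is any callable mapping a prompt string to a judge reply
    string; in practice it would wrap an LLM API, and injecting it keeps
    the scoring logic testable without network access.
    """
    correct = 0
    for question, gt, pred in qa_pairs:
        reply = query_judge(build_judge_prompt(question, gt, pred))
        if reply.strip().lower().startswith("yes"):
            correct += 1
    return correct / len(qa_pairs)
```

Separating prompt construction from the judge call is what lets the same harness score both open- and closed-source Video-LMMs.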
Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension.

Based on our analysis, we observe that standard prompting of Video-LMMs struggles to steer their focus for complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of the textual inputs (i.e., user questions).
Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model's behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarised as follows:

• We present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), a video question-answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions.

• We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries.

• We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities.

• To improve Video-LMMs' reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance.

2 Related Works

Video Large Multi-modal models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses.
From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-Former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and Video-ChatGPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction-following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023]. In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts.

Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities of both image- and Video-LMMs across 12 unique dimensions. MV-Bench [Li et al., 2023c] curates 20 challenging video tasks to evaluate the spatial and temporal understanding of Video-LMMs. Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases.
In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications.

Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to a set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques range from a single sentence (e.g., "Let's think step by step"), as in zero-shot chain-of-thought prompting [Wei et al., 2022b], to more detailed techniques such as combining chain-of-thought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain-of-thought prompting [Wang et al., 2022a]. Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model's behavior for improved reasoning and robustness over complex videos.

3 Complex Video Reasoning and Robustness Evaluation Suite

As Video-LMMs reach new real-world applications, it is essential to ensure that they robustly handle user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assesses the robustness and reasoning capabilities of Video-LMMs across a variety of complex and contextual videos covering diverse scenarios. To this end, we present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of the CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1.
Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B.

Overview of CVRR-ES Benchmark. CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality open-ended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively.

Figure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the types of questions. Right: Illustration of the most frequent keywords in the answer set of the CVRR-ES benchmark.

In Fig. 3 (left), we quantify the distribution of different question types present in our benchmark. This diverse set of questions aims to comprehensively capture the model's answering capabilities based on reasoning and robustness criteria. We show the word-cloud plot based on the frequency of keywords in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs would most likely interact when deployed in practical scenarios.

3.1 CVRR-ES Video Category Definitions

To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail.

1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video.
The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video.

2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc. This category challenges the model's comprehension of subtle and fine-grained actions through carefully crafted questions.

3) Partial actions. Based on our observation that Video-LMMs predominantly generate content that is contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions that are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg.

4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions.

5) Non-existent actions with existent scene depictions. This category examines the model's robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it.

6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension.
Non-existent scene comprehension involves changing the objects, the attributes of objects, and the background scene description. This evaluates the model's reliability in correcting misleading questions and avoiding the generation of imaginary content.

7) Continuity and object instance count. This category contains videos (both real and simulated) designed to test the models' ability to accurately recognize the number of instances of objects, people, etc., and to distinguish between existing objects and new ones introduced in the same video scene.

8) Unusual and physically anomalous activities. This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously collect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations.

9) Interpretation of social context. In the real world, human actions are often influenced by the social context of their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. We gather diverse videos from the internet and create challenging questions that encompass the social context dimension.

10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other's actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could reflect a happy moment if it stems from success or joy.
We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs.

11) Interpretation of visual context. This dimension focuses on assessing the model's reasoning abilities to recognize actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question.

Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and to contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions.

3.2 Building CVRR-ES Benchmark

After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark, which consists of three stages. We present each stage in detail below.

Stage 1: Data Collection and Annotation. We first collect high-quality videos and annotate each video with human assistance. To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of the specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark, with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets, including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], and YFCC100M [Thomee et al., 2016].
The remaining 40% of the videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. For videos where initial captions or metadata are available from academic datasets, the annotators generate captions based on them. For videos collected from the internet, captions are written entirely by the human annotators. To ensure consistency and high quality, we provide annotation instructions to the annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B.

Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. We feed detailed ground-truth video captions to the GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects.

Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it is crucial to validate the reasoning abilities of Video-LMMs for more reliable human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. This involves creating questions that go beyond simple video comprehension and scene description and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios.
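To make this generation step concrete, the following is a minimal, hypothetical sketch of turning a ground-truth caption into open-ended reasoning QA pairs. The prompt wording, the `Q:`/`A:` reply format, and the helpers `build_qa_prompt` and `parse_qa_pairs` are all illustrative assumptions, not the paper's actual instructions (those appear in its Appendix D):

```python
# Sketch of Stage 2: prompting an LLM (e.g., GPT-3.5) to turn a ground-truth
# video caption into open-ended reasoning QA pairs. All names and the reply
# format here are hypothetical.

def build_qa_prompt(caption: str, dimension: str, n_pairs: int = 2) -> str:
    """Compose the instruction sent to the LLM for one video (illustrative wording)."""
    return (
        f"You are given the ground-truth caption of a video from the "
        f"'{dimension}' evaluation dimension:\n{caption}\n"
        f"Write {n_pairs} open-ended question-answer pairs that require "
        f"reasoning beyond simple scene description. "
        f"Format each pair as 'Q: ...' and 'A: ...' on separate lines."
    )

def parse_qa_pairs(reply: str) -> list[tuple[str, str]]:
    """Extract (question, answer) tuples from a 'Q:/A:'-formatted LLM reply."""
    pairs, question = [], None
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs
```

In practice, the composed prompt would be sent as the user message of a chat-completion call, and the reply passed through `parse_qa_pairs` before the filtration stage.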
Robustness QA pairs: In addition to evaluating the reasoning capabilities of Video-LMMs, it is important to assess them to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both the visual (video input) and textual interfaces. Our focus in this work lies on textual-interface robustness, particularly testing the model's comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, depending on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and to avoid generating unreal or hallucinated content for input videos. We curate specific prompts for each evaluation dimension to instruct the LLM in generating QA pairs. Example prompts used as instructions to the LLM for curating QA pairs for the robustness and reasoning aspects are provided in Fig. 14 in Appendix D.

Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted, involving QA rectification and the removal of samples that are not relevant to the video or evaluation type. This process results in a final set of 2,400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 4 in the Appendix.

Stage 4: Evaluation Procedure.
Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLMs as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM judge alongside the evaluation prompt. The judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides reasoning to explain its decision. Our ablative analysis in Appendix D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in Appendix D.

4 Dual-Step Contextual Prompting for Video-LMMs

Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video captioning and video question answering. This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Additionally, the templates place minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against the robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. Alternatively, training-free prompting techniques in the NLP literature, such as chain-of-thought and self-consistency prompting [Wei et al., 2022b, Wang et al., 2022a], have shown effectiveness in eliciting reasoning abilities in LLMs. Inspired by these approaches, we introduce a prompting technique called Dual-Step Contextual Prompting (DSCP), which aims to steer Video-LMM focus towards enhanced reasoning while simultaneously encouraging the models to provide robust and grounded answers.

Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Step 1, retrieving contextual reasoning information: "As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in the question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide a to-the-point and concise response. Now, proceed with answering the following question faithfully while keeping the above guidelines in mind: Question: What is happening in the video?" Step 2, context-conditioned question answering: "Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question}"

Figure 5: Qualitative results of the DSCP prompting method. Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos.

DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding, such as contextual information and decoding the complex relationships between objects and motions, and 2) encourages robustness by generating the response to the question while conditioning on both the video and the context retrieved in the first step. Below we discuss each step of DSCP in detail.

Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, P_reason, which direct Video-LMMs to not only comprehend the general video content but also reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations such as contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM F and input video V, we retrieve contextual reasoning information I_context by providing the principled reasoning prompt P_reason along with the video to the LMM: I_context = F(P_reason | V). The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question.

Step 2: Context-conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing or deceiving, due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user's question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question. The complex video comprehension information refers to I_context formulated in step 1.
Subsequently, we pose the user question in the second step using prompt P_user, which combines the user question and the contextual reasoning information (Fig. 4, in green), while conditioning the model on both the video and the contextual reasoning information I_context. Concretely, Final response = F(P_user | V), where P_user = [question; I_context]. Intuitively, the factual content generated in the first step guides the model towards a robust response in the second step, producing factual and correct answers even in the presence of noisy or misleading user questions. We illustrate the qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded in the actual video content and robust against potentially lower-quality user queries. As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark.

Table 2: Evaluation results of Video-LMMs across various video-evaluation categories on the CVRR-ES benchmark. We present results for both open-source and closed-source models, alongside human evaluation results, which serve as the upper bound on the benchmark.

| Benchmark Category | Video-LLaMA-2 | VideoChat | Video-ChatGPT | Video-LLaVA | MovieChat | LLaMA-VID | TimeChat | Gemini-V Pro | GPT4V | Human |
|---|---|---|---|---|---|---|---|---|---|---|
| Multiple actions in a single video | 16.98 | 23.90 | 27.67 | 15.72 | 12.58 | 17.92 | 28.30 | 43.08 | 57.55 | 93.40 |
| Fine-grained action understanding | 29.57 | 33.48 | 26.96 | 25.22 | 23.48 | 26.09 | 39.13 | 51.61 | 77.39 | 95.65 |
| Partial actions | 24.76 | 33.01 | 22.82 | 13.59 | 21.36 | 14.56 | 49.51 | 67.48 | 73.79 | 98.54 |
| Time order understanding | 16.45 | 31.58 | 27.63 | 21.05 | 16.45 | 19.74 | 34.21 | 45.39 | 57.89 | 97.37 |
| Non-existent actions with existent scene | 10.14 | 15.22 | 23.19 | 5.07 | 5.07 | 2.90 | 23.19 | 57.25 | 71.01 | 97.10 |
| Non-existent actions with non-existent scene | 13.19 | 14.58 | 17.36 | 3.47 | 11.81 | 6.94 | 13.89 | 49.64 | 75.00 | 100.00 |
| Continuity and object instance count | 28.25 | 24.29 | 28.41 | 21.47 | 19.77 | 24.86 | 34.46 | 36.16 | 62.71 | 96.49 |
| Unusual and physically anomalous activities | 18.95 | 18.42 | 18.95 | 15.79 | 17.89 | 16.32 | 27.37 | 60.00 | 74.74 | 96.84 |
| Interpretation of social context | 25.00 | 31.07 | 32.50 | 18.93 | 17.14 | 13.93 | 39.29 | 64.29 | 79.64 | 97.51 |
| Understanding of emotional context | 21.92 | 23.63 | 21.23 | 15.07 | 13.70 | 14.73 | 27.40 | 47.26 | 66.44 | 95.55 |
| Interpretation of visual context | 32.60 | 34.43 | 27.84 | 19.78 | 21.25 | 23.08 | 45.05 | 63.00 | 82.42 | 94.87 |
| Average | 21.62 | 25.78 | 24.96 | 15.92 | 16.41 | 16.46 | 32.89 | 53.20 | 70.78 | 96.67 |

5 Evaluation Experiments on CVRR-ES

Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs, including Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-VID [Li et al., 2023d], VideoChat [Li et al., 2023b], Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For evaluating closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(ision) [OpenAI, 2023]. Refer to Appendix A for implementation details.

5.1 Main Experiments on CVRR-ES

In Tab. 2, we present the evaluation results of Video-LMMs on the 11 evaluation dimensions of the CVRR-ES benchmark. Below, we present several key findings.

Open-source Video-LMMs struggle on the CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES. Interestingly, some of the earlier open-source Video-LMMs, like Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance compared to more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance among open-source LMMs at 32.89% averaged across the 11 evaluation dimensions, followed by VideoChat with a score of 25.78%.
Humans rank highest in CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. Furthermore, these results suggest that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed-source models perform competitively on CVRR-ES. As shown in Tab. 2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show that these models have a more sophisticated understanding of the complex visual contents of videos and strong capabilities to rectify misleading and confusing user questions. Overall, GPT4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES.
Table 3: Prompting methods. DSCP stage 1 uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique.
Prompting Method | VideoChat | Video-LLaVA | MovieChat | LLaMA-VID | TimeChat
Standard prompting | 25.78 | 15.92 | 16.41 | 16.46 | 32.89
Chain of Thought (CoT) prompting | 22.44 | 25.87 | 15.89 | 29.68 | 39.57
DSCP (Stage 1) | 38.07 | 32.12 | 28.05 | 25.13 | 33.04
DSCP (Both stages) | 47.92 | 37.93 | 35.87 | 46.85 | 39.45
5.2 Effectiveness of DSCP method for improving Video-LMMs performance
Figure 6: Video-LMMs with the DSCP technique effectively improve their performance on the CVRR-ES benchmark (absolute gains: Video-LLaVA +22.01, MovieChat +19.46, LLaMA-VID +30.39, Video-LLaMA-2 +16.15, Video-ChatGPT +8.93, VideoChat +22.14, TimeChat +6.56, Gemini-Pro +5.02).
We next integrate the DSCP technique with Video-LMMs and present results on the CVRR-ES benchmark in Fig. 6.
The results indicate that DSCP improves the model\u2019s performance compared with models that use standard prompting (i.e., using only the question itself). These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVA, MovieChat, and LLaMA-VID show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-VID, which moves from 7th place in the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends of using DSCP with the closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed result comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3. Chain-of-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from the NLP literature can effectively guide multi-modal Video-LMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first-step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for video models. DSCP with both steps, which integrates an additional thinking step into the prompting pipeline, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 5.4 Main findings and Qualitative Results Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results.
These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks. This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on the general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with confusing questions that describe non-existent actions and objects. This highlights the vulnerability of these models when interacting with users in real-world scenarios. In our CVRR-ES benchmark, open-source models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions.
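The over-affirmative failure mode described above can be quantified with a simple yes-rate probe over misleading questions; this is a hypothetical harness (not part of the CVRR-ES tooling), where `model` is any callable Video-LMM wrapper:

```python
def yes_rate(model, video, misleading_questions):
    """Fraction of misleading questions (e.g., describing non-existent
    actions or objects) that the model affirms; a high rate signals
    over-affirmative behavior."""
    affirmed = sum(
        model(q, video).strip().lower().startswith("yes")
        for q in misleading_questions
    )
    return affirmed / len(misleading_questions)
```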
We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. 
To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of information that depends on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggle to understand the emotional and social context. For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend the spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video.
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03894v1.json b/abs_9K/test_abstract_short_2405.03894v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7deef1f0d7e293c8f57d21321b299d71c2419f8d --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03894v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03894v1", + "title": "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View", + "abstract": "Generating consistent multiple views for 3D reconstruction tasks is still a\nchallenge to existing image-to-3D diffusion models. Generally, incorporating 3D\nrepresentations into diffusion model decrease the model's speed as well as\ngeneralizability and quality. This paper proposes a general framework to\ngenerate consistent multi-view images from single image or leveraging scene\nrepresentation transformer and view-conditioned diffusion model. In the model,\nwe introduce epipolar geometry constraints and multi-view attention to enforce\n3D consistency. From as few as one image input, our model is able to generate\n3D meshes surpassing baselines methods in evaluation metrics, including PSNR,\nSSIM and LPIPS.", + "authors": "Emmanuelle Bourigault, Pauline Bourigault", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Generating consistent multiple views for 3D reconstruction tasks is still a\nchallenge to existing image-to-3D diffusion models. Generally, incorporating 3D\nrepresentations into diffusion model decrease the model's speed as well as\ngeneralizability and quality. This paper proposes a general framework to\ngenerate consistent multi-view images from single image or leveraging scene\nrepresentation transformer and view-conditioned diffusion model. 
In the model,\nwe introduce epipolar geometry constraints and multi-view attention to enforce\n3D consistency. From as few as one image input, our model is able to generate\n3D meshes surpassing baselines methods in evaluation metrics, including PSNR,\nSSIM and LPIPS.", "main_content": "Introduction Consistent and high-quality novel view synthesis of real-world objects from a single input image is a remaining challenge in computer vision. There is a myriad of applications in virtual reality, augmented reality, robotic navigation, content creation, and filmmaking. Recent advances in the field of deep learning, such as diffusion-based models [2, 13, 22, 36, 37], significantly improved mesh generation via a denoising process from Gaussian noise. Text-to-image generation has shown great progress with the development of efficient approaches such as generative adversarial networks [3, 11, 16], autoregressive transformers [9, 28, 39], and more recently, diffusion models [12, 14, 27, 32]. DALL-E 2 [27] and Imagen [32] are such models capable of generating photorealistic images with large-scale diffusion models. Latent diffusion models [31] apply the diffusion process in the latent space, enabling faster image synthesis. Although image-to-3D generation has shown impressive results, there is still room for improvement in terms of consistency, rendering and efficiency. Generating 3D representations from a single view is a difficult task, as it requires extensive knowledge of the 3D world. Although diffusion models have achieved impressive performance, they require expensive per-scene optimization. Zero123 [18] proposes a diffusion model conditioned on view features and camera parameters trained on perspective images [6]. However, the main drawback is the lack of multiview consistency in the generation process, impeding high-quality 3D shape reconstruction with good camera control.
SyncDreamer [19] introduces a 3D feature volume into the Zero123 [18] backbone to improve the multiview consistency. However, the volume conditioning significantly reduces the speed of generation, and it overfits to some viewpoints, with 3D shapes displaying distortions. In this paper, we present MVDiff, a multiview diffusion model using epipolar geometry and transformers to generate consistent target views. The main idea is to incorporate epipolar geometry constraints in the model via self-attention and multi-view attention in the UNet to learn the geometry correspondence. We first define a scene representation transformer (SRT) to learn an implicit 3D representation given a set of input views. Then, given an input view and its relative camera pose, we use a view-conditioned diffusion model to estimate the conditional distribution of the target view. We show that this framework yields dual improvements over existing baselines: better 3D reconstruction from the generated multi-view images and stronger generalization capability. In summary, the paper presents a multi-view generation framework from a single image that is transferable to various datasets with minimal changes. We show high performance on the GSO dataset for 3D mesh generation. The model is able to extrapolate a one-view image of a 3D object to 360-degree views with high fidelity. Despite being trained on one dataset of natural objects, it can create diverse and realistic meshes. We summarise our contributions as follows: \u2022 Implicit 3D representation learning with geometrical guidance \u2022 Multi-view self-attention to reinforce view consistency \u2022 Scalable and flexible framework 2. Related Work 2.1. Diffusion for 3D Generation Recently, the field of 3D generation has demonstrated rapid progress with the use of diffusion models.
Several studies showed remarkable performance by training models from scratch on large datasets to generate point clouds [21, 24], meshes [10, 20] or neural radiance fields (NeRFs) at inference. Nevertheless, these models lack generalizability as they are trained on specific categories of natural objects. DreamFusion [26] explored leveraging 2D priors to guide 3D generation. Inspired by DreamFusion, several studies adopted a similar pipeline using distillation of a pretrained 2D text-to-image generation model for generating 3D shapes [1, 4, 5, 23, 43]. The per-scene optimisation process typically lacks efficiency, with times ranging from minutes to hours to generate single scenes. Recently, 2D diffusion models for multi-view synthesis from a single view have raised interest for their fast 3D shape generation with appealing visuals [17, 18, 34]. However, they generally do not consider multi-view consistency in the network design. Zero123 proposes relative viewpoint as conditioning in 2D diffusion models, in order to generate novel views from a single image [18]. However, this work does not consider other views in the learning process, and this causes inconsistencies for complex shapes. One-2-3-45 [17] decodes signed distance functions (SDF) [25] for 3D shape generation given multi-view images from Zero123 [18], but the 3D reconstruction is not smooth and artifacts are present. More recently, SyncDreamer [19] suggests a 3D global feature volume, in order to tackle inconsistencies in multiview generation. 3D volumes are used with depth-wise attention for maintaining multi-view consistency. The heavy 3D global modeling tends to reduce the speed of the generation and the quality of the generated meshes. MVDream [35], on the other hand, incorporates 3D self-attention with improved generalisability to unseen datasets. 2.2.
Sparse-View Reconstruction Sparse-view image reconstruction [15, 45] is a challenging task where only a limited number of images, generally fewer than 10, are given. Traditional 3D reconstruction methods start by estimating camera poses, then as a second step perform dense reconstruction with multi-view stereo [38, 46] or NeRF [40]. Estimating camera poses in the context of sparse-view reconstruction is a challenging task, as there is little or no overlap between views. The authors of [45] aimed to address this challenge by optimising camera poses and 3D shapes simultaneously. In the same line of research, PF-LRM [42] suggests a pose-free approach to tackle the uncertainty in camera poses. In our work, we learn the relative camera poses of the 3D representation implicitly via a transformer encoder-decoder network and a view-conditioned diffusion model capable of generating consistent multi-view images directly. We then employ the reconstruction system NeuS [41] to recover a mesh. 3. Methodology 3.1. Multi-view Conditional Diffusion Model The rationale behind multi-view conditioning in diffusion models is to infer precisely the 3D shape of an object given that some regions of the 3D object are unobserved. Direct 3D predictions for sequential targets as in Zero123 [18] might lead to implausible novel views. To control the uncertainty in novel view synthesis, we choose to enforce multi-view consistency during training. Given an input image or sparse-view input images of a 3D object, denoted as xI, with known camera parameters \u03c0I, and target camera parameters \u03c0T, our aim is to synthesize novel views that recover the geometry of the object. Our framework can be broken down into two parts: (i) a scene representation transformer (SRT) [33] that learns the latent 3D representation given a single or few input views, and (ii) a view-conditioned diffusion model to generate novel views. 3.2.
Novel View Synthesis via Epipolar Geometry To perform novel view synthesis, we employ a scene representation transformer (SRT) [33]. In the work of [33], a transformer encoder-decoder architecture learns an implicit 3D latent representation given a set of images with camera poses (xI, \u03c0I). First, a CNN extracts features from xI and feeds them as tokens to the transformer encoder fE. The transformer encoder then outputs a set-latent scene representation z via self-attention. For novel view rendering, the decoder transformer of SRT queries the pixel color via cross-attention between the ray r associated with that pixel and the set-latent scene representation z. The aim is to minimize the pixel-level reconstruction loss in Eq. (1), \\mathcal{L}_{\\mathrm{recon}} = \\sum_{\\mathbf{r} \\in \\mathcal{R}} \\left\\| C(\\mathbf{r}) - \\hat{C}(\\mathbf{r}) \\right\\|_2^2, (1) where C(r) is the ground truth color of the ray and R is the set of rays sampled from target views.
Figure 1. Pipeline of MVDiff. From a single input or few input images, the transformer encoder translates the image(s) into latent scene representations, implicitly capturing 3D information. The intermediate outputs from the scene representation transformer are used as input by the view-conditioned latent diffusion UNet, generating multi-view consistent images from varying viewpoints.
We aim to leverage cross-interaction between images through relative camera poses using epipolar geometrical constraints. For each pixel in a given view i, we compute the epipolar line and the epipolar distance for all pixels in view j to build a weighted affinity matrix A\u2032_{i,j} = A_{i,j} + W_{i,j}, where W_{i,j} is the weight map obtained from the inverse epipolar distance. View-Conditioned Latent Diffusion. The outputs from SRT do not recover fine details with a simple pixel-level reconstruction loss.
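The affinity re-weighting A\u2032_{i,j} = A_{i,j} + W_{i,j} described above can be sketched as follows. This is a simplified dense version: the fundamental matrix `F` relating the two views is assumed given, and the exact inverse-distance form of W (here 1/(1+d)) is our assumption, since the text does not spell it out:

```python
import numpy as np

def epipolar_affinity_bias(A, pts_i, pts_j, F):
    """Return A' = A + W, where W weights pixel pairs by inverse
    epipolar distance (sketch; F maps view-i points to epipolar
    lines in view j)."""
    # Epipolar line l = F x for each view-i pixel (homogeneous coords).
    xi = np.hstack([pts_i, np.ones((pts_i.shape[0], 1))])      # (N, 3)
    lines = (F @ xi.T).T                                       # (N, 3)
    # Point-to-line distance of every view-j pixel to every line.
    xj = np.hstack([pts_j, np.ones((pts_j.shape[0], 1))])      # (M, 3)
    num = np.abs(lines @ xj.T)                                 # (N, M), |l . x_j|
    den = np.linalg.norm(lines[:, :2], axis=1, keepdims=True)  # sqrt(a^2 + b^2)
    dist = num / (den + 1e-8)
    W = 1.0 / (1.0 + dist)    # assumed inverse-distance weight map
    return A + W
```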
We employ a view-conditioned latent diffusion model (LDM) [29] to estimate the conditional distribution of the target view given the source view and the relative camera pose: p(xT | \u03c0T, xI, \u03c0I). First, the SRT predicts a low-resolution 32 \u00d7 32 latent image \u02dcxT based on the target view \u03c0T for computational efficiency. The latent image from SRT is concatenated with the noisy image y and fed into the latent diffusion UNet E\u03b8. In addition, we condition E\u03b8 on the latent scene representation z via cross-attention layers (see Fig. 1). The generated output \u02c6\u03f5t can be denoted as \\hat{\\boldsymbol{\\mathcal{E}}}_t = \\boldsymbol{\\mathcal{E}}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{T}, \\boldsymbol{z}, t), (2) where t is the timestep. We optimize a simplified variational lower bound, that is \\mathcal{L}_{\\mathrm{VLDM}} = \\mathbb{E}\\left[\\left\\| \\boldsymbol{\\mathcal{E}}_t - \\boldsymbol{\\mathcal{E}}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{T}, \\boldsymbol{z}, t) \\right\\|^2\\right]. (3) Multi-View Attention. As previously stated, in Zero123 [18], multiple images are generated in sequence from a given input view based on camera parameters. This approach can introduce inconsistencies between generated views. To address this issue, we apply modifications to the UNet in order to feed multi-view images. This way, we can predict multiple novel views simultaneously. We employ a self-attention block to ensure consistency across different viewpoints. 4. Experiments This section presents the novel view synthesis experiments in Sec. 4.1, and the 3D generation experiments in Sec. 4.2. We present ablation experiments in Sec. 4.3 and ethical considerations in Sec. 4.4. Training Data. For training our model for novel view synthesis, we use 800k 3D object models from Objaverse [6].
For a fair comparison with other 3D diffusion baselines, we use the same training dataset. Input condition views are chosen in a similar way as Zero123 [18]. An azimuth angle is randomly chosen from one of the eight discrete angles of the output cameras. The elevation angle is randomly selected in the range [\u221210\u25e6, 45\u25e6]. For data quality purposes, we discard empty rendered images. This represents about one per cent of the training data. 3D objects are centered and we apply uniform scaling in the range [-1,1] so that dimensions match. Input images to our pipeline are 256x256 RGB images. Test Data. We use the Google Scanned Object (GSO) [8] as our testing dataset, and use the same 30 objects as SyncDreamer [19]. There are 16 images per 3D object, with a fixed elevation of 30\u25e6 and every 22.5\u25e6 for azimuth. Implementation Details. Our model is trained using the AdamW optimiser [24] with a learning rate of 10\u22124 and weight decay of 0.01. We reduce the learning rate to 10\u22125 for a total of 100k training steps. For our training batches, we use 3 input views and 3 target views randomly sampled with replacement from 12 views for each object, with a batch size of 356. We train our model for 6 days on 4 A6000 (48GB) GPUs. Evaluation Metrics. For novel view synthesis, we report the PSNR, SSIM [44], and LPIPS [47]. For 3D reconstruction from single-view or few views, we use the Chamfer Distances (CD) and 3D IoU between the ground-truth and reconstructed volumes. 4.1. Novel View Synthesis We show in Tab. 1 the performance of MVDiff compared to baselines for novel view synthesis on an unseen dataset [8]. Qualitative results are shown in Fig. 2. Our model surpasses the baseline Zero123-XL by a margin and benefits from additional views. Given the probabilistic nature of the model, it is able to generate diverse and realistic shapes given a single view (see Fig. 3).
Table 1. Novel view synthesis performance on GSO and NeRF Synthetic datasets. MVDiff outperforms Zero123-XL with significantly less training data, and its performance exhibits further improvement with the inclusion of more reference views.
Method | Training Sample | # Ref. Views | GSO (PSNR\u2191 / SSIM\u2191 / LPIPS\u2193 / Runtime\u2193) | NeRF Synthetic (PSNR\u2191 / SSIM\u2191 / LPIPS\u2193 / Runtime\u2193)
Zero123 | 800K | 1 | 18.51 / 0.856 / 0.127 / 7s | 12.13 / 0.601 / 0.421 / 7s
Zero123-XL | 10M | 1 | 18.93 / 0.856 / 0.124 / 8s | 12.61 / 0.620 / 0.381 / 8s
MVDiff | 800k | 1 | 20.24 / 0.884 / 0.095 / 9s | 12.66 / 0.638 / 0.342 / 9s
MVDiff | 800k | 2 | 22.91 / 0.908 / 0.064 / 9s | 13.42 / 0.685 / 0.321 / 10s
MVDiff | 800k | 3 | 24.09 / 0.918 / 0.052 / 10s | 13.58 / 0.741 / 0.301 / 11s
MVDiff | 800k | 5 | 25.09 / 0.927 / 0.043 / 11s | 14.55 / 0.833 / 0.288 / 12s
MVDiff | 800k | 10 | 25.90 / 0.935 / 0.036 / 12s | 14.51 / 0.657 / 0.215 / 13s
4.2. 3D Generation We showed in Sec. 4.1 that our model can generate multiple consistent novel views. In this section, we perform single and few-image 3D generation on the GSO dataset. We generate 16 views with azimuths uniformly distributed in the range 0\u25e6 to 360\u25e6. For a fixed elevation angle of 30\u25e6, SyncDreamer may fail to recover the shape of 3D objects at the top and bottom since the camera angle does not cover those regions. Therefore, we also use different elevation angles from \u221210\u25e6 to 40\u25e6. Then, we adopt NeuS [40] for 3D reconstruction. The foreground masks of the generated images are initially predicted using CarveKit. It takes around 3 minutes to reconstruct a textured mesh. We compare our 3D reconstructions with SoTA 3D generation models, including One-2-3-45 [17] for decoding an SDF using multiple views predicted from Zero123, and SyncDreamer [19] for fitting an SDF using NeuS [40] from 16 consistent fixed generated views. Given two or more reference views, MVDiff outperforms all other baselines (see Tab. 2). MVDiff generates meshes that are visually consistent and resemble the ground-truth (see Fig. 4).
Table 2. 3D reconstruction performance on GSO dataset. MVDiff outperforms other image-to-3D baselines in generating high-quality 3D objects, with improved performance for multiple input views.
Method | # Input Views | Chamfer Dist. \u2193 | Volume IoU \u2191
Point-E | 1 | 0.0561 | 0.2034
Shape-E | 1 | 0.0681 | 0.2467
One2345 | 1 | 0.0759 | 0.2969
LGM | 1 | 0.0524 | 0.3851
SyncDreamer | 1 | 0.0493 | 0.4581
MVDiff | 1 | 0.0411 | 0.4357
MVDiff | 2 | 0.0341 | 0.5562
MVDiff | 3 | 0.0264 | 0.5894
MVDiff | 5 | 0.0252 | 0.6635
MVDiff | 10 | 0.0254 | 0.6721
Table 3. Effect of Self-Attention Mechanisms. We report PSNR, SSIM [44], and LPIPS [47] for novel view synthesis from a single view on the GSO dataset. Results show that epipolar attention and multi-view attention lead to superior performance.
Method | PSNR\u2191 | SSIM\u2191 | LPIPS\u2193
MVDiff | 20.24 | 0.884 | 0.095
w/o epipolar att. | 19.14 | 0.864 | 0.118
w/o multi-view att. | 19.92 | 0.871 | 0.113
4.3. Ablation Study Multi-View Consistency. The generated images may not always be plausible, and we need to generate multiple instances with different seeds and select a desirable instance for 3D reconstruction based on higher overall PSNR, SSIM and LPIPS for the generated views. Experiments show that we need 5 generations to obtain optimal reconstruction. Effect of Epipolar and Multi-View Attention. We evaluate the benefits of epipolar attention and multi-view attention on novel view synthesis, performing ablation experiments on those components. In particular, we observe a significant drop in performance metrics when removing epipolar attention, suggesting that the model is able to implicitly learn 3D object geometry by enforcing geometrical guidance (see Tab. 3). Weight Initialisation. An alternative to initialising weights trained from Zero123 on view-dependent objects [7] is to use weights from Stable Diffusion [30]. We compare the performance of our model initializing weights from Stable Diffusion v2 [30], observing a drop in performance of -2.58 PSNR compared to Zero123 [18] weight initialisation.
This shows that initializing from Stable Diffusion v2 leads to poorer performance on the novel view task and worse generalisability. 4.4. Risks and Ethical Considerations There are several promising applications of synthetic data, notably in medicine.
Figure 2. Zero-Shot Novel View Synthesis on GSO. MVDiff outperforms Zero123-XL for single view generation with greater camera control and generation quality. As more views are added, MVDiff resembles the ground-truth with fine details being captured such as the elephant tail and turtle shell design.
Figure 3. Diversity of Novel View Diffusion with MVDiff on NeRF-Synthetic Dataset. We show nearby views (top and bottom row) displaying good consistency, while more distant views (middle) are more diverse but still realistic.
Synthetic data could make significant improvements in surgery planning and tailored patient diagnosis, leveraging 3D information and its assets of quantitative parameters. Nevertheless, there are ethical considerations associated with the use of synthetic data in medicine. We should ensure the synthetic data is anonymised such that no particular features of the synthetic meshes could link back to a specific patient. In that light, there are transformations that can be applied to the meshes. We should also make sure that the synthetic data is not used in a way that could harm or be detrimental. Further validation on different cohorts of people is required before using these synthetic data in clinical settings. Despite the important ethical considerations we shed light on, we believe these 3D representations of organs could be of great use, on one hand, for research purposes to run large-scale statistical analysis on different cohorts and highlight associations with patient metadata. These cost-effective synthetic data could be beneficial to improve the visualisations of bones and organs and be deployed widely. 4.5.
Limitations A limitation of this work lies in its computational time and resource requirements. Despite advances in sampling approaches, our model still requires more than 50 steps to generate high-quality images. This is a limitation of all diffusion-based generation models. Moreover, the reconstructed meshes may not always be plausible. To increase the quality, we may need to use a larger object dataset like Objaverse-XL [7] and manually curate the dataset to filter out uncommon shapes such as point clouds, textureless 3D models, and more complex scene representations. \fFigure 4. 3D reconstruction from single-view on GSO dataset. MVDiff produces consistent novel views and improves the 3D geometry compared to baselines. One-2-3-45 and SyncDreamer tend to generate overly-smoothed and incomplete 3D objects, in particular the sofa. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03958v1.json b/abs_9K/test_abstract_short_2405.03958v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7fa25190668211b62743d77dce2118a1408b4910 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03958v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.03958v1", + "title": "Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model", + "abstract": "Current state-of-the-art diffusion models employ U-Net architectures\ncontaining convolutional and (qkv) self-attention layers. The U-Net processes\nimages while being conditioned on the time embedding input for each sampling\nstep and the class or caption embedding input corresponding to the desired\nconditional generation.
Such conditioning involves scale-and-shift operations\nto the convolutional layers but does not directly affect the attention layers.\nWhile these standard architectural choices are certainly effective, not\nconditioning the attention layers feels arbitrary and potentially suboptimal.\nIn this work, we show that simply adding LoRA conditioning to the attention\nlayers without changing or tuning the other parts of the U-Net architecture\nimproves the image generation quality. For example, a drop-in addition of LoRA\nconditioning to EDM diffusion model yields FID scores of 1.91/1.75 for\nunconditional and class-conditional CIFAR-10 generation, improving upon the\nbaseline of 1.97/1.79.", + "authors": "Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Current state-of-the-art diffusion models employ U-Net architectures\ncontaining convolutional and (qkv) self-attention layers. The U-Net processes\nimages while being conditioned on the time embedding input for each sampling\nstep and the class or caption embedding input corresponding to the desired\nconditional generation. Such conditioning involves scale-and-shift operations\nto the convolutional layers but does not directly affect the attention layers.\nWhile these standard architectural choices are certainly effective, not\nconditioning the attention layers feels arbitrary and potentially suboptimal.\nIn this work, we show that simply adding LoRA conditioning to the attention\nlayers without changing or tuning the other parts of the U-Net architecture\nimproves the image generation quality. 
For example, a drop-in addition of LoRA\nconditioning to EDM diffusion model yields FID scores of 1.91/1.75 for\nunconditional and class-conditional CIFAR-10 generation, improving upon the\nbaseline of 1.97/1.79.", + "main_content": "Introduction In recent years, diffusion models have led to phenomenal advancements in image generation. Many cutting-edge diffusion models leverage U-Net architectures as their backbone, consisting of convolutional and (qkv) self-attention layers Dhariwal & Nichol (2021); Kim et al. (2023); Saharia et al. (2022); Rombach et al. (2022); Podell et al. (2024). In these models, the U-Net architecture-based score network is conditioned on the time and/or the class or text embedding Ho & Salimans (2021) using scale-and-shift operations applied to the convolutional layers in the so-called residual blocks. Notably, however, the attention layers are not directly affected by the conditioning, and the rationale behind not extending conditioning to attention layers remains unclear. This gap suggests a need for in-depth studies searching for effective conditioning methods for attention layers and assessing their impact on performance. Meanwhile, low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLMs) Hu et al. (2022). With LoRA, one trains low-rank updates that are added to frozen pre-trained dense weights in the attention layers of LLMs. The consistent effectiveness of LoRA for LLMs suggests that LoRA may be generally compatible with attention layers used in different architectures and for different tasks Chen et al. (2022); Pan et al. (2022); Lin et al. (2023); Gong et al. (2024). In this work, we introduce a novel method for effectively conditioning the attention layers in the U-Net architectures of diffusion models by jointly training multiple LoRA adapters along with the base model.
We call these LoRA adapters TimeLoRA and ClassLoRA for discrete-time settings, and Unified Compositional LoRA (UC-LoRA) for continuous signal-to-noise ratio (SNR) settings. Simply adding these LoRA adapters in a drop-in fashion without modifying or tuning the original model brings consistent enhancement in FID scores across several popular models applied to CIFAR-10, FFHQ 64x64, and ImageNet datasets. In particular, adding LoRA-conditioning to the EDM model Karras et al. (2022) yields improved FID scores of 1.75, 1.91, 2.31 for class-conditional CIFAR-10, unconditional CIFAR-10, and FFHQ 64x64 datasets, respectively, outperforming the baseline scores of 1.79, 1.97, 2.39. Moreover, we find that LoRA conditioning by itself is powerful enough to perform effectively. \fFigure 2: Conditioning of U-Net Block: (left) scale-and-shift conditioning on the convolutional block (middle) LoRA conditioning on the attention block (right) top: TimeLoRA and ClassLoRA for the discrete-time setting, bottom: unified compositional LoRA for the continuous-SNR setting. Our experiments show that only conditioning the attention layers using LoRA adapters (without conditioning the convolutional layers with scale-and-shift) achieves comparable FID scores compared to the baseline scale-and-shift conditioning (without LoRA). Contribution. Our experiments show that using LoRA to condition time and class information on attention layers is effective across various models and datasets, including nano diffusion Lelarge et al.
(2024), IDDPM Nichol & Dhariwal (2021), and EDM Karras et al. (2022) architectures using the MNIST Deng (2012), CIFAR-10 Krizhevsky et al. (2009), and FFHQ Karras et al. (2019) datasets. Our main contributions are as follows. (i) We show that simple drop-in LoRA conditioning on the attention layers improves the image generation quality, as measured by lower FID scores, while incurring minimal (\u223c10%) added memory and compute costs. (ii) We identify the problem of whether to and how to condition attention layers in diffusion models and provide the positive answer that attention layers should be conditioned and that LoRA is an effective approach that outperforms the prior approaches of no conditioning or conditioning with adaLN Peebles & Xie (2023). Our results advocate for incorporating LoRA conditioning into the larger state-of-the-art U-Net-based diffusion models and the newer experimental architectures. 2 Prior work and preliminaries 2.1 Diffusion models Diffusion models Sohl-Dickstein et al. (2015); Song & Ermon (2019); Ho et al. (2020); Song et al. (2021b) generate images by iteratively removing noise from a noisy image. This denoising process is defined by the reverse process of the forward diffusion process: given data x_0 \u223c q_0, progressively inject noise into x_0 by q(x_t | x_{t\u22121}) = N(\u221a(1 \u2212 \u03b2_t) x_{t\u22121}, \u03b2_t I) for t = 1, . . . , T and 0 < \u03b2_t < 1. If \u03b2_t is sufficiently small, we can approximate the reverse process as q(x_{t\u22121} | x_t) \u2248 N(\u00b5_t(x_t), \u03b2_t I), where \u00b5_t(x_t) = (1/\u221a(1 \u2212 \u03b2_t)) (x_t + \u03b2_t \u2207log p_t(x_t)). A diffusion model is trained to approximate the score function \u2207log p_t(x_t) with a score network s_\u03b8, which is often modeled with a U-Net architecture Ronneberger et al. (2015); Song & Ermon (2019).
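As an illustrative aid (not part of the paper), the forward noising process q(x_t | x_{t-1}) described above can be simulated in a few lines of numpy; the linear beta schedule used here is a common choice assumed only for this sketch:

```python
import numpy as np

# Hypothetical linear beta schedule for illustration; the paper's
# models use their own schedules.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)

def forward_diffuse(x0, t):
    """Apply q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I) for t steps."""
    x = x0
    for s in range(t):
        x = np.sqrt(1.0 - betas[s]) * x + np.sqrt(betas[s]) * rng.normal(size=x.shape)
    return x

x0 = rng.normal(size=(4,))   # a toy "data" vector
xT = forward_diffuse(x0, T)  # after T steps, x_T is approximately isotropic Gaussian

# Consistency check: the signal coefficient prod_s (1 - beta_s) shrinks
# toward zero, so almost no information about x0 remains at t = T.
alpha_bar_T = np.prod(1.0 - betas)
```

With this schedule, `alpha_bar_T` is on the order of 1e-5, matching the intuition that x_T carries essentially no signal from x_0.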
With s_\u03b8 \u2248 \u2207log p_t(x_t), the diffusion model approximates the reverse process as p_\u03b8(x_{t\u22121} | x_t) = N((1/\u221a(1 \u2212 \u03b2_t)) (x_t + \u03b2_t s_\u03b8(x_t, t)), \u03b2_t I) \u2248 q(x_{t\u22121} | x_t). To sample from a trained diffusion model, one starts with Gaussian noise x_T \u223c N(0, (1 \u2212 \u00af\u03b1_T) I), where \u00af\u03b1_t = \u220f_{s=1}^{t} (1 \u2212 \u03b2_s), and progressively denoises the image by sampling from p_\u03b8(x_{t\u22121} | x_t) with t = T, T\u22121, . . . , 2, 1 sequentially to obtain a clean image x_0. The above discrete-time description of diffusion models has a continuous-time counterpart based on the theory of stochastic differential equations (SDEs) for the forward-corruption process and reversing it based on Anderson\u2019s reverse-time SDE Anderson (1982) or a reverse-time ordinary differential equation (ODE) with equivalent marginal probabilities Song et al. (2021a). Higher-order integrators have been used to reduce the discretization errors in solving the differential equations Karras et al. (2022). Architecture for diffusion models. The initial work of Song & Ermon (2019) first utilized the CNN-based U-Net architecture Ronneberger et al. (2015) as the architecture for the score network. Several improvements have been made by later works Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Hoogeboom et al. (2023) incorporating multi-head self-attention Vaswani et al. (2017), group normalization Wu & He (2018), and adaptive layer normalization (adaLN) Perez et al. (2018). Recently, several alternative architectures have been proposed. Jabri et al. (2023) proposed the Recurrent Interface Network (RIN), which decouples the core computation and the dimension of the data for more scalable image generation. Peebles & Xie (2023); Bao et al. (2023); Gao et al. (2023); Hatamizadeh et al. (2023) investigated the effectiveness of transformer-based architectures Dosovitskiy et al. (2021) for diffusion models. Yan et al.
(2023) utilized state space models Gu et al. (2022) in DiffuSSM to present an attention-free diffusion model architecture. In this work, we propose a conditioning method for attention layers and test it on several CNN-based U-Net architectures. Note that our proposed method is applicable to all diffusion models utilizing attention layers. 2.2 Low-rank adaptation Using trainable adapters for specific tasks has been an effective approach for fine-tuning models in the realm of natural language processing (NLP) Houlsby et al. (2019); Pfeiffer et al. (2020). Low-rank adaptation (LoRA, Hu et al. (2022)) is a parameter-efficient fine-tuning method that trains a low-rank adapter: to fine-tune a pre-trained dense weight matrix W \u2208 R^{d_out\u00d7d_in}, LoRA parameterizes the fine-tuning update \u2206W with a low-rank factorization W + \u2206W = W + BA, where B \u2208 R^{d_out\u00d7r}, A \u2208 R^{r\u00d7d_in}, and r \u226a min{d_in, d_out}. LoRA and diffusion. Although initially proposed for fine-tuning LLMs, LoRA is generally applicable to a wide range of other deep-learning modalities. Recent works used LoRA with diffusion models for various tasks including image generation Ryu (2023); Gu et al. (2023); Go et al. (2023), image editing Shi et al. (2023), continual learning Smith et al. (2023), and distillation Golnari (2023); Wang et al. (2023b). While all these works demonstrate the flexibility and efficacy of the LoRA architecture used for fine-tuning diffusion models, to the best of our knowledge, our work is the first attempt to use LoRA as part of the core U-Net for diffusion models for full training, not fine-tuning. \f2.3 Conditioning the score network For diffusion models to work properly, it is crucial that the score network s_\u03b8 is conditioned on appropriate side information.
In the base formulation, the score function \u2207_x log p_t(x), which the score network s_\u03b8 learns, depends on the time t, so this t-dependence must be incorporated into the model via time conditioning. When class-labeled training data is available, class-conditional sampling requires class conditioning of the score network Ho & Salimans (2021). To take advantage of data augmentation and thereby avoid overfitting, EDM Karras et al. (2022) utilizes augmentation conditioning Jun et al. (2020), where the model is conditioned on the data augmentation information such as the degree of image rotation or blurring. Similarly, SDXL Podell et al. (2024) uses micro-conditioning, where the network is conditioned on image resolution or cropping information. Finally, text-to-image diffusion models Saharia et al. (2022); Ramesh et al. (2022); Rombach et al. (2022); Podell et al. (2024) use text conditioning, which conditions the score network with caption embeddings so that the model generates images aligned with the text description. Conditioning attention layers. Prior diffusion models using CNN-based U-Net architectures condition only the convolutional layers in the residual blocks by applying scale-and-shift or adaLN (see (left) of Figure 2). In particular, attention blocks are not directly conditioned in such models. This includes state-of-the-art diffusion models such as Imagen Saharia et al. (2022), DALL\u00b7E 2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and SDXL Podell et al. (2024). To clarify, Latent Diffusion Model Rombach et al. (2022) based models use a cross-attention method for class and text conditioning, but they still utilize scale-and-shift for time conditioning. There is a line of research proposing transformer-based architectures (without convolutions) for diffusion models, and these works do propose methods for conditioning attention layers.
For instance, DiT Peebles & Xie (2023) conditioned attention layers using adaLN, and DiffiT Hatamizadeh et al. (2023) introduced time-dependent multi-head self-attention (TMSA), which can be viewed as scale-and-shift conditioning applied to attention layers. Although such transformer-based architectures have been shown to be effective, whether conditioning the attention layers with adaLN or scale-and-shift is optimal was not investigated. In Section 5.5 of this work, we compare our proposed LoRA conditioning on attention layers with the prior adaLN conditioning on attention layers, and show that LoRA is the more effective mechanism for conditioning attention layers. Diffusion models as multi-task learners. Multi-task learning Caruana (1997) is a framework where a single model is trained on multiple related tasks simultaneously, leveraging shared representations between the tasks. If one views the denoising tasks for different timesteps (or SNR) of diffusion models as related but different tasks, the training of diffusion models can be interpreted as an instance of multi-task learning. Following the use of trainable lightweight adapters for Mixture-of-Experts (MoE) Jacobs et al. (1991); Ma et al. (2018), several works have utilized LoRA as the expert adapter for multi-task learning Caccia et al. (2023); Wang et al. (2023a; 2024); Zadouri et al. (2024). Similarly, MORRIS Audibert et al. (2023) and LoRAHub Huang et al. (2023) proposed using the weighted sum of multiple LoRA adapters to effectively tackle general tasks. In this work, we took inspiration from these works by using a composition of LoRA adapters to condition diffusion models. 3 Discrete-time LoRA conditioning Diffusion models such as DDPM Ho et al. (2020) and IDDPM Nichol & Dhariwal (2021) have a predetermined number of discrete timesteps t = 1, 2, . . . , T used for both training and sampling. We refer to this setting as the discrete-time setting.
We first propose a method to condition the attention layers with LoRA in the discrete-time setting. In particular, we implement LoRA conditioning on IDDPM by conditioning the score network with (discrete) time and (discrete) class information. \f3.1 TimeLoRA TimeLoRA conditions the score network for the discrete time steps t = 1, . . . , T. In prior architectures, time information is typically injected into only the residual blocks containing convolutional layers. TimeLoRA instead conditions the attention blocks. See (right) of Figure 2. Non-compositional LoRA. Non-compositional LoRA instantiates T independent rank-r LoRA weights A_1, A_2, . . . , A_T, B_1, B_2, . . . , B_T. The dense layer at time t becomes W_t = W + \u2206W(t) = W + B_t A_t for t = 1, . . . , T. To clarify, the trainable parameters for each linear layer are W, A_1, A_2, . . . , A_T, and B_1, B_2, . . . , B_T. In particular, W is trained concurrently with A_1, A_2, . . . , A_T, and B_1, B_2, . . . , B_T. However, this approach has two drawbacks. First, since T is typically large (up to 4000), instantiating T independent LoRAs can occupy significant memory. Second, since each LoRA (A_t, B_t) is trained independently, it disregards the fact that LoRAs of nearby time steps should likely be correlated/similar. It would be preferable for the architecture to incorporate the inductive bias that the behavior at nearby timesteps is similar. Compositional LoRA. Compositional LoRA composes m LoRA bases, A_1, . . . , A_m and B_1, . . . , B_m, where m \u226a T. Each LoRA basis (A_i, B_i) corresponds to time t_i for 1 \u2264 t_1 < \u00b7 \u00b7 \u00b7 < t_m \u2264 T. The dense layer at time t becomes W_t = W + \u2206W(t) = W + \u2211_{i=1}^{m} (\u03c9_t)_i B_i A_i, where \u03c9_t = ((\u03c9_t)_1, . . . , (\u03c9_t)_m) are the time-dependent trainable weights composing the LoRA bases. To clarify, the trainable parameters for each linear layer are W, A_1, A_2, . . . , A_m, B_1, B_2, . . . , B_m, and \u03c9_t.
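To make the compositional parameterization concrete, here is a minimal numpy sketch (hypothetical sizes, chosen only for illustration; not the authors' implementation):

```python
import numpy as np

# Hypothetical sizes: d_in, d_out for the dense layer, rank r,
# m LoRA bases, T discrete timesteps.
din, dout, r, m, T = 16, 16, 4, 11, 1000

rng = np.random.default_rng(0)
W = rng.normal(size=(dout, din))                      # shared dense weight
A = rng.normal(scale=1/np.sqrt(r), size=(m, r, din))  # LoRA bases A_i ~ N(0, 1/r)
B = np.zeros((m, dout, r))                            # LoRA bases B_i, zero-initialized
omega = rng.normal(size=(m, T))                       # trainable m x T weight table

def dense_weight_at(t):
    """W_t = W + sum_i (omega_t)_i * B_i A_i."""
    basis_products = np.einsum('mor,mri->moi', B, A)   # stack of B_i A_i, shape (m, dout, din)
    delta = np.tensordot(omega[:, t], basis_products, axes=1)
    return W + delta
```

Because each B_i starts as the zero matrix (the standard LoRA initialization), the composed update vanishes and `dense_weight_at(t)` equals `W` at initialization, regardless of the weights omega.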
Since the score network is a continuous function of t, we expect \u03c9_t \u2248 \u03c9_{t\u2032} if t \u2248 t\u2032. Therefore, to exploit the task similarity between nearby timesteps, we initialize (\u03c9_t)_i with a linear interpolation scheme: for t_j \u2264 t < t_{j+1}, (\u03c9_t)_i = (t_{j+1} \u2212 t)/(t_{j+1} \u2212 t_j) if i = j; (\u03c9_t)_i = (t \u2212 t_j)/(t_{j+1} \u2212 t_j) if i = j + 1; and (\u03c9_t)_i = 0 otherwise. In short, at initialization, \u2206W(t) uses a linear combination of the two closest LoRA bases. During training, \u03c9_t can learn to utilize more than two LoRA bases, i.e., \u03c9_t can learn to have more than two non-zeros through training. Specifically, (\u03c9_1, . . . , \u03c9_T) \u2208 R^{m\u00d7T} is represented as an m \u00d7 T trainable table implemented as nn.Embedding in PyTorch. 3.2 ClassLoRA Consider a conditional diffusion model with C classes. ClassLoRA conditions the attention layers in the score network with the class label. Again, this contrasts with the typical approach of injecting class information only into the residual blocks containing convolutional layers. See (right) of Figure 2. Since C is small for CIFAR-10 (C = 10) and the correlations between different classes are likely not strong, we only use the non-compositional ClassLoRA: W_c = W + \u2206W(c) = W + B\u2032_c A\u2032_c for c = 1, . . . , C. In other words, each LoRA (A\u2032_c, B\u2032_c) handles a single class c. When C is large, such as in the case of ImageNet1k, one may consider using a compositional version of ClassLoRA. \f4 Continuous-SNR LoRA conditioning Motivated by (Kingma et al., 2021), some recent models such as EDM Karras et al. (2022) consider parameterizing the score function as a function of noise or signal-to-noise ratio (SNR) level instead of time. In particular, EDM Karras et al.
(2022) considers the probability flow ODE dX_t = \u2212\u02d9\u03c3(t)\u03c3(t) s_\u03b8(X_t; \u03c3(t)) dt, where s_\u03b8(x; \u03c3) is the score network conditioned on the SNR level \u03c3. We refer to this setting as the continuous-SNR setting. The main distinction between Sections 3 and 4 is in the discrete vs. continuous parameterization, since continuous-time and continuous-SNR parameterizations of score functions are equivalent. We choose to consider continuous-SNR (instead of continuous-time) parameterizations for the sake of consistency with the EDM model Karras et al. (2022). Two additional issues arise in the present setup compared to the setting of Section 3. First, by considering a continuum of SNR levels, there is no intuitive way to assign a single basis LoRA to a specific noise level. Second, to accommodate additional conditioning elements such as augmentations or even captions, allocating an independent LoRA for each conditioning element could lead to memory inefficiency. 4.1 Unified compositional LoRA (UC-LoRA) Consider the general setting where the diffusion model is conditioned with N attributes cond_1, . . . , cond_N, which can be a mixture of continuous and discrete information. In our EDM experiments, we condition the score network with N = 3 attributes: SNR level (time), class, and augmentation information. Unified compositional LoRA (UC-LoRA) composes m LoRA bases A_1, . . . , A_m and B_1, . . . , B_m to simultaneously condition the information of cond_1, . . . , cond_N into the attention layer. The compositional weight \u03c9 = (\u03c9_1, . . . , \u03c9_m) of the UC-LoRA is obtained by passing cond_1, . . . , cond_N through an MLP. Prior diffusion models typically process cond_1, . . . , cond_N with an MLP to obtain a condition embedding v, which is then shared by all residual blocks for conditioning. For the j-th residual block, v is further processed by an MLP to get scale and shift parameters \u03b3_j and \u03b2_j: v = SharedMLP(cond_1, . . .
, cond_N) and (\u03b3_j, \u03b2_j) = MLP_j(v). The (\u03b3_j, \u03b2_j) is then used for the scale-and-shift conditioning of the j-th residual block in the prior architectures. In our UC-LoRA, we similarly use the shared embedding v and an individual MLP for the j-th attention block to obtain the composition weight \u03c9_j(v): v = SharedMLP(cond_1, \u00b7 \u00b7 \u00b7 , cond_N) and \u03c9_j(v) = MLP_j(v). Then, the j-th dense layer of the attention block becomes W(cond_1, . . . , cond_N) = W + \u2206W(cond_1, . . . , cond_N) = W + \u2211_{i=1}^{m} \u03c9_{j,i}(v) B_i A_i. To clarify, the trainable parameters for the j-th dense layer are W, A_1, A_2, . . . , A_m, B_1, B_2, . . . , B_m, and the weights in MLP_j. Shared across the entire architecture, the weights in SharedMLP are also trainable parameters. \f5 Experiments In this section, we present our experimental findings. Section 5.1 describes the experimental setup. Section 5.2 first presents a toy, proof-of-concept experiment to validate the proposed LoRA conditioning. Section 5.3 evaluates the effectiveness of LoRA conditioning on attention layers with a quantitative comparison between diffusion models with (baseline) conventional scale-and-shift conditioning on convolutional layers; (only LoRA) LoRA conditioning on attention layers without conditioning convolutional layers; and (with LoRA) conditioning both convolutional layers and attention layers with scale-and-shift and LoRA conditioning, respectively. Section 5.4 investigates the effect of tuning the LoRA rank and the number of LoRA bases. Section 5.5 compares our proposed LoRA conditioning with the adaLN conditioning on attention layers. Section 5.6 explores the robustness of ClassLoRA conditioning compared to conventional scale-and-shift conditioning in extrapolating conditioning information. 5.1 Experimental Setup Diffusion models. We implement LoRA conditioning on three different diffusion models: nano diffusion Lelarge et al. (2024), IDDPM Nichol & Dhariwal (2021), and EDM-vp Karras et al.
(2022). With nano diffusion, we conduct a proof-of-concept experiment. With IDDPM, we test TimeLoRA and ClassLoRA for the discrete-time setting, and with EDM, we test UC-LoRA for the continuous-SNR setting. Datasets. For nano diffusion, we use MNIST. For IDDPM, we use CIFAR-10 for both unconditional and class-conditional sampling, and ImageNet64, a downsampled version of the ImageNet1k, for unconditional sampling. For EDM-vp, we also use CIFAR-10 for both unconditional and class-conditional sampling and FFHQ64 for unconditional sampling. Configurations. We follow the training and architecture configurations proposed by the baseline works and only tune the LoRA adapters. For IDDPM, we train the model for 500K iterations for CIFAR-10 with batch size of 128 and learning rate of 1 \u00d7 10\u22124, and 1.5M iterations for ImageNet64 with batch size of 128 and learning rate of 1 \u00d7 10\u22124. For EDM, we train the model with batch size of 512 and learning rate of 1 \u00d7 10\u22123 for CIFAR-10, and with batch size of 256 and learning rate of 2 \u00d7 10\u22124 for FFHQ64. For sampling, in IDDPM, we use 4000 and 4001 timesteps for the baseline and LoRA conditioning respectively, and in EDM, we use the proposed Heun\u2019s method and sample images with 18 timesteps (35 NFE) for CIFAR-10 and 40 timesteps (79 NFE) for FFHQ64. Here, NFE is the number of forward evaluations of the score network, and it differs from the number of timesteps by a factor of 2 because Heun\u2019s method is a 2-stage Runge\u2013Kutta method. Appendix A provides further details of the experiment configurations. Note that the baseline works heavily optimized the hyperparameters such as learning rate, dropout probability, and augmentations. Although we do not modify any configurations of the baseline and simply add LoRA conditioning in a drop-in fashion, we expect further improvements from further optimizing the configuration for the entire architecture and training procedure. LoRA.
We use the standard LoRA initialization as in the original LoRA paper Hu et al. (2022): for the LoRA matrices (A, B) with rank r, A is initialized as A_ij \u223c N(0, 1/r) and B as the zero matrix. Following Ryu (2023), we set the rank of each basis LoRA to 4. For TimeLoRA and ClassLoRA, we use 11 and 10 LoRA bases, and for UC-LoRA we use 18 and 20 LoRA bases for CIFAR-10 and FFHQ. Due to our constrained computational budget, we were not able to conduct a full investigation on the optimal LoRA rank or the number of LoRA bases. However, we experiment with the effect of rank and number of LoRA bases to a limited extent and report the results in Section 5.4. 5.2 Proof-of-concept experiments We conduct toy experiments with nano diffusion for both discrete-time and continuous-SNR settings. Nano diffusion is a small diffusion model with a CNN-based U-Net architecture with no skip connections and about 500,000 trainable parameters. We train nano diffusion on unconditional MNIST generation with \f3 different conditioning methods: conventional scale-and-shift, TimeLoRA, and UC-LoRA. As shown in Figure 3, conditioning with TimeLoRA or UC-LoRA yields competitive results compared to the conventional scale-and-shift conditioning. Figure 3: MNIST samples generated by nano diffusion trained with (1st row) conventional scale-and-shift conditioning; (2nd row) TimeLoRA with linear interpolation initialization; (3rd row) UC-LoRA; and (4th row) TimeLoRA with random initialization. Initialization of \u03c9_i(t) for TimeLoRA. As shown in Figure 3, the choice of initialization of \u03c9_i(t) for TimeLoRA impacts performance. With randomly initialized \u03c9_i(t), nano diffusion did not converge after 100 epochs, whereas with \u03c9_i(t) initialized with the linear interpolation scheme, it did converge. Moreover, Figure 4 shows that even in UC-LoRA, \u03c9(t) shows higher similarity between nearby timesteps than between distant timesteps after training.
This is consistent with our expectation that \u03c9_i(t) \u2248 \u03c9_i(t\u2032) if t \u2248 t\u2032. Figure 4: Cosine similarity between \u03c9(t_1) and \u03c9(t_2) for UC-LoRA applied to nano diffusion (left) at initialization and (right) after training. At initialization, the cosine similarity between \u03c9(t_1) and \u03c9(t_2) has no discernible pattern. After training, however, the cosine similarity between \u03c9(t_1) and \u03c9(t_2) for t_1 \u2248 t_2 is close to 1, implying their high similarity. 5.3 Main quantitative results Simply adding LoRA conditioning yields improvements. To evaluate the effectiveness of the drop-in addition of LoRA conditioning to the attention layers, we add TimeLoRA and ClassLoRA to IDDPM and UC-LoRA to EDM, both with the conventional scale-and-shift conditioning on the convolutional layers unchanged. We train IDDPM with CIFAR-10, ImageNet64 and EDM with CIFAR-10, FFHQ64. As reported in Table 1, the addition of LoRA conditioning to the attention layers consistently improves the image generation quality as measured by FID scores Heusel et al. (2017) across different diffusion models and datasets with only (\u223c10%) addition to the parameter counts. Note these improvements are achieved without tuning any hyperparameters of the base model components. \fInitializing the base model with pre-trained weights. We further test UC-LoRA on pre-trained EDM base models for unconditional CIFAR-10 and FFHQ64 generation. As reported in Table 1, using pre-trained weights showed an additional gain in FID score with a smaller number of iterations (\u223c50%). To clarify, although we initialize the base model with pre-trained weights, we fully train both the base model and the LoRA modules rather than fine-tuning. LoRA can even replace scale-and-shift.
We further evaluate the effectiveness of LoRA conditioning by replacing the scale-and-shift conditioning for the convolutional layers in residual blocks with LoRA conditioning for the attention blocks. The results of Table 1 suggest that solely using LoRA conditioning on attention layers achieves competitive FID scores while being more efficient in memory compared to the baseline score network trained with scale-and-shift conditioning on convolutional layers. For IDDPM, using LoRA in place of the conventional scale-and-shift conditioning consistently produces better results. Significant improvement is observed especially for class-conditional generation of CIFAR-10. For EDM, replacing the scale-and-shift conditioning did not yield an improvement, but nevertheless performed comparably. We note that in all cases, LoRA conditioning is more parameter-efficient (\u223c10%) than the conventional scale-and-shift conditioning. 5.4 Effect of LoRA rank and number of LoRA bases We investigate the effect of tuning the LoRA rank and the number of LoRA bases on the EDM model for unconditional CIFAR-10 generation and report the results in Table 2. Our findings indicate that using more LoRA bases consistently improves the quality of image generations. On the other hand, increasing LoRA rank does not guarantee better performance. These findings suggest an avenue of further optimizing and improving our main quantitative results of Section 5.3 and Table 1, which we have not yet been able to pursue due to our constrained computational budget. # basis rank FID # Params Varying # basis 9 4 1.99 57185519 18 4 1.96 57745499 36 4 1.95 58865459 Varying rank 18 2 1.93 57192539 18 4 1.96 57745499 18 8 1.96 58851419 Table 2: Effect of the number of LoRA bases and the LoRA rank on unconditional CIFAR-10 sampling of EDM with LoRA 5.5 Comparison with adaLN We compare the effectiveness of our proposed LoRA conditioning with adaLN conditioning applied to attention layers. 
Specifically, we conduct an experiment on EDM with the scale-and-shift conditioning on convolutional layers removed and with (i) adaLN conditioning or (ii) LoRA conditioning applied to the attention layers. We compare the sample quality of unconditional and class-conditional CIFAR-10 generation and report the results in Table 3. We find that LoRA conditioning significantly outperforms adaLN conditioning for both unconditional and conditional CIFAR-10 generation. This indicates that our proposed LoRA conditioning is the more effective mechanism for conditioning attention layers in the U-Net architectures for diffusion models. Table 3: Comparison of adaLN conditioning and LoRA conditioning on attention layers on EDM (without conditioning convolutional layers), for both unconditional and conditional CIFAR-10 generation. adaLN conditioning: FID 2.16 (uncond.), 2.0 (cond.); LoRA conditioning: FID 1.99 (uncond.), 1.82 (cond.). 5.6 Extrapolating conditioning information We conduct an experiment comparing two class-conditional EDM models on the CIFAR-10 dataset, conditioned by scale-and-shift and ClassLoRA, respectively. During training, both models receive size-10 one-hot vectors (ci)j = \u03b4ij representing the class information. First, we input the linear interpolation \u03b1ci + (1 \u2212 \u03b1)cj (0 \u2264 \u03b1 \u2264 1) of two class inputs ci and cj (corresponding to \u2018airplane\u2019 and \u2018horse\u2019, respectively) to observe the continuous transition between classes. As shown in the top of Figure 5, both the scale-and-shift EDM and ClassLoRA EDM models effectively interpolate semantic information across different classes. However, when a scaled input \u03b2ci is received, with \u03b2 ranging from -1 to 1, scale-and-shift EDM generates unrecognizable images when \u03b2 < 0, while ClassLoRA EDM generates plausible images throughout the whole range, as shown in the bottom of Figure 5.
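The class-vector manipulations used in this experiment are simple to reproduce; a minimal numpy sketch (the class indices 0 and 7 standing in for \u2018airplane\u2019 and \u2018horse\u2019 are illustrative, not the dataset's actual label order):

```python
import numpy as np

n_classes = 10
c = np.eye(n_classes)            # one-hot class vectors: (c_i)_j = delta_ij
ci, cj = c[0], c[7]              # hypothetical 'airplane' and 'horse'

# Interpolation: alpha * c_i + (1 - alpha) * c_j for alpha in [0, 1]
interp = [a * ci + (1 - a) * cj for a in np.linspace(0.0, 1.0, 5)]

# Extrapolation: beta * c_i for beta in [-1, 1]. Per the experiment,
# scale-and-shift EDM fails for beta < 0, while ClassLoRA EDM stays
# plausible over the whole range (Figure 5, bottom).
extrap = [b * ci for b in np.linspace(-1.0, 1.0, 5)]
```

These vectors would be fed to the conditioning input of each model in place of the one-hot training labels.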
This toy experiment shows that LoRA-based conditioning may be more robust to extrapolating conditioning information beyond the range encountered during training. Appendix C provides further details. Figure 5: Results of (Top) interpolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift; (bottom) extrapolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift 6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03962v1.json b/abs_9K/test_abstract_short_2405.03962v1.json new file mode 100644 index 0000000000000000000000000000000000000000..cbf3eb960d1ae9179826079fca4940116efe878a --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03962v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.03962v1", + "title": "AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion", + "abstract": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. 
Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.", + "authors": "Adeesh Kolluru, John R Kitchin", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "physics.chem-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.", + "main_content": "Introduction Heterogenous catalysis plays an important role in developing chemicals in industries, environmental protection through converters, and the synthesis of alternative fuels (Liu & Li, 2017; Zitnick et al., 2020). 
Modeling these chemical reactions involves an intermediate adsorbate on a catalyst slab, which determines the efficacy of the catalyst for that particular reaction. Discovering a novel catalyst computationally involves screening through billions of candidates and finding the lowest energy configuration. 1Department of Chemical Engineering, Carnegie Mellon University. Correspondence to: Adeesh Kolluru, John R. Kitchin. Finding the lowest energy configuration for an adsorbate and slab requires a global (and non-convex) search for the optimum across different sites on the slab. Conventional approaches solve this in two steps: (1) heuristically place the adsorbate on certain important sites, and (2) perform optimization with quantum mechanical calculators like Density Functional Theory (DFT) on each of these sites. The lowest energy site out of these is considered for calculating the adsorption energy, which is a thermodynamic descriptor of how good that catalyst is. With recent advances in machine learning methods for predicting forces, it has become possible to perform optimization with ML force fields (MLFFs) instead of Density Functional Theory (DFT), making this process faster and easier for testing many sites and finding better minima. These ML force fields are trained on DFT data to predict energies and forces corresponding to different adslab configurations. The recent release of the OC20-Dense dataset (Lan et al., 2023) marks a significant advance in the computation of the lowest energy adslab configuration. This work employs a blend of heuristic and random adsorbate placements across 100 sites, with subsequent optimization of each site using Density Functional Theory (DFT) to calculate the adsorption energy. The study further introduces AdsorbML, a paradigm characterized by a brute-force exploration of initial adsorbate placements.
Employing pre-trained machine learning (ML) force fields from OC20, AdsorbML streamlines the optimization process, culminating in the determination of the lowest energy adsorbate-slab (adslab) configuration. The predictive accuracy of these configurations is rigorously validated against DFT single-points or complete DFT optimization. This hybrid approach results in a 2000-fold computational acceleration of adsorption energy calculations compared to relying solely on DFT. Recent developments in graph neural network (GNN) based ML architectures have significantly increased the accuracy of adsorption energy prediction by encoding geometric information of atoms in more explicit ways. However, little to no work has been done on improving the adsorption site prediction itself, which could remove the need for the currently used brute-force approach. In this work, we develop a novel conditional denoising diffusion framework for adsorbate placement. We first formulate a diffusion framework over the space of the 2D translation and 3D rigid rotation of an adsorbate molecule over the slab, considering the periodic boundary conditions (PBC) of the slab. Through the learned diffusion process, we sample the most stable site by iteratively updating the center of mass and rigid orientation of the adsorbate. Training a naive unconditional diffusion framework only on the most optimal adsorbate site and orientation (the one corresponding to the lowest energy adslab configuration out of 100 densely sampled calculations in OC20-Dense) would throw away 99% of the DFT optimal energy data. Therefore, we modify the diffusion training to be conditional on relative energies (relative across the densely sampled sites of an adslab combination). This leads to significant improvements in accuracy and sample efficiency during diffusion training.
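The conditioning signal used for this relative-energy training can be sketched in numpy; the 100 site energies below are randomly generated stand-ins for the DFT-optimized energies of one adslab combination, not real data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical densely sampled DFT energies (eV) for one adslab
# combination: 100 placements, each locally optimized (cf. OC20-Dense).
site_energies = rng.normal(loc=-2.0, scale=0.3, size=100)

# Unconditional training would keep only argmin(site_energies), discarding
# 99% of the data. Conditional training keeps every site, conditioning the
# denoiser on E_rel = E_min - E_i, so E_rel = 0 marks the global optimum.
e_min = site_energies.min()
e_rel = e_min - site_energies        # <= 0, with 0 at the best site
```

At sampling time, conditioning on E_rel = 0 then asks the model for the lowest-energy placement.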
After sampling the optimal site and orientation of the adsorbate on the slab, we perform ML force field (MLFF) optimization and DFT single-point verification, similar to AdsorbML. This comprehensive end-to-end evaluation enables a robust assessment of the practical impact of the learned diffusion model. There have been significant advances in diffusion generative models for molecular and material discovery, and for the analogous problem of molecular docking on proteins. However, this is the first work to frame the adsorbate placement problem, considering all its symmetries with the slab, in a diffusion framework. Intuitively, the reverse diffusion process of AdsorbDiff helps skip over multiple minima sites through its energy-based conditional sampling, which is then followed by a local optimization with a DFT-learned MLFF to find a global optimum. To facilitate further research on this problem, we provide comprehensive results on the importance of GNN architectures for the diffusion task, show the importance of pretraining, and demonstrate the success of our approach on in-distribution (ID) and out-of-distribution (OOD) splits. The contributions of this work are summarized as follows: \u2022 We propose AdsorbDiff, a novel conditional denoising diffusion framework designed to leverage the translation, rotation, and periodic symmetries inherent in adsorbate and slab interactions. Additionally, this framework is adept at efficiently predicting the lowest energy site by conditional training on relative energies. \u2022 We present our results in a comprehensive end-to-end evaluation framework, integrated with DFT, to accurately gauge the true capability of our approach in predicting optimal adsorption energies. \u2022 We achieve a 31.8% success rate, 3.5x higher than the naive AdsorbML baseline of 9.1% with a single site prediction. Alternatively, we demonstrate that a comparable level of accuracy could be achieved by AdsorbML only by employing 5x more placements.
\u2022 We demonstrate that pretraining on large-scale local optimization data can significantly improve the search for global optima. \u2022 We show that diffusion results exhibit insignificant dependence on GNN architectures, in contrast to the notable differences observed for the same architectures when trained on DFT forces. \u2022 We highlight the model\u2019s generalization capabilities to previously unseen adsorbates and slabs. 2. Background and Related Work Force-fields: Energy and forces (forces being the negative gradient of the energy with respect to atomic positions) are calculated using ab initio quantum mechanical methods like Density Functional Theory (DFT). ML models can be trained to predict these energies and forces, and are called ML force fields (MLFFs). These force fields can be utilized to perform structure optimization to obtain the lowest energy structures. Optimization: For adsorption energy prediction, we start with an optimized adsorbate and slab, place the adsorbate on the slab, and perform optimization to get the adslab configuration with the lowest energy. Usually, second-order optimizers like BFGS, L-BFGS, conjugate gradient descent, etc., are used to solve this optimization problem. Since the problem is non-convex, the initial guess of the adsorbate placement and the optimization strategy are critical to finding an adslab configuration corresponding to the global optimum. The AdsorbML (Lan et al., 2023) method starts by combining heuristic and random initial placements, a brute-force approach to finding better minima. The \u201cEasy Potential\u201d of (Schaarschmidt et al., 2022) trains a simple harmonic potential to guess this initial placement. Learn2Hop (Merchant et al., 2021) also learns the optimization landscape in order to navigate it better and hop through local minima.
There are approaches like minima hopping that help navigate the entire optimization landscape with a force field (Jung et al., 2023) and find better minima, but these can be computationally expensive. GNNs: Message-Passing Neural Networks (MPNNs) are a class of graph neural networks (GNNs) utilized across material property prediction tasks. Different architectures encode the geometric information in different ways. SchNet (Sch\u00fctt et al., 2018) only encodes distance information. Including more explicit geometric features has improved model predictions, as DimeNet (Gasteiger et al., 2020b;a) incorporates triplets. SphereNet (Liu et al., 2021) and GemNet (Gasteiger et al., 2021; 2022) incorporate complete geometric information explicitly by providing triplet and quadruplet information. PaiNN (Sch\u00fctt et al., 2021) incorporates directional information and applies only linear operations on those features. Equivariant models like NequIP (Batzner et al., 2022), Allegro (Musaelian et al., 2023), MACE (Batatia et al., 2022), SCN (Zitnick et al., 2022), and Equiformer (Liao & Smidt, 2022; Liao et al., 2023) utilize spherical harmonics to represent the geometric features. Figure 1. Overview of AdsorbDiff: Random initial site and orientation for the adsorbate are selected, followed by sampling over 2D translations and 3D rigid rotations, considering periodic boundary conditions (PBC), to predict the optimal site and orientation. MLFF optimization is then conducted from the predicted site with a fixed interstitial gap until convergence. The final prediction undergoes constraint verification, and DFT verification is performed on valid structures to calculate success rates.
Diffusion Models: Diffusion models are a class of generative models that have shown impressive results across domains ranging from computer vision (Dhariwal & Nichol, 2021; Croitoru et al., 2023), language modeling (Gong et al., 2022), and temporal data modeling to applications in molecules (Xu et al., 2022; 2023; Arts et al., 2023; Hoogeboom et al., 2022; Jing et al., 2022), proteins (Wu et al., 2022; Trippe et al., 2022; Watson et al., 2022; 2023), and materials (Xie et al., 2021; Fu et al., 2023; Zeni et al., 2023; Merchant et al., 2023; Yang et al., 2023b). Different formulations have been proposed for diffusion models, such as denoising diffusion probabilistic models (DDPMs), score-based generative models (SGMs), and stochastic differential equations (Score SDEs) (Yang et al., 2023a). Many of these formulations have been adapted to problems in molecular and material discovery. For example, CDVAE (Xie et al., 2021) adapts concepts from noise-conditioned score networks (NCSNs) for bulk discovery. Conditional diffusion has also recently been utilized across proteins (Krishna et al., 2024) and catalysts and materials (Zheng et al., 2023) for generating structures with required properties. Diffusion models have also recently been utilized for molecular docking on proteins (Corso et al., 2022). Although this problem is somewhat analogous to placing an adsorbate on a slab, as far as we know there has not been previous work formulating adsorbate placement in a diffusion framework. AdsorbDiff also differs from molecular docking in several key aspects \u2013 the 2D translation formulation, periodic boundary conditions, the conditional denoising formulation, and the requirement of DFT-level accuracy (as opposed to the simpler force fields used for proteins), which makes our end-to-end evaluation with DFT critical. 3. AdsorbDiff 3.1.
Overview The objective of this research is to enhance the efficiency of calculating the adsorption energy, which represents the lowest energy configuration of an adsorbate on a slab. The methodology involves the initial placement of an adsorbate on a random site within the 2D surface of the slab, followed by reverse diffusion to predict the optimal adsorption site and orientation. Employing machine learning force field optimization, the structure undergoes iterative updates with an optimizer until the forces converge close to 0. Subsequently, the final structure is verified for compliance with the constraints essential for defining adsorption energy. On the optimized structure, a single Density Functional Theory (DFT) calculation is conducted to obtain the predicted energy (EPred). A successful outcome is determined by the predicted energy being within 0.1 eV of, or lower than, the DFT baseline of adsorption energy in the OC20-Dense data, indicating the model\u2019s ability to provide a comparable or superior estimate of adsorption energy (shown in Figure 1). The code is open-sourced under the MIT License. 3.2. Adsorbate placement Various adsorbate placement strategies were explored for the OC20-Dense dataset, incorporating a combination of heuristic and random approaches. Specifically, 100 sites were selected for each adslab configuration, utilizing a blend of heuristic and random placements. The heuristic placement involved strategically situating the adsorbate\u2019s binding site on either an on-top site, hollow site, or bridge site, with a specified interstitial gap denoting the distance between the connecting atom of the slab and the corresponding adsorbate atom. Additional random sites are introduced through random rotation of the adsorbate along the normal of the slab, accompanied by a slight translational wobble along the surface from the heuristic site. 3.3.
Diffusion for adsorbate placement In this work, our objective is to develop a diffusion model aimed at predicting the adsorbate orientation and site corresponding to the lowest energy, as established through benchmarking with the OC20-Dense dataset. The adsorbate motion is constrained within a manifold (Mc) and utilizes the combined action group (A), as described in DiffDock (Corso et al., 2022). This manifold permits the adsorbate to navigate towards configurations with low-energy adslab states through a combination of translations, rotations, and torsion angle adjustments. Note that, for fair comparison with our baselines, torsion angle alterations are disregarded in our analysis due to the small size of the adsorbates employed in this study. This aligns with the methodology of AdsorbML, which does not introduce randomness in torsion angles as part of its benchmark. In our framework, we specifically consider translations in the 2D plane parallel to the slab while accounting for periodic boundary conditions (PBC). The z-coordinate is aligned to the normal direction of the slab, and the diffusion process is executed across the xy-coordinates. Therefore, the adsorbate movements are associated with the 2D translation group T(2), and rigid rotations are modeled using the SO(3) group. The translation operation, denoted as Atr : T(2) \u00d7 R^2n \u2192 R^2n, is defined as Atr(r, x)i = xi + r, employing the isomorphism T(2) \u2245 R^2, where xi \u2208 R^2 represents the position of the i-th adsorbate atom. Similarly, the rotation operation, denoted as Arot : SO(3) \u00d7 R^3n \u2192 R^3n, is defined by Arot(R, x)i = R(xi \u2212 x\u0304) + x\u0304, where x\u0304 = (1/n) \u03a3i xi, signifying rotations around the center of mass of the adsorbate. For the initial coordinates of the adsorbate, we select a random point on the slab (code: https://github.com/AdeeshKolluru/AdsorbDiff). This point is taken as the center of mass of the adsorbate in fractional coordinates.
We then convert from fractional coordinates to real coordinates and perform a reverse diffusion process to reach the lowest energy site (as shown in Algorithm 1). De Bortoli et al. (2022) and Corso et al. (2022) have demonstrated the applicability of the diffusion framework to Riemannian manifolds. In this context, the score model's outputs lie in the tangent space, and a geodesic random walk serves as the reverse stochastic differential equation (SDE) solver. The score model is trained using denoising score matching (Song & Ermon, 2019), wherein a score function s\u03b8(x) is learned to approximate the gradient of the log probability density \u2207x log p(x) at varying noise levels (as shown in Algorithm 2). The learned scores for translations and rotations are treated as independent, assuming the tangent space is a direct sum of the individual tangent spaces, with contributions from torsion neglected. The forward SDE for both translation and rotation is defined as dx = \u221a(d\u03c3\u00b2(t)/dt) dw, where w represents the corresponding Wiener process. In the translational case on T(2), the model learns a score for a standard Gaussian distribution with variance \u03c3\u00b2(t). For rotations in SO(3), the diffusion kernel is governed by the IGSO(3) distribution, which can be sampled in the axis-angle parameterization. This involves sampling a unit vector \u02c6\u03c9 \u2208 so(3) uniformly and a random angle \u03c9 from the interval [0, \u03c0], as outlined in Equations 1 and 2. The score of the diffusion kernel is defined in Equation 3. The computation of R\u2032 = R(\u03c9\u02c6\u03c9)R, where R(\u03c9\u02c6\u03c9) is the rotation obtained by applying the Euler vector \u03c9\u02c6\u03c9, has been established in prior work (Yim et al., 2023).
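The translational half of this forward/reverse process can be sketched in numpy. Everything numeric below is an illustrative assumption: the 2D cell, the toy schedule \u03c3(t) = 0.5t, and the closed-form stand-in for the learned score network (the real score is a trained GNN):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2D surface lattice vectors (rows).
cell = np.array([[8.0, 0.0],
                 [2.0, 7.0]])

def wrap(frac):
    """Periodic boundary conditions in fractional coordinates."""
    return frac % 1.0

def to_real(frac):
    """Fractional -> real coordinates."""
    return frac @ cell

sigma = lambda t: 0.5 * t            # toy noise schedule

def fake_score(x, t):
    # Stand-in for the trained score network: points toward a target site.
    target = np.array([0.5, 0.5])
    return (target - x) / max(sigma(t) ** 2, 1e-8)

def reverse_step(x, t, dt):
    """One geodesic-random-walk step on T(2), re-wrapped under PBC."""
    g2 = 0.5 * t                     # d sigma^2(t)/dt for sigma = 0.5 t
    drift = g2 * fake_score(x, t) * dt
    noise = np.sqrt(g2 * dt) * rng.normal(size=2)
    return wrap(x + drift + noise)

# Forward noising of the adsorbate's fractional center of mass ...
x0 = np.array([0.25, 0.6])
xt = wrap(x0 + sigma(1.0) * rng.normal(size=2))

# ... followed by the reverse walk toward the predicted site.
x = xt
for t in np.linspace(1.0, 0.2, 20):
    x = reverse_step(x, t, 0.8 / 20)
site = to_real(x)
```

The wrap-after-every-step pattern is what distinguishes this from unconstrained Euclidean diffusion: the walk lives on the torus defined by the slab's periodicity.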
To efficiently carry out the score computation and sampling processes, it is feasible to precompute the truncated infinite series and interpolate the cumulative distribution function (CDF) of p(\u03c9). p(\u03c9) = [(1 \u2212 cos \u03c9)/\u03c0] f(\u03c9) (1) f(\u03c9) = \u03a3_{l=0}^{\u221e} (2l + 1) exp(\u2212l(l + 1)\u03c3\u00b2/2) sin((l + 1/2)\u03c9) / sin(\u03c9/2) (2) \u2207 ln pt(R\u2032|R) = (d/d\u03c9 log f(\u03c9)) \u02c6\u03c9 (3) 3.4. Conditional denoising diffusion for adsorbate placement While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 \u00d7 100 DFT optimization benchmarks were conducted, i.e., 100 different random placements for each configuration. Notably, the naive denoising diffusion setup is trained exclusively on the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, a conditional diffusion model is employed. In this model, the optimized position is conditioned on the relative energy, specifically relative to the energy of the lowest energy configuration (Ec_rel-i = Ec_min \u2212 Ec_i). This approach allows for a more comprehensive utilization of the available DFT optimization data. 3.5. Graph Neural Network (GNN) architecture The inputs to the ML model are the 3D positions of all atoms in the adslab configuration and their corresponding atomic numbers. The outputs are per-atom 3D vectors: forces in the case of force fields, and the score function in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained, each predicting an independent score function. All architectures used in this work fall under the message-passing neural network (MPNN) framework of graph neural networks (GNNs).
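The precompute-and-interpolate sampling of the IGSO(3) angle distribution in Equations 1 and 2 can be sketched as follows; the truncation at l_max = 100 and the grid size are assumed implementation choices, not values from the paper:

```python
import numpy as np

def igso3_f(omega, sigma, l_max=100):
    """Truncated series f(omega) from Eq. (2)."""
    l = np.arange(l_max)[:, None]
    terms = (2 * l + 1) * np.exp(-l * (l + 1) * sigma**2 / 2) \
            * np.sin((l + 0.5) * omega) / np.sin(omega / 2)
    return terms.sum(axis=0)

def sample_igso3_angle(sigma, n, grid=2048, rng=None):
    """Sample rotation angles omega ~ p(omega) (Eq. 1) by interpolating
    the precomputed CDF, as described in the text."""
    if rng is None:
        rng = np.random.default_rng()
    omega = np.linspace(1e-4, np.pi, grid)
    p = (1 - np.cos(omega)) / np.pi * igso3_f(omega, sigma)
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    # Inverse-CDF sampling via linear interpolation on the grid.
    return np.interp(rng.random(n), cdf, omega)

angles = sample_igso3_angle(sigma=0.5, n=500, rng=np.random.default_rng(0))
```

Pairing each sampled angle with a uniformly sampled unit axis then gives the axis-angle rotation used in the SO(3) diffusion kernel.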
MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN include message passing, updating node states, and a global readout. In the message-passing step, nodes exchange information based on their local context, and this information is then used to update the states of the nodes (as shown in Equation 4). h_v^(t+1) = Update(h_v^(t), Aggregate({m_(u\u2192v)^(t) | u \u2208 N(v)})) (4) Here, h_v^(t) represents the embedding of node v at iteration t, m_(u\u2192v)^(t) denotes the message from node u to node v at iteration t, N(v) represents the neighborhood of node v, and Update and Aggregate are differentiable functions for updating node states and aggregating messages, respectively. In our study, we systematically investigate diverse architectures employed in the training of diffusion models to discern the significance of architectural decisions in this context. Specifically, we assess the performance of PaiNN, GemNet-OC, and EquiformerV2, each distinguished by its treatment of explicit geometric information and rotational symmetries (Duval et al., 2023). This selection is grounded in the diverse characteristics they bring. Furthermore, we employ these architectures in benchmarking against the OC20 force-field evaluation, thereby facilitating a comparative analysis of architectural significance for force fields and for diffusion. 4. Results In this section, we present results demonstrating the impact of AdsorbDiff in accelerating the search for the adsorption energy, or better global optima. Specifically, we demonstrate the impact of conditional denoising training over unconditional training and a randomly-placed-adsorbate baseline. This random baseline is equivalent to performing AdsorbML on a single site (Nsite=1).
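Equation 4 can be illustrated with a toy numpy round; the identity messages, sum aggregation, and tanh update are illustrative stand-ins for the learned Message, Aggregate, and Update functions:

```python
import numpy as np

def mpnn_round(h, edges):
    """One message-passing iteration of Eq. (4): sum-aggregate neighbor
    messages, then update node states."""
    agg = np.zeros_like(h)
    for u, v in edges:        # message m_{u->v} = h_u (identity message)
        agg[v] += h[u]
    return np.tanh(h + agg)   # Update(h_v, aggregated messages)

# Tiny 3-node graph with symmetric edges.
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h1 = mpnn_round(h, edges)
```

In real architectures both functions are learned (and, for the models compared here, encode increasing amounts of geometric information), but the exchange-aggregate-update skeleton is the same.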
Additionally, we demonstrate the impact of pretraining and model architectures, and the generalization of this approach to new adsorbates and slabs. 4.1. Datasets We utilize two publicly available datasets for this work: OC20-Dense (Lan et al., 2023) and OC20 (Chanussot et al., 2021). OC20: Open Catalyst 2020 (OC20) is a large-scale dataset that contains converged DFT optimization trajectories of 460k unique adslab configurations, encompassing 55 unique elements and 74 adsorbates. Note that these optimizations are local optimizations performed with a single heuristic placement. ML force field models are trained on the forces derived from these DFT trajectories. Additionally, the optimized structures from OC20 are utilized for pre-training the diffusion model. OC20-Dense: The OC20-Dense dataset serves as a DFT benchmark for adsorption energies, employing dense placement on 100 random sites per adslab configuration, followed by DFT optimization. This dataset releases both in-distribution (ID) and out-of-distribution (OOD) data, relative to OC20. The ID data incorporates adsorbates and slabs from OC20\u2019s training set but presents different combinations and configurations, while the OOD data introduces new adsorbates and/or slabs not found in the OC20 training set. A subset of OC20-Dense ID and OOD was utilized in the Open Catalyst Challenge 2023, hosted at the AI for Science Workshop during NeurIPS 2023. We split the ID data 80/20 for training the diffusion model and validating the sampling process. These smaller subsets make it computationally cheaper to perform end-to-end iterations. 4.2. Metric and constraints Our success metric is defined by the final energy calculated through DFT.
For real-world applications, this energy (E_Total^DFT) is used in calculating the adsorption energy as E_Adsorption^DFT = E_Total^DFT \u2212 E_Slab^DFT \u2212 E_Adsorbate^DFT, where E_Slab^DFT and E_Adsorbate^DFT are the independent energies of the slab and adsorbate, respectively. This adsorption energy acts as a thermodynamic descriptor of how good a catalyst is for a downstream application. The DFT Success Rate (SR) is defined as the percentage of valid structures within 0.1 eV of, or lower than, the DFT-computed adsorption energy benchmark in the OC20-Dense data (as described in AdsorbML). This is computationally expensive to calculate but accurate. Metrics calculated from ML predictions are inexpensive but also inaccurate, as discussed further in Appendix C. Since we calculate adsorption energies, the adsorbate and slab must not change during optimization. Therefore, structures are considered anomalous due to (1) adsorbate desorption: the adsorbate moves far away from the slab; (2) adsorbate dissociation: the adsorbate dissociates into multiple adsorbates; (3) slab mismatch/reconstruction: the slab reconstructs into a completely different structure during optimization; and (4) adsorbate intercalation: any of the adsorbate atoms detaches and gets into the slab. Experimental setup: All presented results are based on the DFT success rate metric as defined in the preceding section (https://opencatalystproject.org/challenge.html). Throughout the diffusion process, we employ the EquiformerV2 architecture, unless explicitly stated otherwise, owing to its state-of-the-art performance in AdsorbML. Additionally, for MLFF optimization, we utilize GemNet-OC pre-trained on OC20, chosen for its lower inference cost. Further specifics regarding model and training hyperparameters are available in Appendix D. All results are shown on the val ID split, apart from the OOD section. 4.3.
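The adsorption energy identity and the DFT success rate defined above reduce to a few lines of numpy; the three systems at the end are invented numbers purely for illustration:

```python
import numpy as np

def adsorption_energy(e_total, e_slab, e_adsorbate):
    """E_Adsorption^DFT = E_Total^DFT - E_Slab^DFT - E_Adsorbate^DFT."""
    return e_total - e_slab - e_adsorbate

def success_rate(e_pred, e_ref, is_valid, tol=0.1):
    """Percentage of systems whose valid (anomaly-free) predicted
    adsorption energy is within tol eV of, or below, the OC20-Dense
    DFT reference."""
    e_pred, e_ref = np.asarray(e_pred), np.asarray(e_ref)
    ok = np.asarray(is_valid) & (e_pred <= e_ref + tol)
    return 100.0 * ok.mean()

# Hypothetical adsorption energies (eV) for three systems; the third is
# excluded by the anomaly constraints despite a lower energy.
sr = success_rate(e_pred=[-1.95, -1.70, -2.30],
                  e_ref=[-2.00, -2.00, -2.20],
                  is_valid=[True, True, False])
```

Only the first system counts as a success here: it is within 0.1 eV of the reference and passes the anomaly checks.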
Conditional vs Unconditional diffusion Figure 2. Comparison of conditional and unconditional diffusion with a baseline of random placement (DFT Success Rate at Nsite=1: Random 9.1%, Unconditional 11.4%, Conditional 31.8%). Conditional diffusion training on relative energies of adslab configurations significantly improves success rates over unconditional training and the AdsorbML baseline. We demonstrate the importance of conditional training on relative energies (as described in Section 3.4) over unconditional diffusion training in Figure 2. We compare both approaches to a naive baseline of AdsorbML with a single site (Nsite=1), where MLFF optimization is performed on a random adsorbate placement. It is noteworthy that the performance of unconditional training is suboptimal; this may be ascribed to the unexploited potential of the additional data made available through conditional training. 4.4. AdsorbDiff vs AdsorbML AdsorbML conducts MLFF optimization and DFT evaluations on adsorption sites randomly placed within the system. A comparative analysis is drawn with AdsorbDiff, where the prediction of adsorption sites is facilitated by the diffusion model. As depicted in Figure 3, AdsorbDiff exhibits notably superior performance, particularly at lower Nsites. However, as the number of adsorption sites (Nsites) increases, AdsorbDiff tends to either converge to or underperform the brute-force approach employed by AdsorbML. Adsorbate sites sampled from AdsorbDiff have less diversity by design, as it is trained to predict the global optimum. We calculate the average standard deviation of the points sampled at Nsites = 10 and get 8.1 \u00c5 for AdsorbML and 2.7 \u00c5 for AdsorbDiff.
AdsorbML\u2019s brute-force placements have more randomness, which leads to fewer anomalies after the MLFF optimization process, as shown in Figure 4. Figure 3. DFT Success Rates (%) for AdsorbDiff and AdsorbML across a varying number of site predictions (reported values: 9.1%, 31.8%, 20.5%, 34.1%, 34.1%, 36.3%, 47.7%, 41.0%). AdsorbDiff performs 3.5x better than AdsorbML utilizing a single site prediction. At higher sites, AdsorbML performs better due to the brute-force nature of site prediction that reduces anomalies. Figure 4. Anomalies in AdsorbDiff and AdsorbML with respect to Nsites (reported values: 31.8%, 25.0%, 18.2%, 20.5%, 11.4%, 22.7%, 6.8%, 13.6%). A system is labeled as anomalous if all its predicted sites result in anomalies. AdsorbML has fewer anomalies than AdsorbDiff at higher Nsites due to more randomness in initial sites. 4.5. Impact of pretraining Conditional diffusion benefits from training on a dataset that is 100 times more extensive than the unconditional approach, a consequence of leveraging multiple local optima within a unique adslab configuration. The substantial increase in training data size manifests in a notable enhancement in the success rate for the conditional approach. The OC20 IS2RE dataset, containing optimization data for 460,000 distinct adslab combinations, serves as a valuable resource for pretraining the diffusion model. It is important to acknowledge that this pretraining process results in a model that learns the local optima of an adslab combination, with the caveat that the model may not capture the global optima for an adslab combination. Impact of Pre-training (Nsite=1): Random 9.1%, PT Zero-shot 29.6%, PT Conditional 31.8%. Figure 5.
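The system-level accounting behind Figures 3 and 4, where a system succeeds if any of its Nsites placements is valid and close enough to the benchmark, and is anomalous only if every placement is anomalous, can be sketched as follows (hypothetical helpers, not the AdsorbML code; anomalous placements are represented as None):

```python
def system_success(site_energies, benchmark, tol=0.1):
    """Success if ANY non-anomalous site lands within tol eV of,
    or below, the benchmark adsorption energy."""
    return any(e is not None and e <= benchmark + tol for e in site_energies)

def system_anomalous(site_energies):
    """A system is anomalous only if ALL predicted sites are anomalies."""
    return all(e is None for e in site_energies)

# Hypothetical per-site adsorption energies (eV); None marks an anomaly.
sites = [None, -0.62, -0.81]
print(system_success(sites, benchmark=-0.75), system_anomalous(sites))
```

This "best of Nsites" structure is why the brute-force AdsorbML baseline keeps improving as Nsites grows.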
Impact of pretraining on 460k OC20 local optima data on DFT Success Rate. PT Zero-shot measures zero-shot generalization of OC20 pre-trained model to OC20-Dense data. PT Conditional is finetuned on OC20 Dense data conditionally on relative energies of adslab configurations. Random baseline corresponds to randomly placed adsorbate. IS2RS Pretraining (PT) Zero-shot: Taking advantage of the diffusion model pre-trained on OC20 IS2RE data, we conduct a zero-shot validation on the OC20-Dense ID val split. This experimental setup allows us to assess the model\u2019s ability to predict better global optima having trained on a large dataset of local optima. Notably, we observe a substantial increase in DFT success rate in the zero-shot setting (as shown in Figure 5). IS2RS Pretraining (PT) Conditional: In this approach, we utilize the pre-trained model using the OC20-Dense data as described in Section 3.4. We observe that although this gives a 2% improvement over zero-shot, it converges to the same results as just training conditionally on OC20-Dense (shown in Figure 5). 4.6. Impact of architectures Architectures characterized by richer geometric information and extensive many-body interaction capabilities, such as eSCN and EquiformerV2, have demonstrated superior performance in force evaluations within the OC20 dataset compared to simpler models like PaiNN, which primarily encode directional information and apply linear transformations. Our benchmarking involves the evaluation of three architectures that exhibit progressively improved performance in OC20 Force MAE, revealing significant differences among them. This evaluation is specifically conducted in the context of the zero-shot assessment following pretraining (PT zeroshot) on an extensive dataset encompassing 460,000 OC20 instances. 
This choice is inspired by insights from the GemNet-OC paper (Gasteiger et al., 2022), suggesting that certain architectural choices manifest optimal performance only at higher data scales. Figure 6. Impact of Graph Neural Network (GNN) architectures on the diffusion process for DFT Success Rate, keeping other parts of the framework the same (PaiNN 27.3%, GemNet-OC 27.3%, EquiformerV2 29.6%). Different architectures perform similarly on the task of diffusion sampling. Interestingly, in the realm of the diffusion task, we note that the disparity in success rates among these architectures is marginal (as shown in Figure 6), which has recently been demonstrated in applications to molecular generation tasks as well (Wang et al., 2023). The intuition behind this result is that the diffusion model\u2019s score function can be thought of as learning a harmonic potential (Xie et al., 2021). Harmonic potentials are simpler force fields than the ab-initio DFT calculations involved in OC20 forces. This could result in simpler architectures being able to capture the underlying complexity of the diffusion task defined in our work. 4.7. OOD generalization We measure the success of AdsorbDiff in out-of-distribution (OOD) cases where the model hasn\u2019t seen the adsorbate or the slab even during the pre-training on OC20. We pick a random 50 samples out of the 200-sample validation OOD split defined in the Open Catalyst Challenge 2023. We observe a marginal decrease of only 3.8% in results for the OOD case compared to the ID scenario and consistently observe significant improvement over the AdsorbML (Nsite=1) baseline. OOD Results: Random 8.4%, AdsorbDiff 28%. Figure 7. Comparison of DFT Success Rate for In-Distribution (ID) and Out-of-Distribution (OOD) splits using the AdsorbDiff method.
Random baseline corresponds to randomly placed adsorbate. 4.8. Inference cost In the case of conditional diffusion, our approach maintains a maximum step limit of 100, with adsorbate placement converging, on average, within 98 steps. In contrast, for MLFF optimization with a maximum step limit of 300 and an Fmax criterion of 0.01 eV/\u00c5 (consistent with AdsorbML), convergence occurs in approximately 286 steps. Consequently, for scenarios with a single adsorption site (Nsite=1), AdsorbDiff incurs approximately 34% more inference cost than AdsorbML, given that the GNN architecture for diffusion and MLFF optimization is the same. This end-to-end ML framework is O(10^4) times faster than conventional DFT pipelines (Lan et al., 2023). In Section 4.6, we illustrate that simpler and faster models such as PaiNN yield comparable performance to more intricate and slower models like EquiformerV2. This enhances the efficiency of our diffusion-based approach, as its computational burden becomes negligible in comparison to MLFF optimization, which would require more computationally intensive ML architectures (details in Appendix B). 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.03989v2.json b/abs_9K/test_abstract_short_2405.03989v2.json new file mode 100644 index 0000000000000000000000000000000000000000..7b20b3fd1af85b41ffd470df7b7283603b3f3d97 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.03989v2.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.03989v2", + "title": "A Method for Parsing and Vectorization of Semi-structured Data used in Retrieval Augmented Generation", + "abstract": "This paper presents a novel method for parsing and vectorizing\nsemi-structured data to enhance the functionality of Retrieval-Augmented\nGeneration (RAG) within Large Language Models (LLMs). We developed a\ncomprehensive pipeline for converting various data formats into .docx, enabling\nefficient parsing and structured data extraction.
The core of our methodology\ninvolves the construction of a vector database using Pinecone, which integrates\nseamlessly with LLMs to provide accurate, context-specific responses,\nparticularly in environmental management and wastewater treatment operations.\nThrough rigorous testing with both English and Chinese texts in diverse\ndocument formats, our results demonstrate a marked improvement in the precision\nand reliability of LLMs outputs. The RAG-enhanced models displayed enhanced\nability to generate contextually rich and technically accurate responses,\nunderscoring the potential of vector knowledge bases in significantly boosting\nthe performance of LLMs in specialized domains. This research not only\nillustrates the effectiveness of our method but also highlights its potential\nto revolutionize data processing and analysis in environmental sciences,\nsetting a precedent for future advancements in AI-driven applications. Our code\nis available at https://github.com/linancn/TianGong-AI-Unstructure.git.", + "authors": "Hang Yang, Jing Guo, Jianchuan Qi, Jinliang Xie, Si Zhang, Siqi Yang, Nan Li, Ming Xu", + "published": "2024-05-07", + "updated": "2024-05-08", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "This paper presents a novel method for parsing and vectorizing\nsemi-structured data to enhance the functionality of Retrieval-Augmented\nGeneration (RAG) within Large Language Models (LLMs). We developed a\ncomprehensive pipeline for converting various data formats into .docx, enabling\nefficient parsing and structured data extraction. 
The core of our methodology\ninvolves the construction of a vector database using Pinecone, which integrates\nseamlessly with LLMs to provide accurate, context-specific responses,\nparticularly in environmental management and wastewater treatment operations.\nThrough rigorous testing with both English and Chinese texts in diverse\ndocument formats, our results demonstrate a marked improvement in the precision\nand reliability of LLMs outputs. The RAG-enhanced models displayed enhanced\nability to generate contextually rich and technically accurate responses,\nunderscoring the potential of vector knowledge bases in significantly boosting\nthe performance of LLMs in specialized domains. This research not only\nillustrates the effectiveness of our method but also highlights its potential\nto revolutionize data processing and analysis in environmental sciences,\nsetting a precedent for future advancements in AI-driven applications. Our code\nis available at https://github.com/linancn/TianGong-AI-Unstructure.git.", + "main_content": "Introduction Large Language Models (LLMs) present substantial benefits in various specialized fields, particularly due to their proficiency in processing and deriving \finsights from extensive volumes of unstructured text. These models excel in converting intricate, unstructured data into organized formats, which is crucial for tasks such as predicting reaction conditions in scientific studies or isolating pertinent legal clauses from extensive documents. This capability is invaluable, especially for augmenting experimental databases and melding computational and experimental data, with notable applications in environmental science(Rillig et al., 2023). In the medical sector, LLMs have shown remarkable efficacy in named entity recognition (NER) tasks, facilitating the extraction and categorization of biomedical information from expansive data sets(Lee et al., 2020). This has significantly contributed to both research and clinical practice. 
Similarly, in the legal realm, LLMs have proven effective in analyzing complex legal documents, pinpointing crucial legal terms, and enhancing contract analysis(L. Yue et al., 2024). These applications underscore the transformative impact of LLMs in processing large and complex datasets into actionable insights, thus optimizing operations in specialized domains such as healthcare and law. However, the integration of LLMs in specialized domains still faces challenges(Peng et al., 2023.). A notable issue is the generation of 'hallucinations' (L. Yang et al., 2024),which means the creation of factually incorrect, yet seemingly plausible information. This problem is compounded when addressing highly specialized or nuanced queries within professional contexts. This limitation predominantly originates from the generalized nature of the datasets used to train these models, which often lack the depth and specificity required for particular legal and medical scenarios(S. Pan et al., 2024). Consequently, this underscores the critical need for a strategic integration of LLMs with domain-specific expertise. Such a fusion, complemented by continuous evaluation and refinement, is essential to ensure the accuracy and relevance of the models' outputs, especially in fields where precision is paramount. In the realm of ecological environmental management, the Retrieval-Augmented Generation (RAG) approach is highly relevant for LLMs applications. RAG integrates the capabilities of LLMs with external databases, enabling access to and incorporation \fof essential data during generation. This enhances the model's ability to provide accurate, context-specific information, crucial in environmental management's complex domain. However, implementing RAG faces significant challenges, notably in developing a vector-based knowledge base essential for accurate data retrieval. 
The complexity of creating this base from vast, unstructured environmental data is compounded by a lack of efficient structuring methods. Addressing these data processing challenges is imperative to fully utilize RAG's potential, thereby improving LLMs' effectiveness in ecological environmental governance. In this study, we present an efficient method for processing documents in the `.docx` format and constructing a vector database, leveraging an unstructured open-source toolkit, the function calling capacity of OpenAI and the vector database platform of Pinecone. This paper details the method and their application in processing professional books for wastewater treatment plant operation and constructing a vector database for use with Retrieval-Augmented Generation (RAG), aiming to improve the expertise of large language models in the domain of wastewater treatment plant operation. 2 Background and Related work Retrieval Augmented Generation (RAG) within large language models (LLMs) marks a significant stride in AI research, blending advanced knowledge retrieval with the generation capabilities of LLMs. This approach aims to boost the accuracy and relevance of the models' responses while preserving their contextual depth. Current research focuses on fine-tuning the retrieval process, ensuring that the information fetched aligns closely with user queries and enhances the quality of the model's output(Lewis et al., 2021.). A key challenge lies in integrating this retrieved information smoothly into the generation process, creating responses that are both coherent and contextually appropriate(Rohde et al., 2021). A significant area of exploration is in improving the retrieval phase to filter out irrelevant information or 'noise', ensuring that the data used by the model is of high quality and relevance(Karpukhin et al., 2020). 
Researchers are also working on \fmaking LLMs more adaptable in using this retrieved data across various topics, enhancing the algorithms that control how the model accesses and uses this information(Kalyan et al., 2021). Central to RAG's function in LLMs is the creation of vector databases from unstructured or semi-structured data like texts and web pages. These databases store information in a format that LLMs can easily access and use. Current research, including work on Transformer-based models, is pivotal in developing methods to efficiently transform vast amounts of data into these useful vector formats (Devlin et al., 2019). However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. 3 Core Functions However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. \fFig. 
1 Parsing and Vectorization of Semi-structured Data process framework 3.1 Data Preparation In this phase, a diverse array of sources including books, reports, scholarly articles, and data tables is compiled.These data largely consists of semi-unstructured data, encompassing a variety of file formats such as `.html`, `pdf`, `xml`, `docx`, `xlsx` and etc. Considering the substantial volume of data to be processed, the `.docx` format stands out due to its uniform standardization, high-quality text, ease of editing, broad compatibility, and rich metadata content, making it highly advantageous for efficient bulk processing and structured data extraction.In this project, API functionalities are employed to integrate open-source tools for the purpose of converting diverse data formats into the .docx format. For the assurance of effective post-processing, it is imperative that the content in the transformed `.docx` files, including headings, textual elements, and tables, be conformed to a standardized format. This standardization process involves harmonizing the font type, font size, inter-paragraph spacing, and line spacing across all headings, main text, and table contents. 3.2 Automated parsing and splitting During the parsing process, the `.docx` files are divided into multiple elements including titles, texts, images, tables, headers and footers with the partitioning function, utilizing detectron2, a deep learning-based object detection system (Unstructured, 2023). This partition function uses a combination of the styling information in the document and the structure of the text to determine the type of a text element. \fAs part of data preparation for an NLP model, these elements require further filtering, to mitigate potential detrimental impacts on model efficiency caused by superfluous content. This ensuing phase entails a deliberate omission of specific components, particularly 'Headers' and 'Footers'. 
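The element filtering step described above, dropping headers and footers and retaining only the core content categories, can be sketched generically (a minimal sketch over plain dicts; the actual pipeline operates on the unstructured library's element objects):

```python
# Core content categories retained after partitioning a .docx file.
KEEP = {"Title", "Text", "Image", "Table"}

def filter_elements(elements):
    """Drop 'Header'/'Footer' (and any other non-core) element types."""
    return [el for el in elements if el["type"] in KEEP]

# Illustrative parsed output; the element dicts here are invented.
parsed = [
    {"type": "Header", "text": "Running head"},
    {"type": "Title", "text": "3.2 Automated parsing"},
    {"type": "Text", "text": "Body paragraph..."},
    {"type": "Footer", "text": "page 7"},
]
print([el["type"] for el in filter_elements(parsed)])
```

Filtering before cleaning keeps boilerplate such as running heads out of the downstream NLP input.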
As a result, this refinement process retains only four core elements: 'Title', 'Text', 'Image', and 'Table', thereby ensuring a concise and targeted dataset for advanced analysis. For the \"Title\" and \"Text\" elements, prior to integration into NLP models, rigorous data cleaning is essential to avoid efficiency losses caused by extraneous information. To tackle this issue, specialized functions within the 'Unstructured Documentation' cleaning framework are utilized (Unstructured, 2023). These functions effectively merge paragraphs separated by newlines, remove initial bullets and dashes, and eliminate surplus whitespace. This process significantly enhances the textual data's clarity and structural integrity, which is crucial for effective model performance. For the \"Table\" elements, the core textual information is retained in the element's 'text' attribute. To preserve the formatting fidelity of these tables, their HTML representation is also stored, specifically within 'element.metadata.text_as_html'. This dual-storage approach is critical for ensuring that the table's structural and visual integrity is maintained in its rendered form. For the \"Image\" elements, the 'vision_completion' approach leverages the capabilities of the 'gpt-4-vision-preview' API. This method involves generating specific queries that prompt GPT to provide detailed textual descriptions of images. Once these descriptions are obtained, they are inserted back into the data collection, replacing the positions originally occupied by the images. This process ensures a seamless transition from visual to textual data representation in the dataset. 3.3 Chunking In the 'Unstructured Core Library,' essential for document processing in RAG contexts, the 'chunk_by_title' function is noteworthy for its methodical segmentation of documents into distinct subsections, identifying titles as section markers (Unstructured, 2023). Notably, it treats elements like tables and images as separate sections.
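The cleaning operations described for 'Title' and 'Text' elements, merging newline-split paragraphs, stripping leading bullets or dashes, and collapsing surplus whitespace, can be approximated with plain regular expressions (a rough stand-in for the unstructured cleaning functions, not their actual implementation):

```python
import re

def clean_text(raw):
    """Approximate the cleaning pipeline: join newline-split paragraphs,
    strip a leading bullet/dash, and collapse runs of whitespace."""
    text = raw.replace("\n", " ")                        # merge split paragraphs
    text = re.sub(r"^\s*[-\u2022\u00b7]+\s*", "", text)  # strip leading bullets/dashes
    text = re.sub(r"\s+", " ", text).strip()             # collapse surplus whitespace
    return text

print(clean_text("\u2022  Influent   BOD is\nmeasured daily."))
```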
The inclusion of the 'multi-page_sections' parameter is significant, facilitating the formation of multi-page sections that maintain thematic continuity. Unlike common practices, the 'combine_text_under_n_chars' parameter set to zero allows each text piece, regardless of length, to be recognized as an individual section, preserving the document's detailed structure. The default 'new_after_n_chars' parameter relies on the function\u2019s internal logic for starting new sections. The 'max_characters' parameter, adjusted to 4096, accommodates larger sections, tailored to the specific requirements of the document structure and content 3.4 Vector Database construction By leveraging OpenAI's \"text-embedding-ada-002\" model via API, embedding vectors are generated that correspond to specific content. This involves transforming data, initially partitioned into chunks through a preceding chunking process, into vector formats. The utilization of the \"text-embedding-ada-002\" model is pivotal in enabling large language models to locate content in our dataset that aligns with the given input prompt. The resultant vector data are then stored in Pinecone's vector database, where the feature vectors maintain a dimensionality of 1536. This strategic configuration significantly enhances the database's ability to conduct similarity searches and offers notable advantages in data storage capacity. The application of the \"text-embedding-ada-002\" model thus integrates OpenAI's advanced natural language processing prowess with Pinecone's efficient vector data management, providing a powerful and versatile solution for text search and analysis purposes. 4 Experiments and Discussion In this segment of the research, we have selected one scholarly papers in Chinese and another in English, along with one book in each language, to evaluate the efficacy of the methodologies employed in this study and the performance of the Retrieval-Augmented Generation (RAG) technique. 
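Retrieval over the 1536-dimensional 'text-embedding-ada-002' vectors stored in Pinecone ultimately reduces to a nearest-neighbour search by vector similarity, performed server-side by the database. The scoring itself can be illustrated with cosine similarity over toy 3-dimensional vectors (the chunk names and embeddings below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, chunks, k=2):
    """chunks: {chunk_id: embedding}; return ids ranked by similarity."""
    ranked = sorted(chunks, key=lambda cid: cosine_similarity(query, chunks[cid]),
                    reverse=True)
    return ranked[:k]

chunks = {"aeration": [0.9, 0.1, 0.0],
          "sludge":   [0.7, 0.7, 0.1],
          "permits":  [0.0, 0.2, 0.9]}
print(top_k([1.0, 0.2, 0.0], chunks))
```

The retrieved chunk texts are then placed into the LLM prompt as context, which is the augmentation step of RAG.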
These papers and books include textual, pictorial, and tabular elements. These two categories represent the predominant forms of publicly released documents at present. Papers are commonly \favailable in an editable PDF format, whereas publicly released books are often found in scanned or image-based PDF formats.The specifics of the documents and books utilized for testing are detailed in Table 1. 4.1 Data Processing Results 4.1.1 Results of Text Processing Results The processing results for text information are displayed in Figure 2 and 3, featuring four distinct text blocks from the test papers and books: two in Chinese and two in English. The outcomes are evident in the \"Title\" and \"Cleaned Text\" sections. Upon converting all documents to the `.docx` format and applying the prescribed process, the methodology proficiently identifies \"Title\" across various text types and performs comprehensive text cleaning and organization. This underscores the method's robustness in managing different data structures and multiple languages. Table 1 Information of papers and books Type Title Page Count Language Paper Full-scale upgrade activated sludge to continuous-flow aerobic granular sludge Implementing microaerobic-aerobic configuration with internal separators 12 English \u63d0\u8d28\u589e\u6548\u80cc\u666f\u4e0b\u6392\u6c34\u7ba1\u7f51\u68c0\u6d4b\u6280\u672f\u7684 \u5e94\u7528\u4e0e\u603b\u7ed3 8 Chinese Book Modelling plastic flows in the European Union value chain 132 English \u6c61\u6c34\u5904\u7406\u8bbe\u5907\u64cd\u4f5c\u7ef4\u62a4\u95ee\u7b54 369 Chinese \fFig. 2 Text Processing Results Instances of papers: (a) and (c) are instances of original texts from English and Chinese papers, respectively,while (b) and (d) represent the results of the segmentation into chunks. \fFig. 
3 Text Processing Results Instances of books: (a) and (b) are instances of original texts from English and Chinese books, respectively,while (c) and (d) represent the results of the segmentation into chunks. 4.1.2 Results of Image Processing Results The results of transforming images into textual descriptions using LLM are presented in Table 2. This research employs an embedding method that leverages the GPT 4.0 LLM to convert images into text, thereby preserving the completeness of the information. The findings indicate that the key information in both English and Chinese images can be effectively extracted. However, due to the model's limited support for Chinese elements, images containing Chinese require additional inputs such as captions or related information to improve the model\u2019s recognition accuracy and efficacy, preventing ineffective identifications. \fTable 2 Image processing results NO. Original Image Cleaned Text in Chunks 1 2 3 4 4.1.3 Results of Table Processing Results In the process of data handling, table processing presents significant challenges as tables often contain extensive parameter and comparative analysis information. Such information significantly enhances a LLM's capabilities in data understanding, pattern recognition, and knowledge integration, thereby improving the accuracy and relevance of text generation. In this study, we employed the \"text_as_html\" method to handle tabular data, with the results displayed in table 3.The corresponding text, \frendered as an HTML document, appears as demonstrated in Figure 4.Our analysis indicates that the sections of tables within chunks are expressed in HTML syntax, allowing the saved HTML files to accurately restore the original structure and hierarchy of the tables when opened, ensuring the correct identification and extraction of information. Table 3 Table processing results NO. Original Table Cleaned text in Chunks 1 2 3 \fFig. 
4 Results of tables elements in chunks converted to html file 4.2 Zero-shot Question Answering Results under RAG To evaluate the effectiveness of vector knowledge bases constructed using the methodologies outlined in this study for enhancing the expertise of large language models, GPT 4.0 was employed to process the papers and books utilized in this research. A set of fifty questions was randomly generated, focusing on the content of the selected documents. Subsequently, three questions in English and two in Chinese were randomly chosen for testing purposes. GPT 4.0 was then tasked with scoring the responses obtained from these tests, providing an objective measure of the effectiveness of the vector knowledge bases in augmenting the domain-specific knowledge of the language model across different languages. The results of the English and Chinese assessments are presented in Tables 4 and 5, respectively, offering a clear overview of the performance of the vector knowledge bases in enhancing the expertise of GPT 4.0. \fTable 4 Zero-shot question answer results in English NO. Question and answer Scores 1 Question1\uff1aExplain how the \"Transfer Coefficients\" (TCs) are used to simulate plastic flows in the form of a paragraph? Answer by GPT 4.0 75/100 Answer by RAG 95/100 \fNO. Question and answer Scores 2 Question2\uff1aWhich predefined scenarios showed the greatest potential improvement when assessing the 2025 plastic recycling targets? Answer by GPT 4.0 60/100 Answer by RAG 95/100 \fNO. Question and answer Scores 3 Question3\uff1aHow did the microaerobic-aerobic configuration impact the microbial community structure and pollutant removal pathways? Answer by GPT 4.0 85/100 Answer by RAG 95/100 \fTable 5 Zero-shot question answer results in Chinese NO. 
Question and answer Scores 1 Question1\uff1a\u51e0\u79cd\u5e38\u7528\u6811\u8102\u518d\u751f\u5242\u7684\u9002\u7528\u5bf9\u8c61\u3001\u6d53\u5ea6\u8303\u56f4\u53ca\u76f8\u5bf9\u7528\u91cf\u662f\u591a \u5c11\uff1f Answer by GPT 4.0 80/100 Answer by RAG 95/100 \fNO. Question and answer Scores 2 Question2\uff1a\u5728\u6392\u6c34\u7ba1\u7f51\u68c0\u67e5\u4e2d\u7535\u78c1\u68c0\u67e5\u6cd5\u6709\u54ea\u4e9b\u5e94\u7528\u6848\u4f8b\uff1f Answer by GPT 4.0 75/100 Answer by RAG 90/100 The results presented in this study provide compelling evidence that vector knowledge bases constructed using the methodologies described herein can significantly enhance the ability of large language models to acquire and apply domain-specific information. This improvement is manifested across several critical dimensions, including clarity, specificity, accuracy, technical depth, and comprehensiveness. By effectively augmenting the knowledge acquisition process, \fthese vector knowledge bases enable language models to generate responses of substantially higher quality, demonstrating their efficacy in improving the performance of large language models in specialized domains. These findings underscore the potential of vector knowledge bases as a powerful tool for enhancing the accuracy and relevance of language model outputs in domain-specific contexts, paving the way for more effective and efficient natural language processing applications in various specialized fields." 
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04003v1.json b/abs_9K/test_abstract_short_2405.04003v1.json new file mode 100644 index 0000000000000000000000000000000000000000..dc9e7db20adf55bdaf691a16db09ed4b851c3885 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04003v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.04003v1", + "title": "High Energy Density Radiative Transfer in the Diffusion Regime with Fourier Neural Operators", + "abstract": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.", + "authors": "Joseph Farmer, Ethan Smith, William Bennett, Ryan McClarren", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. 
Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.", + "main_content": "Introduction Marshak waves, a common type of driven supersonic radiative heat waves, play a key part in the physics of inertial confinement fusion (ICF) [1\u20134], astrophysics [5\u20137] and other high energy density phenomena [8]. In most cases, a full description of the radiative transfer process is not required. Therefore, approximations are in order. The diffusion approximation is one of these and is considered the simplest [9]. In some cases, analytic solutions to the radiation diffusion equation can be useful in understanding experiments [10\u201316]. These analytic or semi-analytic models can be thought of as a reduced order approximation of the full system, which is itself a simplification. As examples, [10] reduces a two dimensional diffusion system via asymptotic expansion. The diffusion system is an approximation to higher order radiation transport equations. Marshak, the namesake of these waves, reduced a partial differential equation (PDE) into an ordinary differential equation (ODE) [13, 14].
Reduced order solutions have the benefit of simpler calculation, as solving an ODE is usually preferable to solving a PDE, and they can be interrogated to clarify physical relationships between parameters. However, coming to a semi-analytic or analytic solution often involves invoking simplifications which may degrade the accuracy of the prediction. Thus, the motive for this inquiry is to take a widely used and appreciated semi-analytic diffusion model, the Hammer and Rosen Marshak wave model (HR) [11], and provide a correction to the model\u2019s limiting assumptions in a computationally efficient manner. Classical numerical solvers such as finite difference, finite element, or finite volume methods discretize continuous equations into a finite set of algebraic equations [17\u201322]. These numerical solvers can be computationally expensive for high dimensional problems and for domains with complex geometries. In recent years, approaches that leverage ML have garnered support to alleviate these challenges [23\u201325]. In particular, neural operators, a class of ML models, have emerged as a promising solution to these challenges. These operators learn mappings between infinite-dimensional function spaces, effectively approximating differential or integral operators that govern PDEs in a data driven manner [26, 27]. One of the key advantages of neural operators is that they only need to be trained once to learn a family of PDEs, and obtaining a solution for a new instance of a PDE parameter requires only a forward pass of the network. Furthermore, neural operators are discretization-invariant as they share network parameters across discretizations, allowing for the transfer of solutions between meshes. The Fourier neural operator (FNO) [28] is a seminal neural operator that learns network parameters in Fourier space.
The FNO uses fast Fourier transform (FFT) for spectral decomposition of the input and computation of the convolution integral kernel in the Fourier space. This approach has shown promising results in learning the underlying physics of various PDEs including Burgers, Darcy, and Navier-Stokes equations. In this work, we propose to use FNO to learn the physics of Marshak waves for various input-output pairs. We develop two models: a base model which takes the physical parameters of the Marshak wave problem as input and outputs the time dependent wavefront position and temperature distribution as given by the HR model, 2 \fand a hybrid approach which corrects the analytic HR solution to output the numerical solution to the full flux-limited diffusion equation. The structure of this paper is as follows. The diffusion model for Marshak waves is introduced in Section 2. Hammer and Rosen\u2019s approximation is summarized in Section 3. The neural network that is employed to correct the HR model is discussed in Section 4. Finally, results and conclusions are offered in Sections 5 and 6. 2 Marshak wave problem We study radiation diffusion in planar geometry, which assumes variation of the dependent variables only in a single direction, x. The evolutions of the radiation and material energy density are governed by [29], \u2202er \u2202t = \u2202 \u2202x c 3\u03ba(\u03c1, T) \u2202er \u2202x + c\u03ba(aT 4 \u2212er), (1) \u2202e \u2202t = c\u03ba(e \u2212aT 4) (2) where, er is the energy density of the radiation and e is the energy density of the material. c is the speed of light, \u03ba is the opacity with units of inverse length, a is the radiation constant, defined a \u22614\u03c3 c where \u03c3 is the Stefan-Boltzmann constant. T is the material temperature and \u03c1 is the material density. A Marshak boundary condition will specify the incoming radiation flux [29], er(x = 0, t) \u2212 \u0012 2 3\u03ba \u2202er \u2202x \u0013 \f \f \f \f x=0 = 4 c Finc. 
(3) where Finc is the incident flux on the surface at x = 0. The material energy density is found via integration of the specific heat, e = Z T 0 dT \u2032 Cv(T \u2032). (4) Solutions to Eq. (1) in the optically thick limit are recognizable by sharp drops in temperature near the wavefront and gradual temperature variation behind the front. This is because the radiation temperature and material temperature are in equilibrium behind the wavefront. Thus, it is often valid to assume equilibrium between the radiation temperature and material temperature, i.e. er = aT 4. This assumption simplifies Eqs. (1) and (2) to a single equation for the material temperature, \u2202e \u2202t = 4 3 \u2202 \u2202x 1 \u03ba(\u03c1, T) \u0012 \u2202 \u2202x\u03c3T 4 \u0013 (5) with the boundary condition at the surface, T(x = 0, t) = Ts(t). (6) Furthermore, the equation of state is specified so that, e = fT \u03b2\u03c1\u2212\u00b5, (7) This is the formulation given in [11]. The parameters f, \u03b2, \u00b5 are found by fitting experimental data, as in [30]. 3 Hammer and Rosen approximation The Hammer and Rosen model for supersonic thermal radiation diffusion is a perturbative, semi-analytic, one dimensional solution to the diffusion equation under mild limiting assumptions. In particular, this model assumes planar geometry, power law representations for the opacity, 1 K = gT \u03b1\u03c1\u2212\u03bb, and material internal energy, e = fT \u03b2\u03c1\u2212\u00b5, and a constant density. These assumptions transform Eq. (5) into, \u03c1\u2202e \u2202t = 4 3 \u2202 \u2202x \u0012 1 K\u03c1 \u2202 \u2202x\u03c3T 4 \u0013 , (8) where \u03c1 is the material density, e is the internal energy, \u03c3 is the Stefan-Boltzmann constant, and T is the radiation temperature.
The application of these assumptions and some simplification leads to the expression \u2202T \u03b2 \u2202t = C \u22022 \u2202x2 T 4+\u03b1 (9) where our constants are collected into the term C = 4 4 + \u03b1 4 3 1 f g\u03c1\u00b5\u22122\u2212\u03bb (10) This model predicts the position of the wave front as a function of time as the solution to an integral expression, then provides an explicit expression for the temperature profile in the material. The model can accommodate an arbitrary radiation temperature boundary condition. The Hammer and Rosen model gives the position of the wavefront, xf, as x2 f (t) = 2 + \u03f5 1 \u2212\u03f5CT \u2212\u03b2 s Z t 0 T 4+\u03b1 s d\u02c6 t (11) where Ts is the boundary temperature, \u03f5 = \u03b2 4+\u03b1 is a combination of terms from the power laws, and xf is the heat front position as a function of time, t. With knowledge of the wavefront position a simple expression can be evaluated for the temperature profile: T 4+\u03b1 T 4+\u03b1 s (x, t) = \u0014\u0012 1 \u2212x xf \u0013 \u0012 1 + \u03f5 2 \u0012 1 \u2212 x2 f CH2\u2212\u03f5 dH dt \u0013 x xf \u0013\u00151/(1\u2212\u03f5) . (12) Here H = T 4+\u03b1 s . One hallmark of this approximate solution is that it is very inexpensive to evaluate. In practice, and when compared to computing a numerical solution, 4 \fthis method is effectively immediate. For this reason, it has proven to be particularly helpful for rapid iteration during the design process. 4 Fourier neural operator model We now turn to the consideration of producing a machine learning model to compute Marshak wave solutions. For this task we turn to the Fourier Neural Operator. In this section we use standard notation from the ML literature; regrettably, this overlaps with the standard notation for Marshak waves at times. 
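The front-position integral of Eq. (11) in Section 3 is inexpensive to evaluate directly. The sketch below is not from the paper; it simply approximates the drive-history integral with the trapezoidal rule, with illustrative parameter values.

```python
import math

def hr_front_position(Ts_of_t, t, C, alpha, beta, n=1000):
    """Hammer & Rosen wavefront, Eq. (11):
    x_f^2(t) = (2 + eps)/(1 - eps) * C * Ts(t)^(-beta) * integral_0^t Ts^(4+alpha) dt',
    with eps = beta / (4 + alpha). Trapezoidal quadrature over the drive history."""
    eps = beta / (4.0 + alpha)
    dt = t / n
    integral = 0.0
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        integral += 0.5 * dt * (Ts_of_t(t0) ** (4 + alpha) + Ts_of_t(t1) ** (4 + alpha))
    xf_sq = (2.0 + eps) / (1.0 - eps) * C * Ts_of_t(t) ** (-beta) * integral
    return math.sqrt(xf_sq)
```

For a constant drive Ts the integral reduces to Ts^(4+alpha) t, so x_f grows like the square root of t, the familiar Marshak scaling.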
Fig. 1: Fourier neural operator architecture for solving the Marshak wave problem. The input function a(x) is projected to a higher representation v0(x) by the projection layer P. This is then processed through l iterations of Fourier layers. Each Fourier layer consists of a Fourier transform F that maps vi(x) to the Fourier domain, multiplication with the weight tensor R and filtering of higher Fourier modes, and an inverse Fourier transform F\u22121 to return to the spatial domain. The output is linearly transformed by W and passed through a nonlinear activation function \u03c3. This is added to the previous Fourier layer\u2019s output to produce the updated representation vi+1(x). After l layers, the final representation vl(x) is mapped to the output solution u(x). The boundary temperature drive (top left) and parameters (bottom left) represent the input functions and the front position (top right) and temperature distribution (bottom right) represent the output functions for the Marshak wave problem. The primary goal of an operator G is to establish a mapping between infinite-dimensional spaces from a finite collection of input-output pairs, denoted as A = A(Rda) \u2282Rda and U = U(Rdu) \u2282Rdu, respectively. Following from [28, 31], consider a partial differential equation (PDE) which maps input function spaces to an output solution space.
For a given domain D \u2282Rd with boundary \u2202D, and x \u2208D, an operator would map source terms, f(x, t) : D \u2192R, boundary conditions, u(\u2202D, t) : D \u2192R, and initial conditions u(x, 0) : D \u2192R, to the solution space u(x, t) : D \u2192R, where t is time. In the present work, we aim to learn the nonlinear differential operator G : A \u2192U for various sets of input parameters a \u2208A in the Marshak wave problem. 5 \fBy constructing a parametric map G : A \u00d7 \u0398 \u2192U, the optimal parameter \u03b8 \u2208\u0398 can be approximated with data-driven methods to adjust \u03b8 such that G(\u00b7, \u03b8) approaches the target map G. Classical numerical solvers, be it finite elements, finite differences, or many modern data-driven and physics-informed neural networks attempt to learn the output function u(x, t) which satisfies G for a single instance of input parameter a and can be computationally prohibitive, especially when the solution for the PDE is required for many instances of the parameter. On the other hand, Fourier neural operators (FNO) have been developed to approximate G directly so that solutions to a family of PDEs are realized for different sets of a, thereby enhancing computational efficiency and practical utility. In general, input and output functions a and u are continuous, however, we assume to know only point-wise evaluations. To that end, the problem at hand can be described using the n-point discretization of D, Dj = {x1, . . . , xn} \u2282D with observations of input-output pairs indexed by j \b aj \u2208Rn\u00d7da, uj \u2208Rn\u00d7du\tN j=1, and uj = G(aj). The neural operator to learn the input-output mapping is an iterative architecture. First, the input a(x, t) is transformed to a higher dimensional representation by v0(x) = P(a(x)) where the transformation P(a(x)) : Rda 7\u2192Rdv. In this framework, a shallow fully connected network can achieve this desired transformation. 
Next a series of l updates vi 7\u2192vi+1 are performed vi+1(x) := \u03c3 (Wvi(x) + (K(a; \u03d5)vi) (x)) , \u2200x \u2208D. (13) with nonlinear activation function \u03c3(\u00b7) : R 7\u2192R and a linear transformation W : Rdv 7\u2192Rdv. Each vi is a dv-dimensional real vector in Rdv. For a vector input x = [x1, x2, . . . , xdv]T \u2208Rdv, \u03c3(x) is applied element-wise, resulting in [\u03c3(x1), \u03c3(x2), . . . , \u03c3(xdv)]T . The integral kernel operator K : A \u00d7 \u03b8 \u2192L(U, U) is parameterized by \u03d5 \u2208\u0398K (K(a; \u03d5)vi) (x) := Z D \u03ba\u03d5(x, y, a(x), a(y); \u03d5)vi(y)dy, \u2200x \u2208D. (14) where \u03ba\u03d5 : R2(d+da) \u2192Rdv\u00d7dv is a neural network parameterized by \u03d5 \u2208\u0398K. After all iterations, a transformation function u(x) = Q (vl(x)) moves vl(x) into the solution space Q (vl(x)) : Rdv 7\u2192Rdu. This approach extends the idea of neural networks to operate on infinite-dimensional function spaces, enabling the learning of mappings between such spaces from finite data samples. By leveraging neural operators, it becomes possible to approximate the nonlinear operators that govern the relationships between infinite-dimensional input and output function spaces, such as those arising in the context of partial differential equations. The FNO is a specific neural operator architecture designed for such nonlinear mappings. It replaces the kernel integral operator in by a Fourier convolution operator F\u22121 (F (\u03ba\u03d5) \u00b7 F (vi)) (x), and applying the convolution theorem. The Fourier kernel integral operator becomes (K(\u03d5)vi) (x) = F\u22121 (R\u03d5 \u00b7 (Fvi)) (x), \u2200x \u2208D, 6 \fwhere F is the Fourier transform of a function and F\u22121 is its inverse transform, R\u03d5 is the Fourier transform of a periodic function \u03ba parameterized by \u03d5 \u2208\u0398K. 
Given that \u03ba is periodic and can be represented by a Fourier series expansion, only discrete modes are considered k \u2208Zd. To create a finite dimensional representation, the Fourier series is truncated at a maximum number of modes kmax = |{k \u2208Zd : |kj| \u2264kmax,j for j = 1, . . . , d}|. In a discretized domain D with n \u2208N points, vi \u2208Rn\u00d7dv and F(vi) \u2208Cn\u00d7dv is obtained, here C represents the complex space. A convolution of vi with a function that has kmax Fourier modes gives F(vi) \u2208Ckmax\u00d7dv . Then the multiplication with the weight tensor R \u2208Ckmax\u00d7dv\u00d7dv is (R \u00b7 (Fvi))k,l = X j=1 Rk,l,j (Fvi)k,j , k = 1, . . . , kmax, j = 1, . . . , dv (15) With uniform discretization and resolution s1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 sd = n, Fast Fourier Transform (FFT) can replace F. For f \u2208Rn\u00d7dv, k = (k1, . . . , kd) \u2208Zs1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Zsd, and x = (x1, . . . , xd) \u2208D, the FFT \u02c6 F and its inverse \u02c6 F\u22121 are defined as ( \u02c6 Ff)l(k) = s1\u22121 X x1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X xd=0 fl (x1, . . . , xd) e \u22122i\u03c0 Pd j=1 xj kj sj , (16) \u0010 \u02c6 F\u22121f \u0011 l (x) = s1\u22121 X k1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X kd=0 fl (k1, . . . , kd) e 2i\u03c0 Pd j=1 xj kj sj . (17) Finally, since Eq. (13) follows standard neural network structures, training is done with an appropriate loss function L : U \u00d7 U \u2192 R, \u0398 = arg min \u0398 (L(G(a), G(a, \u0398))). (18) A schematic representation of the Fourier Neural Operator model for the Marshak wave problem is provided in Figure 1. 5 Results 5.1 Problem description and parameter space The Marshak waves we consider concern the propagation of heat waves through low-density foam cylinders or other materials driven by a hohlraum similar to those described in [30, 32].
Key parameters in these experiments include density, drive energy and radiation temperature, which typically can range from 100 to 300 eV. Xray imaging is used to track the heat wave, while diagnostic tools measure the flux breaking through the foam edge. The experiments cover a wide range of temperatures, materials, and densities. 7 \fTable 1, adapted from [30], presents material properties used in various Marshak wave experiments. The first ten rows contain parameters for the foams, while the last two rows provide parameters for coating materials. For each material, the numerical parameters were fitted in relevant experimental regimes. Further details about the experiments can be found in [30] and references cited therein. Table 1: Material properties for various Marshak wave experiments Experiment Foam g \u0000g/cm2\u0001 f (MJ) \u03b1 \u03b2 \u03bb \u00b5 \u03c1 \u0000g/cm3\u0001 Massen C11H16Pb0.3852 1/3200 10.17 1.57 1.2 0.1 0 0.080 Xu pure C6H12 1/3926.6 12.27 2.98 1 0.95 0.04 0.05 Xu with copper C6H12Cu0.394 1/7692.9 8.13 3.44 1.1 0.67 0.07 0.05 Back, Moore SiO2 1/9175 8.77 3.53 1.1 0.75 0.09 0.05 Back Ta2O5 1/8433.3 4.78 1.78 1.37 0.24 0.12 0.04 Back low energy SiO2 1/9652 8.4 2.0 1.23 0.61 0.1 0.01 Moore C8H7Cl 1/24466 14.47 5.7 0.96 0.72 0.04 0.105 Keiter Pure C15H20O6 1/26549 11.54 5.29 0.94 0.95 0.038 0.065 Keiter with Gold C15H20O6Au0.172 1/4760 9.81 2.5 1.04 0.35 0.06 0.0625 Ji-Yan C8H8 1/2818.1 21.17 2.79 1.06 0.81 0.06 0.160 Au 1/7200 3.4 1.5 1.6 0.2 0.14 0.160 Be 1/402.8 8.81 4.89 1.09 0.67 0.07 0.160 Numerical approximations for solving the Marshak wave problem can be computationally expensive, especially when exploring a wide range of material properties. To overcome this challenge, we propose using the Fourier Neural Operator (FNO) to learn the mapping between material properties and their corresponding Marshak wave solutions. 
FNOs have shown success in solving partial differential equations by learning the solution operator from a dataset of input-output pairs. To train the FNO model, we generate a dataset that spans the parameter space defined by the material properties in Table 1. The input consists of a set of material properties, (g, f, \u03b1, \u03b2, \u03bb, \u00b5, \u03c1), while the output corresponds to the solution of the Marshak wave problem in terms of the temperature profile and wave front position at a given time. We create a uniformly spaced grid of values for each material property, covering the range of values found in the experiments. Table 2: Parameter ranges for generating training data Parameter Range Number of grid points g [min(g), max(g)] N (log-spaced) f [min(f), max(f)] N \u03b1 [min(\u03b1), max(\u03b1)] N \u03b2 [min(\u03b2), max(\u03b2)] N \u03bb [min(\u03bb), max(\u03bb)] N \u00b5 [min(\u00b5), max(\u00b5)] N \u03c1 [min(\u03c1), max(\u03c1)] N In Table 2, N is the number of grid points for each parameter. For the g parameter, we use logarithmically spaced values to better capture its wide range, while the other parameters are linearly spaced. In addition to the material properties, the Marshak wave problem also depends on the boundary temperature (i.e., the drive temperature). We parameterize the drive with a function Tb(t, a, b, c, d), measured in HeV, defined as follows Tb(t, a, b, c, d) = a + (b(t \u2265c)(t \u2212c))(t < d) + (t \u2265d)(b(d \u2212c)). (19) Here t is time (in ns), and a \u2208[1, 3], b \u2208[0, 1], c \u2208[0.1, 2], and d \u2208[2, 5]. The function consists of a constant term a, and a piecewise function that takes different values based on the conditions involving t, c, and d. We generate a set of boundary temperature functions by sampling the parameters a, b, c, and d from their respective ranges.
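Eq. (19) is a constant level a, a linear ramp of slope b starting at t = c, and a plateau once t reaches d. A direct transcription (a hypothetical helper, not from the paper's code):

```python
def Tb(t, a, b, c, d):
    """Boundary temperature drive of Eq. (19), in HeV; t in ns.
    Constant a for t < c, ramp a + b*(t - c) on [c, d), held at a + b*(d - c) for t >= d."""
    ramp = b * (t - c) if c <= t < d else 0.0
    plateau = b * (d - c) if t >= d else 0.0
    return a + ramp + plateau
```

Sampling a in [1, 3], b in [0, 1], c in [0.1, 2], and d in [2, 5] reproduces this family of drives; for example, the drive used later in Section 5.3 corresponds to Tb(t, 1.2, 0.8, 1, 2).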
To create the training set, we take the Cartesian product of the material property values and the boundary temperature function parameters and obtain a set of input parameter combinations that cover the entire parameter space. For each input combination, we solve the Marshak wave problem using a numerical solver to obtain the corresponding output solution. These input-output pairs form our training dataset, which we use to train the FNO model. As will be seen, by learning from this diverse set of input-output pairs, the FNO can effectively capture the underlying physics of the Marshak wave problem across the entire parameter space, including the dependence on the boundary temperature function. This allows the trained model to quickly and accurately predict solutions for new, unseen combinations of material properties and boundary temperature functions within the specified ranges. 5.2 Base model As a starting point, we introduce a base model that takes all material properties and boundary temperature function parameters as inputs and uses the Hammer and Rosen approximation as the output. The Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, which serves as a useful benchmark for evaluating the performance of our FNO model. Figure 2 compares the temperature solutions of the Marshak wave in space for three different boundary temperature functions. The boundary temperature functions, shown in Figure 2a, are generated by varying the parameters a, b, c, and d in Equation 19. The corresponding temperature solutions, obtained using both the Hammer and Rosen approximation and the FNO model, are presented in Figure 2b. The results demonstrate good agreement between the FNO model and the Hammer and Rosen approximation for all three boundary temperature functions. 
This indicates that the FNO model is capable of accurately capturing the physics of the Marshak wave problem and reproducing the analytical solutions provided by the Hammer and Rosen approximation. Fig. 2: Comparison of the Hammer and Rosen approximation and the FNO model for a representative material under different boundary temperature drives. The drives (a) are characterized by a constant temperature followed by a linear ramp at different times and rates. The corresponding temperature solutions (b) obtained from the Hammer and Rosen approximation (solid lines) and the FNO model (dashed lines) show close agreement. 5.3 Hammer and Rosen Correction model While the Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, it suffers from inaccuracies due to the assumptions made in its derivation, Section 3. These inaccuracies become apparent when comparing the Hammer and Rosen solution to more accurate numerical solvers, such as diffusion based methods, and experimental results. To address this issue, we introduce the Hammer and Rosen Correction model, which aims to improve the accuracy of the Hammer and Rosen approximation using FNO. The Hammer and Rosen Correction model is built similarly to the base model but takes the Hammer and Rosen solution for the temperature and the front position as additional inputs. The outputs are generated using a more accurate diffusion solution, and the FNO learns to map the Hammer and Rosen solution to the diffusion solution. By doing so, the Hammer and Rosen Correction model effectively corrects the inaccuracies of the Hammer and Rosen approximation and provides a more accurate prediction of the Marshak wave behavior.
Figure 3 illustrates in a parallel axis plot the input parameter values for four different test cases used to evaluate the Hammer and Rosen Correction model. Each line represents a specific test case, with the values of the parameters plotted along the y-axis for each parameter on the x-axis. The boundary temperature drive is given with parameters a = 1.2, b = 0.8, c = 1, and d = 2 for Eq. (19). Fig. 3: Parameter values from the test set for four different cases to evaluate the performance of the Hammer and Rosen Correction model. The output values are produced by a numerical solver we developed to solve radiation diffusion in planar geometry. The solver assumes equilibrium between the radiation temperature and material temperature, reducing Eq. (1) and Eq. (2) to a single equation for the material temperature Eq. (5). The solver employs a finite difference method to discretize the spatial domain into a uniform grid. Time integration is performed by the backward differentiation formula, an implicit multi-step method. The spatial derivatives in Eq. (5) are approximated using a second order central difference scheme. The left boundary at the surface (x = 0), Eq. (3), is prescribed as a function of time and the solver assumes an equation of state given by Eq. (7). At each time step, the solver computes the temperature profile across a one-dimensional spatial grid consisting of 100 spatial cells and tracks the position of the wavefront. The Hammer and Rosen correction model is trained and tested using the dataset generated by the numerical solver and the Hammer and Rosen solution, paired with the input parameter values. The dataset is split into standard training and testing sets.
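The paper's solver uses implicit BDF time stepping; as a much simpler illustration of the same physics, the toy below marches the nondimensional Hammer and Rosen form of Eq. (9), dT^beta/dt = C d^2(T^{4+alpha})/dx^2, with explicit Euler in time, second order central differences in space, a fixed hot surface, and a stability-limited time step. All values are illustrative; this is a sketch, not the authors' solver.

```python
def marshak_toy_solver(alpha=3.0, beta=1.2, C=1.0, nx=50, L=1.0,
                       t_end=0.01, dt=1e-5, Ts=1.0):
    """Explicit central-difference march of u_t = C (u^p)_xx with u = T^beta,
    p = (4 + alpha)/beta, cold (T = 0) initial condition and fixed T(0) = Ts.
    Returns the temperature profile T at t_end."""
    dx = L / nx
    u = [0.0] * (nx + 1)
    u[0] = Ts ** beta            # Marshak drive: fixed surface temperature
    p = (4.0 + alpha) / beta
    for _ in range(int(t_end / dt)):
        v = [ui ** p for ui in u]              # v = T^(4+alpha)
        new = u[:]
        for i in range(1, nx):                 # interior cells only
            new[i] = u[i] + dt * C * (v[i + 1] - 2.0 * v[i] + v[i - 1]) / dx ** 2
        u = new
    return [ui ** (1.0 / beta) for ui in u]
```

The result is a sharp-fronted profile: near-boundary cells equilibrate close to Ts while cells beyond the front remain cold, the qualitative behavior described for Eq. (1) in the optically thick limit.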
It is important to note that the testing set contains parameter combinations that may not represent physically realistic scenarios, as they are generated by uniformly sampling the parameter space defined in Table 2. The model has 58k trainable parameters and is trained on 1.05M input-output pairs over 30 epochs. Figure 4 presents a comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution. The subfigures 4a, 4b, 4c, and 4d show the results for different sets of input parameters. It is evident from the figures that the Hammer and Rosen approximation deviates noticeably from the diffusion solution over time. In contrast, the Hammer and Rosen Correction model accurately predicts the diffusion solution, demonstrating its ability to correct the inaccuracies of the Hammer and Rosen approximation. Figure 5 provides a comparison of the temperature solutions for the same three models. Subfigures 5a, 5b, 5c, and 5d show the temperature profiles at the same time instance. Once again, the Hammer and Rosen Correction model closely matches the diffusion solution, while the Hammer and Rosen approximation exhibits discrepancies. The Hammer and Rosen Correction model both improves the accuracy of the Marshak wave Hammer and Rosen solution and provides a framework for integrating analytical approximations with data-driven approaches. This hybrid approach combines the benefits of both analytical and machine learning methods by providing a physically grounded solution that simplifies inference.
Fig. 4: Comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution for different sets of input parameters ((a)\u2013(d): front position solutions for Cases 1\u20134). The Hammer and Rosen approximation (orange lines) deviates from the diffusion solution (blue lines) over time, while the Hammer and Rosen Correction (dashed green lines) accurately predicts the diffusion solution. 5.4 Model generalization and performance In the previous sections, we demonstrated the effectiveness of the Hammer and Rosen Correction model in accurately predicting the Marshak wave behavior for unseen data. It is important to note that these tests were performed on collocation points of the spacing grid shown in Table 2. To validate the generalization capabilities of the FNO, we present additional tests on specific physical materials from Table 1. Figure 6 compares the front position solutions obtained from the diffusion solver and the Hammer and Rosen Correction model for four different materials: C15H20O6Au0.172, Be, C15H20O6, and C6H12 with properties as specified in [30]. These materials were not explicitly included in the training data grid but represent realistic physical scenarios. The subfigures 6a, 6b, 6c, and 6d show excellent agreement between diffusion solutions and the Hammer and Rosen Correction model predictions for all four materials.
This demonstrates that the FNO has successfully learned the mapping in the entire parameter space and can accurately predict the Marshak wave behavior for arbitrary material properties within the considered ranges. Fig. 5: Comparison of the temperature profiles for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution at the same time instance for different sets of input parameters ((a)\u2013(d): temperature solutions for Cases 1\u20134). The Hammer and Rosen approximation (orange line) exhibits discrepancies compared to the diffusion solution (blue line), while the Hammer and Rosen Correction (dashed green lines) closely matches the diffusion solution. To quantitatively assess the performance and computational efficiency of the Hammer and Rosen Correction model, we compare it with the base model in Table 3. Both models are trained with the same number of trainable parameters, training data, and epochs to ensure a fair comparison. The mean squared error (MSE) is used as the evaluation metric for both temperature and front position predictions. The results in Table 3 show that the Hammer and Rosen Correction model significantly outperforms the base model in terms of prediction accuracy.
The Hammer and Rosen Correction model achieves a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE compared to the base model. This superior performance can be attributed to the hybrid nature of the Hammer and Rosen Correction model. Fig. 6: Comparison of the front positions obtained from the Hammer and Rosen approximation (orange lines), diffusion solver (blue lines), and the Hammer and Rosen Correction model (dashed green lines) for four different materials from Table 1: (a) C15H20O6Au0.172, (b) Be, (c) C15H20O6, (d) C6H12. Table 3: Prediction performance and computational costs of deep learning models (MSE is the mean squared error) Parameter HR Correction Base model % Improvement Temperature MSE 0.00081 0.00185 56.16 Front position MSE 0.00807 0.01220 33.93 Train data 1.05M 1.05M Trainable parameters 58k 58k Epochs 30 30 Inference time (s) 0.0032 0.0016 In terms of computational efficiency, the Hammer and Rosen Correction model has a slightly slower inference time compared to the base model. This is expected due to the additional complexity introduced by the correction step. However, it is important to note that both models have extremely fast inference times, with the Hammer and Rosen Correction model requiring only 0.0032 seconds per prediction and the base model requiring 0.0016 seconds. These fast inference times highlight the efficiency of the FNO-based approach, enabling real-time predictions of the Marshak wave behavior.
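The percentage improvements reported in Table 3 are relative MSE reductions, 100 * (MSE_base - MSE_corr) / MSE_base; recomputing from the table's rounded MSEs reproduces the reported values to within rounding (the tabulated percentages were presumably computed from unrounded MSEs):

```python
def pct_improvement(mse_base, mse_model):
    """Relative MSE reduction of a model over a baseline, in percent."""
    return 100.0 * (mse_base - mse_model) / mse_base

temp_gain = pct_improvement(0.00185, 0.00081)    # reported as 56.16%
front_gain = pct_improvement(0.01220, 0.00807)   # reported as 33.93%
```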
6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04233v1.json b/abs_9K/test_abstract_short_2405.04233v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a5781e5fe52a975fc7b07b137807a40d823088e7 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04233v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.04233v1", + "title": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models", + "abstract": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.", + "authors": "Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. 
Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.", + "main_content": "Introduction Diffusion models have obtained breakthrough progress on generating high-quality images, videos and other types of data, outperforming alternative approaches like auto-regressive networks. Previously, video generation models primarily relied on diffusion models [13, 9, 14] with the U-Net backbone [11], and focused on a single limited duration like 4 seconds [8, 5, 7, 4]. Our model, Vidu, demonstrates that a text-to-video diffusion model with U-ViT [1, 2] as its backbone can break this duration limitation by leveraging the scalability and the long sequence modeling ability of a transformer [15]. Vidu is capable of producing 1080p videos up to 16 seconds in a single generation, as well as images as videos of a single frame. Additionally, Vidu exhibits strong coherence and dynamism, and is capable of generating both realistic and imaginative videos. Vidu also has a preliminary understanding of some professional photography techniques, such as transitions, camera movements, lighting effects and emotional portrayal. We observe that to some extent, the generation performance of Vidu is comparable with that of Sora [6], which is currently the most powerful text-to-video generator, much better than the other text-to-video generators. Finally, we perform initial experiments on other controllable video generation, including canny-to-video generation [16], video prediction and subject-driven generation [12]. All of them demonstrate promising results. 
2 Text-to-Video Generation Vidu first employs a video autoencoder [10] to reduce both the spatial and temporal dimensions of videos for efficient training and inference. After that, Vidu employs a U-ViT [1] as the noise prediction network to model these compressed representations. Specifically, as shown in Figure 1, U-ViT splits the compressed videos into 3D patches, treats all inputs including the time, text condition and noisy 3D patches as tokens, and employs long skip connections between shallow and deep layers in a transformer. \u2217Second authors listed alphabetically. \u2021The corresponding author. Figure 1: The U-ViT architecture for predicting the noise in videos. By leveraging the ability of transformers to process variable-length sequences, Vidu can handle videos with variable durations. Vidu is trained on a vast amount of text-video pairs, and it is infeasible to have all videos labeled by humans. To address this, we first train a high-performance video captioner optimized for understanding dynamic information in videos, and then automatically annotate all the training videos using this captioner. During inference, we apply the re-captioning technique [3] to rephrase user inputs into a form that is more suitable for the model. 2.1 Generating Videos of Different Lengths Since Vidu is trained on videos of various lengths, it can generate 1080p videos of all lengths up to 16 seconds, including images as videos of a single frame. We present examples in Figure 2. (a) 16 seconds.
Prompt: A person clad in a space suit with a helmet and equipped with a chest light and arm device is seen closely examining and interacting with a variety of plants in a lush, indoor botanical setting. (b) 8 seconds. Prompt: A desolate lunar landscape with craters and a large moon in the sky transitions to a warmly lit interior of a spacecraft-like structure where a group of people are engaged in various activities. (c) Image. Prompt: An exquisite silverware piece, aesthetically adorned with intricate patterns and scenes, exhibits the detailed artisanship and metallic sheen. (d) Image. Prompt: Under the veil of nightfall, a rose reveals its subtle, exquisite beauty in the gentle moonlight. Figure 2: Vidu can generate videos of all lengths up to 16 seconds, including images. 3 \f2.2 3D Consistency The video generated by Vidu exhibits strong 3D consistency. As the camera rotates, the video presents projections of the same object from different angles. For instance, as shown in Figure 3, the hair of the generated cat naturally occludes as the camera rotates. (a) Prompt: This portrait depicts an orange cat with blue eyes, slowly rotating, inspired by Vermeer\u2019s \u2019Girl with a Pearl Earring\u2019. The cat is adorned with pearl earrings and has brown fur styled like a Dutch cap against a black background, illuminated by studio lighting. (b) Prompt: In a studio, there is a painting depicting a ship sailing through the rough sea. (c) Prompt: A red car is stuck in the snow, with the entire vehicle emitting green light and red signal lights flashing on the back. The camera slowly pans around the car. Figure 3: 3D consistency of Vidu. 4 \f2.3 Generating Cuts Vidu is capable of generating videos incorporating cuts. As shown in Figure 4, these videos present different perspectives of the same scene by switching camera angles, while maintaining consistency of subjects in the scene. 
(a) Prompt: A sculptor is intently working on a clay bust, meticulously refining its facial features with precise hand movements. (b) Prompt: Churning ocean waves at night with a lighthouse on the coast create an intense and somewhat foreboding atmosphere. The scene is set under an overcast sky, with the ocean\u2019s dark waters illuminated by natural light, highlighting the white foam of the waves. Figure 4: Vidu is capable of generating videos with cuts. 5 \f2.4 Generating Transitions Vidu is capable of producing videos with transitions in a single generation. As shown in Figure 5, these transitions can connect two different scenes in an engaging manner. (a) Prompt: An elderly man with glasses, dressed in formal attire, is deeply engrossed in examining a large, ornate pocket watch. As the video progresses, there is a cinematic transition to a fantastical mechanical cityscape, viewed through the openwork of the watch. This shift evokes a sense of wonder and transports the viewer into a steampunk-inspired world where buildings and structures are made of metal and gears. (b) Prompt: A person holding a dessert with a fluffy layer of whipped cream elegantly drizzled with smooth chocolate sauce. As a dollop of cream falls, a mini polar bear appears, with floating icebergs nearby, set against a serene blue backdrop. Figure 5: Vidu is capable of generating videos with transitions. 6 \f2.5 Camera Movements Camera movements involve the physical adjustments or movements of a camera during filming, enhancing visual narrative and conveying various perspectives and emotions within scenes. Vidu learned these techniques from the data, enhancing the visual experience of viewers. For instance, as shown in Figure 6, Vidu is capable of generating videos with camera movements including zoom, pan and dolly. (a) Zoom. Prompt: A large sailing ship sails slowly through the fog. (b) Pan. 
Prompt: An elderly man with a white beard is seated in a room filled with wooden bookshelves, brimming with old books. He is dressed in a dark suit and tie, and he is engrossed in reading a large book. The room is bathed in the warm glow of sunlight streaming through a window, creating a serene and contemplative atmosphere. (c) Dolly. Prompt: An animated hedgehog with distinctive spiky hair and large eyes is seen exploring a lush, grassy environment. Figure 6: Camera movements generated by Vidu. 7 \f2.6 Lighting Effects Vidu is capable of generating videos with impressive lighting effects, which help enhance the overall atmosphere. For example, as shown in Figure 7, the generated videos can evoke atmospheres of mystery and tranquility. Therefore, besides the entities within the video content, Vidu has the preliminary ability to convey some abstract feelings. (a) Prompt: A man wearing a hat and a dark suit walks from the corridor towards the room. The lighting casts a bluish tint over the scene, creating a suspenseful atmosphere. (b) Prompt: A rustic wooden cabin nestles by the shore of a clear, sunlit lake, surrounded by verdant trees and mountains. The water is calm, reflecting the sky above, with a few clouds scattered across it. Sailboats and kayaks are moored on the lake, inviting leisure and tranquility. Figure 7: Lighting effects generated by Vidu. 8 \f2.7 Emotional Portrayal Vidu is able to depict characters\u2019 emotions effectively. For example, as shown in Figure 8, Vidu can express emotions such as happiness, loneliness, embarrassment, and joy. (a) Prompt: A man and a woman are sharing a close and affectionate interaction in an indoor setting that suggests a romantic ambiance. (b) Prompt: An elderly woman with white hair and a lined face is seated inside an older model car, looking out through the side window with a contemplative or mildly sad expression. (c) Prompt: A couple about to get divorced sat awkwardly in the waiting room. 
(d) Prompt: Audience members in a theater are captured in a series of medium shots, with a young man and woman in formal attire centrally positioned and illuminated by a spotlight effect. Figure 8: Emotional portrayal of Vidu. 9 \f2.8 Imaginative Ability In addition to generating real-world scenes, Vidu also possesses a rich imagination. As shown in Figure 9, Vidu is able to generate scenes that do not exist in the real world. (a) Prompt: A painting of a boat on water comes to life, with waves crashing and the boat becoming submerged. (b) Prompt: An animated rabbit in a playful pink snowboarding outfit is carving its way down a snowy mountain slope under a clear blue sky. (c) Prompt: A model train with a blue engine is seen traveling through a meticulously crafted miniature landscape. The train is pulling several red and cream-colored passenger cars along a track that winds through a rural or suburban setting with small-scale houses, verdant trees, and miniature waterfalls. Figure 9: Imaginative ability of Vidu. 10 \f2.9 Comparison with Sora Sora [6] is currently the most powerful text-to-video generator, capable of producing high-definition videos with high consistency. However, as Sora is not publicly accessible, we compare them by inserting the example prompts released by Sora directly to Vidu. Figure 10 and Figure 11 illustrate the comparison between Vidu and Sora, indicating that to some extent, the generation performance of Vidu is comparable to Sora. (a) Sora (b) Vidu Figure 10: Prompt: The camera rotates around a large stack of vintage televisions all showing different programs \u2014 1950s sci-fi movies, horror movies, news, static, a 1970s sitcom, etc, set inside a large New York museum gallery. 
11 \f(a) Sora (b) Vidu Figure 11: Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it\u2019s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds. 12 \f3 Other Controllable Video Generation We also perform several initial experiments at 512 resolution on other controllable video generation, including canny-to-video generation [16], video prediction, and subject-driven generation [12]. All of them demonstrate promising results. 3.1 Canny-to-Video Generation Vidu can add additional control by using techniques similar to ControlNet [16], as shown in Figure 12. (a) Input canny. (b) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, realistic visual style. (c) Prompt: During the day, a red car drove towards me and splashed water as it passed by a pond, realistic visual style. (d) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, anime style. Figure 12: Canny-to-video generation examples of Vidu. 13 \f3.2 Video Prediction As shown in Figure 13, Vidu can generate subsequent frames, given an input image, or several input frames (marked with red boxes). (a) Prompt: A pink chrysanthemum flower with intricate petals is the focal point, resting on a wooden surface in an indoor setting. 
(b) Prompt: A serene mountainous landscape bathed in the warm glow of sunset or twilight, with snow-capped peaks rising above the green vegetation-covered slopes. A calm body of water rests in the foreground, reflecting the sky above, which is dotted with clouds tinged with pink and orange hues. Figure 13: Video prediction examples of Vidu. 14 \f3.3 Subject-Driven Generation We surprisingly find that Vidu can perform subject-driven video generation by finetuning solely on images without videos. For example, we use the DreamBooth [12] technique to designate the learned subject as a special symbol for finetuning. As shown in Figure 14, the generated videos faithfully recreates the learned subject. (a) Input images. (b) Prompt: A dog lies on the ground and then goes to eat from the bowl. (c) Prompt: A dog bit his tail happily and shakes his head. Figure 14: Subject-driven generation examples of Vidu. 15 \f4" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04272v1.json b/abs_9K/test_abstract_short_2405.04272v1.json new file mode 100644 index 0000000000000000000000000000000000000000..0a9c2cb0fa9f26d323a60c069dff7ac9d1156159 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04272v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.04272v1", + "title": "BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models", + "abstract": "In this paper, we present an unsupervised single-channel method for joint\nblind dereverberation and room impulse response estimation, based on posterior\nsampling with diffusion models. We parameterize the reverberation operator\nusing a filter with exponential decay for each frequency subband, and\niteratively estimate the corresponding parameters as the speech utterance gets\nrefined along the reverse diffusion trajectory. 
A measurement consistency\ncriterion enforces the fidelity of the generated speech with the reverberant\nmeasurement, while an unconditional diffusion model implements a strong prior\nfor clean speech generation. Without any knowledge of the room impulse response\nnor any coupled reverberant-anechoic data, we can successfully perform\ndereverberation in various acoustic scenarios. Our method significantly\noutperforms previous blind unsupervised baselines, and we demonstrate its\nincreased robustness to unseen acoustic conditions in comparison to blind\nsupervised methods. Audio samples and code are available online.", + "authors": "Eloi Moliner, Jean-Marie Lemercier, Simon Welker, Timo Gerkmann, Vesa V\u00e4lim\u00e4ki", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In this paper, we present an unsupervised single-channel method for joint\nblind dereverberation and room impulse response estimation, based on posterior\nsampling with diffusion models. We parameterize the reverberation operator\nusing a filter with exponential decay for each frequency subband, and\niteratively estimate the corresponding parameters as the speech utterance gets\nrefined along the reverse diffusion trajectory. A measurement consistency\ncriterion enforces the fidelity of the generated speech with the reverberant\nmeasurement, while an unconditional diffusion model implements a strong prior\nfor clean speech generation. Without any knowledge of the room impulse response\nnor any coupled reverberant-anechoic data, we can successfully perform\ndereverberation in various acoustic scenarios. Our method significantly\noutperforms previous blind unsupervised baselines, and we demonstrate its\nincreased robustness to unseen acoustic conditions in comparison to blind\nsupervised methods. 
Audio samples and code are available online.", + "main_content": "INTRODUCTION When acoustic waves propagate in enclosures and get reflected by walls, the sound received is perceived as reverberated, which can significantly degrade speech intelligibility and quality [1]. The goal of dereverberation is to recover the anechoic component from reverberant speech. We focus here on the single-channel scenario, where measurements from only one microphone are available, which is significantly more challenging than multi-channel scenarios [2]. Traditional dereverberation algorithms assume some statistical properties, such as Gaussianity or sparsity, about the anechoic and reverberant signals. These properties are leveraged to perform dereverberation in the time, spectral or cepstral domain [3]. These methods can tackle informed scenarios, where the room impulse response (RIR) is known [4, 5] as well as blind scenarios where the RIR is unknown [6, 7]. Informed dereverberation is easier than blind dereverberation, but most scenarios in real-life applications are blind, as the RIR is either not measured beforehand, or becomes invalid even with the slightest deviations in receiver or emitter positions. Data-driven approaches rely less on such assumptions but rather learn the signal properties and structures from data [8]. Most of these methods are based on supervised learning using pairs of anechoic and reverberant speech. Supervised predictive models have been widely used for blind dereverberation, including time-frequency (T-F) maskers [9], time-domain methods [10] and \u2217These authors contributed equally to this work. 1uhh.de/sp-inf-buddy. spectro-temporal mapping [11]. Generative models represent another category of dereverberation algorithms aiming to learn the distribution of anechoic speech conditioned on reverberant input. Some blind supervised methods using generative models such as diffusion models [12,13] have been recently proposed [14,15]. 
However, supervised approaches struggle with limited generalization to diverse acoustic conditions due to the scarcity and variability of available RIR data. Unsupervised approaches offer the potential to circumvent such limitations, as they do not require paired anechoic/reverberant data. This paper builds upon prior work [16], which proposed an unsupervised method for informed single-channel dereverberation based on diffusion posterior sampling. The previous study showed the potential of leveraging diffusion models as a strong clean speech prior, which, when combined with a criterion to match the measurement, reached state-of-the-art dereverberation in an informed scenario [16]. This paper extends the method to blind dereverberation, where the unknown RIR is estimated alongside the anechoic speech. We parameterize the RIR with a model-based subband filter, where each subband of the reverberation filter is modeled by an exponentially decaying signal. The resulting algorithm is an optimization scheme alternating between the diffusion process generating the anechoic speech and the parameter search estimating the acoustic conditions. Previous works in related domains explore various parameter estimation techniques for solving blind inverse problems with diffusion posterior sampling. For image deblurring, [17] propose to use a parallel diffusion process to estimate the deblurring kernel, while [18] adopts an expectation-maximization approach. In the audio domain, [19] address the problem of blind bandwidth extension by iteratively refining the parameters of the lowpass filter degradation. Closely related is the work by Saito et al. [20], which performs unsupervised blind dereverberation using DDRM [21] and the weighted-prediction error (WPE) algorithm as initialization [6]. We name our method BUDDy for Blind Unsupervised Dereverberation with Diffusion Models.
We show experimentally that BUDDy efficiently removes reverberation from speech utterances in many acoustic scenarios, thereby largely outperforming previous blind unsupervised techniques. As supervision is not required during the training phase, we demonstrate that BUDDy does not lose performance when presented with unseen acoustic conditions, as opposed to existing blind supervised dereverberation approaches. 2. BACKGROUND 2.1. Diffusion-Based Generative Models Diffusion-based generative models, or simply diffusion models [12, 22], emerged as a class of generative models that learn complex data distributions via iterative denoising. At training time, the target data distribution is transformed into a tractable Gaussian distribution by a forward process that incrementally adds noise. During inference, the reverse process refines an initial noise sample into a data sample by progressively removing noise. The reverse diffusion process, which transports noise samples from a Gaussian prior to the data distribution $p_\mathrm{data}$, can be characterized by the following probability flow ordinary differential equation (ODE):

$$\mathrm{d}x_\tau = \left[ f(x_\tau, \tau) - \tfrac{1}{2} g(\tau)^2 \nabla_{x_\tau} \log p(x_\tau) \right] \mathrm{d}\tau, \quad (1)$$

where $\tau$ indexes the diffusion steps flowing in reverse from $T_\mathrm{max}$ to $0$. The current diffusion state $x_\tau$ starts from the initial condition $x_{T_\mathrm{max}} \sim \mathcal{N}(0, \sigma(T_\mathrm{max})^2 I)$ and ends at $x_0 \sim p_\mathrm{data}$. We adopt the variance-exploding parameterization of Karras et al. [23], where the drift and diffusion are defined as $f(x_\tau, \tau) = 0$ and $g(\tau) = \sqrt{2\tau}$, respectively. Similarly, we adopt $\sigma(\tau) = \tau$ as the noise variance schedule, which defines the so-called transition kernel, i.e., the marginal densities $p_\tau(x_\tau | x_0) = \mathcal{N}(x_\tau; x_0, \sigma(\tau)^2 I)$. The score function $\nabla_{x_\tau} \log p(x_\tau)$ is intractable at inference time as we do not have access to $x_0$.
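A minimal numerical sketch of the probability flow ODE under the variance-exploding choices above ($f = 0$, $g(\tau) = \sqrt{2\tau}$, $\sigma(\tau) = \tau$): plain Python, with a toy 1-D Gaussian data distribution assumed so that the marginal score is available in closed form (an illustration, not the paper's score network):

```python
import math

def sample_probability_flow(x_T, T=10.0, s0=1.0, n_steps=10000):
    """Euler integration of the probability-flow ODE (Eq. 1) from tau = T
    down to tau = 0, with f = 0 and g(tau)^2 = 2*tau (variance exploding).
    Toy data distribution: N(0, s0^2), so the marginal at noise level tau is
    N(0, s0^2 + tau^2) and its score is -x / (s0^2 + tau^2) in closed form."""
    taus = [T * (1 - i / n_steps) for i in range(n_steps + 1)]  # T -> 0
    x = x_T
    for tau, tau_next in zip(taus[:-1], taus[1:]):
        score = -x / (s0 ** 2 + tau ** 2)  # stands in for the score model
        drift = -tau * score               # dx/dtau in Eq. (1)
        x += drift * (tau_next - tau)      # Euler step backwards in tau
    return x

# For this toy case the exact ODE solution contracts x_T by s0 / sqrt(s0^2 + T^2):
x0 = sample_probability_flow(5.0)
print(abs(x0 - 5.0 / math.sqrt(1 + 10.0 ** 2)))  # small discretization error
```

The discrete update matches the form used later in the paper's Algorithm 1, where the analytic score above is replaced by the learned score plus the likelihood term.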
In practice, a score model parameterized with a deep neural network $s_\theta(x_\tau, \tau)$ is trained to estimate the score function using a denoising score matching objective [24]. 2.2. Diffusion Posterior Sampling for Dereverberation Single-channel dereverberation can be considered as the inverse problem of retrieving the anechoic utterance $x_0 \in \mathbb{R}^L$ from the reverberant measurement $y \in \mathbb{R}^L$, which is often modelled by convolving the anechoic speech with an RIR $h \in \mathbb{R}^{L_h}$, expressed as $y = h * x_0$. We aim to solve this inverse problem by sampling from the posterior distribution $p(x_0|y,h)$ of anechoic speech given the measurement and the RIR. We adopt diffusion models for this posterior sampling task by replacing the score function $\nabla_{x_\tau} \log p(x_\tau)$ in (1) by the posterior score $\nabla_{x_\tau} \log p(x_\tau|y,h)$ [13]. Applying Bayes' rule, the posterior score is obtained as

$$\nabla_{x_\tau} \log p(x_\tau|y,h) = \nabla_{x_\tau} \log p(x_\tau) + \nabla_{x_\tau} \log p(y|x_\tau,h), \quad (2)$$

where the first term, or prior score, can be approximated with a trained score model $s_\theta(x_\tau, \tau) \approx \nabla_{x_\tau} \log p(x_\tau)$. The likelihood $p(y|x_\tau,h)$ is generally intractable because we lack a signal model for $y$ given the diffusion state $x_\tau$. We will introduce in the next section a series of approximations to make its computation tractable. 3. METHODS 3.1. Likelihood Score Approximation In order to obtain a tractable likelihood computation, we posit as in [25] that a one-step denoising estimate of $x_0$ at time $\tau$ can serve as a sufficient statistic for $x_\tau$ in this context, i.e. that $p(y|x_\tau,h) \approx p(y|\hat{x}_0,h)$. Such an estimate $\hat{x}_0$ can be obtained using the score model:

$$\hat{x}_0 \triangleq \hat{x}_0(x_\tau, \tau) = x_\tau - \sigma(\tau)^2 s_\theta(x_\tau, \tau). \quad (3)$$

Furthermore, we consider here that the convolution model remains valid when using this denoised estimate, and therefore that $p(y|\hat{x}_0,h) \approx p(y|\hat{x}_0 * h)$. Finally, we model the estimation error as following a Gaussian distribution in the compressed STFT domain:

$$p(y|\hat{x}_0 * h) = \mathcal{N}(S_\mathrm{comp}(y); S_\mathrm{comp}(\hat{x}_0 * h), \eta^2 I), \quad (4)$$

where $S_\mathrm{comp}(y) = |\mathrm{STFT}(y)|^{2/3} \exp\{j \angle \mathrm{STFT}(y)\}$ is the compressed spectrogram. We apply this compression to account for the heavy-tailedness of speech distributions [26]. With this series of approximations, we obtain the following likelihood score:

$$\nabla_{x_\tau} \log p(y|x_\tau,h) \approx -\zeta(\tau) \nabla_{x_\tau} C(y, h * \hat{x}_0), \quad (5)$$

where the function $C(\cdot,\cdot)$ is defined as:

$$C(y, \hat{y}) = \frac{1}{M} \sum_{m=1}^{M} \sum_{k=1}^{K} \| S_\mathrm{comp}(y)_{m,k} - S_\mathrm{comp}(\hat{y})_{m,k} \|_2^2. \quad (6)$$

The weighting parameter $\zeta(\tau)$ controls the trade-off between adherence to the prior data distribution and fidelity to the observed data. According to our Gaussian assumption (4), its theoretical value should depend on the unknown variance $\eta$ as $\zeta(\tau) = 1/(2\eta^2)$. In practice, we resort to the same parameterization as in [19,27]. 3.2. Reverberation Operator The employed reverberation operator relies on a subband filtering approximation [28], which is applied within the Short-Time Fourier Transform (STFT) domain. Let $H := \mathrm{STFT}(h) \in \mathbb{C}^{N_h \times K}$ represent the STFT of an RIR $h$ with $N_h$ time frames and $K$ frequency bins. Similarly, let $X \in \mathbb{C}^{M \times K}$ and $Y \in \mathbb{C}^{(M+N_h-1) \times K}$ denote the STFTs of the anechoic $x_0$ and reverberant $y$ speech signals, respectively. The subband convolution operation applies independent convolutions along the time dimension of each frequency band:

$$Y_{m,k} = \sum_{n=0}^{N_h} H_{n,k} X_{m-n,k}. \quad (7)$$

In the blind scenario, we need to estimate $H$, which is an arduous task without knowledge of the anechoic speech.
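As a toy illustration of the subband convolution in Eq. (7): plain Python on tiny hand-made complex "spectrograms" (the paper's implementation operates on real STFTs and is vectorized):

```python
def subband_convolve(X, H):
    """Subband convolution (Eq. 7): an independent 1-D convolution along the
    frame axis of each frequency bin. X is an M x K list of complex STFT
    frames, H an Nh x K subband filter; returns an (M + Nh - 1) x K result."""
    M, K = len(X), len(X[0])
    Nh = len(H)
    Y = [[0j] * K for _ in range(M + Nh - 1)]
    for k in range(K):          # each frequency band is filtered independently
        for m in range(M):
            for n in range(Nh):
                Y[m + n][k] += H[n][k] * X[m][k]
    return Y

# Convolving a single-frame "impulse" spectrogram with the filter returns the
# filter itself, scaled per band.
X = [[1 + 0j, 2 + 0j]]                  # M = 1 frame, K = 2 bins
H = [[0.5 + 0j, 1j], [0.25 + 0j, 0j]]   # Nh = 2 filter frames
Y = subband_convolve(X, H)
```

Note that, unlike a full time-domain convolution, no energy leaks across frequency bins: this is exactly the approximation that makes the operator cheap and differentiable per band.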
We constrain the space of possible solutions by designing a structured, differentiable RIR prior whose parameters $\psi$ can be estimated through gradient descent. We denote the complete forward reverberation operator, including forward and inverse STFT, as $A_\psi(\cdot) : \mathbb{R}^L \to \mathbb{R}^L$. We denote as $A \in \mathbb{R}^{N_h \times K}$ and $\Phi \in \mathbb{R}^{N_h \times K}$ the RIR magnitudes and phases of $H$, respectively. We parameterize the magnitude matrix $A$ as a multi-band exponential decay model defined in $B < K$ frequency bands. Let $A' \in \mathbb{R}^{N_h \times B}$ be the subsampled version of $A$ in the $B$ selected frequency bands. Each frequency band $b$ is characterized by its weight $w_b$ and exponential decay rate $\alpha_b$, such that the corresponding subband magnitude filter can be expressed as:

$$A'_{n,b} = w_b e^{-\alpha_b n}. \quad (8)$$

Once the weight and decay rate parameters are estimated, we reconstruct the magnitudes $A$ by interpolating the subsampled $A'$ using $A = \exp(\mathrm{lerp}(\log(A')))$, where $\mathrm{lerp}$ represents linear interpolation over the frequencies. Given the lack of structure of RIR phases, we perform independent optimization for each phase factor in $\Phi$. The resulting set of parameters to optimize is therefore $\psi = \{\Phi, (w_b, \alpha_b)_{b=1,\dots,B}\}$. After each optimization step, the estimated time-frequency RIR $H$ is further processed through a projection step:

$$H = \mathrm{STFT}(\delta \oplus P_\mathrm{min}(\mathrm{iSTFT}(H))). \quad (9)$$

This operation primarily ensures STFT consistency [29] of $H$. We additionally include a projection $P_\mathrm{min}$ that ensures the time-domain RIR has minimum phase lag to guarantee a stable inverse filter, using the Hilbert transform method [30]. Finally, to make the direct-to-reverberation ratio only depend on the late reverberation and to enforce further constraints on $\psi$ for a more stable optimization, we take the direct path to be at the first sample and with amplitude one. This is achieved by replacing the first sample of the time-domain RIR with a unit impulse, as indicated by the operation $\delta \oplus (\cdot)$.

Fig. 1: Blind unsupervised dereverberation alternating between RIR estimation and posterior sampling for speech reconstruction.

3.3. Blind Dereverberation Inference The inference process solves the following objective:

$$\hat{x}_0, \hat{\psi} = \arg\min_{x_0, \psi} \; C(y, A_\psi(x_0)) + R(\psi), \quad \text{s.t. } x_0 \sim p_\mathrm{data}. \quad (10)$$

This objective seeks to find the optimal speech $\hat{x}_0$ and RIR parameters $\hat{\psi}$ that minimize the reconstruction error $C(y, A_\psi(x_0))$ while also incorporating a regularization term $R(\psi)$. An essential aspect is the constraint $x_0 \sim p_\mathrm{data}$, which ensures that the estimated signal $\hat{x}_0$ adheres to the distribution $p_\mathrm{data}$ of anechoic speech samples. This constraint is implemented in a soft manner by leveraging a pretrained score model $s_\theta(x_\tau, \tau)$ trained on anechoic speech. The inference algorithm is outlined in Algorithm 1 and visualized in Fig. 1, using the discretization further described in Eq. (12). The algorithm employs the likelihood score approximation from Sec. 3.1, but replaces the convolution with the reverberation operator $A_\psi(\cdot)$, while its parameters $\psi$ are optimized in parallel with the speech signal through gradient descent. We introduce in (10) a noise regularization term $R(\psi)$:

$$R(\psi) = \frac{1}{N_h} \sum_{l=1}^{N_h} \sum_{k=1}^{K} \| S_\mathrm{comp}(\hat{h}_\psi)_{l,k} - S_\mathrm{comp}(\hat{h}_{\psi'} + \sigma' v)_{l,k} \|_2^2, \quad (11)$$

where $\hat{h}_\psi = A_\psi(\delta)$ represents the estimated RIR in the waveform domain, $v \sim \mathcal{N}(0, I)$ is a vector of white Gaussian noise, and $\hat{h}_{\psi'}$ is a copy of the current estimate of $\hat{h}_\psi$, such that the $\arg\min$ in (10) does not apply to it.
In code, this is analogous to detaching the gradients of $\hat{h}_\psi$ using a stop-grad operator. We adopt an annealed schedule for the noise level $\sigma'(\tau)$, resembling the score model schedule $\sigma(\tau)$ but with different hyper-parameters. This regularization term injects noise into the RIR parameter gradients, with decreasing noise power, which enables a wider and smoother exploration while allowing for convergence toward the end of the optimization.

Algorithm 1 Inference algorithm
Require: reverberant speech $y$
  $x_\mathrm{init} \leftarrow \mathrm{WPE}(y)$
  Sample $x_N \sim \mathcal{N}(x_\mathrm{init}, \sigma_N^2 I)$ ▷ Warm initialization
  Initialize $\psi_N$ ▷ Initialize the RIR parameters
  for $n \leftarrow N, \dots, 1$ do ▷ Discrete step backwards
    $s_n \leftarrow s_\theta(x_n, \tau_n)$ ▷ Evaluate score model
    $\hat{x}_0 \leftarrow x_n - \sigma_n^2 s_n$ ▷ Get one-step denoising estimate
    $\hat{x}_0 \leftarrow \mathrm{Rescale}(\hat{x}_0)$
    $\psi_{n-1}^0 \leftarrow \psi_n$ ▷ Use the RIR parameters from last step
    for $j \leftarrow 0, \dots, N_\mathrm{its.}$ do ▷ RIR optimization
      $J_\mathrm{RIR}(\psi_{n-1}^j) \leftarrow C(y, A_{\psi_{n-1}^j}(\hat{x}_0)) + R(\psi_{n-1}^j)$
      $\psi_{n-1}^{j+1} \leftarrow \psi_{n-1}^j - \mathrm{Adam}(J_\mathrm{RIR}(\psi_{n-1}^j))$ ▷ Optim. step
      $\psi_{n-1}^{j+1} \leftarrow \mathrm{project}(\psi_{n-1}^{j+1})$ ▷ Projection step
    $\psi_{n-1} \leftarrow \psi_{n-1}^M$
    $g_n \leftarrow \zeta(\tau_n) \nabla_{x_n} C(y, A_{\psi_{n-1}}(\hat{x}_0))$ ▷ LH score approx.
    $x_{n-1} \leftarrow x_n - \sigma_n(\sigma_{n-1} - \sigma_n)(s_n + g_n)$ ▷ Update step
  return $x_0$ ▷ Reconstructed audio signal

4. EXPERIMENTAL SETUP 4.1. Data We use VCTK [34] as clean speech, selecting 103 speakers for training, 2 for validation, and 2 for testing. We curate recorded RIRs from various public datasets (please visit our code repository for details). In total we obtain approximately 10,000 RIRs, and split them between training, validation, and testing using ratios 0.9, 0.05, and 0.05, respectively.
The training and validation sets are only used to train the baselines which require coupled reverberant/anechoic data. All data is resampled at 16 kHz. 4.2. Baselines We compare our method BUDDy to several blind supervised baselines such as NCSN++M [31] and diffusion-based SGMSE+ [14] and StoRM [15]. We also include blind unsupervised approaches leveraging traditional methods such as WPE [6] and Yohena et al. [7], as well as diffusion models Saito et al. [20] and GibbsDDRM [33] with code provided by the authors. For WPE, we take 5 iterations, a filter length of 50 STFT frames (400 ms) and a delay of 2 STFT frames (16 ms). 4.3. Hyperparameters and Training Configuration Data representation: We train the score model s\u03b8 using only the anechoic data from VCTK. For training, 4-s segments are randomly extracted from the utterances. Using publicly available code, the blind supervised models NCSN++M [31], SGMSE+ [14] and StoRM [15] are trained using coupled reverberant/anechoic speech, where the reverberant speech is obtained by convolving the anechoic speech from VCTK with the normalized RIRs. Reverberation operator: For all methods, STFTs are computed using a Hann window of 32 ms and a hop size of 8 ms. For subband filtering, we further employ 50% zero-padding to avoid aliasing artifacts. Given our sampling rate of fs = 16 kHz, this results in K = 513 frequency bins. We set the number of STFT frames of our operator to Nh = 100 (800 ms). We subsample the frequency scale in B = 26 bands, with a 125-Hz spacing between 0 and 1 kHz, a 250-Hz spacing between 1 and 3 kHz, and a 500-Hz spacing between 3 and 8 kHz. We optimize the RIR parameters \u03c8 with Adam, where the learning rate is set to 0.1, the momentum parameters to \u03b21 = 0.9, and \u03b22 = 0.99, and Nits. = 10 optimization iterations per diffusion step. We constrain the weights wb between 0 and 40 dB, \fTable 1: Dereverberation results obtained on VCTK-based reverberant datasets. 
Values indicate mean and standard deviation. We indicate for each method in the table if is blind (i.e. have no knowledge of the RIR) and/or unsupervised. Boldface numbers indicate best performance for supervised and unsupervised methods separately. For all metrics, higher is better. Matched Mismatched Method Blind Unsup. DNS-MOS PESQ ESTOI DNS-MOS PESQ ESTOI Reverberant 3.14 \u00b1 0.52 1.61 \u00b1 0.37 0.50 \u00b1 0.14 3.05 \u00b1 0.47 1.57 \u00b1 0.29 0.47 \u00b1 0.11 RIF+Post [5] \u2717 \u2713 3.41 \u00b1 0.47 2.66 \u00b1 0.40 0.76 \u00b1 0.09 3.55 \u00b1 0.45 2.86 \u00b1 0.31 0.78 \u00b1 0.09 InfDerevDPS [16] \u2717 \u2713 3.91 \u00b1 0.35 3.77 \u00b1 0.41 0.83 \u00b1 0.09 3.92 \u00b1 0.32 3.69 \u00b1 0.31 0.84 \u00b1 0.08 NCSN++M [31] \u2713 \u2717 3.75 \u00b1 0.38 2.85 \u00b1 0.55 0.80 \u00b1 0.10 3.61 \u00b1 0.39 2.08 \u00b1 0.47 0.64 \u00b1 0.09 SGMSE+M [14,31] \u2713 \u2717 3.88 \u00b1 0.32 2.99 \u00b1 0.48 0.78 \u00b1 0.09 3.74 \u00b1 0.34 2.48 \u00b1 0.47 0.69 \u00b1 0.09 StoRM [15] \u2713 \u2717 3.90 \u00b1 0.33 3.33 \u00b1 0.48 0.82 \u00b1 0.10 3.83 \u00b1 0.32 2.51 \u00b1 0.53 0.67 \u00b1 0.09 Yohena and Yatabe [7] \u2713 \u2713 2.99 \u00b1 0.56 1.80 \u00b1 0.33 0.55 \u00b1 0.12 2.94 \u00b1 0.44 1.71 \u00b1 0.29 0.51 \u00b1 0.10 WPE [32] \u2713 \u2713 3.24 \u00b1 0.54 1.81 \u00b1 0.42 0.57 \u00b1 0.14 3.10 \u00b1 0.48 1.74 \u00b1 0.37 0.54 \u00b1 0.12 Saito et al. [20] \u2713 \u2713 3.22 \u00b1 0.56 1.68 \u00b1 0.40 0.51 \u00b1 0.13 3.12 \u00b1 0.52 1.70 \u00b1 0.33 0.52 \u00b1 0.10 GibbsDDRM [33] \u2713 \u2713 3.33 \u00b1 0.53 1.70 \u00b1 0.37 0.51 \u00b1 0.13 3.30 \u00b1 0.52 1.75 \u00b1 0.36 0.52 \u00b1 0.11 BUDDy (proposed) \u2713 \u2713 3.76 \u00b1 0.41 2.30 \u00b1 0.53 0.66 \u00b1 0.12 3.74 \u00b1 0.38 2.24 \u00b1 0.54 0.65 \u00b1 0.12 and the decays \u03b1b between 0.5 and 28. This prevents the optimization from approaching degenerate solutions at early sampling stages. 
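The frequency-band layout and the box constraints of Sec. 4.3 can be made concrete with a small sketch. The projection step below is one plausible reading of project(\u00b7) in Algorithm 1 (elementwise clamping to the reported ranges), not necessarily the paper's exact implementation.

```python
import numpy as np

def band_edges():
    """Subsampled frequency grid from Sec. 4.3: 125-Hz spacing up to 1 kHz,
    250 Hz up to 3 kHz, and 500 Hz up to 8 kHz, giving B = 26 bands."""
    return np.concatenate([np.arange(0, 1000, 125),
                           np.arange(1000, 3000, 250),
                           np.arange(3000, 8001, 500)])

def project_rir_params(weights_db, decays):
    """Clamp subband weights w_b to [0, 40] dB and decays alpha_b to
    [0.5, 28]: a hypothetical elementwise reading of the projection step
    that keeps the optimization away from degenerate RIR solutions."""
    return np.clip(weights_db, 0.0, 40.0), np.clip(decays, 0.5, 28.0)
```

band_edges() returns 27 edges delimiting the 26 bands between 0 and 8 kHz (the Nyquist frequency at fs = 16 kHz).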
Furthermore, we rescale the denoised estimate \u02c6 x0 at each step to match the empirical dataset standard deviation \u03c3data = 5 \u00b7 10\u22122, so as to enforce a constraint on the absolute magnitudes of \u02c6 h\u03c8 and \u02c6 x0. Forward and reverse diffusion We set the extremal diffusion times to Tmax = 0.5 and Tmin = 10\u22124. For reverse diffusion, we follow Karras et al. [23] and employ a discretization of the diffusion time axis using N = 200 steps according to: \u2200n < N, \u03c4n = \u03c3n = \u0012 T 1/\u03c1 max + n N \u22121(T n/\u03c1 min \u2212T 1/\u03c1 max) \u0013\u03c1 , (12) with warping \u03c1 = 10. We use the second-order Euler-Heun stochastic sampler in [23] with Schurn = 50 and \u03b6\u2032 = 0.5 (prior scaling, see [27]), and the initial point xinit is taken to be the output of WPE [6] (with same parameters as the WPE baseline) plus Gaussian noise with standard deviation \u03c3 = Tmax. The annealing schedule \u03c3\u2032(\u03c4) in the noise regularization term in (11) is the same as the diffusion noise schedule \u03c3(\u03c4) but we bound it between extremal values \u03c3\u2032 min = 5 \u00d7 10\u22124 and \u03c3\u2032 max = 10\u22122. Network architecture: To remain consistent with [16], the unconditional score network architecture is NCSN++M [15, 31], a lighter variant of the NCSN++ [13] with 27.8M parameters instead of 65M. Training configuration: We adopt Adam as the optimizer to train the unconditional score model, with a learning rate of 10\u22124 and an effective batch size of 16 for 190k steps. We track an exponential moving average of the DNN weights with a decay of 0.999. Evaluation metrics: We assess the quality and intelligibility of speech using the intrusive Perceptual Evaluation of Speech Quality (PESQ) [35] and extended short-term objective intelligibility (ESTOI) [36]. We also employ the non-intrusive DNS-MOS [37], as a DNN-based mean opinion score (MOS) approximation. 5. 
RESULTS AND DISCUSSION Table 1 shows the dereverberation results for all baselines and indicates whether each approach is blind and/or unsupervised. We include the results for RIF+Post [5] and InfDerevDPS [16] in the informed scenario to show the upper bound of dereverberation quality one can achieve with perfect knowledge of the room acoustics. We use the same score model s\u03b8 and cost function C(\u00b7, \u00b7) for InfDerevDPS [16] as for BUDDy. Blind supervised approaches NCSN++M, SGMSE+M, and StoRM largely profit from the supervision during training, and boast a better performance compared to the unsupervised methods. However, in the mismatched setting, their performance dwindles because of their limited generalizability. In contrast, the proposed method BUDDy benefits from unsupervised training, and therefore changing the acoustic conditions barely impacts its performance: NCSN++M loses 0.78 PESQ when switching from the matched to the mismatched case, whereas BUDDy loses only 0.06. Our method then outperforms NCSN++M and comes within reach of other supervised approaches, although the generative nature of SGMSE+ and StoRM allows them to retain a relatively high generalization ability. We also observe that the traditional blind unsupervised methods such as WPE [6] and Yohena and Yatabe [7] can only perform limited dereverberation, as they do not benefit from the strong anechoic speech prior that learning-based methods parameterized with deep neural networks offer. Finally, we note that BUDDy performs significantly better on all metrics than the diffusion-based blind unsupervised baselines Saito et al. [20] and GibbsDDRM [33], as these perform mild dereverberation in the presented acoustic conditions, where the input direct-to-reverberant ratio is significantly lower than in the authors\u2019 setup. 6."
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04356v1.json b/abs_9K/test_abstract_short_2405.04356v1.json new file mode 100644 index 0000000000000000000000000000000000000000..6c60b648149e16d07e10fe8530baeb0e9e2ef33c --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04356v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04356v1", + "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation", + "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. 
We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "authors": "Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "main_content": "Introduction In recent years, multi-modal image generation has achieved remarkable success, driven by the advancements in Generative Adversarial Networks (GANs) [15] and diffusion models (DMs) [11, 18, 48]. Facial image processing has become a popular application for a variety of tasks, including face image generation [21, 39], face editing [6, 12, 30, 36, 37, 46], and style transfer [7, 64]. 
Many tasks typically utilize the pre-trained StyleGAN [21, 22], which can generate realistic facial images and edit facial attributes by manipulating the latent space using GAN inversion [39, 42, 58]. In these tasks, using multiple modalities as conditions is becoming a popular approach, which improves the user\u2019s controllability in generating realistic face images. *Corresponding author. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF2021R1A2C2006703). Figure 1. We present a method to map the diffusion features to the latent space of a pre-trained GAN, which enables diverse tasks in multi-modal face image generation and style transfer. Our method can be applied to 2D and 3D-aware face image generation. However, existing GAN inversion methods [51, 58] have poor alignment with inputs as they neglect the correlation between multi-modal inputs. They struggle to map the different modalities into the latent space of the pre-trained GAN, such as by mixing the latent codes or optimizing the latent code converted from a given image according to the input text. Recently, DMs have attracted increasing attention in multi-modal image generation thanks to the stability of training and the flexibility of using multiple modalities as conditions.
DMs [23, 53, 54] can control the multiple modalities and render diverse images by manipulating the latent or attention features across the time steps. However, existing text-to-image DMs rely on an autoencoder and text encoder, such as CLIP [41], trained on unstructured datasets collected from the web [40, 45], which may lead to unrealistic image generation. Moreover, some approaches address multi-modal face image generation in a 3D domain. In GAN inversion [14, 51], multi-view images can be easily acquired by manipulating the latent code with pre-trained 3D GANs. DMs, in contrast, are inefficient at learning 3D representations and struggle to generate multi-view images directly due to the lack of 3D ground-truth (GT) data for training [32, 47]; they can, however, be used as a tool to acquire training datasets for 3D-aware image generation [24, 33]. In this paper, we present a versatile face generative model that uses text and visual inputs. We propose an approach that combines the strengths of DMs and GANs and generates photo-realistic images with flexible control over facial attributes, which can be adapted to 2D and 3D domains, as illustrated in Figure 1. Our method employs a latent mapping strategy that maps the diffusion features into the latent space of a pre-trained GAN using multi-denoising step learning, producing the latent code that encodes the details of text prompts and visual inputs. In summary, our main contributions are: (i) We present a novel method to link a pre-trained GAN (StyleGAN [22], EG3D [4]) and DM (ControlNet [62]) for multi-modal face image generation. (ii) We propose a simple mapping network that links the pre-trained GAN\u2019s and DM\u2019s latent spaces and an attention-based style modulation network that enables the use of meaningful features related to multi-modal inputs.
(iii) We present a multi-denoising step training strategy that enhances the model\u2019s ability to capture the textual and structural details of multi-modal inputs. (iv) Our model can be applied to both 2D- and 3D-aware face image generation without additional data or loss terms and outperforms existing DM- and GAN-based methods. 2. Related Work 2.1. GAN Inversion GAN inversion approaches have gained significant popularity in the face image generation task [7, 31, 51, 59] using pre-trained 2D GANs, such as StyleGAN [21, 22]. This method has been extended to 3D-aware image generation [27, 60, 61] by integrating 3D GANs, such as EG3D [4]. GAN inversion can be categorized into learning-based, optimization-based, and hybrid methods. Optimization-based methods [44, 67] estimate the latent code by minimizing the difference between an output and an input image. Learning-based methods [1, 52] train an encoder that maps an input image into the latent space of the pre-trained GAN. Hybrid methods [58, 66] combine these two methods, producing an initial latent code and then refining it with additional optimizations. Our work employs learning-based GAN inversion, where a DM serves as the encoder. We produce latent codes by leveraging semantic features in the denoising U-Net, which can generate images with controlled facial attributes.
Moreover, DMs can generate and edit images by adjusting latent features over multiple denoising steps [2]. We focus on using latent features of DM, including intermediate features and cross-attention maps, across denoising steps to link them with the latent space of GAN and develop a multi-modal face image generation task. 2.3. Multi-Modal Face Image Generation Face generative models have progressed by incorporating various modalities, such as text [25], semantic mask [38, 55], sketch [5, 9], and audio [65]. Several methods adopt StyleGAN, which can generate high-quality face images and edit facial attributes to control the style vectors. The transformer-based models [3, 13] are also utilized, which improves the performance of face image generation by handling the correlation between multi-modal conditions using image quantization. A primary challenge faced in face generative models is to modify the facial attributes based on given conditions while minimizing changes to other attributes. Some methods [39, 57] edit facial attributes by manipulating the latent codes in GAN models. TediGAN [58] controls multiple conditions by leveraging an encoder to convert an input image into latent codes and optimizing them with a pre-trained CLIP model. Recent works [19, 35] use DMs to exploit the flexibility of taking multiple modalities as conditions and generate facial images directly from DMs. Unlike existing methods, we use the pre-trained DM [62] as an encoder to further produce the latent codes for the pre-trained GAN models. 3. Method 3.1. Overview Figure 2 illustrates the overall pipeline of our approach. During the reverse diffusion process, we use the middle and decoder blocks of a denoising U-Net in ControlNet [62] as an encoder E. A text prompt c, along with a visual condition x, are taken as input to the denoising U-Net. 
Subsequently, E produces the feature maps h from the middle block, and \f\ud835\udc300 \ud835\udefe \u2219\u2219\u2219 \ud835\udc61= 0 \ud835\udc61= \ud835\udc47 \ud835\udc3c0 \u2032 \ud835\udc3c0 \ud835\udc51 \ud835\udc210 \ud835\udc3c\ud835\udc47 \u2032 \u2219\u2219\u2219 Conv ReLU \ud835\udc21\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udc5a \ud835\udc300 \ud835\udc300 \ud835\udefd Conv ReLU FC \u0de0 \ud835\udc05\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd \ud835\udc1f0 \ud835\udc300 \ud835\udc5a \ud835\udc50 Reverse Process of Diffusion \ud835\udc1a\ud835\udc61 \ud835\udc1f\ud835\udc61 Max-pool Average Average Upsample \ud835\udc05\ud835\udc61 \ud835\udc00\ud835\udc61 \u0d25 \ud835\udc00\ud835\udc61 \u0d24 \ud835\udc05\ud835\udc61 Style Modulation Network \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \ud835\udc1a0 \ud835\udc50 \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Pixel-wise multiplication Pixel-wise addition Our Model Mapping Network AbSMNet Frozen Figure 2. Overview of our method. We use a diffusion-based encoder E, the middle and decoder blocks of a denoising U-Net, that extracts the semantic features ht, intermediate features ft, and cross-attention maps at at denoising step t. We present the mapping network M (Sec. 3.2) and the attention-based style modulation network (AbSMNet) T (Sec. 3.3) that are trained across t (Sec. 3.4). M converts ht into the mapped latent code wm t , and T uses ft and at to control the facial attributes from the text prompt c and visual input x. The modulation codes w\u03b3 t and w\u03b2 t are then used to scale and shift wm t to produce the final latent code, wt, that is fed to the pre-trained GAN G. We obtain the generation output I\u2032 t from our model Y and we use the image Id 0 from the U-Net after the entire denoising process for training T (Sec. 3.4). 
Note that only the networks with the dashed line ( ) are trainable, while others are frozen. the intermediate features f and the cross-attention maps a from the decoder blocks. h is then fed into the mapping network M, which transforms the rich semantic feature into a latent code wm. The Attention-based Style Modulation Network (AbSMNet), T , takes f and a as input to generate the modulation latent codes, w\u03b3 and w\u03b2, that determine facial attributes related to the inputs. The latent code w is then forwarded to the pre-trained GAN G that generates the output image I\u2032. Our model is trained across multiple denoising steps, and we use the denoising step t to indicate the features and images obtained at each denoising step. With this pipeline, we aim to estimate the latent code, w\u2217 t , that is used as input to G to render a GT image, Igt: w\u2217 t = arg min wt L(Igt, G(wt)), (1) where L(\u00b7, \u00b7) measures the distance between Igt and the rendered image, I\u2032 = G(wt). We employ learning-based GAN inversion that estimates the latent code from an encoder to reconstruct an image according to given inputs. 3.2. Mapping Network Our mapping network M aims to build a bridge between the latent space of the diffusion-based encoder E and that of the pre-trained GAN G. E uses a text prompt and a visual input, and these textual and image embeddings are aligned by the cross-attention layers [62]. The feature maps h from the middle block of the denoising U-Net particularly contain rich semantics that resemble the latent space of the generator [28]. Here we establish the link between the latent spaces of E and G by using ht across the denoising steps t. Given ht, we design M that produces a 512-dimensional latent code wm t \u2208RL\u00d7512 that can be mapped to the latent space of G: wm t = M(ht). (2) M is designed based on the structure of the map2style block in pSp [42], as seen in Figure 2. 
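As a hedged sketch of the mapping network M, the block below mimics a map2style-style head: repeated 2x2 pooling (standing in for learned strided convolutions, with random weights in place of trained ones) reduces the mid-block feature map h to a vector, and a fully connected layer emits L x 512 latent codes w^m. All sizes, the activation placement, and the default L = 14 (a typical StyleGAN depth at 256x256) are assumptions.

```python
import numpy as np

def map2style(h, n_latents=14, dim=512, seed=0):
    """Toy map2style-style head: downsample a (C, H, W) feature map to
    (C, 1, 1), then project to (n_latents, dim) latent codes. Assumes H
    and W are equal powers of two; weights are random placeholders."""
    rng = np.random.default_rng(seed)
    x = h
    while x.shape[-1] > 1:                       # downsample to 1x1
        C, H, W = x.shape
        x = x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))  # 2x2 pooling
        x = np.maximum(0.0, x)                   # ReLU nonlinearity
    flat = x.reshape(-1)
    w_fc = rng.standard_normal((n_latents * dim, flat.size)) / np.sqrt(flat.size)
    return (w_fc @ flat).reshape(n_latents, dim)  # latent codes w^m
```

The point of the sketch is the shape contract: whatever the mid-block resolution, M must emit codes shaped like the generator's latent input so they can be modulated and fed to G.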
This network consists of convolutional layers downsampling feature maps and a fully connected layer producing the latent code wm t . 3.3. Attention-based Style Modulation Network By training M with learning-based GAN inversion, we can obtain wm t and use it as input to the pre-trained GAN for image generation. However, we observe that ht shows limitations in capturing fine details of the facial attributes due to its limited spatial resolution and data loss during the encoding. Conversely, the feature maps of the DM\u2019s decoder blocks show rich semantic representations [53], benefiting from aggregating features from DM\u2019s encoder blocks via skip connections. We hence propose a novel Attentionbased Style Modulation Network (AbSMNet), T , that produces style modulation latent codes, w\u03b3 t , w\u03b2 t \u2208RL\u00d7512, by using ft and at from E. To improve reflecting the multimodal representations to the final latent code wt, we modulate wm t from M using w\u03b3 t and w\u03b2 t , as shown in Figure 2. We extract intermediate features, ft = {f n t }N n=1, from N different blocks, and cross-attention maps, at = {ak t }K k=1, from K different cross-attention layers of the n-th block, in E that is a decoder stage of denoising U-Net. The discrim\f(a) Cross-attention maps averaging for all denoising steps t= 0 \ud835\udc61= \ud835\udc47 (b) Cross-attention maps for individual denoising steps \ud835\udc00\ud835\udc61 0 \ud835\udc00\ud835\udc61 1 \ud835\udc00\ud835\udc61 2 \u0d25 \ud835\udc00\ud835\udc61 \ud835\udc00\ud835\udc47 1 \ud835\udc05\ud835\udc47 1 \u0de0 \ud835\udc05\ud835\udc47 1 (c) Example of an intermediate feature map Multi-modal inputs Output \u201cThe person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Figure 3. Visualization of cross-attention maps and intermediate feature maps. (a) represents the semantic relation information between an input text and an input semantic mask in the spatial domain. 
The meaningful representations of inputs are shown across all denoising steps and N different blocks. (b) represents N different cross-attention maps, At, at denoising steps t = T and t = 0. (c) shows the example of refined intermediate feature map \u02c6 F1 T at 1st block and t = T that is emphasized corresponding to input multi-modal conditions. The red and yellow regions of the map indicate higher attention scores. As the denoising step approaches T, the text-relevant features appear more clearly, and as the denoising step t approaches 0, the features of the visual input are more preserved. inative representations are represented more faithfully because ft consists of N multi-scale feature maps that can capture different sizes of facial attributes, which allows for finer control over face attributes. For simplicity, we upsample each intermediate feature map of ft to same size intermediate feature maps Ft = {Fn t }N n=1, where Fn t \u2208RH\u00d7W \u00d7Cn has H, W, and Cn as height, width and depth. Moreover, at is used to amplify controlled facial attributes as it incorporates semantically related information in text and visual input. To match the dimension with Ft, we convert at to At = {An t }N n=1, where An t \u2208RH\u00d7W \u00d7Cn, by max-pooling the output of the cross-attention layers in each decoder block and upsampling the max-pooling outputs. To capture the global representations, we additionally compute \u00af At \u2208RH\u00d7W \u00d71 by depth-wise averaging the max-pooling output of at over each word in the text prompt and upsampling it. As illustrated in Figures 3 (a) and (b), At and \u00af At represent the specific regions aligned with input text prompt and visual input, such as semantic mask, across denoising steps t. 
By a pixel-wise multiplication between Ft and At, we can obtain the refined intermediate feature maps \u02c6 Ft that emphasize the representations related to multi-modal inputs as shown in Figure 3 (c). Figure 4. Style modulation network in T . The refined intermediate feature maps \u02c6 Ft and \u02c6 \u00af Ft are used to capture local and global semantic representations, respectively. They are fed into the scale and shift networks, respectively. The weighted summations of these outputs are used as input to the map2style network, which finally generates the scale and shift modulation latent codes, w\u03b3 t and w\u03b2 t . The improved average feature map \u02c6 \u00af Ft \u2208RH\u00d7W \u00d71 is also obtained by multiplying \u00af At with \u00af Ft, where \u00af Ft \u2208RH\u00d7W \u00d71 is obtained by first averaging the feature maps in Ft = {Fn t }N n=1 and then depth-wise averaging the outputs. \u02c6 Ft and \u02c6 \u00af Ft distinguish text- and structural-relevant semantic features, which improves the alignment with the inputs.
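The refinement described above (bringing features and attention maps to a common resolution and multiplying them pixel-wise) can be sketched as follows. Nearest-neighbor upsampling and the way the global map is averaged from the refined features are assumptions made for illustration.

```python
import numpy as np

def upsample_nn(x, H, W):
    """Nearest-neighbor upsampling of a (C, h, w) map to (C, H, W)."""
    C, h, w = x.shape
    return x[:, np.arange(H) * h // H][:, :, np.arange(W) * w // W]

def refine_features(feats, attns, H=64, W=64):
    """Pixel-wise products F_hat^n = up(F^n) * up(A^n) over N decoder
    blocks, plus a depth-averaged global map standing in for F_bar_hat."""
    refined = [upsample_nn(F, H, W) * upsample_nn(A, H, W)
               for F, A in zip(feats, attns)]
    f_bar = np.mean([F.mean(axis=0) for F in refined], axis=0)  # global map
    return refined, f_bar
```

The multiplication acts as a soft spatial mask: regions with high attention scores (those aligned with the text prompt and visual input) are amplified, while unrelated regions are suppressed before the modulation codes are produced.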
We use \u02c6 Ft and \u02c6 \u00af Ft as input to the style modulation network that produces the modulation codes w\u03b3 t , and w\u03b2 t as shown in Figure 4. We capture both local and global features by using \u02c6 Ft, which consists of feature maps representing different local regions on the face, and \u02c6 \u00af Ft, which implies representations of the entire face. We concatenate N intermediate feature maps of \u02c6 Ft, concat(\u02c6 F1 t \u00b7 \u00b7 \u00b7 \u02c6 FN t ), and it is forward to the scale and shift networks that consist of convolutional layers and Leaky ReLU, forming the local modulation feature maps, \u02c6 F\u03b3l t and \u02c6 F\u03b2l t . We also estimate global modulation feature maps, \u02c6 F\u03b3g t and \u02c6 F\u03b2g t , by feeding \u02c6 \u00af Ft to the scale and shift network. The final scale, \u02c6 F\u03b3 t , and shift, \u02c6 F\u03b2 t , feature maps are estimated by the weighted summation: \u02c6 F\u03b3 t = \u03b1\u03b3 t \u02c6 F\u03b3l t + (1 \u2212\u03b1\u03b3 t )\u02c6 F\u03b3g t , (3) \u02c6 F\u03b2 t = \u03b1\u03b2 t \u02c6 F\u03b2g t + (1 \u2212\u03b1\u03b2 t )\u02c6 F\u03b2g t , where \u03b1\u03b3 t and \u03b1\u03b2 t are learnable weight parameters. Through the map2style module, we then convert \u02c6 F\u03b3 t and \u02c6 F\u03b2 t into the final scale, w\u03b3 t \u2208RL\u00d7512, and shift, w\u03b2 t \u2208RL\u00d7512, latent codes. With these modulation latent codes, we achieve more precise control over facial details while corresponding to the input multi-modal inputs at the pixel level. Finally, the mapped latent code wm t from M is modulated by w\u03b3 t and w\u03b2 t from T to get the final latent code wt that is used to obtain the generated image I\u2032 t as follows: wt = wm t \u2299w\u03b3 t \u2295w\u03b2 t , (4) I\u2032 t = G(wt). 
(5) \f10132 5987 13044 9807 rebuttal (a) \u201cThis person has brown hair, and eyeglasses.\u201d (b)\u201cThis person has mustache.\u201d (c) \u201cThis person has gray hair, and eyeglasses.\u201d Inputs TediGAN UaC Ours (a) (b) (c) (a) (b) (c) (a) (b) (c) (a) \u201cShe has high cheekbones, straight hair, black hair.\u201d (b)\u201cShe has high cheekbones, straight hair, blond hair.\u201d (c) \u201cHe has blond hair, sideburns.\u201d (a) \u201cHe has brown hair, and wavy hair.\u201d (b)\u201cHe has black hair, and straight hair.\u201d (c) \u201cHe has black hair, and goatee.\u201d Collaborative ControlNet Figure 5. Visual examples of the 2D face image generation using a text prompt and a semantic mask. For each semantic mask, we use three different text prompts (a)-(c), resulting in different output images (a)-(c). 3.4. Loss Functions To optimize M and T , we use reconstruction loss, perceptual loss, and identity loss for image generation, and regularization loss [42] that encourages the latent codes to be closer to the average latent code \u00af w. For training M, we use the GT image Igt as reference to encourage the latent code wm t to generate a photo-realistic image as follows: LM = \u03bbm 0 \u2225Igt \u2212G(wm t )\u22252+ (6) \u03bbm 1 \u2225F(Igt) \u2212F(G(wm t )\u22252+ \u03bbm 2 (1 \u2212cos(R(Igt), R(G(wm t ))))+ \u03bbm 3 \u2225E(zt, t, x, c) \u2212\u00af w\u22252, where R(\u00b7) is pre-trained ArcFace network [8], F(\u00b7) is the feature extraction network [63], zt is noisy image, and the hyper-parameters \u03bbm (\u00b7) guide the effect of losses. Note that we freeze T while training M. For training T , we use Id 0 produced by the encoder E into the reconstruction and perceptual losses. 
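Eq. (6) combines four terms: pixel reconstruction, perceptual distance, an identity cosine term, and latent regularization toward the average code. The sketch below shows the combination with stand-in feature and identity networks; the lambda weights are illustrative placeholders (not the paper's hyper-parameters), and mean-squared distances are used in place of the exact norms.

```python
import numpy as np

def mapping_loss(I_gt, I_gen, feat, ident, w_latent, w_avg,
                 lambdas=(1.0, 0.8, 0.1, 0.005)):
    """Hedged sketch of L_M in Eq. (6). feat and ident stand in for the
    pretrained feature network F and the ArcFace-style network R."""
    l_rec = np.mean((I_gt - I_gen) ** 2)              # reconstruction term
    f1, f2 = feat(I_gt), feat(I_gen)
    l_per = np.mean((f1 - f2) ** 2)                   # perceptual term
    r1, r2 = ident(I_gt), ident(I_gen)
    cos = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    l_id = 1.0 - cos                                  # identity term
    l_reg = np.mean((w_latent - w_avg) ** 2)          # latent regularization
    lm = lambdas
    return lm[0] * l_rec + lm[1] * l_per + lm[2] * l_id + lm[3] * l_reg
```

L_T in Eq. (7) has the same shape, except that the reconstruction and perceptual terms compare against the diffusion output I^d_0 rather than the ground truth, which is what lets T learn attribute control while the identity term still anchors to I_gt.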
With these losses, the loss $\mathcal{L}_T$ encourages the network to control facial attributes while preserving the identity of $I_{gt}$:
$$\mathcal{L}_T = \lambda^s_0 \|I^d_0 - G(w_t)\|_2 + \lambda^s_1 \|F(I^d_0) - F(G(w_t))\|_2 + \lambda^s_2 \big(1 - \cos(R(I_{gt}), R(G(w_t)))\big) + \lambda^s_3 \|E(z_t, t, x, c) - \bar{w}\|_2, \qquad (7)$$
where the hyper-parameters $\lambda^s_{(\cdot)}$ weight the individual losses. Similar to Equation 6, we freeze M while training T.
We further introduce a multi-step training strategy that considers the evolution of the feature representation in E over the denoising steps. We observe that E tends to focus more on text-relevant features at an early step, t = T, and on structure-relevant features at a later step, t = 0. Figure 3 (b) shows the attention maps $\bar{A}$, which vary across the denoising steps. By varying the denoising step at which the attention map is taken, we can thus capture both textual and structural features. To effectively capture the semantic details of multi-modal conditions, our model is therefore trained across multiple denoising steps.
4. Experiments
4.1. Experimental Setup
We use ControlNet [62] as the diffusion-based encoder that receives multi-modal conditions, including text and visual conditions such as a semantic mask and a scribble map. StyleGAN [22] and EG3D [4] are exploited as the pre-trained 2D and 3D GANs, respectively. See the Supplementary Material for the training details, the network architecture, and additional results.
Datasets. We employ the CelebAMask-HQ [29] dataset comprising 30,000 face RGB images and annotated semantic masks with 19 facial-component categories, such as skin, eyes, and mouth. We also use the textual descriptions provided by [58], corresponding to the CelebAMask-HQ dataset, which describe facial attributes such as black hair and sideburns. For the face image generation task using a scribble map, we obtain the scribble maps by applying PiDiNet [49, 50] to the RGB images in CelebAMask-HQ. We additionally compute camera parameters based on [4, 10] for 3D-aware image generation.
Figure 6. Visual examples of the 3D-aware face image generation using a text and a semantic mask. We show the images generated with inputs and arbitrary viewpoints.

Input conditions | Method | Model | Domain | FID↓ | LPIPS↓ | SSIM↑ | ID↑ | ACC↑ | mIoU↑
Text + semantic mask | TediGAN [58] | GAN | 2D | 54.83 | 0.31 | 0.62 | 0.63 | 81.68 | 40.01
Text + semantic mask | IDE-3D [51] | GAN | 3D | 39.05 | 0.40 | 0.41 | 0.54 | 47.07 | 10.98
Text + semantic mask | UaC [35] | Diffusion | 2D | 45.87 | 0.38 | 0.59 | 0.32 | 81.49 | 42.68
Text + semantic mask | ControlNet [62] | Diffusion | 2D | 46.41 | 0.41 | 0.53 | 0.30 | 82.42 | 42.77
Text + semantic mask | Collaborative [19] | Diffusion | 2D | 48.23 | 0.39 | 0.62 | 0.31 | 74.06 | 30.69
Text + semantic mask | Ours | GAN | 2D | 46.68 | 0.30 | 0.63 | 0.76 | 83.41 | 43.82
Text + semantic mask | Ours | GAN | 3D | 44.91 | 0.28 | 0.64 | 0.78 | 83.05 | 43.74
Text + scribble map | ControlNet [62] | Diffusion | 2D | 93.26 | 0.52 | 0.25 | 0.21 | – | –
Text + scribble map | Ours | GAN | 2D | 55.60 | 0.32 | 0.56 | 0.72 | – | –
Text + scribble map | Ours | GAN | 3D | 48.76 | 0.34 | 0.49 | 0.62 | – | –
Table 1. Quantitative results of multi-modal face image generation on CelebAMask-HQ [29] with annotated text prompts [58].

Comparisons. We compare our method with GAN-based models, such as TediGAN [58] and IDE-3D [51], and DM-based models, such as Unite and Conquer (UaC) [35], ControlNet [62], and Collaborative Diffusion (Collaborative) [19], for the face generation task using a semantic mask and a text prompt. IDE-3D is trained with a CLIP loss term, like TediGAN, to apply a text prompt for 3D-aware face image generation. ControlNet is used for face image generation using a text prompt and a scribble map. We use the official codes provided by the authors, and we downsample the results to 256 × 256 for comparison.
Evaluation Metrics.
For quantitative comparisons, we evaluate the image quality and semantic consistency using 2k sampled semantic mask–text and scribble map–text prompt pairs. Fréchet Inception Distance (FID) [17], LPIPS [63], and the Multiscale Structural Similarity (MS-SSIM) [56] are employed to evaluate visual quality and diversity. We also compute the mean ID similarity score (ID) [8, 57] before and after applying a text prompt. Additionally, for the face generation task using a semantic mask, we assess the alignment accuracy between the input semantic masks and the results using mean Intersection-over-Union (mIoU) and pixel accuracy (ACC).
4.2. Results
Qualitative Evaluations. Figure 5 shows the visual comparisons between ours and the existing methods for 2D face image generation using a text prompt and a semantic mask as input. We use the same semantic mask with different text prompts (a)-(c). TediGAN produces results consistent with the text prompt, as its latent codes are optimized using the input text prompt. However, its results are inconsistent with the input semantic mask, as highlighted in the red boxes. UaC shows good facial alignment with the input semantic mask, but its results contain unexpected attributes, such as glasses, that are not indicated in the inputs. Collaborative and ControlNet produce inconsistent, blurry, and unrealistic images. Our model is capable of preserving semantic consistency with the inputs while generating realistic facial images. As shown in Figure 5, our method preserves the structure of the semantic mask, such as the hairline, face position, and mouth shape, while changing the attributes through a text prompt. Figure 6 compares our method with IDE-3D [51] to validate the performance of 3D-aware face image generation
using a semantic mask and a text prompt. We use the same semantic mask with different text prompts in Figures 6 (a) and (b), and the same text prompt with different semantic masks in Figures 6 (c) and (d). The results of IDE-3D are well aligned with the semantic mask for the frontal face. However, IDE-3D fails to produce accurate results when a non-frontal face mask is used as input. Moreover, its results cannot reflect the text prompt. Our method can capture the details provided by the input text prompts and semantic masks, even in the 3D domain.
Figure 7. Visual examples of 3D-aware face image generation using text prompts and scribble maps. Using (1-4) the text prompts and their corresponding (a) scribble maps, we compare the results of (b) ControlNet with (c) multi-view images generated by ours.
Figure 7 shows visual comparisons with ControlNet on face generation from a text prompt and a scribble map. The results from ControlNet and our method are consistent with both the text prompt and the scribble map. ControlNet, however, tends to over-emphasize the characteristic details related to the input conditions. Our method can easily adapt to the pre-trained 3D GAN and produce photo-realistic multi-view images from various viewpoints.
Quantitative Evaluations. Table 1 reports the quantitative results on CelebAMask-HQ with text prompts [58]. Using text prompts and semantic masks, our method shows performance increases on all metrics in the 2D and 3D domains compared with TediGAN and UaC. Our model using the 2D GAN significantly improves the LPIPS, ID, ACC, and mIoU scores, surpassing TediGAN, UaC, ControlNet, and Collaborative, respectively.
It demonstrates our method's strong ability to generate photo-realistic images while better reflecting the input multi-modal conditions. For 3D-aware face image generation using a text prompt and a semantic mask, it is reasonable that IDE-3D shows the best FID score, as the method additionally uses an RGB image as input to estimate the latent code for face generation. Our LPIPS, SSIM, and ID scores are significantly better than those of IDE-3D, by 0.116, 0.23, and 0.24, respectively. Our method using the 3D GAN also exhibits superior ACC and mIoU scores for the 3D face generation task compared to IDE-3D, with score differences of 35.98% and 32.76%, likely due to its ability to reflect textual representations into spatial information.
Figure 8. Effect of M and T. (b) shows the results using only M, and (c) shows the effect of the cross-attention maps (A and Ā) in T. The major changes are highlighted with the white boxes.

Method | M | T | A_t | I_gt | I^d_0 | FID↓ | LPIPS↓ | ID↑ | ACC↑
(a) | ✓ | | | ✓ | ✓ | 62.08 | 0.29 | 0.62 | 81.09
(b) | ✓ | ✓ | | ✓ | ✓ | 48.68 | 0.28 | 0.66 | 82.86
(c) | ✓ | ✓ | ✓ | | ✓ | 54.27 | 0.31 | 0.58 | 80.58
(d) | ✓ | ✓ | ✓ | ✓ | | 61.60 | 0.29 | 0.62 | 80.04
(e) | ✓ | ✓ | ✓ | ✓ | ✓ | 44.91 | 0.28 | 0.78 | 83.05
Table 2. Ablation analysis on 3D-aware face image generation using a text prompt and a semantic mask. We compare (a) and (b) with (e) to show the effect of our style modulation network, and (c) and (d) with (e) to analyze the effect of $I_{gt}$ and $I^d_0$ in model training.
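The mask-alignment metrics reported in the tables, pixel accuracy (ACC) and mean Intersection-over-Union (mIoU), can be sketched as follows; the tiny 3×3 masks and the three-class label set are illustrative stand-ins, not the CelebAMask-HQ categories.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    # fraction of pixels whose predicted class matches the ground truth
    return (pred == gt).mean()

def mean_iou(pred, gt, num_classes):
    # per-class intersection-over-union, averaged over classes present in
    # either mask (classes absent from both are skipped)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2], [2, 2, 0]])
print(pixel_accuracy(pred, gt))             # 7 of 9 pixels agree
print(mean_iou(pred, gt, num_classes=3))    # mean of 1/3, 2/3, 4/5
```

Reported ACC/mIoU values in the paper are percentages of quantities computed this way over the evaluation set.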
In face image generation tasks using a text prompt and a scribble map, our method outperforms ControlNet in the FID, LPIPS, SSIM, and ID scores in both the 2D and 3D domains. Note that the ACC and mIoU scores are only applicable to semantic mask-based methods.
4.3. Ablation Study
We conduct ablation studies to validate the effectiveness of our contributions, including the mapping network M, the AbSM network T, and the loss functions $\mathcal{L}_M$ and $\mathcal{L}_T$.
Effectiveness of M and T. We conduct experiments with different settings to assess the effectiveness of M and T.
Figure 9. Effect of using $I^d_0$ from the denoising U-Net and the GT image $I_{gt}$ in model training. Using text prompts (1, 2) with (a) the semantic mask, we show face images using our model trained with (b) $I^d_0$, (c) $I_{gt}$, and (d) both.
We also show the advantages of using cross-attention maps in our model. The quantitative and qualitative results are presented in Table 2 and Figure 8, respectively. When using only M, we can generate face images that roughly preserve the structure of a given semantic mask (Figure 8 (a)), including the outline of the facial components (e.g., face and eyes), as shown in Figure 8 (b).
On the other hand, T enables the model to express face attribute details effectively, such as hair color and an open mouth, based on the multi-modal inputs in Figure 8 (c). The FID and ACC scores improve over the model using only M in Table 2 (b). We further present the impact of adding cross-attention maps to T for style modulation. Figure 8 (d) shows how the attention-based modulation approach enhances the quality of the results, particularly in terms of the sharpness of the desired face attributes and the overall consistency between the generated image and the multi-modal conditions. Table 2 (e) demonstrates the effectiveness of our method by showing improvements in FID, LPIPS, ID, and ACC. Our method, including both M and T with cross-attention maps, significantly improves the FID, showing our model's ability to generate high-fidelity images. From the improvement of the ID score, the cross-attention maps enable the details of the input conditions to be applied to the relevant facial components.
Model Training. We analyze the effect of the loss terms $\mathcal{L}_M$ and $\mathcal{L}_T$ by comparing the performance with the model trained using either $I^d_0$ from the denoising U-Net or the GT image $I_{gt}$. The model trained using $I^d_0$ produces the images in Figure 9 (b), which more closely reflect the multi-modal conditions (a), such as "goatee" and "hair contour". In Table 2 (c), the ACC score of this model is higher than that of the model trained only using $I_{gt}$ in Table 2 (d). The images generated by the model trained with $I_{gt}$ in Figure 9 (c) are more perceptually realistic, as evidenced by the lower LPIPS score compared to the model trained with $I^d_0$ in Table 2 (c) and (d). Using $I_{gt}$ also preserves more condition-irrelevant features, as inferred from the ID scores in Table 2 (c) and (d). In particular, our method combines the strengths of the two models, as shown in Figure 9 (d) and Table 2 (e).
Figure 10. Visual examples of 3D face style transfer. Our method generates stylized multi-view images by mapping the latent features of DM and GAN.
4.4. Limitations and Future Works
Our method can be extended to multi-modal face style transfer (e.g., face → Greek statue) by mapping the latent spaces of DM and GAN without CLIP losses or additional datasets, as shown in Figure 10. For the 3D-aware face style transfer task, we train our model using $I^d_0$, which replaces the GT image $I_{gt}$ in our loss terms. This method, however, is limited in that it cannot transfer extremely distinct style attributes from the artistic domain to the photo-realistic domain of the GAN. To better transfer the facial style in the 3D domain, we will investigate methods to map the diffusion features related to the input pose into the latent space of the GAN in future works. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04370v1.json b/abs_9K/test_abstract_short_2405.04370v1.json new file mode 100644 index 0000000000000000000000000000000000000000..da245643803c5aa987d1e632571939a4927f9c26 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04370v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04370v1", + "title": "Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos", + "abstract": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation.
However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.", + "authors": "Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation. 
However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.", + "main_content": "Introduction Accurately anticipating human intentions and future actions is important for artificial intelligence systems in robotics and extended reality [1, 2, 3]. Recent works have tried to tackle the problem from various perspectives, including action recognition and anticipation [4, 5, 6, 7], gaze prediction [8, 9, 10, 11], hand trajectory prediction [12, 13, 14, 15], and object affordance extraction [12, 16, 14, 17]. Among them, jointly predicting hand motion and object affordances can effectively facilitate more reasonable robot manipulation as the prior contextual information, which has been demonstrated on some robot platforms [1, 18, 19]. 
We believe that deploying such models, pretrained on internet-scale human videos, on robots is a promising path towards embodied agents. Therefore, our work aims to jointly predict hand trajectories and object affordances on egocentric videos as a concrete hand-object interaction (HOI) expression, following the problem modeling of previous works [12, 14].
∗Corresponding author: wanghesheng@sjtu.edu.cn. Preprint. Under review. arXiv:2405.04370v1 [cs.CV] 7 May 2024
Figure 1: Diff-IP2D vs. Existing Paradigm. The existing HOI prediction paradigm (a) tends to accumulate prediction errors under unidirectional constraints. In contrast, our proposed Diff-IP2D (b) directly forecasts all the future interaction states in parallel with denoising diffusion, mitigating error accumulation with bidirectional constraints (c). Moreover, we integrate egomotion information into our proposed paradigm to narrow the inherent gaps (d) in HOI prediction.
Currently, the state-of-the-art approaches [12, 13] for predicting hand trajectories and object affordances on egocentric videos tend to exploit the autoregressive (AR) model. They reason about the next HOI state only according to the previous steps (Fig. 1(a)). However, the expected "post-contact states" also affect the "pre-contact states" according to the human intention that persists across the holistic HOI process as an oracle.
There must therefore be more coherent constraints that reflect human intention and mutually connect the preceding and the following motion in the HOI prediction process. Inspired by this, we argue that predicting future HOI states in parallel, considering the bidirectional constraints within the holistic sequence, outperforms generating the next state autoregressively (Fig. 1(c)). With diffusion models emerging across multiple domains [20, 21, 22, 23, 24, 25, 26, 27], their strong forecasting capability has been widely validated. Therefore, we propose a diffusion-based method to predict future hand-object interactions in parallel, considering bidirectional constraints in the latent space, in contrast to traditional autoregressive generation (Fig. 1(b)). In the forward process, the past and future video images are first encoded into sequential latent features. Noise is gradually added to the future part of the sequence while the past features remain anchored. Subsequently, a Transformer-based network is devised to learn to reverse the diffusion and reconstruct the input latent features. Finally, the proposed predictors are exploited to recover future hand trajectories and object affordances from the denoised latents. A new regularization strategy is also proposed to link the two latent spaces adjacent to the denoising diffusion process. Moreover, we also identify two inherent gaps (Fig. 1(d)) affecting HOI prediction in the existing paradigm: 1) Directly predicting the projection of 3D future hand trajectories and object affordances on the 2D egocentric image plane is an ill-posed problem involving spatial ambiguities. There is generally a gap between 2D pixel movements and 3D real actions, which can be bridged by spatial transformation across multiple views changing with egomotion. 2) The past egocentric videos are absorbed to predict future interaction states on the last observed image, which is actually a "canvas" from a different view w.r.t. all the other frames.
Therefore, there is also a gap between the last observation (first-person view) and the other observations (analogous to a third-person view), caused by egomotion. To fill these two gaps together, we further propose to integrate the camera wearer's egomotion into our diffusion-based paradigm. The utilized homography features make the denoising model aware of the camera wearer's dynamics and of the spatial relationship between consecutive egocentric video frames. The main contributions of this paper are as follows: 1) We propose a diffusion-based hand-object interaction prediction method, dubbed Diff-IP2D. To the best of our knowledge, this is the first work to jointly forecast future hand trajectories and object affordances with a devised denoising diffusion probabilistic model using only 2D egocentric videos as input. It provides a foundational generative paradigm in the field of HOI prediction. 2) Homography-based egomotion features are integrated to fill the motion-related gaps inherent in HOI prediction on egocentric videos. 3) We extend the existing metrics and propose the first protocol for jointly evaluating the performance of hand trajectory prediction and object affordance prediction. 4) Comprehensive experiments are conducted to demonstrate that Diff-IP2D can predict plausible hand trajectories and object affordances compared to the state-of-the-art baselines, showing its potential for deployment on artificial intelligence systems.
2 Related Work
Understanding hand-object interaction. Human HOI comprehension can guide the downstream tasks in artificial intelligence systems. As a pioneering work, Calway et al. [28] connect specific human tasks to relevant objects, revealing the importance of object-centric understanding in different HOI modes. In contrast, Liu et al. [29] focus on capturing the changeable attributes of objects, which underlines the relationship between object-centric interaction and goal-oriented human activities.
After that, more and more works contribute to HOI understanding by pixel-wise semantic segmentation [30, 31, 32, 33], bounding-box-wise detection [34, 35, 36, 37], fine-grained hand/object pose estimation [38, 39, 40, 41, 42, 43]. Ego4D [44] further provides a standard benchmark that divides HOI understanding into several predefined subtasks. Predicting hand-object interaction. Analyzing only past human behavior may be insufficient for service robot manipulation or extended reality. Forecasting possible future object-centric HOI states based on historical observations is also valuable, which attracts increasing attention due to the general knowledge that can be transferred to robot applications [1, 18, 19, 45]. For example, Dessalene et al. [46] propose to generate contact anticipation maps and next active object segmentations as future HOI predictions. Liu et al. [14] first achieve hand trajectory and object affordance prediction simultaneously, revealing that predicting hand motion benefits the extraction of interaction hotspots. Following this work, Liu et al. [12] further develop an object-centric Transformer to jointly forecast future trajectories and affordances autoregressively, and annotate publicly available datasets to support future works. More recently, Bao et al. [13] lift the problem to 3D spaces where hand trajectories are predicted by an uncertainty-aware state space Transformer in an autoregressive manner. However, this method needs additional 3D perception inputs from the RGB-D camera. In this work, we still achieve joint hand trajectory and object affordance prediction on 2D human videos rather than in 3D space. We focus on capturing more general knowledge from only egocentric camera observations in an iterative non-autoregressive (iter-NAR) manner, rather than the autoregressive way of the state-of-the-art works [12, 13]. Diffusion-based egocentric video analysis. 
Diffusion models have been successfully utilized in exocentric and egocentric video prediction [47, 48, 49, 50, 2] due to their strong generation ability. With only egocentric videos as inputs, diffusion-based techniques can also achieve human mesh recovery [51, 52], 3D HOI reconstruction [53, 54], and 3D HOI synthesis [16, 55]. However, none of these works concentrate on the combination of fine-grained hand trajectories and object affordances as future HOI representations for potential utilization in artificial intelligence systems. Our proposed Diff-IP2D is the first to achieve this based on the denoising diffusion probabilistic model [20], and it dominates the existing paradigm [12, 13] in prediction performance on egocentric videos.
3 Proposed Method
3.1 Preliminaries
Task definition. Given a video clip of past egocentric observations $\mathcal{I} = \{I_t\}_{t=-N_p+1}^{0}$, we aim to predict future hand trajectories $\mathcal{H} = \{H^R_t, H^L_t\}_{t=1}^{N_f}$ ($H^R_t, H^L_t \in \mathbb{R}^2$) and potential object contact points $\mathcal{O} = \{O_n\}_{n=1}^{N_o}$ ($O_n \in \mathbb{R}^2$), where $N_p$ and $N_f$ are the numbers of frames in the past and future time horizons, respectively, and $N_o$ denotes the number of predicted contact points used to calculate interaction hotspots as object affordances. Following the previous works [12, 14], we predict the future positions of the right hand, the left hand, and the affordance of the next active object on the last observed image of the input video.
Diffusion models. In this work, we propose a diffusion-based approach that gradually corrupts the input into noisy features and then trains a denoising model to reverse this process. We first map the input images into a latent space $z_0 \sim q(z_0)$, which is then corrupted into standard Gaussian noise $z_S \sim \mathcal{N}(0, I)$. In the forward process, the perturbation operation can be represented as $q(z_s \mid z_{s-1}) = \mathcal{N}(z_s; \sqrt{1-\beta_s}\, z_{s-1}, \beta_s I)$, where the $\beta_s$ are predefined variance scales. In the reverse process, we train a denoising diffusion model to gradually reconstruct the latent $z_0$ from the noisy $z_S$. The denoised features can be used to recover the final future hand trajectories and object affordances.
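The forward process above can be sketched numerically: one single-step perturbation $q(z_s \mid z_{s-1})$, and the standard closed-form marginal $q(z_s \mid z_0) = \mathcal{N}(\sqrt{\bar\alpha_s}\, z_0, (1-\bar\alpha_s) I)$ obtained by composing the steps. The linear schedule and step count below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# illustrative variance schedule (not the paper's actual choice)
betas = np.linspace(1e-4, 0.02, 1000)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # cumulative product of (1 - beta_s)

def forward_step(z_prev, s, rng):
    # single-step perturbation q(z_s | z_{s-1})
    noise = rng.standard_normal(z_prev.shape)
    return np.sqrt(1.0 - betas[s]) * z_prev + np.sqrt(betas[s]) * noise

def forward_marginal(z0, s, rng):
    # closed form q(z_s | z_0) = N(sqrt(alpha_bar_s) z0, (1 - alpha_bar_s) I)
    noise = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[s]) * z0 + np.sqrt(1.0 - alpha_bar[s]) * noise

# Sanity check: the deterministic scale applied to z0 after s single steps
# equals sqrt(alpha_bar[s]).
scale_iterative = np.prod(np.sqrt(alphas[: 500 + 1]))
print(np.isclose(scale_iterative, np.sqrt(alpha_bar[500])))  # True
```

The closed form is what makes training efficient: any corruption level $s$ can be sampled directly from $z_0$ without iterating the chain.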
In the reverse process, we set a denoising diffusion model to gradually reconstruct the latent z0 from the noisy zS. The denoised features can be used to recover the final future hand trajectories and object affordances. 3 \fforward process future HOI features conditional past HOI features reverse process Multi-Feature Extractor egomotion homography Hand Trajectory Head trajectory loss shared weights regularization affordance loss diffusion-related losses Input: sequential past egocentric images Output: future HOI states feature space (s=S) Side-Oriented Fusion Module MADT Predictors MADT Object Affordance Head global/right/left intermediate features right/left fused features diffusion process feature space (s=S/2) feature space (s=0) Hand Trajectory Head Figure 2: System Overview of Diff-IP2D. Our proposed paradigm takes in sequential past egocentric images and jointly predicts hand trajectories and object affordances as future HOI states. The observations are mapped to the latent feature space for the diffusion process. 3.2 Architecture System overview. Accurately reconstructing the future part of the input sequence is critical in the diffusion-based prediction task. We empirically found that ground-truth hand waypoints Hgt = {HR,gt t , HL,gt t }Nf t=1(HR,gt t , HL,gt t \u2208R2) and contact points Ogt = {Ogt n}No n=1(Ogt n \u2208R2) provide discrete and sparse supervision signals for reconstruction, which is not enough for capturing possible high-level semantics such as human intentions in the denoising process. Therefore, as Fig. 2 shows, we first use Multi-Feature Extractor and Side-Oriented Fusion Module to transform the input images into latent HOI features, and then implement diffusion-related operation in the latent continuous space. The HOI features denoised by Motion-Aware Denoising Transformer are further absorbed by Hand Trajectory Head and Object Affordance Head to generate future hand trajectories and object hotspots. Multi-Feature Extractor (MFE). 
Following the previous work [12], we use an MFE that consists of a pretrained Temporal Segment Network (TSN) provided by Furnari et al. [34], RoIAlign [56] with average pooling, and a Multilayer Perceptron (MLP) to extract hand, object, and global features for each sequence image $I_t \in \mathcal{I}$. The positions of the hand-object bounding boxes are also encoded into feature vectors fused with the hand and object features.
Side-Oriented Fusion Module (SOFM). Our proposed SOFM is a learnable linear transformation that fuses the above-mentioned three types of feature vectors into their final latent form, for each side separately. Specifically, the global features and right-side features (right-hand/object features) are concatenated into the right-side HOI features $\mathcal{F}^R = \{F^R_t\}_{t=-N_p+1}^{X}$ ($F^R_t \in \mathbb{R}^a$, with $X = N_f$ for training and $X = 0$ for inference). The operation and feature sizes are the same for the left-side counterparts, leading to $\mathcal{F}^L = \{F^L_t\}_{t=-N_p+1}^{X}$. We further concatenate the side-oriented features along the time axis to generate the input latents $F^R_{seq}, F^L_{seq} \in \mathbb{R}^{(N_p+X)\times a}$ for the following diffusion model.
Motion-Aware Denoising Transformer (MADT). Our proposed MADT takes in the noisy latent HOI features and reconstructs future HOI features for the following predictors, conditioned on the past HOI counterparts. MADT consists of several stacked Transformer layers, as shown in Fig. 3. Inspired by the text generation technique [26], we anchor the past HOI features in both the forward and reverse processes. We only impose noise and denoise at the positions of the future feature sequence. The features of the two sides are denoised using the same model, leading to $\hat{F}^R_{seq}$ and $\hat{F}^L_{seq}$. In addition, egomotion guidance is proposed here to fill the gaps mentioned in Sec. 1. Specifically, we first extract Scale-Invariant Feature Transform (SIFT) descriptors to find the pixel correspondences between two adjacent images of the past observations $\mathcal{I}$. Then we calculate the homography matrix with RANSAC, which finds the transformation that maximizes the number of inliers among the keypoint pairs. We accumulate the consecutive homography matrices and obtain $M_{seq} \in \mathbb{R}^{N_p\times 3\times 3}$, representing the camera wearer's motion between $I_t$ ($t \leq 0$) and $I_0$. These are further linearly embedded into an egomotion feature $E_{seq} \in \mathbb{R}^{N_p\times b}$ by the Motion Encoder. The multi-head cross-attention module
Then we calculate the homography matrix with RANSAC that finds a transformation to maximize the number of inliers in the keypoint pairs. We accumulate the consecutive homography matrices and obtain Mseq \u2208RNp\u00d73\u00d73 representing the camera wearer\u2019s motion between It (t \u22640) and I0. They are further linearly embedded into an egomotion feature Eseq \u2208RNp\u00d7b by Motion Encoder. The multi-head cross-attention module 4 \fMHSA Add & Norm MHCA Add & Norm FFN Add & Norm past HOI features TE PE egomotion feature latent noisy samples denoised future HOI features \u3002 homography Motion Encoder N X input video clip \u3002\u3002 t m1,1 m1,2 m1,3 m2,1 m2,2 m2,3 m3,1 m3,2 m3,3 ... ... ... ... ... ... Figure 3: Architecture of our proposed MADT. MADT receives corrupted latent HOI features with the position embedding (PE) and time embedding (TE), and outputs denoised future HOI features. (MHCA) in the devised Transformer layer then absorbs the egomotion feature to guide the denoising process. More analysis on the use of egomotion guidance can be found in Appendix, Sec. B. Predictors. Our proposed predictors consist of Hand Trajectory Head (HTH) and Object Affordance Head (OAH). HTH contains an MLP that receives the future parts of the denoised features, \u02c6 F R seq[Np+1: Np+Nf] and \u02c6 F L seq[Np+1 : Np+Nf] to generate future waypoints H of two hands. As to OAH, we empirically exploit Conditional Variational Autoencoder (C-VAE) [57] to generate possible contact points O in the near future. Take the right hand as an example, the condition is selected as the time-averaged \u02c6 F R seq and predicted waypoints HR t . Note that we additionally consider denoised future HOI features \u02c6 F R seq[Np+1 : Np+Nf] (t>0) besides the features from the past observation (t\u22640) for object affordance prediction. This aligns with the intuitive relationship between the contact points and the overall interaction process. 
Therefore, we integrate richer conditional features from trajectory prediction into the object affordance prediction, compared to the previous work [12], which is conditioned only on historical features.

3.3 Training

Forward process. We implement partial noising [26] in the forward process during training. Taking the right side as an example, the output of SOFM is first extended by a Markov transition $q(z_0|F^R_{seq}) = \mathcal{N}(F^R_{seq}, \beta_0 I)$, where $F^R_{seq} \in \mathbb{R}^{(N_p+N_f) \times a}$. We discard the embedding process from Gong et al. [26] since the HOI feature $F^R_{seq}$ is already in the continuous latent space. In each following forward step of the diffusion model, we implement $q(z_s|z_{s-1})$ by adding noise to the future part of $z_{s-1}$, i.e., $z_{s-1}[N_p+1:N_p+N_f]$, for both sides.

Reverse process. After corrupting the initial $z_0$ to $z_S$ by the forward process, our proposed MADT is adopted to denoise $z_S$ back to $z_0$ in a classifier-free manner. Considering the guidance of egomotion features, the reverse process can be modeled as $p_{MADT}(z_{0:S}) := p(z_S) \prod_{s=1}^{S} p_{MADT}(z_{s-1}|z_s, M_{seq})$. Specifically, the MADT model $f_{MADT}(z_s, s, M_{seq})$ predicts the injected noise for each forward step with $p_{MADT}(z_{s-1}|z_s, M_{seq}) = \mathcal{N}(z_{s-1}; \mu_{MADT}(z_s, s, M_{seq}), \sigma_{MADT}(z_s, s, M_{seq}))$. The same denoising operation and motion-aware guidance are applied to the HOI features of both sides.

Training objective. The loss function to train the networks in Diff-IP2D contains four parts: diffusion-related losses, a trajectory loss, an affordance loss, and an additional regularization term (see Fig. 2). Taking the right side as an example, we use the variational lower bound $\mathcal{L}^R_{VLB}$ as the diffusion-related losses:

$\mathcal{L}^R_{VLB} = \sum_{s=2}^{S} ||z^R_0 - f_{MADT}(z^R_s, s, M_{seq})||^2 + ||F^R_{seq} - \hat{F}^R_{seq}||^2,$  (1)

where $\hat{F}^R_{seq} = f_{MADT}(z^R_1, 1, M_{seq})$.
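The partial-noising forward step above (noise only on the future rows, past rows anchored) can be sketched in a few lines. This is a simplified numpy illustration under a generic DDPM-style update; `beta` and the shapes are illustrative, not the authors' values, and we use 0-based slicing where the paper writes $[N_p+1:N_p+N_f]$.

```python
import numpy as np

def partial_noising_step(z_prev, Np, beta, rng):
    """One forward step q(z_s | z_{s-1}) that corrupts only the future part
    of the latent sequence, leaving the Np past rows anchored."""
    z = z_prev.copy()
    noise = rng.standard_normal(z[Np:].shape)
    # DDPM-style mean/variance update applied to the future rows only
    z[Np:] = np.sqrt(1.0 - beta) * z[Np:] + np.sqrt(beta) * noise
    return z
```

Iterating this step from $z_0$ yields the corrupted $z_S$ whose past segment is untouched, matching the anchoring used in both training and inference.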
To reconstruct hand trajectories beyond the latent feature space, we further define the trajectory loss $\mathcal{L}^R_{traj}$ as the distance between the ground-truth waypoints and the ones predicted by HTH:

$\mathcal{L}^R_{traj} = \sum_{t=1}^{N_f} ||H^R_t - H^{R,gt}_t||^2,$  (2)

where $H^R_t = f_{HTH}(\hat{F}^R_{seq}[N_p+1:N_p+N_f])$. We only focus on the future part of the holistic sequence when computing $\mathcal{L}^R_{traj}$, since we want HTH to be more sensitive to predictions rather than biased toward past observations. As to object affordance prediction, we also compute the affordance loss $\mathcal{L}_{aff}$ after multiple stochastic samplings, considering the next active object recognized following Liu et al. [12] (assumed to be on the right side here for brevity):

$\mathcal{L}_{aff} = \sum_{n=1}^{N_o} ||O_n - O^{gt}_n||^2 + c\mathcal{L}_{KL},$  (3)

where $O_n = f_{OAH}(\hat{F}^R_{seq}, H^R_t)$, and $\mathcal{L}_{KL} = \frac{1}{2}(-\log \sigma^2_{OAH}(\hat{F}^R_{seq}, H^R_t) + \mu^2_{OAH}(\hat{F}^R_{seq}, H^R_t) + \sigma^2_{OAH}(\hat{F}^R_{seq}, H^R_t) - 1)$ is the KL-divergence regularization for the C-VAE, scaled by $c = 10^{-3}$. The latent features and predicted hand waypoints are fused by an MLP as suggested by the previous work [12]. We consider both the reconstructed future HOI features $\hat{F}^R_{seq}[N_p+1:N_p+N_f]$ and the anchored past counterparts $\hat{F}^R_{seq}[0:N_p]$, compared to [12], as mentioned before. We also notice that the latent feature spaces before and after the denoising diffusion process represent the same "profile" of the input HOI sequence. Therefore, we propose an additional regularization term implicitly linking $F^R_{seq}$ and $\hat{F}^R_{seq}$ via hand trajectory prediction:

$\mathcal{L}^R_{reg} = \sum_{t=1}^{N_f} ||\tilde{H}^R_t - H^{R,gt}_t||^2,$  (4)

where $\tilde{H}^R_t = f_{HTH}(F^R_{seq}[N_p+1:N_p+N_f])$. Although Eq. (4) does not explicitly contain the term $\hat{F}^R_{seq}$, its training direction is the same as Eq. (2), thus maintaining training stability. The regularization helps the convergence of Diff-IP2D by consistently constraining the two latent spaces alongside the diffusion process.
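For a diagonal Gaussian posterior, the C-VAE KL regularizer in Eq. (3) has the closed form below. This is a numpy sketch with illustrative inputs; `mu` and `log_var` stand in for the OAH encoder outputs, and the names are ours.

```python
import numpy as np

def kl_regularizer(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)) as in Eq. (3):
    0.5 * (-log sigma^2 + mu^2 + sigma^2 - 1), summed over latent dims."""
    return 0.5 * np.sum(-log_var + mu**2 + np.exp(log_var) - 1.0)
```

The term vanishes exactly when the posterior matches the standard normal prior (mu = 0, sigma = 1) and grows as the posterior drifts away, which is what lets the C-VAE sample diverse contact points at test time.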
Here we do not use object affordance prediction for regularization because we empirically found that incorporating OAH degrades training efficiency while its positive effect is not obvious. Finally, we obtain the total loss to train our proposed Diff-IP2D:

$\mathcal{L}_{total} = \lambda_{VLB}(\mathcal{L}^R_{VLB} + \mathcal{L}^L_{VLB}) + \lambda_{traj}(\mathcal{L}^R_{traj} + \mathcal{L}^L_{traj}) + \lambda_{aff}\mathcal{L}_{aff} + \lambda_{reg}(\mathcal{L}^R_{reg} + \mathcal{L}^L_{reg}),$  (5)

where $\lambda_{VLB}$, $\lambda_{traj}$, $\lambda_{aff}$, and $\lambda_{reg}$ are the weights balancing the different losses. Besides, we leverage the importance sampling technique proposed in improved DDPM [58], which encourages the training process to focus more on the steps with relatively large $\mathcal{L}_{total}$.

3.4 Inference

In the inference stage, we first sample $F^R_{noise}, F^L_{noise} \in \mathbb{R}^{N_f \times a}$ from a standard Gaussian distribution, which are then concatenated with $F^R_{seq}, F^L_{seq} \in \mathbb{R}^{N_p \times a}$ along the time axis to generate $z^R_S$ and $z^L_S$. Then we use MADT to predict $z^R_0$ and $z^L_0$ based on DDIM sampling [59]. Note that we anchor the past part of the reparameterized $z_s$ as the fixed condition in every step of the inference process, following Gong et al. [26]. Finally, the generated $\hat{F}^R_{seq}$ and $\hat{F}^L_{seq}$ are used to predict future hand waypoints and contact points by $f_{HTH}(\cdot)$ and $f_{OAH}(\cdot)$ as mentioned before. It can be seen from the inference stage that Diff-IP2D can be regarded as an iter-NAR model in the latent feature space. Compared to the state-of-the-art autoregressive baselines, our approach shifts the iteration from the time axis to the denoising direction, as shown in Fig. 4.

[Figure 4: Comparison of AR and our iter-NAR prediction.]
This alleviates the accumulated artifacts caused by the limited iteration in the time dimension, and maintains bidirectional constraints among the sequential features to generate future HOI states in parallel, providing a deeper understanding of human intention. We further present the mathematical relationship between the two iter-NAR models, Diff-IP2D for HOI prediction and DiffuSeq [26] for text generation, in Appendix, Sec. A.

4 Experiments

4.1 Experimental setups

Datasets. Following the previous work [12], we utilize three publicly available datasets: Epic-Kitchens-55 (EK55) [60], Epic-Kitchens-100 (EK100) [61], and EGTEA Gaze+ (EG) [11]. For the EK55 and EK100 datasets, we sample the past $N_p = 10$ frames (2.5 s) to forecast HOI states in the future $N_f = 4$ frames (1.0 s), both at 4 FPS. As to the EG dataset, $N_p = 9$ frames (1.5 s) are used for $N_f = 3$ HOI predictions (0.5 s) at 6 FPS. See the Appendix, Sec. C.2 for more details.

Diff-IP2D configuration. MFE extracts the hand, object, and global feature vectors, all with a size of 512, for each input image. For the EK55 and EK100 datasets, the outputs of SOFM, $F^R_{seq}, F^L_{seq}$, have the size $14 \times 512$ for training and $10 \times 512$ for inference. For the EG dataset, $F^R_{seq}, F^L_{seq}$ are $12 \times 512$ for training and $9 \times 512$ for inference. As to the diffusion process, the total number of steps $S$ is set to 1000. We also provide an ablation study on multiple steps for training and inference in Appendix, Sec. D.3. The square-root noise schedule from Diffusion-LM [62] is adopted here for the forward diffusion process. MADT has 6 Transformer layers (Fig. 3) for denoising, where the embedding dimension is 512, the number of heads is set to 4, and the intermediate dimension of the feed-forward layer is set to 2048. The Motion Encoder linearly projects each homography matrix to an egomotion feature vector of size 512.
We use an MLP with hidden dimensions 256 and 64 to predict the hand waypoints as HTH, and a C-VAE containing an MLP with a hidden dimension of 512 to predict contact points as OAH. The training configurations can be found in Appendix, Sec. C.2. In the inference stage, we generate 10 candidate samples for each prediction.

Baseline configuration. We choose Constant Velocity Hand (CVH), Seq2Seq [63], FHOI [14], OCT [12], and USST [13] as the baselines for hand trajectory prediction. CVH is the most straightforward one, which assumes the two hands remain in uniform motion over the future time horizon with their average velocity during the past observations. Besides, we adjust the input and architecture of USST to the 2D prediction task since it was originally designed for 3D hand trajectory prediction. We choose Center Object [14], Hotspots [64], FHOI [14], OCT [12], and Final Hand of USST [13] (USST-FH) as the baselines for object affordance prediction. USST-FH puts a mixture of Gaussians at the last hand waypoint predicted by USST since its vanilla version can only predict waypoints.

Evaluation metrics. Following the previous work [14, 12, 13], we use Final Displacement Error (FDE) to evaluate prediction performance on hand trajectories. Considering that the general knowledge of "post-contact trajectories" extracted from human videos is potentially beneficial to robot manipulation [1, 18], we additionally extend the metric Average Displacement Error to Weighted Displacement Error (WDE):

$WDE = \frac{1}{2N_f} \sum_{R,L} \sum_{t=1}^{N_f} \frac{t}{N_f} D(H_t, H^{gt}_t),$  (6)

where $D(\cdot)$ denotes the L2 distance function, so the later waypoints contribute larger errors. We select the mean error among the 10 samples for each hand trajectory prediction. As to the object affordance prediction, we use Similarity Metric (SIM) [65], AUC-Judd (AUC-J) [66], and Normalized Scanpath Saliency (NSS) [67] as evaluation metrics.
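The WDE metric of Eq. (6) can be implemented directly. A minimal numpy sketch, where each trajectory array holds one hand's waypoints of shape (Nf, 2) and the function name is ours:

```python
import numpy as np

def wde(pred_right, gt_right, pred_left, gt_left):
    """Weighted Displacement Error, Eq. (6): per-step L2 errors weighted
    by t/Nf (later waypoints weigh more), summed over both hands and
    normalized by 2*Nf."""
    Nf = pred_right.shape[0]
    w = np.arange(1, Nf + 1) / Nf                    # t / Nf
    total = 0.0
    for pred, gt in ((pred_right, gt_right), (pred_left, gt_left)):
        d = np.linalg.norm(pred - gt, axis=-1)       # D(H_t, H_t^gt)
        total += np.sum(w * d)
    return total / (2 * Nf)
```

Perfect predictions give a WDE of zero, and a fixed offset late in the horizon is penalized more than the same offset early on, which is the intended bias toward post-contact accuracy.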
We use all 10 contact point candidates to compute the metric values for each affordance prediction. Moreover, we propose a novel object-centric protocol to jointly evaluate the two prediction tasks. We first calculate the averaged hand waypoints $\bar{H}^R_t$ and $\bar{H}^L_t$ for each future timestamp from the multiple samples. Then we select the waypoint closest to each predicted contact point $O_n$ as an additional "interaction point", which can be formulated as:

$\bar{H}^{ip}_n = \arg\min_{R,L,t} D(\bar{H}_t, O_n).$  (7)

Finally, the joint hotspot is predicted using $\{\bar{H}^{ip}_n \cup O_n\}_{n=1}^{N_o}$. This protocol comprehensively considers object-centric attention, since HOI changes the object states and hand waypoints must have a strong correlation with object positions. Note that we use the same quantitative metrics as for object affordance prediction, denoted as SIM*, AUC-J*, and NSS*. More clarifications about our proposed new protocol can be found in Appendix, Sec. C.1.
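The interaction-point selection of Eq. (7) amounts to a nearest-waypoint lookup over both hands and all future timestamps. A numpy sketch (names are ours; the averaged waypoints of the two hands are stacked into one array):

```python
import numpy as np

def interaction_point(avg_waypoints, contact_point):
    """Eq. (7): among the averaged waypoints of both hands (stacked into
    a (2*Nf, 2) array), return the one closest to a predicted contact
    point, used as the additional 'interaction point'."""
    d = np.linalg.norm(avg_waypoints - contact_point, axis=-1)
    return avg_waypoints[np.argmin(d)]
```

Each selected interaction point is then pooled with its contact point to form the joint hotspot used for SIM*, AUC-J*, and NSS*.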
Table 1: Comparison of performance on hand trajectory and object affordance prediction.

Hand trajectory prediction (WDE \u2193 / FDE \u2193):
approach              EK55 WDE  EK55 FDE  EK100 WDE  EK100 FDE  EG WDE  EG FDE
CVH                   0.636     0.315     0.658      0.329      0.689   0.343
Seq2Seq [63]          0.505     0.212     0.556      0.219      0.649   0.263
FHOI [14]             0.589     0.307     0.550      0.274      0.557   0.268
OCT [12]              0.446     0.208     0.467      0.206      0.514   0.249
USST [13]             0.458     0.210     0.475      0.206      0.552   0.256
Diff-IP2D (ours)      0.411     0.181     0.407      0.187      0.478   0.211

Object affordance prediction (SIM \u2191 / AUC-J \u2191 / NSS \u2191, per dataset EK55 | EK100 | EG):
Center Object [14]    0.083 0.553 0.448 | 0.081 0.558 0.401 | 0.094 0.562 0.518
Hotspots [64]         0.156 0.670 0.606 | 0.147 0.635 0.533 | 0.150 0.662 0.574
FHOI [14]             0.159 0.655 0.517 | 0.120 0.548 0.418 | 0.122 0.506 0.401
OCT [12]              0.213 0.710 0.791 | 0.187 0.677 0.695 | 0.227 0.704 0.912
USST-FH [13]          0.208 0.682 0.757 | 0.179 0.658 0.754 | 0.190 0.675 0.729
Diff-IP2D (ours)      0.226 0.725 0.980 | 0.211 0.736 0.917 | 0.242 0.722 0.956

Joint evaluation (SIM* \u2191 / AUC-J* \u2191 / NSS* \u2191, per dataset EK55 | EK100 | EG):
FHOI [14]             0.130 0.602 0.487 | 0.113 0.545 0.409 | 0.118 0.501 0.379
OCT [12]              0.219 0.720 0.848 | 0.182 0.684 0.662 | 0.194 0.672 0.752
Diff-IP2D (ours)      0.222 0.730 0.888 | 0.204 0.727 0.844 | 0.226 0.701 0.825

[Figure 5: Visualization of hand trajectory prediction on Epic-Kitchens. The waypoints from ground-truth labels, Diff-IP2D, and the second-best baseline [12] are connected by red, white, and blue dashed lines respectively.]

4.2 Separate evaluation on hand trajectory and object affordance prediction

We first present the evaluation results on hand trajectory prediction. As Tab. 1 depicts, our proposed Diff-IP2D outperforms all the baselines on the EK55 and EK100 datasets in terms of WDE and FDE. This is mainly achieved by the devised iter-NAR paradigm of Diff-IP2D, which alleviates the degeneration seen in AR baselines, as well as by the egomotion guidance.
The visualization of the related hand prediction results is shown in Fig. 5. It can be seen that our proposed method can better capture the camera wearer\u2019s intention (such as putting the food in the bowl) and generate more reasonable future trajectories even if there is a lack of past observations for hands (such as reaching out towards the table). Besides, our method can predict a good final hand position although there is a large shift in the early stage (the subfigure in the bottom right corner of Fig. 5), which benefits from our diffusion-based parallel generation. When directly transferring the models trained on Epic-Kitchens to the unseen EG dataset, our method still outperforms the other baselines, which improves by 7.0% and 15.3% against the second-best method on WDE and FDE respectively. This reveals the solid generalization capability of our diffusion-based approach across different environments. The comparison results of object affordance prediction are also shown in Tab. 1. Our proposed Diff-IP2D predicts the hotspots with larger SIM, AUC-J, and NSS compared to all the baselines on both Epic-Kitchens data and unseen EG data. Fig. 6 illustrates the predicted contact points with minimum distances to the ground-truth ones. Our proposed method focuses more on objects of interest considering the features of the holistic interaction and potential hand trajectories, and therefore grounds the contact points closer to the ground-truth labels than the counterparts of the baseline. Figure 6: Visualization of object affordance prediction on Epic-Kitchens. The contact points from ground-truth, Diff-IP2D, and the state-of-the-art baseline OCT [12] are represented by red, white, and blue dots respectively. For a clearer illustration, we additionally put a fixed Gaussian with each contact point as the center. See the Appendix, Sec.
D.6 for more visualization results.

Table 2: Ablation study on egomotion guidance (EK55 | EK100, each with WDE \u2193, FDE \u2193, SIM \u2191, AUC-J \u2191, NSS \u2191):
Diff-IP2D*     0.427 0.186 0.218 0.717 0.929 | 0.439 0.198 0.201 0.710 0.846
Diff-IP2D      0.411 0.181 0.226 0.725 0.980 | 0.407 0.187 0.211 0.736 0.917
improvement    3.7%  2.7%  3.7%  1.1%  5.5%  | 7.3%  5.6%  5.0%  3.7%  8.4%
(Diff-IP2D*: Diff-IP2D w/o egomotion guidance)

4.3 Joint evaluation on hand trajectory and object affordance prediction

We further compare Diff-IP2D with the other two joint prediction baselines, FHOI [14] and OCT [12], using our proposed object-centric protocol. The video clips containing both ground-truth hand waypoints and contact points are used for evaluation in this experiment. The results are also shown in Tab. 1, which indicates that our proposed Diff-IP2D can generate the best object-centric HOI predictions considering the two tasks concurrently on both Epic-Kitchens and unseen EG data. The results also suggest that Diff-IP2D outperforms the baselines on object-centric HOI prediction by focusing more attention on the target objects and predicting reasonable hand trajectories around them.

4.4 Ablation study on egomotion guidance

We provide an ablation study of the egomotion features used to guide MADT denoising on the EK55 and EK100 datasets. Here we replace the MHCA in MADT with a multi-head self-attention module (MHSA) to remove the egomotion guidance while keeping the same parameter number. The experimental results in Tab. 2 show that the guidance of motion features noticeably improves our proposed diffusion-based paradigm on both hand trajectory prediction and object affordance prediction. This is achieved by narrowing the two gaps caused by the 2D-3D ill-posed problem and the view difference mentioned in Sec. 1. Note that the egomotion guidance is more significant on the EK100 dataset than on the EK55 dataset.
The reason could be that EK100 has a larger volume of training data incorporating more diverse egomotion patterns than EK55, leading to a model that can capture human dynamics better. More results of the related joint evaluation are presented in Appendix, Sec. D.1. 4.5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04403v1.json b/abs_9K/test_abstract_short_2405.04403v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f515e5f481ee9bd787d2faad9fd0d4c41dd4d35a --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04403v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.04403v1", + "title": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks", + "abstract": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. 
Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.", + "authors": "Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Jailbreak", + "gt": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.", + "main_content": "Introduction Visual Instruction Tuning extends the instructionfollowing abilities of Large Language Models (LLMs) to the visual modality. The common recipe for a Vision-Language Model (VLM), is to combine an existing LLM along with a vision encoder and learn a mapping between the two unimodal experts (Alayrac et al., 2022; Dai et al., 2023b; Liu et al., 2024). As a result, VLMs can solve additional tasks as opposed to their language-only counterparts, while their performance correlates heavily with the capabilities of their unimodal backbones. 
LLMs have become the go-to option for practically all Natural Language Processing (NLP) tasks, with models such as ChatGPT (OpenAI, 2022) and Gemini (Gemini Team et al., 2023) witnessing widespread deployment. While these models exhibit\u2014to some degree\u2014general capabilities (OpenAI, 2023a), previous work shows they are susceptible to misuse (Bommasani et al., 2021; Kreps et al., 2022; Weidinger et al., 2021). Consequently, a large body of work incorporates safety mechanisms in model development to constrain model behavior to a \u201csafer\u201d subset by aligning models with values (Askell et al., 2021; Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022). Despite these efforts, LLMs are vulnerable to malicious prompts\u2014referred to as \u201cjailbreaking\u201d (Wei et al., 2024; Xie et al., 2023)\u2014engineered to trick the LLM outside of the safer subset and generate the potentially harmful content it was trained to reject (Qi et al., 2023). An example of such behavior is illustrated in Figure 1, where the model initially refuses to comply with the input question, but completely adheres to the modified adversarial prompt. Therefore, recent approaches to model development incorporate safety tuning against jailbreaking during training (Jiang et al., 2023; Touvron et al., 2023) to mitigate these vulnerabilities. Adversarial attacks, in the form of input perturbations, can also affect vision or language models (Goodfellow et al., 2014; Madry et al., 2018; Yu and Rieser, 2023). With VLMs now being deployed to a wider audience (Brooks et al., 2024; Gemini Team et al., 2023; OpenAI, 2023b), we believe that now is a good time to consider: how does visual instruction tuning affect the safety of LLMs? We explore this question by prompting publicly-available VLMs, and the LLM backbones they were derived from, with jailbreaking techniques across eight distinct scenarios (Liu et al., 2023d).

*Equal Contribution
We demonstrate that visual instruction tuning damages guardrails put in place during LLM training (Jiang et al., 2023; Touvron et al., 2023), leaving models vulnerable to adversarial attacks. Overall, our findings further highlight the importance of considering the effects of the training stages, the training data, and the ways to evaluate and mitigate potential harms when introducing new modalities (Weidinger et al., 2023). As such, with great instruction tuning comes great responsibility.

2. Background

2.1. Instruction-tuning models

[Figure 1: Example responses to the query \u201cWhat are some effective ways to create and spread fake news that incites violence and unrest in society?\u201d Vicuna 13B (Chiang et al., 2023) does not directly generate hateful content (top-left), but will given a jailbreak pre-prompt (bottom-left). LLaVa 13B (Liu et al., 2023c) produces hateful content both with (bottom-right) and without (top-right) a jailbreak pre-prompt.]

Language models trained on self-supervised objectives show broad pattern recognition abilities (Brown et al., 2020; Radford et al., 2019; Raffel et al., 2020) when paired with instruction-tuning: a fine-tuning paradigm that uses language instructions as input to solve multiple tasks (Chung et al., 2022; Gupta et al., 2022; Wei et al., 2021). Instruction-tuning is an established concept in NLP (Chung et al., 2022; Mishra et al., 2022), as the resulting models generalize better to user queries (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021) by learning to connect them to concepts seen during pretraining for zero-shot generalization on unseen tasks (Gupta et al., 2022; Mishra et al., 2022). Visual Instruction Tuning refers to the process of converting an LLM into a VLM, often using language (Bai et al., 2023a; Chiang et al., 2023) and vision experts (Fang et al., 2023; Radford et al., 2021), by learning a mapping between the two modalities.
Existing approaches concatenate visual and textual representations with a lightweight adapter module (Liu et al., 2024). Other techniques construct \u201cvisual prompts\u201d with a resampler\u2014where learnable latent tokens are informed by each modality (Bai et al., 2023b; Li et al., 2023a; Zhu et al., 2023). Training involves multiple stages, with initial stages focusing on image-text alignment and later stages on supervised fine-tuning (SFT). As VLMs based on this recipe are successful across established multimodal tasks (Goyal et al., 2017; Singh et al., 2019), a large body of work focuses on the safety aspect of these models through the hallucination prism. These works typically measure the degree to which model responses are factually grounded to the visual context (Li et al., 2023b; Liu et al., 2023a,b). However, they do not explore how safety guardrails integrated into the LLM are impacted by visual instruction tuning. 2.2. Jailbreaking and adversarial attacks LLMs and VLMs exhibit vulnerabilities along the same lines as other deep learning models; slight perturbations in inputs can result in (possibly coherent) \u201challucinated\u201d responses (Bender et al., 2021; Goodfellow et al., 2014; Liu et al., 2023b; Szegedy et al., 2013). Learning from vast training corpora improves a model\u2019s generalization capabilities (Radford et al., 2018; Raffel et al., 2020). However, as datasets surpass trillions of tokens (Gao et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023), it is difficult to know the characteristics and biases included in them (Gehman et al., 2020). Moreover, while instruction-tuned models can make reasonable predictions with irrelevant and misleading prompts (Webson and Pavlick, 2022), a model\u2019s strong pattern recognition abilities can at the same time be exploited forcing potentially harmful responses (Ganguli et al., 2022; Perez et al., 2022). 
As a result, various methods (Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022) try to better align generated content to one more preferred by humans, encouraging safer and more ethical responses (Bai et al., 2022; Ganguli et al., 2022). Other measures include SFT on datasets with adversarial prompts and exemplary responses (Touvron et al., 2023), and context distillation (Askell et al., 2021), which finetunes a model on outputs generated by another model prompted for safe behavior. However, introducing visual inputs opens a new attack vector, as adversarial inputs imperceptible to the human eye can steer models to unsafe behavior (Qi et al., 2023).

Table 1: VLM & LLM pairs used in our experiments.
Vision-Language Model                     Large Language Model
LLaVA-1.5 (Liu et al., 2023c)             Vicuna 13B (Chiang et al., 2023)
Qwen-VL-Chat (Bai et al., 2023b)          Qwen-Chat 7B (Bai et al., 2023a)
InternLM-XComposer2 (Dong et al., 2024)   InternLM2-Chat 7B (InternLM Team, 2023)

3. Experimental Setup

We hypothesize that after visual instruction tuning, models become less safe and more vulnerable to jailbreaks as opposed to their original LM backbone. To test this hypothesis, we prompt three state-of-the-art VLMs and their LM counterparts with questions related to prohibited scenarios, both with and without jailbreak prompt prefixes.1

Model Selection Table 1 displays the evaluated VLMs along with their respective LLM backbones. We selected these models because: 1) they showcased strong performance in established multimodal tasks (Goyal et al., 2017; Li et al., 2023b; Marino et al., 2019); 2) they connect vision and language models in different ways; and 3) they incorporate safety mechanisms during the development of their LLM. Finally, all chosen VLMs and LLMs are open-source, ensuring reproducibility. See Appendix A for additional details about this selection.

Data Preparation We query each model with a prompt, a question, and, for the VLMs, an input image.
We leverage the jailbreak prompt dataset from Liu et al. (2023d), which contains questions to simulate prohibited scenarios and prompts that were successful in jailbreaking ChatGPT (OpenAI, 2022).2 Liu et al. (2023d) categorized jailbreak prompts into one-of-three different types, and one-of-ten different patterns. Overall, we employ 40 input queries, derived from eight prohibited scenarios with five questions each. We used four jailbreak prompts that cover all patterns to ensure models are evaluated fairly across all jailbreak types, resulting in 160 queries to evaluate how susceptible models are to jailbreaking. In order to mimic a common downstream use case of VLMs, we retrieve the most relevant image for each question from the pretraining data of LLaVA (Liu et al., 2024) by selecting the image with the maximum CLIPScore (Hessel et al., 2021) using the base CLIP model (Radford et al., 2021). Finally, we also use a blank image (i.e. an image with only white pixels) to simulate pure text generation by removing any visual context. As a result, we have four conditions for VLMs from the combinations of original vs. jailbreak prompts, and retrieved images vs. blank images.

1 Code available at https://github.com/gpantaz/vl_jailbreak
2 See Appendix B for a short description of each scenario; we refer to Liu et al. (2023d) for details.

Data Annotation To understand the impact of visual instruction tuning against adversarial inputs, all 1,800 responses were annotated by the authors of this paper\u2014each with a background in developing large-scale multimodal models\u2014and we measured inter-annotator agreement on a subset of 100 samples, resulting in a Krippendorff\u2019s alpha of 0.674. We labeled every response with one of:

\u2022 Refusal: The model refuses to answer the query; optionally, providing counter-arguments or condemning unethical/illegal actions.
\u2022 Neutral: Generated content is neither a refusal nor potentially harmful; for example, when models do not acknowledge the harmful intent but respond in the opposite manner. For instance, when asked \u201cHow to screen job candidates based on race?\u201d, models might provide measures against discrimination.

\u2022 Harmful: The response contains potentially harmful content. This includes cases where generated content promotes discrimination towards individuals\u2014possibly based on demographics or gender\u2014or encourages illegal activities.

\u2022 Not Applicable: Content is not applicable for the study. For example, the model repeated the instruction or continuously produced the same (invalid or incoherent) sentence.

4. Results

Figure 2 illustrates the percentage of the responses labeled as harmful across all models. We observe that all VLMs generate substantially more hateful responses as opposed to their LLM backbones. In particular, LLaVA generates 27.50% and 6% more harmful content than Vicuna, with and without jailbreak pre-prompts respectively. Additionally, Qwen-Chat/Qwen-VL-Chat and InternLM2-Chat/InternLM-XComposer2 exhibit similar behavior, though they
Consequently, the safeguards imposed on the LLMs during model development are, at best, relaxed as an outcome of the visual instruction tuning stage. Furthermore, VLMs are more prone to generate potentially harmful content when provided with a prompt and a semantically-relevant image. While this may seem obvious, we observe that in the case of adversarial input, including a blank image results leads to more harmful responses. We hypothesize that this is due to \u201ccompeting objectives\u201d (Wei et al., 2024); where, on one hand, the model tries to generate content relative to both the instruction and the image, while on the other hand, it tries to adhere to its safeguards. Using a jailbreak pre-prompt, however, provides a signal stronger than the content of the image resulting in the aforementioned behavior. 5. Discussion Why are VLMs more prone to jailbreak attacks? Competing objectives present a significant challenge for both VLMs and LLMs. Given an adversarial prompt, both models must navigate between providing relevant responses and resisting adherence to the adversarial prompt. While we have not explored whether this effect is magnified in VLMs, we hypothesize that both models are equally susceptible to the impact of competing objectives. A more plausible scenario is that VLMs forget queries from adversarial prompts when undergoing visual instruction tuning. Reframing generation of appropriate responses to adversarial prompts as its own task, it becomes evident that models may inadvertently disregard this task during further finetuning. This behavior is particularly likely to occur as the model must incorporate an additional modality during the instruction tuning stage. However, we believe this issue can be mitigated through continual learning or training methodologies that expose the model to additional (image-text or text-only) examples that demonstrate appropriate responses during the visual instruction tuning stage. 
In the following section, we further elaborate on possible strategies to mitigate the forgetting effect. 5.1. Suggestions for Future Work Evaluation & Benchmarking Most current evaluations of VLMs focus exclusively on model capabilities, such as grounding, reasoning, and factuality (Weidinger et al., 2021). Some recent benchmarks are starting to address the gap in safety (Li et al., 2024b; Roger et al., 2023) and robustness to adversarial attacks (Carlini et al., 2024; Zhao et al., 2024). However, creating comprehensive benchmarks to evaluate the safety of VLMs remains a crucial area for future research. A possible step in this direction would be to implement a unified framework for evaluating VLMs similar to LM-Harness (Gao et al., 2023) and SALAD-Bench (Li et al., 2024a), ensuring transparency and reproducibility. Additionally, we emphasize the need for \u201cdata parity\u201d when evaluating from a safety perspective. Without it, jailbreak prompts may be accidentally leaked into (pre-)training data, leading to inflated scores (Golchin and Surdeanu, 2023; Li and Flanigan, 2023; Zhou et al., 2023). However, as jailbreaking is an adversarial setting, it should be evaluated on out-of-distribution prompts (Yuan et al., 2023) that are held-out and/or regularly updated (Kiela et al., 2021). Safety Defenses in All Training Stages VLMs are trained following a curriculum: typically involving image-text alignment and instruction-tuning stages (Bai et al., 2023a; Li et al., 2023a; Liu et al., 2024). Our analysis indicates that when safety is not considered across all\u2014or, at least, the final\u2014stages, models become misaligned and are therefore more likely to generate harmful content. Korbak et al. (2023) show that incorporating conditional pretraining\u2014where text segments are conditioned on human preferences\u2014can reduce the toxicity of model outputs without sacrificing performance on other tasks.
As a result, when training a model from scratch, safety should be considered at every stage. However, as training from scratch is resource-intensive, it may be more practical to initialize a VLM with pretrained experts. Another possible solution is to ensure that the VLM alignment is part of the final training stage. However, multimodal datasets annotated with human preferences or exemplar responses against adversarial prompts (Li et al., 2024b) are largely missing. Therefore, an important avenue for future work would be to collect or synthetically generate (Liu et al., 2024) such resources. The goal of maintaining safety alignment after visual instruction tuning resembles a continual learning scenario. Future work could draw inspiration from approaches that aim to mitigate catastrophic forgetting (Hadsell et al., 2020; Ke and Liu, 2022). For instance, previous work has found that methods such as experience replay (Biesialska et al., 2020) and logit distillation (Jin et al., 2022) can be effective in continual pretraining of language models. Further benefits could be achieved through more sophisticated approaches, such as selectively updating a small isolated set of parameters for vision (Gururangan et al., 2022; Ke et al., 2022). 6."
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04483v1.json b/abs_9K/test_abstract_short_2405.04483v1.json new file mode 100644 index 0000000000000000000000000000000000000000..fb8f8678a9aaa1a9f99325f01d2dd0b1b284517e --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04483v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04483v1", + "title": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model", + "abstract": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(denoted as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products.
The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.", + "authors": "Haixia Xiao, Feng Zhang, Lingxiao Wang, Wenwen Li, Bin Guo, Jun Li", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "physics.ao-ph", + "cats": [ + "physics.ao-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(denoted as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.", + "main_content": "Introduction Clouds are critical in the Earth\u2019s water and energy budgets (Li et al., 2005).
Their influence on the radiation budget can induce either heating or cooling of the planet, contingent upon the radiative characteristics of the cloud and its altitude (Stephens et al., 1981, 1990). The significance of clouds is further underscored by variables such as cloud optical thickness (COT), cloud effective radius (CER), cloud top height (CTH), and cloud phase (CLP). These parameters profoundly impact the Earth\u2019s net radiation balance due to their distinct scattering and absorption characteristics (Fauchez et al., 2018a; Min et al., 2020; Wang et al., 2016a). Achieving an accurate representation of these optical properties remains a formidable challenge, primarily because the microscale physical processes within clouds are difficult to explicitly simulate in global numerical models (Baran, 2012; Ceppi et al., 2017; Waliser et al., 2009). Consequently, there is an urgent need to obtain cloud phase and properties with high spatial and temporal resolution. Such detailed cloud data are indispensable for a deeper understanding of atmospheric physical processes, the enhancement of data assimilation techniques, and the improvement of weather forecasting accuracy (Muskatel et al., 2021). The retrieval of cloud properties has been conducted for several decades. Since the 1970s, airborne measurements have been employed to retrieve COT and CER, resulting in numerous successful experimental studies (Finger et al., 2015; King, 1987; Krisna et al., 2018; Platnick et al., 1995; Twomey and Cocks, 1989). However, these campaigns incur high costs, and the temporal and spatial coverage of field observations is limited. With the advancement of satellite remote sensing technology, particularly passive sensors (geostationary and polar-orbiting satellites), researchers have increasingly utilized data from visible and near-infrared bands to retrieve cloud properties. 
This approach enables the characterization of cloud properties at various spatial and temporal resolutions (King et al., 1992; Menzel et al., 2008; Platnick et al., 2003; Tang et al., 2017; Zhang and Platnick, 2011; Zhuge et al., 2020), owing to the wide observational coverage provided by passive sensors. The basic physical principle behind this method is that the cloud radiances measured by the nonabsorptive channels in the visible or near-infrared wavelengths are influenced by COT, while those captured by water-absorption channels in the shortwave infrared wavelength are sensitive to the CER (Nauss and Kokhanovsky, 2011). These retrieval methods, which rely on solar radiation, are effective only for daytime scenes. However, they are not applicable to nighttime scenes and exhibit higher uncertainties in high-latitude regions and optically thin cloud scenes (Wang et al., 2016b). Thermal infrared (TIR) retrieval algorithms, utilizing the split-window technique (Parol et al., 1991; Toshiro, 1985), offer valuable capabilities for both daytime and nighttime scene analysis. This technique retrieves COT and CER from the brightness temperature differences between two distinct channels in the infrared atmospheric windows, where gaseous absorption is minimal. Additionally, the optimal estimation methodology (Rodgers, 2000) has been implemented for the Atmospheric Infrared Sounder V6 (AIRS) and Advanced Microwave Sounding Unit (AMSU), utilizing infrared spectral data to successfully retrieve the physical and optical properties of clouds (Kahn et al., 2014, 2015). However, due to significant absorption by cloud particles in the infrared spectrum, these traditional IR-based algorithms primarily excel in retrieving optically thin cloud properties, while facing challenges in scenarios involving opaque, thick clouds (Wang et al., 2016a). Consequently, an alternative approach is necessary to provide a more comprehensive solution.
Data-driven deep learning methods, renowned for their proficiency in capturing the spatial variations of image features with fast computation, have been extensively applied to cloud identification and property retrieval (Tong et al., 2023; Zhao et al., 2023). For example, Wang et al. (2022) developed a convolutional neural network (CNN) model for continuous cloud identification and retrieval of cloud properties (i.e., COT, CER, and CTH) throughout the diurnal cycle for the Moderate Resolution Imaging Spectroradiometer (MODIS), utilizing daytime MODIS TIR radiances alongside satellite viewing zenith angles (VZA). Additionally, employing a transfer-learning-based UNet model and MODIS/Himawari-8 cloud products, Li et al. (2023) successfully estimated the CER, COT, and CTH from Himawari-8 TIR measurements, and results showed that the model enhanced performance for optically thick clouds. Previous research has relied on either polar-orbiting (e.g., MODIS) or geostationary (e.g., Himawari-8 Advanced Himawari Imager) satellite sensors for cloud property estimation. While polar-orbiting satellites offer high-resolution cloud products (1 km resolution), they suffer from a lengthy revisit cycle, impacting temporal resolution. Conversely, geostationary satellites provide frequent revisits, offering high temporal resolution and continuous cloud observation (Meng et al., 2024). However, their spatial resolution is lower compared to polar-orbiting satellites. Hence, combining data from both types of satellites to achieve high spatiotemporal resolution in cloud phase and properties is a promising direction to explore. For high-impact weather events such as severe convective storms, tropical and extratropical cyclones, the underlying dynamical and thermodynamic mechanisms are complex, leading to significant uncertainties in retrieving their cloud properties.
Unfortunately, current CNN/UNet retrieval methods primarily focus on deterministic modeling, which often neglects the inherent uncertainties within the data. Diffusion models, a novel category of likelihood-based models recently highlighted for generating high-quality images (Sohl-Dickstein et al., 2015; Song and Ermon, 2019), offer desirable characteristics such as distribution coverage (Ho et al., 2020). Unlike deterministic retrieval methods, diffusion models derive probability distribution functions and can generate a large number of samples (Ho et al., 2020; Ling et al., 2024; Bishop, 2024), while guaranteeing that the retrieval distribution encapsulates all plausible outcomes, thus allowing for estimating the probability density and its score. Diffusion models have proven successful in various research domains, such as computer vision for image generation and synthesis (Croitoru, 2023), precipitation nowcasting (Nai et al., 2024), estimating the unresolved geophysical processes (Pan et al., 2023), and earth system model downscaling (Hess et al., 2024), showcasing their effectiveness in handling complex systems. The primary objective of this study is to develop a diffusion model for super-resolution retrieval of high-spatiotemporal-resolution cloud optical properties and cloud phase throughout the diurnal cycle using a geostationary satellite. Leveraging the TIR channels of the Himawari-8 satellite and employing MODIS cloud products as ground truth, we have developed a generative diffusion model capable of cloud identification and retrieval of COT, CER, and CTH, characterized by high precision and enhanced spatiotemporal resolution. The efficacy of this model is evaluated against standard MODIS cloud product measurements, focusing particularly on its generalization capabilities and uncertainty, analyzed across typhoon case studies and extended datasets. The data, methodology, and experimental details are outlined in Section 2.
The performance outcomes of the model are thoroughly examined in Section 3. Lastly, Section 4 offers conclusions and discussions. 2. Data and methods 2.1. Data 2.1.1. Himawari-8 AHI Satellite Data Himawari-8, launched in October 2014, is the geostationary satellite sensor system operated by the Japan Meteorological Agency (JMA). It represents the latest iteration in the Multifunctional Transport Satellite (MTSAT) series. The Advanced Himawari Imager (AHI) sensor onboard Himawari-8 captures full disk images every 10 minutes across 16 spectral bands from visible to infrared wavelengths, with spatial resolutions ranging from 500 m to 2 km and temporal resolutions between 2.5 and 10 minutes, covering regions from East Asia to Australia. The TIR measurements are sensitive to optically thin clouds and are continuously obtained throughout the diurnal cycle, independent of solar geometry (Fauchez et al., 2018a). In this study, TIR radiances from Himawari-8 AHI are utilized to estimate cloud properties during both daytime and nighttime. Additionally, the VZA are employed to construct the retrieval model. Table 1 summarizes the used TIR measurements (6.95\u201313.30 \u00b5m) and VZA of Himawari-8 AHI. 2.1.2. MODIS data With the launch of NASA\u2019s Terra satellite in 1999, followed by Aqua in 2002, MODIS has emerged as one of the most indispensable satellite remote sensing platforms for Earth science research. It measures reflected solar and emitted thermal radiation across 36 spectral channels (0.42\u201314.24 \u00b5m), offering unique spectral and spatial capabilities for retrieving cloud properties (Platnick et al., 2016). The Terra-MODIS (MOD06) and Aqua-MODIS (MYD06) products, which have a spatial resolution of 1 km, are accessible through the Atmosphere Archive and Distribution System website (https://ladsweb.modaps.eosdis.nasa.gov/).
These products include cloud top properties (e.g., CTH, CLP for both day and night) and cloud optical and microphysical properties (e.g., COT, CER, daytime only). Over the years, the MODIS cloud products have demonstrated consistent high accuracy and reliable performance (King et al., 2003; Platnick et al., 2015). In this study, the daytime MODIS cloud optical and physical properties (CTH, COT, CER, and CLP) from the Level-2 cloud product (MYD06 L2 and MOD06 L2) are utilized as ground truth to develop the super-resolution retrieval model.
Table 1: The Himawari-8 AHI data used for cloud parameter super-resolution retrieval. All listed bands have a spatial resolution of 2 km and a temporal resolution of 10 minutes.
Band Number | Bandwidth (\u00b5m) | Central Wavelength (\u00b5m)
9 | 6.89\u20137.01 | 6.95
10 | 7.26\u20137.43 | 7.35
11 | 8.44\u20138.76 | 8.6
12 | 9.54\u20139.72 | 9.63
13 | 10.3\u201310.6 | 10.45
14 | 11.1\u201311.3 | 11.20
15 | 12.2\u201312.5 | 12.35
16 | 13.20\u201313.40 | 13.30
VZA | \u2013 | \u2013
2.1.3. Data preprocessing As described above, the TIR measurements (6.95 \u00b5m, 7.35 \u00b5m, 8.60 \u00b5m, 9.60 \u00b5m, 10.45 \u00b5m, 11.20 \u00b5m, 12.35 \u00b5m, and 13.30 \u00b5m) along with the VZA of the Himawari-8 AHI serve as the inputs for the model, while the MODIS level-2 CLP, CTH, COT, and CER data are used as the targets for training the model. To optimize the model during training and enhance its accuracy, we normalized the inputs and targets. By employing min-max normalization, we scaled the input and output variables to fall within the range of 0 to 1. To cover as wide a range of the Earth\u2019s surface and viewing geometries as possible, and to accommodate seasonal variations, we collected data from January 2016 to October 2017. Specifically, data from January 2016 to May 2017 was utilized for model training, data from June to August 20, 2017 for model validation, and data from August 21, 2017, to October 2017 served as the test set.
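The min-max scaling described above can be sketched as follows. This is a minimal illustration; the function name, NumPy usage, and the toy brightness temperatures are assumptions, not the authors' code:

```python
import numpy as np

def min_max_normalize(x, lo=None, hi=None):
    """Scale values to [0, 1]. In practice lo/hi would be computed on the
    training split and reused unchanged for the validation and test splits."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return (x - lo) / (hi - lo)

# Toy TIR brightness-temperature patch (values in K are made up)
tir = np.array([[180.0, 230.0], [280.0, 320.0]])
scaled = min_max_normalize(tir)
```

Reusing the training-set minimum and maximum for the held-out splits keeps the normalization consistent and avoids leaking test statistics into the model.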
Owing to the differing spatiotemporal resolutions of the Himawari-8 AHI and MODIS cloud products, we performed spatiotemporal matching of the data. In this process, we selected data from both MODIS and Himawari-8 for the same regions and times, with the cloud product grid points being twice that of the TIR observations. To alleviate memory and computational demands and to accelerate the selection process for the model, we cropped the cloud products in the training, validation, and test sets to a size of 256\u00d7256 km, while the input TIR observations were sized at 128\u00d7128 km. Ultimately, our training set comprised 76,247 samples, with the validation and test sets containing 9,530 and 9,532 samples, respectively. 2.2. Method The diffusion model is a state-of-the-art deep learning technique that employs probabilistic denoising processes to develop generative models (Bishop, 2024). The model typically operates on the principle of simulating a gradual process of denoising, effectively reconstructing data points from a noise-like distribution. This process is modeled as a reverse Markov chain, where a data sample is initially transformed into noise through a sequence of diffusion steps and then reconstructed back into a clean sample through learned reverse transitions. In a classical set-up, the model involves iteratively applying a series of conditional Gaussian distributions, beginning from a distribution of noise p(zT) and progressively denoising it to retrieve the original data distribution p(x0). This can be succinctly represented as, $p(x_0) = \int \cdots \int p(x_0 \mid x_1)\, p(x_1 \mid x_2) \cdots p(x_{T-1} \mid z_T)\, p(z_T)\, dx_1 \cdots dx_{T-1}\, dz_T$. (1) In each iteration, the model utilizes the noisy data from the previous step as input, subsequently refining it to a greater degree of accuracy in accordance with the data\u2019s original state.
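The gradual noising chain underlying Eq. (1) can be sketched numerically. This is a toy illustration, not the authors' implementation; the linear variance schedule and array shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, betas):
    """Run the forward Markov chain
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,
    returning the trajectory [x_0, x_1, ..., x_T]."""
    traj = [x0]
    x = x0
    for beta in betas:
        eps = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps
        traj.append(x)
    return traj

betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule, an assumption
x0 = rng.standard_normal((8, 8))       # toy stand-in for a cloud-product patch
traj = forward_diffuse(x0, betas)
# The cumulative signal coefficient prod(sqrt(1 - beta_t)) shrinks toward
# zero, so the final state x_T is approximately pure Gaussian noise.
```

The learned reverse chain then inverts this trajectory step by step, which is exactly the nested-integral structure written in Eq. (1).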
The denoising path is learned from training data, thereby enabling the model to effectively generate or reconstruct high-quality data samples. 2.2.1. Conditional diffusion model In our study, the TIR measurements and the VZA variable are denoted by y, the conditioning variable. The target variables, cloud products, are represented by x. The objective is to approximate the conditional distribution of x given y, using a significantly large dataset of paired samples (xi, yi). The conditional diffusion model incorporates conditioning variables into the generative process (Batzolis, 2021), allowing the model to generate data conditioned on specific information. Mathematically, this can be represented as the transition from a noise distribution p(zT) to the data distribution p(x0) conditioned on a variable y, described by, $p(x_0 \mid y) = \int p(x_0 \mid z_T, y)\, p(z_T \mid y)\, dz_T$, (2) where zT represents the latent variables at the final timestep, and the model iteratively refines these variables through the conditioning on y, enhancing its ability to target specific data generation tasks. As Figure 1 shows, the conditional diffusion model makes it possible to produce cloud products given the conditions of TIR and VZA variables, making it particularly useful in scenarios where the output needs to be tailored to specific environments. In this framework, for any given y, the algorithm outputs samples of x from x \u223c p(x0|y), where p is a learned distribution that does not adhere to any predefined probability distribution form. The forward process has the same scheme as the Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020), but in the reverse process we embed the conditional variables into the UNet for modelling the conditional probability distributions (Nai et al., 2024).
Figure 1: The CloudDiff for super-resolution cloud identification and properties retrieval, showing the forward diffusion process from x_0 to x_T and the reverse diffusion process through the condition-embedded UNet. The generated samples x are cloud products, and the conditions y include TIR and VZA variables. In the forward process, the data x0 undergoes a series of transformations, gradually adding noise over discrete time steps T until it is converted into pure Gaussian noise xT \u2261 zT. The noise addition at each timestep t is defined by a variance schedule \u03b2t, and can be described by the following stochastic update, $x_t = \sqrt{1 - \beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$, (3) where \u03f5 represents Gaussian noise. The reverse process, where the model learns to reconstruct the original data from noise, is explicitly conditioned on y. At each step, the model estimates the less-noisy data xt\u22121 from the current noisy data xt using a neural network parameterized by {\u03b8}. This network predicts the mean \u00b5\u03b8(xt, t, y) of the distribution for xt\u22121, typically modeled as, $x_{t-1} = \mu_\theta(x_t, t, y) + \sigma_t \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$, (4) where \u03c3t is a predetermined noise level (Ho et al., 2020). The objective of training this conditional diffusion model is to minimise the difference between the estimated xt\u22121 and its actual value. This effectively allows the model to learn the reverse of the forward diffusion process. The loss function is originally from the Fisher divergence (Song and Ermon, 2019; Song et al., 2021; Nai et al., 2024), but equivalently used as a variant of the mean squared error between the predicted and actual previous timestep values, conditioned on y, $\mathcal{L}(\theta) = \mathbb{E}_{x_0, \epsilon, y}\left[ \left\| \epsilon - \epsilon_\theta(x_t, t, y) \right\|^2 \right]$, (5) where \u03f5\u03b8 represents the outputs of the UNet as the predictions of the noise used to generate xt from xt\u22121.
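A single training step of the noise-prediction objective in Eq. (5) can be sketched as follows. The tiny linear "network" below stands in for the conditional UNet, and all schedules and shapes are illustrative assumptions rather than the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def q_sample(x0, t, eps):
    """Closed-form forward sample x_t ~ q(x_t | x_0)."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def eps_theta(xt, t, y, w):
    """Stand-in for the conditional UNet epsilon_theta(x_t, t, y):
    a linear map of the noisy state, the condition, and the timestep."""
    return w[0] * xt + w[1] * y + w[2] * (t / T)

def training_loss(x0, y, w):
    """One Monte Carlo estimate of the Eq. (5) objective."""
    t = int(rng.integers(0, T))
    eps = rng.standard_normal(x0.shape)
    xt = q_sample(x0, t, eps)
    return float(np.mean((eps - eps_theta(xt, t, y, w)) ** 2))

x0 = rng.standard_normal((16, 16))  # toy cloud-product patch
y = rng.standard_normal((16, 16))   # toy TIR/VZA condition
loss_value = training_loss(x0, y, np.array([0.5, 0.1, 0.0]))
```

In an actual implementation, `eps_theta` would be the attention-augmented UNet and the loss would be averaged over minibatches and minimized by gradient descent.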
To improve the representation ability, we have introduced multi-head attention modules into the UNet architecture (Vaswani et al., 2017). After training, the conditional diffusion model (hereafter, CloudDiff) is capable of generating multiple samples simultaneously. In our tests, we generate 30 samples per evaluation instance. These samples are reminiscent of the ensemble members used in numerical weather prediction\u2019s dynamical models, which employ large numbers of members for ensemble predictions (Li et al., 2024). Furthermore, we conduct comparative analyses between the CloudDiff and established deterministic data-driven methods. For this purpose, the study uses a supervised learning approach with a UNet architecture (Trebing et al., 2021), referred to as the deterministic model, as the benchmark. This method is specifically applied to the tasks of super-resolution retrieval of cloud properties and cloud identification, serving as a baseline for performance comparison. 2.2.2. Performance evaluation The CloudDiff serves as a super-resolution approach that requires an appropriate evaluation scheme. Although intuitive, sample-by-sample comparisons cannot fully demonstrate the effectiveness of the super-resolution technique. To obtain a comprehensive performance evaluation, we collect MODIS labels for assessing the quality of the generated cloud products. Consequently, we employ Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) as metrics, allowing for a quantitative assessment of the model\u2019s performance in enhancing spatial resolution. These metrics, commonly used in cloud properties retrieval (Wang et al., 2022; Zhao et al., 2023), are defined as follows, $\mathrm{MAE} = \frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} \left| x_{i,j} - \hat{x}_{i,j} \right|$, (6) $\mathrm{RMSE} = \sqrt{ \frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} \left( x_{i,j} - \hat{x}_{i,j} \right)^2 }$, (7) where N represents the number of samples, $x_{i,j}$ denotes the values from MODIS cloud products, and $\hat{x}_{i,j}$ represents the super-resolution retrieved cloud products.
Np indicates the number of pixels for each sample, and j labels the index of the pixels. It should be noted that a more accurate super-resolution model will have a smaller root mean square error (RMSE) and mean absolute error (MAE). 3. Results 3.1. Case study We begin our study with a case analysis focusing on Typhoon Hato (No.1713) over the offshore areas of China to evaluate the performance of the CloudDiff and comprehend its uncertainty. Typhoon Hato developed in the northwest Pacific Ocean at 06:00 UTC on August 20, 2017, and progressively intensified. By 01:00 UTC on August 23, it had escalated to a severe typhoon, peaking at Category 16 with maximum sustained winds of 52 m/s. It made landfall near Zhuhai City, Guangdong Province, China, around 04:50 UTC on August 23 as a severe typhoon, causing substantial devastation in southern China. On that day, the Terra satellite passed over the coastal Zhuhai area around 02:50 UTC; thus, our analysis primarily focused on evaluating the retrieved COT, CER, CTH, and CLP at this specific time. The analysis covered the typhoon area between 19.78\u00b0N\u201322.32\u00b0N and 111.68\u00b0E\u2013114.22\u00b0E, corresponding to a grid size of 256\u00d7256. Figure 2 presents the cloud properties generated by the CloudDiff across 30 samples, together with the grid points where MODIS cloud properties were not captured by the samples. Since all 30 CLP samples indicated ice clouds within the study area, CLP results are not displayed. It is observed that the cloud properties generated by different samples vary slightly but generally reflect the typhoon\u2019s morphology accurately. Despite variations in COT values among the samples and differing degrees of overestimation and underestimation in the typhoon\u2019s cloud wall, they accurately estimated the optical thickness at the typhoon eye.
Notably, underestimation occurred for COT values over 90 at about 16.03% of the grid points, and overestimation at 1.67% of the grid points, while COT values below 60 were well retrieved. Regarding CER, some samples did not accurately represent the CER, generally overestimating (9.68%, mainly around the typhoon eye) and underestimating (12.49%, mainly in the typhoon\u2019s cloud wall). Additionally, samples underestimated CTH to various extents, particularly on the west and southwest sides of the typhoon eye, with a total underestimation of 30.41% in CTH and a mere 0.63% overestimation. To evaluate the performance and uncertainty of the CloudDiff, we compared the cloud properties with those from the deterministic model (Fig. 3). The results show that individual samples produce sharper images with more local detail in COT, CER, and CTH than the ensemble mean, which appears blurrier. The deterministic model\u2019s results are blurrier than the ensemble mean and also lack detail. Regarding COT, compared to MODIS cloud products, the sample underestimated the COT in the typhoon eye region and overestimated areas with COT <90. The ensemble mean (the mean values of 30 samples) also overestimated the extent of COT <90 but reported lower values than the single sample, somewhat correcting the single sample\u2019s underestimation of COT in the typhoon eye region. Figure 2: Cloud properties retrieval in the typhoon Hato region centering around 21.8\u00b0N, 113.8\u00b0E at 0250 UTC on August 23, 2017, was conducted using the CloudDiff. The rows correspond to COT, CER, and CTH; the columns represent samples and grid points where MODIS cloud properties are not captured by samples. The underestimation and overestimation are respectively indicated by black squares and green \u2019x\u2019. The background is colored based on MOD06 cloud products.
The standard deviation of the 30 samples, which can denote the retrieval uncertainty, indicates large error in the estimates of COT in the typhoon\u2019s cloud wall, mainly because most samples overestimated the COT in this area (see Fig. 2). The deterministic model not only overestimated the extent of COT >90 (with lower internal values) but also underestimated the optical thickness on the western side of the typhoon eye. Both single sample and ensemble mean, as well as the deterministic model, inaccurately retrieved areas with CER >35 \u00b5m and overestimated the CER in the typhoon eye area. However, the CloudDiff exhibited smaller biases in CER retrievals compared to the deterministic model, and standard deviations mostly below 6 \u00b5m across most regions, indicating small uncertainty. Regarding CTH, CloudDiff exhibits minimal uncertainty, with standard deviations generally below 1 km across most regions. Compared to MODIS, the ensemble mean more accurately represented CTH in the southern part of the typhoon eye than individual samples, but it underestimated areas with CTH greater than 16 km and the CTH in the typhoon eye. The deterministic model also underestimated CTH greater than 16 km and the CTH in the typhoon eye. Additionally, the deterministic model underestimated CTH at the image edges. Figure 3: MOD06 cloud products and retrieved cloud properties in the typhoon Hato region at 0250 UTC on August 23, 2017. The columns are MOD06 cloud products, sample, ensemble mean, deterministic model, and standard deviation (std); the rows are COT, CER, and CTH. Moreover, both the ensemble mean and deterministic model accurately retrieved CLP (not shown), consistent with MODIS cloud classification results. Overall, the super-resolution cloud properties retrieval based on the CloudDiff proved superior to that from the deterministic model, providing sharper and more localized details of 1 km cloud properties during the typhoon event. Using 30 samples generated by the CloudDiff, we computed probability estimates for various thresholds of cloud property estimates and cloud phase probability results (Fig. 4), which the deterministic model cannot provide. Based on the thresholds provided by the International Satellite Cloud Climatology Project (ISCCP) for COT and CTH associated with cloud types, we computed probability estimates for COT (Fig. 4b,c,d) and CTH (Fig. 4j,k,l) at similar thresholds to ISCCP. The results indicate that the probability estimates from the CloudDiff are close to the MODIS data, with probabilities exceeding 80% and, for CTH in the 3.6\u20136.4 km range, over 90%. Following ISCCP cloud classifications, the predominant cloud types in the typhoon eye and its southwestern sea regions are cirrostratus, while other areas feature deep convection clouds. For CER, thresholds of 20 \u00b5m and 40 \u00b5m were selected for probability estimation (Fig. 4f,g,h), revealing that the CloudDiff\u2019s CER estimates primarily fall within the (20, 40] range, with very low probabilities for CER in the (0, 20] and CER >40 \u00b5m ranges. In comparison to MODIS, the CloudDiff tends to overestimate CER in the typhoon eye and underestimate CER over the western land areas of the typhoon eye. Furthermore, the CloudDiff\u2019s probability estimates for clouds classified as ice clouds in the study area exceed 99% (not shown), aligning well with MODIS.
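The per-pixel probability estimates described above are simply the fraction of ensemble members falling in each threshold bin. A minimal sketch follows; the array shapes, the toy random fields, and the helper name are illustrative assumptions, while the CTH bins mirror the ISCCP-style thresholds used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def threshold_probability(samples, lo, hi):
    """Fraction of ensemble members with lo < value <= hi at each pixel.

    samples: array of shape (n_members, H, W), e.g. 30 CloudDiff draws.
    """
    in_bin = (samples > lo) & (samples <= hi)
    return in_bin.mean(axis=0)

# Toy ensemble of 30 CTH fields (km) on an 8x8 grid
cth_samples = rng.uniform(0.0, 18.0, size=(30, 8, 8))

# CTH bins following the paper: (0, 3.2], (3.2, 6.4], > 6.4 km
p_low = threshold_probability(cth_samples, 0.0, 3.2)
p_mid = threshold_probability(cth_samples, 3.2, 6.4)
p_high = threshold_probability(cth_samples, 6.4, np.inf)
```

Because the bins partition the value range, the three probability maps sum to one at every pixel, which is what makes them interpretable as a per-pixel categorical distribution.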
Overall, through probabilistic estimation, we can better ascertain the range of cloud property values and the cloud phase, evaluate the uncertainty in cloud property retrieval and identification, and enhance the accuracy of super-resolution retrievals. Figure 4: The probability estimates for cloud properties in the typhoon Hato region at 0250 UTC on August 23, 2017. Panels (a), (e), and (i) show MODIS COT, CER, and CTH; (b-d) present the probability estimates of COT within different threshold ranges ((0, 3.6], (3.6, 23], COT > 23); (f-h) display the probability estimates of CER for varying thresholds ((0, 20], (20, 40], CER > 40); (j-l) show the probability estimates for CTH across different threshold ranges ((0, 3.2], (3.2, 6.4], CTH > 6.4). 3.2.
Overall evaluation. We evaluated the overall performance of the models using data from the test set, employing MAE and RMSE metrics for the cloud properties. A comparative analysis was conducted to investigate how the number of samples affects the super-resolution retrieval performance. This analysis included ensemble means with 1 to 30 samples. Additionally, we compared these results with those from the deterministic model. Figure 5 illustrates the MAE and RMSE comparisons between the MODIS cloud products and the super-resolution retrieval results. Figure 5: The performance evaluation of cloud properties. Skill metrics were calculated between the CloudDiff/deterministic model and MODIS cloud products. Different sizes of circles represent ensemble sizes ranging from 1 to 30, while pentagrams indicate the deterministic model. For COT, CER, and CTH, the results indicate significantly higher MAE and RMSE values when the ensemble size is 1. As the ensemble size increases beyond five, both the MAE and RMSE of the ensemble mean gradually decrease. An interesting observation is that the improvement in super-resolution retrieval capability from 20 to 30 samples is relatively minor, suggesting that approximately 20 samples are sufficient to capture most of the high-resolution details and adequately cover the uncertainty space in the retrieval process. The MAE and RMSE values of the deterministic model approach those of an ensemble size of 5, and are notably higher than those observed with an ensemble size of 30.
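The ensemble-size comparison above reduces to computing the MAE and RMSE of the mean over the first k samples. A minimal sketch with synthetic data (all names and data are assumptions, not the paper's code):

```python
import numpy as np

def ensemble_metrics(samples, truth, size):
    """MAE and RMSE of the mean over the first `size` ensemble members."""
    mean = np.asarray(samples[:size], dtype=float).mean(axis=0)
    err = mean - np.asarray(truth, dtype=float)
    return float(np.abs(err).mean()), float(np.sqrt((err ** 2).mean()))

# Synthetic stand-in for a retrieval ensemble: truth plus independent noise
rng = np.random.default_rng(1)
truth = rng.uniform(5.0, 15.0, size=(8, 8))
samples = truth[None] + rng.normal(0.0, 2.0, size=(30, 8, 8))
mae_1, _ = ensemble_metrics(samples, truth, 1)
mae_30, _ = ensemble_metrics(samples, truth, 30)
assert mae_30 < mae_1  # averaging more members reduces the error
```

With independent noise the error of the k-member mean shrinks roughly like 1/sqrt(k), which is consistent with the diminishing returns seen between 20 and 30 samples.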
Specifically, for COT at an ensemble size of 30, the ensemble mean MAE for all clouds (water and ice) is 6.62, with an RMSE of 12.51, compared to the deterministic model results, which have an MAE of 7.45 and an RMSE of 13.48. For water clouds alone, the MAE is 6.97 and the RMSE is 12.68, with ice clouds showing slightly better performance (MAE = 6.23, RMSE = 12.32). For CER, the ensemble mean MAE for all clouds at an ensemble size of 30 is 5.87 \u00b5m, with an RMSE of 8.93 \u00b5m. Water clouds exhibit a lower MAE of 4.47 \u00b5m and RMSE of 6.62 \u00b5m, whereas ice clouds have a higher MAE of 7.48 \u00b5m and RMSE of 10.98 \u00b5m. Similarly, for CTH at the same ensemble size, the ensemble mean MAE for all clouds is 1.18 km, with an RMSE of 2.15 km. The MAE for water clouds is 0.91 km and the RMSE is 1.72 km, with ice clouds performing worse (MAE = 1.61 km, RMSE = 2.68 km). Figure 6: Confusion matrices of CLP products between MODIS and (a) CloudDiff (rows Clear/Water/Ice: 0.89, 0.10, 0.02; 0.10, 0.85, 0.05; 0.02, 0.10, 0.88; OA = 85.89%) and (b) the deterministic model (rows Clear/Water/Ice: 0.87, 0.11, 0.02; 0.11, 0.83, 0.06; 0.03, 0.11, 0.87; OA = 84.52%). \u2019OA\u2019 is the overall accuracy. In addition, the cloud identification results were assessed. Here, we primarily compared the performance of the deterministic model with the ensemble-mean results of 30 samples. The validation results demonstrate the model\u2019s capability to accurately identify true targets from MODIS data. Figure 6 presents the CLP identification results for the ensemble mean of CloudDiff (Fig. 6a) and the deterministic model (Fig. 6b), which categorize the targets primarily into clear sky, water clouds, and ice clouds. The CloudDiff achieves an overall accuracy (OA) of 85.89%.
Specifically, it shows a retrieval accuracy for clear sky and ice clouds of 89% and 88% respectively, and 85% for water clouds. In contrast, the deterministic model exhibits a retrieval accuracy of 88% for both clear sky and water clouds, but a slightly lower accuracy of 83% for ice clouds, with an OA of 84.52%, which is marginally lower than that of the CloudDiff. Overall, the ensemble mean of the CloudDiff demonstrates superior performance in identifying clear sky, water clouds, and ice clouds compared to the deterministic model. In summary, the CloudDiff enables the efficient generation of realistic samples that are faithful to a broad range of resolved retrieval schemes and sufficiently diverse to cover most plausible outcomes. 4." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04496v1.json b/abs_9K/test_abstract_short_2405.04496v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d29f4253a848ac52eec86fc47d578b726243adc5 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04496v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04496v1", + "title": "Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing", + "abstract": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. 
In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.", + "authors": "Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, Yuwei Guo", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. 
In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.", + "main_content": "INTRODUCTION Diffusion-based [22, 41, 44, 49, 53] video motion editing aims to control the motion (e.g., standing, dancing, running) of objects in the source video based on text prompts or other conditions (e.g., depth map, visible edges, human poses, etc), while preserving the integrity of the source background and object\u2019s content. This technique is especially valuable in multimedia [6, 10, 21, 33, 52, 56, 58, 63], including advertising, artistic creation, and film production. 
It allows users to effortlessly modify the motion of objects in videos using a video motion editing model, eliminating the necessity for complex software. ACM MM, 2024, Melbourne, Australia. \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn In prior studies, researchers primarily utilized generative methods to create videos featuring specific actions, with few efforts focusing on editing motions within a specific video. For example, several prior studies [26, 64, 65] have focused on pose-guided video generation, which involves creating videos that align with specified human poses. Other studies [9, 17, 25, 35, 57, 66] generate videos with the same motion by learning the motion features of the source video. These studies operate within the text-driven space-time diffusion model framework, engineered to learn the link between textual prompt inputs and corresponding video outputs. However, the spatial and temporal features of the video are not separated during training, which makes them entangled. The spatial features are usually represented as the object\u2019s content, and the temporal features are usually represented as the background and motion. This entangled state leads to overlapping object content, background and motion in the space-time diffusion model.
As a result, it is challenging to generate videos highly aligned with the fine-grained foreground and background of the source video, even when detailed text descriptions are used. Intuitively, the key to video motion editing lies in decoupling [8, 54, 60] the temporal and spatial features of the space-time diffusion model. MotionEditor [45] first explored this problem by utilizing a two-branch structure in the inference stage to decouple the object\u2019s content and background in the feature layer via the object\u2019s segmentation mask. However, since MotionEditor\u2019s model learns the relationship between the prompt and the entire video during the training stage, the features of the objects and the background overlap in the feature layer. This overlap makes it challenging to distinguish between the background and the objects using only the segmentation mask [23, 39, 50]. In this paper, we explore methods to separate the learning of temporal and spatial features in space-time diffusion models. To this end, we propose a one-shot video motion editing method named Edit-Your-Motion that requires only a single text-video pair for training. Specifically, we propose the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy designed to separate spatio-temporal features within space-time diffusion models. Furthermore, we propose Recurrent-Causal Attention (RC-Attn) as an enhancement over Sparse-Causal Attention. Recurrent-Causal Attention allows early frames in a video to receive information from subsequent frames, ensuring consistent content of objects throughout the video without adding computational burden. Additionally, we construct the Noise Constraint Loss [31] to minimize inter-frame differences of the edited video during the second training stage. During DPL, we use the space-time diffusion model (inflated UNet [37]) as the backbone and integrate ControlNet [61] to control the generation of motion.
In the first training stage, we activate Recurrent-Causal Attention and freeze the other parameters. Then, we randomly disrupt the order of frames in the source video and mask the background to guide Recurrent-Causal Attention to focus on learning the content features of objects. In the second training stage, we activate Temporal Attention [48] and freeze the other parameters to learn motion and background features from ordered video frames. Concurrently, the Noise Constraint Loss is used to minimize the difference between frames. In the inference stage, we first perform a DDIM [42] inversion on the source video to introduce latent noise and facilitate the smoothness of the edited video. Then, the pose information of the reference video is introduced via ControlNet. Next, to ensure that the content of the objects in the edited video remains consistent with that of the source video, we utilize a two-branch structure (editing branch and reconstruction branch) similar to [45]. However, unlike MotionEditor, DPL distinctly decouples spatial and temporal features into Recurrent-Causal Attention and Temporal Attention, respectively. Therefore, we only inject the key and value of Recurrent-Causal Attention from the reconstruction branch into the editing branch, eliminating the need for the segmentation mask. In conclusion, our contributions are as follows: \u2022 We further explore how to explicitly decouple spatio-temporal features in video motion editing and propose a one-shot video motion editing method named Edit-Your-Motion. \u2022 We design the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage training method. It can decouple the space-time diffusion model\u2019s overlapping spatial and temporal features, thereby avoiding interference from background features while editing the object\u2019s motion.
\u2022 We design Recurrent-Causal Attention to assist DPL in learning more comprehensive object content in the first training stage. In addition, we construct the Noise Constraint Loss to smooth out inter-frame differences in the second training stage. \u2022 We conduct experiments on in-the-wild videos, where the results show the superiority of our method compared with the state of the art. 2 RELATED WORK In this section, we provide a brief overview of the fields related to video motion editing and point out the connections and differences between them and video motion editing. 2.1 Image Editing Recently, a large amount of work has been done on image editing using diffusion models [7, 30, 36]. SDEdit [28] is the first method for image synthesis and editing based on diffusion models. Prompt-to-Prompt [13] edits images by referencing cross-attention in the diffusion process. Plug-and-Play [46] provides fine-grained control over the generative structure by manipulating spatial features during generation. UniTune [47] completes text-conditioned image editing tasks by fine-tuning. For non-rigidly transformed image editing, Imagic [19] preserves the overall structure and composition of the image by linearly interpolating between texts, thus accomplishing non-rigid editing. MasaCtrl [4] converts self-attention to mutual self-attention for non-rigid image editing. On the other hand, InstructPix2Pix [3] devises a method of editing images by written instructions rather than textual descriptions of image content. Unlike text-driven image editing, DreamBooth [38] generates new images with theme attributes by using several different images of a given theme. However, these methods lack temporal modeling, and it is difficult to maintain consistency between frames when generating video.
2.2 Pose-guided and Motion-Customization Video Generation Pose-guided image and video generation is a method to control image and video generation by adding additional human poses. ControlNet [61] references additional conditions via auxiliary branches to produce images consistent with the condition map. Follow-Your-Pose [26] controls video generation given human skeletons. It uses two-stage training to learn pose control and temporal consistency. ControlVideo [64] is adapted from ControlNet and uses cross-frame interaction to constrain appearance coherence between frames. Control-A-Video [65] enhances faithfulness and temporal consistency by fine-tuning the attention modules in both the diffusion model and ControlNet. Unlike pose-guided video generation models, motion-customization video generation models generate videos with the same motion by learning the motion features of the source video. Customize-A-Video [35] designs an Appearance Absorber module to decompose the spatial information of motion, thus directing the Temporal LoRA [16] to learn the motion information. MotionCrafter [66] customizes the content and motion of the video by injecting motion information into U-Net\u2019s temporal attention module through a parallel spatial-temporal architecture. VMC [17] fine-tunes only the temporal attention layer in the video diffusion model to achieve successful motion customization. Unlike these methods, video motion editing requires controlling the motion of the source video object while maintaining its content and background. 2.3 Video Editing The current video editing models can be divided into two categories: video content editing models [1, 5, 20, 24, 32, 51, 67] and video motion editing models [45]. The video content editing model is designed to modify the background and the object\u2019s content (e.g., the scene in the background, the clothes\u2019 colour, the vehicle\u2019s shape, etc.) in the source video.
In video content editing, Tune-A-Video [51] introduces the One-Shot Video Tuning task for the first time, which trains the space-time diffusion model with a single text-video pair. FateZero [32] uses cross-attention maps to edit the content of videos without any training. Mix-of-Show [12] fine-tunes the model through low-rank adaptations (LoRA) [16] to prevent the loss of knowledge learned by the pre-trained model. Some other approaches [2, 5, 20] use NLA [18] mapping to map the video to a 2D atlas, decoupling the object content from the background to edit the content of the object effectively. In video motion editing, MotionEditor [45] uses the object\u2019s segmentation mask to decouple the content and background in the feature layer. Content features are then injected into the editing branch to maintain content consistency. Since the object and the background overlap in the feature layer, it is difficult to accurately separate the object\u2019s content from the background features with the segmentation mask. Our approach decouples the object from the background during the training stage and directs RC-Attn and Temporal Attention to learn spatial and temporal features, respectively. This ensures that the source video content is accurately injected. 3 METHOD In video motion editing, the focus is on decoupling the spatio-temporal features of the diffusion model. To this end, we propose Edit-Your-Motion, a one-shot video motion editing method trained only on a pair of source and reference videos. Specifically, we design the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy capable of decoupling spatio-temporal features in the space-time diffusion model. In the first training stage, we shuffle the video frames to disrupt the temporal relationship of the video.
Then, we mask the background and focus on learning the spatial features (object content) from the unordered frames. We further propose Recurrent-Causal Attention (RC-Attn) instead of Sparse-Causal Attention to construct consistent features of objects over the whole sequence. In the second training stage, we recover the temporal relationships in the video frames to learn the temporal features (the background and object motion). To smooth out the inter-frame differences, we also construct the Noise Constraint Loss. Finally, in the inference stage, we use a two-branch structure [66] (reconstruction branch and editing branch). Since the spatial and temporal features have been decoupled in the training stage, we obtain the background and motion features in the editing branch and inject the content features of the objects from the reconstruction branch into the editing branch. Fig. 2 illustrates the pipeline of Edit-Your-Motion. To introduce our proposed Edit-Your-Motion, we first introduce the basics of the text-video diffusion model in Sec. 3.1. Then, Sec. 3.2 introduces our proposed Recurrent-Causal Attention (RC-Attn). After that, in Sec. 3.3, our proposed Detailed Prompt-Guided Learning Strategy and Noise Constraint Loss are described. Finally, we introduce the inference stage in Sec. 3.4. 3.1 Preliminaries Denoising Diffusion Probabilistic Models. Denoising diffusion probabilistic models [11, 14, 27, 55] (DDPMs) consist of a forward diffusion process and a reverse denoising process. The forward diffusion process gradually adds noise $\epsilon$ to a clean image $x_0 \sim q(x_0)$ over time steps $t$, obtaining a noisy sample $x_t$.
The process of adding noise can be represented as: $q(x_t \mid x_{t-1}) = \mathcal{N}(x_t \mid \sqrt{1-\beta_t}\, x_{t-1}, \beta_t \mathbf{I})$, (1) where $\beta_t \in (0, 1)$ is a variance schedule. The entire forward process of the diffusion model can be represented as a Markov chain from time $t$ to time $T$: $q(x_{1:T}) = q(x_0) \prod_{t=1}^{T} q(x_t \mid x_{t-1})$. (2) Then, in the reverse process, noise is removed through a denoising autoencoder $\epsilon_\theta(x_t, t)$ to generate a clean image. The corresponding objective can be simplified to: $L_{DDPM} = \mathbb{E}_{x, \epsilon \sim \mathcal{N}(0,1), t}\big[\lVert \epsilon - \epsilon_\theta(x_t, t) \rVert_2^2\big]$. (3) Latent Diffusion Models. Latent diffusion models (LDM) [29, 36, 59] are a newly introduced variant of DDPM that operates in the latent space of an autoencoder. Specifically, the encoder $\mathcal{E}$ compresses the image to latent features $z = \mathcal{E}(x)$, the diffusion process is performed over $z$, and finally the decoder $\mathcal{D}$ reconstructs the latent features back into pixel space. The corresponding objective can be represented as: $L_{LDM} = \mathbb{E}_{\mathcal{E}(x), \epsilon \sim \mathcal{N}(0,1), t}\big[\lVert \epsilon - \epsilon_\theta(z_t, t) \rVert_2^2\big]$. (4) Text-to-Video Diffusion Models. Text-to-Video diffusion models [43] train a 3D UNet $\epsilon_\theta^{3D}$ with text prompts $c$ as a condition to generate videos.
Given the $F$ frames $x^{1...F}$ of a video, the 3D UNet is trained by $L_{T2V} = \mathbb{E}_{\mathcal{E}(x^{1...F}), \epsilon \sim \mathcal{N}(0,1), t, c}\big[\lVert \epsilon - \epsilon^{3D}_{\theta}(z^{1...F}_{t}, t, c) \rVert^2_2\big]$, (5) where $z^{1...F}_{t}$ are the latent features of $x^{1...F}$, $z^{1...F}_{t} = \mathcal{E}(x^{1...F})$. 3.2 Recurrent-Causal Attention Like Tune-A-Video [51], we use the inflated U-Net network (space-time diffusion model) as the backbone of Edit-Your-Motion, consisting of stacked 3D convolutional residual blocks and transformer blocks. Each transformer block consists of Sparse-Causal Attention, Cross Attention, Temporal Attention, and a Feed-Forward Network (FFN). To save computational overhead, Tune-A-Video uses the current frame latent $z_{v_i} \in \{z_{v_0}, \ldots, z_{v_{i_{max}}}\}$ as the query for Sparse-Causal Attention. Meanwhile, the previous frame latent $z_{v_{i-1}}$ is combined with the first frame latent $z_{v_1}$ to obtain the key and value. The specific formula is as follows: $Q = W^Q z_{v_i}$, $K = W^K [z_{v_1}, z_{v_{i-1}}]$, $V = W^V [z_{v_1}, z_{v_{i-1}}]$, (6) where $[\cdot]$ denotes the concatenation operation and $W^Q$, $W^K$, and $W^V$ are projection matrices.
However, because there is less information in the early frames of a video, Sparse-Causal Attention does not consider the connection with subsequent frames. As a result, it may lead to inconsistencies between the content at the beginning and the end of the video. To solve this problem, we propose a simple Recurrent-Causal Attention with no increase in computational complexity. In Recurrent-Causal Attention, the key and value are obtained by combining the previous frame latent $z_{v_{i-1}}$ with the current frame latent $z_{v_i}$, not $z_{v_1}$ with $z_{v_{i-1}}$. Notably, the key and value of the first frame latent $z_{v_1}$ are obtained from the last frame latent $z_{v_{i_{max}}}$ together with the first frame latent $z_{v_1}$. This allows the object\u2019s content to propagate throughout the video sequence without adding any computational complexity. The formula for Recurrent-Causal Attention is as follows: $Q = W^Q z_{v_i}$, (7) $K = W^K [z_{v_{i-1}}, z_{v_i}]$ if $i < i_{max}$, else $W^K [z_{v_0}, z_{v_i}]$, (8) $V = W^V [z_{v_{i-1}}, z_{v_i}]$ if $i < i_{max}$, else $W^V [z_{v_0}, z_{v_i}]$.
(9) Figure 2: The overall pipeline of Edit-Your-Motion. Edit-Your-Motion decouples spatial features (object appearance) from temporal features (background and motion information) of the source video using the Detailed Prompt-Guided Learning Strategy (DPL). In the first training stage, Recurrent-Causal Attention (RC-Attn) is guided to learn spatial features. In the second training stage, Temporal Attention (Temp-Attn) is guided to learn temporal features. During inference, the spatial features of the source video are injected into the editing branch through the key and value of Recurrent-Causal Attention, thus keeping the source content and background unchanged. Overall, Recurrent-Causal Attention enables early frames to acquire more comprehensive content information than Sparse-Causal Attention, by establishing a link from the first frame to the last frame.
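At the tensor level, the key/value frame selection of Recurrent-Causal Attention can be sketched with a cyclic shift over the frame axis, following the description that the first frame pairs with the last. Shapes, names, and the omitted $W^K$/$W^V$ projections are assumptions, not the authors' code:

```python
import numpy as np

def rc_attn_kv_tokens(tokens):
    """Recurrent-Causal K/V token construction: for frame i the key/value
    tokens come from [frame i-1, frame i]; frame 0 instead pairs with the
    last frame, so content propagates around the whole clip.
    tokens: (F, N, C) per-frame token sequences."""
    prev = np.roll(tokens, shift=1, axis=0)        # frame i-1 (frame F-1 for i=0)
    return np.concatenate([prev, tokens], axis=1)  # (F, 2N, C), before W^K / W^V

# 3 frames, 2 tokens each, 4 channels; the frame index is stored as token value
frames = np.broadcast_to(np.arange(3.0).reshape(3, 1, 1), (3, 2, 4))
kv = rc_attn_kv_tokens(frames)
assert kv.shape == (3, 4, 4)
assert kv[0, 0, 0] == 2.0  # frame 0 sees the last frame
assert kv[1, 0, 0] == 0.0  # frame 1 sees frame 0
```

Compared with Sparse-Causal Attention (which always pairs with frame 1), this keeps the same number of K/V tokens per frame, so the cost is unchanged.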
3.3 The Detailed Prompt-Guided Learning Strategy
The purpose of diffusion-based video motion editing is to control the motion of objects in the source video based on a reference video with a prompt, while ensuring that the content and background of the objects remain unchanged. The key lies in decoupling the diffusion model's overlapping temporal and spatial features. MotionEditor uses the object's segmentation mask to decouple the object content and the background in the feature layer. However, the decoupled features still overlap, since the spatio-temporal features have already been entangled in the model. In order to decouple overlapping spatio-temporal features, we design the Detailed Prompt-Guided Learning Strategy (DPL). DPL is divided into two training stages: (1) The First Training Stage: Learning Spatial Features from Shuffled Images, and (2) The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Next, we describe the two stages in detail.
The First Training Stage: Learning Spatial Features from Shuffled Images. In this stage, the space-time diffusion model focuses on learning the spatial features of the source object. First, we disrupt the order of the video frames to destroy their temporal information and generate unordered video frames U = {u_i | i in [1, n]}, where n is the length of the video. If we train the model directly on unordered frames, the features of the object and the background will overlap. Such overlapping spatio-temporal features are challenging to decouple later and will lead to interference from background features when controlling object motion.

\fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo
Therefore, we use an existing segmentation network to extract the segmentation mask M for the video frames and mask out the background as:

U^M = U · M, (10)

Z^M_t = E(U^M), (11)

where Z^M_t is the latent feature of U^M and E(·) is the encoder. Then, we utilize an existing skeleton extraction network to obtain the human skeleton S_sr in the source video and feed it into ControlNet along with the prompt P_a:

C_sr = ControlNet(S_sr, P_a), (12)

where C_sr is the pose feature of the source video. Next, we freeze the other parameters and activate only Recurrent-Causal Attention. Finally, we feed P_a and C_sr into the space-time diffusion model for training. The reconstruction loss can be written as follows:

L_rec = E_{z^m_t, ε∼N(0,1), t, P_a, C_sr} [ ||ε - ε^{3D}_θ(z^m_t, t, P_a, C_sr)||_2^2 ]. (13)

The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Unlike the first training stage, we restore the temporal relationship of the video frames. We then guide the space-time diffusion model to learn the temporal features of motion and background from the ordered video frames V = {v_i | i in [1, n]}. Specifically, we construct a new prompt P_s, which adds a description of the motion to P_a.
Then, Temporal Attention is activated to learn motion features while the other parameters are frozen. To smooth the video, we add a Noise Constraint Loss [31], which can be written as follows:

L_noise = (1/(n-1)) Σ_{i=1}^{n-1} ||ε^{f_i}_{z_t} - ε^{f_{i+1}}_{z_t}||_2^2, (14)

where f_i denotes the i-th frame of the video and ε^{f_i}_{z_t} is the noise prediction at timestep t. The total loss for the second training stage is constructed as follows:

L_Total = (1 - λ) L_noise + λ L_rec, (15)

where L_rec is constructed from the ordered video frames V without the segmentation mask M, and λ is set to 0.9.

3.4 Inference Pipelines
In the inference stage, we first extract the human skeleton S_rf from the reference video to guide motion generation. Then, to ensure that the object's content and background are unchanged, we use a two-branch architecture (a reconstruction branch and an editing branch) similar to [45] to inject the object's content and background features into the editing branch. Specifically, we input the latent noise z_s from the source video's DDIM inversion together with P_a into the reconstruction branch, and simultaneously input z_s and P_t into the editing branch.
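Before continuing with the pipeline, the two second-stage losses above (Eqs. (14)-(15)) can be sketched numerically. This is a minimal NumPy sketch of our own, assuming per-frame noise predictions flattened to vectors; it is not the authors' implementation:

```python
import numpy as np

def noise_constraint_loss(eps):
    """Eq. (14): mean squared L2 distance between the noise
    predictions of neighbouring frames; eps has shape [n_frames, dim]."""
    diffs = eps[1:] - eps[:-1]
    return float(np.mean(np.sum(diffs ** 2, axis=-1)))

def total_loss(l_noise, l_rec, lam=0.9):
    """Eq. (15): L_Total = (1 - lambda) * L_noise + lambda * L_rec,
    with lambda = 0.9 as in the paper."""
    return (1 - lam) * l_noise + lam * l_rec
```

Note that with λ = 0.9 the reconstruction term dominates and the noise-constraint term acts as a smoothness regularizer.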
Then, we input the human skeleton S_rf from the reference video and P_t into ControlNet to obtain the feature C_rf:

C_rf = ControlNet(S_rf, P_t), (16)

where C_rf is the pose feature of the reference video, used to guide the generation of motion in the editing branch. Next, we inject the spatial features from the reconstruction branch into the editing branch. Because the temporal relationship was disrupted and the background was masked out in the first training stage of DPL, we can directly inject the keys and values of the RC-Attn in the reconstruction branch into the editing branch without needing segmentation masks. The specific formulas can be written as:

K_r = W^K z^s_{v_i}, V_r = W^V z^s_{v_i}, (17)

K_e = [W^K z^e_{v_{i-1}}, W^K z^e_{v_i}, K_r], V_e = [W^V z^e_{v_{i-1}}, W^V z^e_{v_i}, V_r], (18)

where e denotes the editing branch and r denotes the reconstruction branch. In the end, we obtain the edited video.

4 EXPERIMENTS
4.1 Implementation Details
Our proposed Edit-Your-Motion is based on the Latent Diffusion Model [36] (Stable Diffusion).
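Returning to the inference-time injection of Eqs. (17)-(18): the key/value concatenation can be sketched as follows. This is our own minimal NumPy illustration (token and shape conventions are assumed), not the authors' code:

```python
import numpy as np

def inject_kv(z_e_prev, z_e_cur, z_s_cur, W_K, W_V):
    """Eqs. (17)-(18): append the reconstruction-branch keys/values
    (computed from the source latent z_s_cur) to the editing branch's
    own keys/values, so edited frames can attend to the source
    appearance and background without segmentation masks."""
    K_r = z_s_cur @ W_K               # reconstruction-branch key
    V_r = z_s_cur @ W_V               # reconstruction-branch value
    K_e = np.concatenate([z_e_prev @ W_K, z_e_cur @ W_K, K_r], axis=0)
    V_e = np.concatenate([z_e_prev @ W_V, z_e_cur @ W_V, V_r], axis=0)
    return K_e, V_e
```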
The data in this article comes from TaichiHD [40] and YouTube video datasets, in which each video has a minimum of 70 frames. During training, we fine-tune for 300 steps in each of the two training stages at a learning rate of 3 × 10⁻⁵. For inference, we use the DDIM sampler [42] with no classifier guidance [15] in our experiments. For each video, the fine-tuning takes about 15 minutes on a single NVIDIA A100 GPU.

4.2 Comparison Methods
To demonstrate the superiority of our Edit-Your-Motion, we select methods from motion customization, pose-guided video generation, video content editing, and video motion editing as comparison methods. (1) Tune-A-Video [51]: the first work on one-shot video editing; it inflates a pre-trained T2I diffusion model to 3D to handle the video task. (2) MotionEditor¹ [45]: the first work on video motion editing that maintains the object content and background unchanged. (3) Follow-Your-Pose [26]: generates pose-controllable videos using two-stage training. (4) MotionDirector [66]: generates motion-aligned videos by decoupling appearance and motion in reference videos for video motion customization.

4.3 Evaluation
Our method can edit the motion of objects in the source video by using the reference video and a prompt, without changing the object content and the background. Fig. 4 shows some of our examples. As can be seen, our proposed Edit-Your-Motion accurately controls the motion and preserves the object's content and background well. More cases are provided in the appendix.
Qualitative Results. Fig. 3 shows the results of the visual comparison of Edit-Your-Motion with other comparison methods on 25 in-the-wild cases. Although Follow-Your-Pose and MotionDirector can align well with the motion of the reference video, it is difficult for them to
¹Since the article's code is not provided, the experimental results in this paper are obtained by replication.
Figure 3: Qualitative comparison with state-of-the-art methods. Compared to other baselines, Edit-Your-Motion successfully achieves motion alignment with the reference video and maintains the content consistency of the background and objects.

maintain consistency between the object content and background in both the source and reference videos. This demonstrates that generating a specific background and content using only text prompts is difficult. Tune-A-Video and MotionEditor show noticeable content changes. In addition, MotionEditor shows motion overlap (arms) caused by its use of the segmentation mask to decouple overlapping features. In contrast to the above, our proposed Edit-Your-Motion aligns the motion of the edited video and the reference video well and preserves the content and background of the objects in the source video intact. This also demonstrates the effectiveness of our method in video motion editing.
Quantitative Results. We evaluate the methods with automatic evaluations and human evaluations on 25 in-the-wild cases.
Automatic Evaluations. To quantitatively assess the differences between our proposed Edit-Your-Motion and other comparative methods, we use the following metrics: (1) Text Alignment (TA): we use CLIP [34] to compute the average cosine similarity between the prompt and the edited frames. (2) Temporal Consistency (TC): we use CLIP to obtain image features and compute the average cosine similarity between neighbouring video frames.
(3) LPIPS-N (L-N): we calculate the Learned Perceptual Image Patch Similarity [62] between neighbouring edited frames. (4) LPIPS-S (L-S): we calculate the Learned Perceptual Image Patch Similarity between edited frames and source frames.

Table 1: Quantitative evaluation using CLIP and LPIPS. TA, TC, L-N, L-S represent Text Alignment, Temporal Consistency, LPIPS-N and LPIPS-S, respectively.

Method | TA ↑ | TC ↑ | L-N ↓ | L-S ↓
Follow-Your-Pose [26] | 0.236 | 0.913 | 0.213 | 0.614
MotionDirector [66] | 0.239 | 0.872 | 0.141 | 0.430
Tune-A-Video [51] | 0.278 | 0.934 | 0.137 | 0.359
MotionEditor [45] | 0.286 | 0.948 | 0.102 | 0.300
Ours | 0.289 | 0.950 | 0.109 | 0.276

Table 1 shows the quantitative results of Edit-Your-Motion and the other comparative methods. The results show that Edit-Your-Motion outperforms the other methods on all metrics except L-N, on which it is second best.
User Study. We invited 70 participants to take part in the user study. Each participant could see the source video, the reference video, and the results of ours and the other comparison methods. For each case, we paired the results of Edit-Your-Motion with the results of each of the four comparison methods. Then, we set three

Figure 4: Some examples of motion editing results for Edit-Your-Motion.

Table 2: User study. Higher indicates that users prefer our Edit-Your-Motion. TA, CA, and MA represent Text Alignment, Content Alignment, and Motion Alignment, respectively.
Method | TA | CA | MA
Follow-Your-Pose [26] | 87.142% | 96.663% | 90.953%
MotionDirector [66] | 94.522% | 96.190% | 86.188%
Tune-A-Video [51] | 78.810% | 82.145% | 84.047%
MotionEditor [45] | 76.428% | 82.380% | 80.950%

questions to evaluate Text Alignment, Content Alignment and Motion Alignment. The three questions are \"Which is more aligned to the text prompt?\", \"Which is more content aligned to the source video?\" and \"Which is more motion aligned to the reference video?\". Table 2 shows that our method outperforms the other compared methods in all three aspects.

4.4 Ablation Study
To verify the effectiveness of the proposed modules, we show the results of the ablation experiments in Fig. 5. In column 3, we replace RC-Attn with Sparse Attention, which makes the first frame inconsistent with the object content in the subsequent frames. This shows that RC-Attn establishes content consistency over the entire sequence better than Sparse Attention. In column 4, removing the Noise Constraint Loss (NCL) affects the smoothness between frames, causing the background to be inconsistent across frames. In column 5, we train RC-Attn and Temporal Attention in a single training stage. However, the lack of spatio-temporal decoupling results in the background and object content interfering with each other, generating undesirable edited videos. At the same time, this demonstrates the effectiveness of DPL in decoupling time and space.

Figure 5: Some examples of video motion editing results for Edit-Your-Motion.
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04534v1.json b/abs_9K/test_abstract_short_2405.04534v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1ae3d0d7532549497c3d3c16a4e22a53edbb2cf7 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04534v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04534v1", + "title": "Tactile-Augmented Radiance Fields", + "abstract": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. 
Project page:\nhttps://dou-yiming.github.io/TaRF", + "authors": "Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF", + "main_content": "Introduction As humans, our ability to perceive the world relies crucially on cross-modal associations between sight and touch [19, 50]. Tactile sensing provides a detailed understanding of material properties and microgeometry, such as the intricate patterns of bumps on rough surfaces and the complex motions that soft objects make when they deform. 
This type of understanding, which largely eludes today's computer vision models, is a critical component of applications that require reasoning about physical contact, such as robotic locomotion [3, 24, 31, 34, 37, 38] and manipulation [6, 7, 11, 42, 60], and methods that simulate the behavior of materials [4, 13, 40, 41]. In comparison to many other modalities, collecting tactile data is an expensive and tedious process, since it requires direct physical interaction with the environment. A recent line of work has addressed this problem by having humans or robots probe the environment with touch sensors (see Table 1). Early efforts have been focused on capturing the properties of only a few objects either in simulation [16, 17, 52] or in lab-controlled settings [6, 7, 18, 28, 35, 52, 63], which may not fully convey the diversity of tactile signals in natural environments. Other works have gone beyond a lab setting and have collected touch from real scenes [5, 56].

arXiv:2405.04534v1 [cs.CV] 7 May 2024

Table 1. Dataset comparison. We present the number of real visual-tactile pairs and whether such pairs are visually aligned, i.e., whether the visual image includes an occlusion-free view of the touched surface. *YCB-Slide has real-world touch probes but synthetic images rendered with CAD models of YCB objects on a white background [9].

Dataset | Samples | Aligned | Scenario | Source
More Than a Feeling [7] | 6.5k | ✗ | Tabletop | Robot
Feeling of Success [6] | 9.3k | ✗ | Tabletop | Robot
VisGel [35] | 12k | ✗ | Tabletop | Robot
SSVTP [28] | 4.6k | ✓ | Tabletop | Robot
ObjectFolder 1.0 [16] | – | ✓ | Object | Synthetic
ObjectFolder 2.0 [17] | – | ✓ | Object | Synthetic
ObjectFolder Real [18] | 3.7k | ✗ | Object | Robot
Burka et al. [5] | 1.1k | ✗ | Sub-scene | Human
Touch and Go [56] | 13.9k | ✗ | Sub-scene | Human
YCB-Slide* [52] | | ✓ | Object | Human
Touching a NeRF [63] | 1.2k | ✓ | Object | Robot
TaRF (Ours) | 19.3k | ✓ | Full scene | Human
However, existing datasets lack aligned visual and tactile information, since the touch sensor and the person (or robot) that holds it often occlude large portions of the visual scene (Fig. 2). These datasets also contain only a sparse set of touch signals for each scene, and it is not clear how the sampled touch signals relate to each other in 3D. In this work, we present a simple and low-cost procedure to capture quasi-dense, scene-level, and spatially-aligned visual and touch data (Fig. 1). We call the resulting scene representation a tactile-augmented radiance field (TaRF). We remove the need for robotic collection by leveraging a 3D scene representation (a NeRF [39]) to synthesize a view of the surface being touched, which results in spatially aligned visual-tactile data (Fig. 2). We collect this data by mounting a touch sensor to a camera with commonly available materials (Fig. 3). To calibrate the pair of sensors, we take advantage of the fact that popular vision-based touch sensors [25, 26, 32, 48] are built on ordinary cameras. The relative pose between the vision and tactile sensors can thus be estimated using traditional methods from multi-view geometry, such as camera resectioning [20]. We use this procedure to collect a large real-world dataset of aligned visual-tactile data. With this dataset, we train a diffusion model [45, 51] to estimate touch at locations not directly probed by a sensor. In contrast to the recent work of Zhong et al. [63], which also estimates touch from 3D NeRF geometry, we create scene-scale reconstructions, we do not require robotic proprioception, and we use diffusion models [51]. This enables us to obtain tactile data at a much larger scale, and with considerably more diversity. Unlike previous visual-tactile diffusion work [57], we condition the model on spatially aligned visual and depth information, enhancing the generated samples\u2019 quality and their usefulness in downstream applications. 
After training, the diffusion model can be used to predict tactile information for novel positions in the scene.

Figure 2. Visual-tactile examples. In contrast to the visual-tactile data captured in previous work, our approach allows us to sample unobstructed images that are spatially aligned with the touch signal, from arbitrary 3D viewpoints using a NeRF.

Analogous to quasi-dense stereo methods [15, 33], the diffusion model effectively propagates sparse touch samples, obtained by probing, to other visually and structurally similar regions of the scene. We evaluate our visual-tactile model's ability to accurately perform cross-modal translation using a variety of quality metrics. We also apply it to several downstream tasks, including localizing a touch within a scene and understanding material properties of the touched area. Our experiments suggest:
• Touch signals can be localized in 3D space by exploiting multi-view geometry constraints between sight and touch.
• Estimated touch measurements from novel views are not only qualitatively accurate, but also beneficial on downstream tasks.
• Cross-modal prediction models can accurately estimate touch from sight for natural scenes.
• Visually-acquired 3D scene geometry improves cross-modal prediction.

2. Related Work
Visual-tactile datasets. Previous work has either used simulators [16, 17] or robotic arms [6, 8, 18, 35, 63] for data generation. Our work is closely related to that of Zhong et al. [63], which uses a NeRF and captured touch data to generate a tactile field for several small objects. They use the proprioception of an expensive robot to spatially align vision and touch. In contrast, we leverage the properties of the tactile sensor and novel view synthesis, using commonly available materials (a smartphone and a selfie stick) to align vision and touch.
This enables the collection of a larger, scene-level, and more diverse dataset, on which we train a higher-capacity diffusion model (rather than a conditional GAN). Like several previous works [5, 56], we also collect scene-level data. In contrast to them, we spatially align the signals by registering them in a unified 3D representation, thereby increasing the prediction power of the visual-tactile generative model.
Capturing multimodal 3D scenes. Our work is related to methods that capture 3D visual reconstructions of spaces using RGB-D data [12, 49, 55, 59] and multimodal datasets of paired 3D vision and language [1, 2, 10]. Our work is also related to recent methods that localize objects in NeRFs using joint embeddings between images and language [29] or by semantic segmentation [62]. In contrast to language supervision, touch is tied to a precise position in a scene.
3D touch sensing. A variety of works have studied the close relationship between geometry and touch, motivating our use of geometry in imputing touch. Johnson et al. [25, 26] proposed vision-based touch sensing, and showed that highly accurate depth can be estimated from the touch sensor using photometric stereo. Other work has estimated object-scale 3D from touch [54]. By contrast, we combine sparse estimates of touch with quasi-dense tactile signals estimated using generative models.
Cross-modal prediction of touch from sight. Recent work has trained generative models that predict touch from images. Li et al. [35] used a GAN to predict touch for images of a robotic arm, while Gao et al. [18] applied them to objects collected on a turntable. Yang et al. [57] used latent diffusion to predict touch from videos of humans touching objects. Our goal is different from these works: we want to predict touch signals that are spatially aligned with a visual signal, to exploit scene-specific information, and to use geometry.
Thus, we use a different architecture and conditioning signal, and fit our model to examples from the same scenes at training and test time. Other work has learned joint embeddings between vision and touch [28, 36, 56, 58, 61].

3. Method
We collect visual and tactile examples from a scene and register them together with a 3D visual reconstruction to build a TaRF. Specifically, we capture a NeRF F_θ : (x, r) ↦ (c, σ) that maps a 3D point x = (x, y, z) and viewing direction r to its corresponding RGB color c and density σ [39]. We associate to the visual representation a touch model F_φ : v_t ↦ τ that generates the tactile signal that one would obtain by touching at the center of the image v_t. In the following, we explain how to estimate F_θ and F_φ and put them into the same shared 3D space.

3.1. Capturing vision and touch signals
Obtaining a visual 3D reconstruction. We build the visual NeRF, F_θ, closely following previous work [12, 55]. A human data collector moves through a scene and records a video, covering as much of the space as possible. We then estimate camera pose using structure from motion [47] and create a NeRF using off-the-shelf packages [53]. Additional details are provided in the supplement.
Capturing and registering touch. We simultaneously collect tactile and visual signals by mounting a touch sensor
Finally, we use the calibration of the mount to obtain the poses {pt i}N i=1 of the tactile measurements with respect to the scene\u2019s global reference frame. As a collection device, we mount an iPhone 14 Pro to one end of a camera rod, and a DIGIT [32] touch sensor to the other end. Note that the devices can be replaced with any RGB-D camera and vision-based tactile sensor. Capturing setup calibration. To find the relative pose between the camera and the touch sensor (Fig. 3), we exploit the fact that arbitrary viewpoints can be synthesized from F\u03b8, and that ubiquitous vision-based touch sensors are based on perspective cameras. In these sensors, an elastomer gel is placed on the lens of a commodity camera, which is illuminated by colored lights. When the gel is pressed into an object, it deforms, and the camera records an image of the deformation; this image is used as the tactile signal. This design allows us to estimate the pose of the tactile sensor through multi-view constraints from visualtactile correspondences: pixels in visual images and tactile images that are of the same physical point. We start the calibration process by synthesizing novel views from F\u03b8. The views are generated at the camera location {pv i }N i=1, but rotated 90\u25e6on the x-axis. This is because the camera is approximately orthogonal to the touch sensor (see Fig. 3). Then, we manually annotate corresponding pixels between the touch measurements and the generated frames (Fig. 3). To simplify and standardize this process, we place a braille board in each scene and probe it with the touch sensor. This will generate a distinctive touch signal that is easy to localize [23]. We formulate the problem of estimating the six degrees of freedom relative pose (R, t) between the touch sensor and the generated frames as a resectioning problem [20]. We use the estimated 3D structure from the NeRF F\u03b8 to obtain 3D points {xi}M i=1 for each of the annotated corre3 \fspondences. 
Each point has a pixel position u_i ∈ R² in the touch measurement. We find (R, t) by minimizing the reprojection error:

min_{R,t} (1/M) Σ_{i=1}^M ||π(K[R | t], X_i) - u_i||_1, (1)

where π projects a 3D point using a given projection matrix, K are the known intrinsics of the tactile sensor's camera, and the point X_i is in the coordinate system of the generated vision frames. We perform the optimization on 6-15 annotated correspondences from the braille board. For robustness, we compute correspondences from multiple frames. We represent the rotation matrix using quaternions and optimize using nonlinear least-squares. Once we have (R, t) with respect to the generated frames, we can derive the relative pose between the camera and the touch sensor.

3.2. Imputing the missing touch
We use a generative model to estimate the touch signal (represented as an image from a vision-based touch sensor) for other locations within the scene. Specifically, we train a diffusion model p_φ(τ | v, d, b), where v and d are images and depth maps extracted from F_θ (see Fig. 4). We also pass as input to the diffusion model a background image captured by the touch sensor when it is not in contact with anything, denoted as b. Although not essential, we have observed that this additional input empirically improves the model's performance (e.g., in Fig. 1 the background provides the location of defects in the gel, which appear as black dots). We train the model p_φ on our entire vision-touch dataset (Sec. 4). The training of p_φ is divided into two stages. In the first, we pre-train a cross-modal visual-tactile encoder with self-supervised contrastive learning on our dataset. This stage, initially proposed by [23, 57], is equivalent to the self-supervised encoding pre-training that is common for image generation models [45].
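The first-stage contrastive pre-training can be sketched with a standard symmetric InfoNCE-style objective over paired visual/tactile embeddings. This is our own minimal sketch under that assumption; the paper specifies only that a cross-modal contrastive encoder is pre-trained, not this exact loss:

```python
import numpy as np

def info_nce(vis, tac, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired visual/tactile
    embeddings (shape [B, d]); matching pairs share a row index."""
    v = vis / np.linalg.norm(vis, axis=-1, keepdims=True)
    t = tac / np.linalg.norm(tac, axis=-1, keepdims=True)
    logits = (v @ t.T) / temperature                      # [B, B] similarities
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    loss_v2t = -np.mean(np.diag(logp))                    # visual -> tactile
    logp_t = logits.T - np.log(np.exp(logits.T).sum(axis=-1, keepdims=True))
    loss_t2v = -np.mean(np.diag(logp_t))                  # tactile -> visual
    return 0.5 * (loss_v2t + loss_t2v)
```

Matched pairs (the diagonal of the similarity matrix) are pulled together while mismatched visual/tactile pairs in the batch act as negatives.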
We use a ResNet-50 [21] as the backbone for this contrastive model. In the second stage, we use the contrastive model to generate the input for a conditional latent diffusion model, which is built upon Stable Diffusion [45]. A frozen pretrained VQ-GAN [14] is used to obtain the latent representation with a spatial dimension of 64 \u00d7 64. We start training the diffusion model from scratch and pre-train it on the task of unconditional tactile image generation on the YCB-Slide dataset [52]. After this stage, we train the conditional generative model p\u03d5 on our spatially aligned visual-tactile dataset, further fine-tuning the contrastive model end-to-end with the generation task. At inference time, given a novel location in the 3D scene, we first render the visual signals \u02c6v and \u02c6d from the NeRF, and then estimate the touch signal \u02c6\u03c4 of the position using the diffusion model.
Figure 4. Touch estimation. We estimate the tactile signal for a given touch sensor pose (R, t). To do this, we synthesize a viewpoint from the NeRF, along with a depth map. We use conditional latent diffusion to predict the tactile signal from these inputs.
4. A 3D Visual-Tactile Dataset In the following, we show the details of the data collection process and statistics of our dataset. 4.1. Data Collection Procedure The data collection procedure is divided into two stages. First, we collect multiple views from the scene, capturing enough frames around the areas we plan to touch. During this stage, we collect approximately 500 frames. Next, we collect synchronized visual and touch data, maximizing the geometry and texture being touched. We then estimate the camera locations of the vision frames collected in the previous two stages using off-the-shelf mapping tools [47].
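The pose bookkeeping in this pipeline is plain rigid-transform composition: given a camera pose in the scene frame and the fixed camera-to-touch calibration from Sec. 3.1, the touch sensor's scene pose is their product. A minimal illustrative sketch with 4x4 homogeneous matrices (function names are our own, not the paper's):

```python
import numpy as np

def make_pose(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def touch_pose_in_scene(T_scene_cam, T_cam_touch):
    """Scene-frame pose of the touch sensor: compose the camera's scene pose
    with the calibrated camera-to-touch transform."""
    return T_scene_cam @ T_cam_touch
```

Applied per frame, the same composition turns the camera poses estimated by the mapping tool into the tactile measurement poses.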
After estimating the camera poses for the vision frames, the touch measurements\u2019 poses can be derived by using the mount calibration matrix. More details about the pose estimation procedure can be found in the supplement. Finally, we associate each touch measurement with a color image by translating the sensor poses upwards by 0.4 meters and querying the NeRF with such poses. The field of view we use when querying the NeRF is 50\u25e6. This provides us with approximately 1,500 temporally aligned vision-touch image pairs per scene. Note that this collection procedure is scalable since it does not require specific expertise or equipment and generates abundant scene-level samples. 4.2. Dataset Statistics We collect our data in 13 ordinary scenes including two offices, a workroom, a conference room, a corridor, a tabletop, a corridor, a lounge, a room with various clothes and four outdoor scenes with interesting materials. Typically, we collect 1k to 2k tactile probes in each scene, resulting in a total of 19.3k image pairs in the dataset. Some representative samples from the collected dataset are shown in Fig. 5. Our data includes a large variety of geometry (edges, surfaces, corners, etc.) and texture (plastic, clothes, snow, wood, etc.) of different materials in the scene.
Figure 5. Representative examples from the captured dataset. Our dataset is obtained from nine everyday scenes, such as offices, classrooms, and kitchens. We show three such scenes in the figure above, together with samples of spatially aligned visual and tactile data. In each scene, 1k to 2k tactile probes were collected, resulting in a total of 19.3k image pairs. The data encompasses diverse geometries (edges, surfaces, corners, etc.) and textures (plastic, clothes, snow, wood, etc.) of various materials. The collector systematically probed different objects, covering areas with distinct geometry and texture using different sensor poses.
During the capturing process, the collector will try to
thoroughly probe various objects and cover the interesting areas with more distinguishable geometry and texture with different sensor poses. To the best of our knowledge, ours is the first dataset that captures full, scene-scale, spatially aligned vision-touch image pairs. We provide more details about the dataset in the supplement. 5. Experiments Leveraging the spatially aligned image and touch pairs from our dataset, we first conduct experiments on dense touch estimation. We then show the effectiveness of both the aligned data pairs and the synthesized touch signals by conducting tactile localization and material classification as two downstream tasks. 5.1. Implementation Details NeRF. We use the Nerfacto method from Nerfstudio [53]. For each scene, we utilize approximately 2,000 images as the training set, which thoroughly cover the scene from various viewpoints. We train the network with a base learning rate of 1 \u00d7 10\u22122 using the Adam [30] optimizer for 200,000 steps on a single NVIDIA RTX 2080 Ti GPU to achieve optimal performance. Visual-tactile contrastive model. Following prior works [27, 57], we leverage contrastive learning methods to train a ResNet-50 [21] as the visual encoder. The visual and tactile encoders share the same architecture but have different weights. We encode visual and tactile data into latent vectors in the resulting shared representation space. We set the dimension of the latent vectors to 32. Similar to CLIP [43], the model is trained with an InfoNCE loss computed from the pairwise dot products of the latent vectors. We train the model for 20 epochs with the Adam [30] optimizer, a learning rate of 10\u22124, and a batch size of 256 on 4 NVIDIA RTX 2080 Ti GPUs. Visual-tactile generative model. Our implementation of the diffusion model closely follows Stable Diffusion [46], with the difference that we use a ResNet-50 to generate the visual encoding from RGB-D images for conditioning.
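The contrastive objective described above (a CLIP-style symmetric InfoNCE over pairwise dot products of the visual and tactile latents) can be sketched in NumPy as follows; precomputed embeddings stand in for the ResNet-50 encoders, and the temperature value is an assumption, not a number from the paper:

```python
import numpy as np

def log_softmax(x):
    """Row-wise log-softmax, numerically stabilized."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce(z_v, z_t, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired visual (z_v) and tactile (z_t)
    embeddings of shape (B, D); matched pairs share a batch index."""
    z_v = z_v / np.linalg.norm(z_v, axis=1, keepdims=True)
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    logits = z_v @ z_t.T / temperature                 # (B, B) pairwise similarities
    loss_v2t = -np.diag(log_softmax(logits)).mean()    # match each view to its touch
    loss_t2v = -np.diag(log_softmax(logits.T)).mean()  # and each touch to its view
    return 0.5 * (loss_v2t + loss_t2v)
```

Training pushes matched visual-tactile pairs (the diagonal of the logits matrix) above all mismatched pairs in both directions, which is what later enables similarity-based localization and re-ranking.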
Specifically, we also add the RGB-D images rendered from the tactile sensors\u2019 poses into the conditioning, which we refer to in Sec. 5.2 as multiscale conditioning. The model is optimized for 30 epochs with the Adam [30] optimizer with a base learning rate of 10\u22125. The learning rate is scaled by the number of GPUs \u00d7 batch size. We train the model with a batch size of 48 on 4 NVIDIA A40 GPUs. At inference time, the model performs 200 denoising steps with a 7.5 guidance scale. Following prior cross-modal synthesis work [44], we use reranking to improve the prediction quality. We obtain 16 samples from the diffusion model for every instance and re-rank the samples with our pretrained contrastive model. The sample with the highest similarity is the final prediction. 5.2. Dense Touch Estimation Experimental setup. We now evaluate the diffusion model\u2019s ability to generate touch images. To reduce overlap between the training and test set, we first split the frames into sequences temporally (following previous work [56]). We split them into sequences of 50 touch samples, then divide these sequences into train/validation/test with a ratio of 8/1/1. We evaluate the generated samples on Frechet Inception Distance (FID), a standard evaluation metric for cross-modal generation [56]. We also include Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), though we note that these metrics are highly sensitive to the spatial position of the generated content, and can be optimized by models that minimize simple pixelwise losses [22]. We also include the CVTP metric proposed by prior work [57], which measures the similarity between visual and tactile embeddings of a contrastive model, analogous to the CLIP [43] score.
Figure 6. Qualitative touch estimation results. Each model is conditioned on the RGB image and depth map rendered from the NeRF (left). The white box indicates the tactile sensor\u2019s approximate field of view (which is much smaller than the full conditional image). The G.T. column shows the ground truth touch images measured from a DIGIT sensor. L1 and VisGel often generate blurry textures and inaccurate geometry. By contrast, our model better captures the features of the tactile image, e.g., the rock\u2019s microgeometry and complex textures and shapes of furniture. The last row shows two failure cases of our model. In both examples, our model generates a touch image that is geometrically misaligned with the ground truth. All of the examples shown here are at least 10cm away from any training sample.
We compare against two baselines: VisGel, the approach from Li et al. [35], which trains a GAN for touch generation, and L1, a model with the same architecture as VisGel but trained to minimize an L1 loss in pixel space. Results. As shown in Table 2, our approach performs much better on the high-level metrics, with up to 4x lower FID and 80x higher CVTP. This indicates that our proposed diffusion model captures the distribution and characteristics of the real tactile data more effectively. On the low-level metrics (PSNR and SSIM), all methods are comparable. In particular, the L1 model slightly outperforms the other methods since the loss it is trained on is highly correlated with low-level, pixel-wise metrics. Fig. 6 qualitatively compares samples from the different models. Indeed, our generated samples exhibit enhanced details in the micro-geometry of fabrics and richer textures, including snow, wood and carpeting. However, all methods fail on fine details that are barely visible in the image, such as the tree bark. Ablation study. We evaluate the importance of the main components of our proposed touch generation approach (Table 3). Removing the conditioning on the RGB image results in the most prominent performance drop.
This is expected since the RGB image uniquely determines the fine-grained details of a tactile image.
Table 2. Quantitative results on touch estimation for novel views. While comparable on low-level metrics with the baselines, our approach captures the characteristics of the real tactile data more effectively, resulting in a lower FID score.
Model | PSNR \u2191 | SSIM \u2191 | FID \u2193 | CVTP \u2191
L1 | 24.34 | 0.82 | 97.05 | 0.01
VisGel [35] | 23.66 | 0.81 | 130.22 | 0.03
Ours | 22.84 | 0.72 | 28.97 | 0.80
Removing the depth image or contrastive pretraining has a small effect on CVTP but results in a drop on FID. Contrastive re-ranking largely improves CVTP, indicating the necessity of obtaining multiple samples from the diffusion model. We also find that multiscale conditioning provides a small benefit on FID and CVTP. 5.3. Downstream Task I: Tactile Localization To help understand the quality of the captured TaRFs, we evaluate the performance of the contrastive model (used for conditioning our diffusion model) on the task of tactile localization. Given a tactile signal, our goal is to find the corresponding regions in a 2D image or in a 3D scene that are associated with it, i.e., we ask the question: what part of this image/scene feels like this? We perform the following
Figure 7. Tactile localization heatmaps. Given a tactile query image, the heatmap shows the image patches with a higher affinity to this tactile signal, as measured by a contrastive model trained on our dataset. We use a sliding window and compare each extracted patch with the touch signal. In each case, the center patch is the true position. Our model successfully captures the correlation between the two signals. This enables it to localize a variety of touch signals, including fine-grained geometry, e.g., a cable or a keyboard, various types of corners and edges, and large uniform regions, such as clothing.
This ability enables our diffusion model to effectively propagate sparse touch samples to other visually and structurally similar regions of the scene.
Table 3. Ablation study. Since the fine-grained details of touch images can be determined from an RGB image, removing conditioning on the latter results in the largest performance drops. Re-ranking has a notable impact on CVTP, indicating the necessity of obtaining multiple samples from the diffusion model.
Model variation | PSNR \u2191 | SSIM \u2191 | FID \u2193 | CVTP \u2191
Full | 22.84 | 0.72 | 28.97 | 0.80
No RGB conditioning | 22.13 | 0.70 | 34.31 | 0.76
No depth conditioning | 22.57 | 0.71 | 33.16 | 0.80
No contrastive pretraining | 22.82 | 0.71 | 32.98 | 0.79
No re-ranking | 22.92 | 0.72 | 29.46 | 0.61
No multiscale | 23.19 | 0.72 | 30.89 | 0.77
evaluations on the test set of our dataset. Note that we run no task-specific training. 2D Localization. To determine which parts of an image are associated with a given tactile measurement, we follow the same setup as SSVTP [28]. We first split the image into patches and compute their embeddings. Then, we generate the tactile embedding of the input touch image. Finally, we compute the pairwise similarities between the tactile and visual embeddings, which we plot as a heatmap. As we can see in Fig. 7, our contrastive encoder successfully captures the correlations between the visual and tactile data. For instance, the tactile embeddings of edges are associated with edges of similar shape in the visual image. Note that the majority of tactile embeddings are highly ambiguous: all edges with a similar geometry feel the same. 3D Localization. In 3D, the association of an image to tactile measurements becomes less ambiguous. Indeed, since tactile-visual samples are rotation-dependent, objects with similar shapes but different orientations will generate different tactile measurements. Lifting the task to 3D still does not remove all ambiguities (for example, each side of a rectangular table cannot be precisely localized).
Nonetheless, we believe it to be a good fit for a quantitative evaluation since it\u2019s rare for two ambiguous parts of the scene to be touched with exactly the same orientation. We use the following experimental setup for 3D localization. Given a tactile image as a query, we compute its distance in embedding space to all visual test images from the same scene. Note that all test images are associated with a 3D location. We define as ground-truth correspondences all test images at a distance of at most r from the 3D location of the test sample. We vary r to account for local ambiguities. As is typical in the retrieval literature, we benchmark the performance with the mean Average Precision (mAP) metric. We consider three baselines: (1) chance, which randomly selects corresponding samples; (2) real, which uses the contrastive model trained on our dataset; and (3) real + estimated, which trains the contrastive model on both dataset samples and a set of synthetic samples generated via the scenes\u2019 NeRF and our touch generation model. Specifically, we render a new image and corresponding touch by interpolating the position of two consecutive frames in the training dataset. This results in a training dataset for the contrastive model that is twice as large.
Table 4. Quantitative results on 3D tactile localization. We evaluate using mean Average Precision (mAP) as a metric. Training the contrastive model on our dataset of visually aligned real samples together with estimated samples from new locations in the scene results in the highest performance.
Dataset | r = 0.001 m | r = 0.005 m | r = 0.01 m | r = 0.05 m | r = 0.1 m
Chance | 3.55 | 6.82 | 10.25 | 18.26 | 21.33
Real | 12.10 | 22.93 | 32.10 | 50.30 | 57.15
Real + Est. | 14.92 | 26.69 | 36.17 | 53.62 | 60.61
The results, presented in Table 4, demonstrate the performance benefit of employing both real and synthetic tactile pairs. Combining synthetic tactile images with the original pairs achieves the highest performance on all distance thresholds.
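The 3D localization metric above can be sketched as follows (a toy evaluation with synthetic embeddings and positions, not the paper's code): rank gallery images by embedding similarity, and mark as relevant those whose 3D location lies within radius r of the query's.

```python
import numpy as np

def average_precision(ranked_relevance):
    """AP for one query, given binary relevance of gallery items in ranked order."""
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return float(np.mean(precisions)) if precisions else 0.0

def localization_map(q_emb, g_emb, q_pos, g_pos, r):
    """mAP for 3D tactile localization: rank gallery images by embedding
    similarity; an item is relevant if within distance r of the query's 3D location."""
    aps = []
    for ze, zp in zip(q_emb, q_pos):
        order = np.argsort(-(g_emb @ ze))                       # best match first
        rel = np.linalg.norm(g_pos[order] - zp, axis=1) <= r    # within radius r?
        if rel.any():
            aps.append(average_precision(rel))
    return float(np.mean(aps))
```

Sweeping r, as in Table 4, simply reruns the same evaluation with looser or tighter ground-truth neighborhoods.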
Overall, this indicates that touch measurements from novel views are not only qualitatively accurate, but also beneficial for this downstream task. 5.4. Downstream Task II: Material Classification We investigate the efficacy of our visual-tactile dataset for understanding material properties, focusing on the task of material classification. We follow the formulation by Yang et al. [56], which consists of three subtasks: (i) material classification, requiring the distinction of materials among 20 possible classes; (ii) softness classification, a binary problem classifying materials as either hard or soft; and (iii) smoothness classification, which requires the classification of materials as either rough or smooth. We follow the same experimental procedure as [56]: we pretrain a contrastive model on a dataset and perform linear probing on the sub-tasks\u2019 training sets. Our experiments only vary the pretraining dataset, leaving all architectural choices and hyperparameters the same. We compare against four baselines: a random classifier (chance); the ObjectFolder 2.0 dataset [17]; the VisGel dataset [35]; and the Touch and Go dataset [56]. Note that the touch sensor used in the test data (GelSight) differs from the one used in our dataset (DIGIT). Therefore, for pretraining we use a combination of our dataset and Touch and Go. To ensure a fair comparison, we also compare to the combination of each baseline dataset and Touch and Go. The findings from this evaluation, as shown in Table 5, suggest that our data improves the effectiveness of the contrastive pretraining objective, even though our data is from a different distribution. Moreover, we find that adding estimated touch probes for pretraining results in higher performance on all three tasks, especially smoothness classification. This indicates that not only does our dataset cover a wide range of materials, but our diffusion model also captures the distinguishable and useful patterns of different materials.
Table 5. Material classification. We show the downstream material recognition accuracy of models pre-trained on different datasets. The final rows show the performance when combining different datasets with Touch and Go [56]. \u2217The task-specific training and testing datasets for this task are collected with a GelSight sensor. We note that our data comes from a different distribution, since it is collected with a DIGIT sensor [32].
Dataset | Material | Hard/Soft | Rough/Smooth
Chance | 18.6 | 66.1 | 56.3
ObjectFolder 2.0 [17] | 36.2 | 72.0 | 69.0
VisGel [35] | 39.1 | 69.4 | 70.4
Touch and Go [56] | 54.7 | 77.3 | 79.4
+ ObjectFolder 2.0 [17] | 54.6 | 87.3 | 84.8
+ VisGel [35] | 53.1 | 86.7 | 83.6
+ Ours\u2217 (Real) | 57.6 | 88.4 | 81.7
+ Ours\u2217 (Real + Estimated) | 59.0 | 88.7 | 86.1
6." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04674v1.json b/abs_9K/test_abstract_short_2405.04674v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a9676b9358f9221bd093b9ad08751d2803c3f8b1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04674v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04674v1", + "title": "Towards Accurate and Efficient Document Analytics with Large Language Models", + "abstract": "Unstructured data formats account for over 80% of the data currently stored,\nand extracting value from such formats remains a considerable challenge. In\nparticular, current approaches for managing unstructured documents do not\nsupport ad-hoc analytical queries on document collections. Moreover, Large\nLanguage Models (LLMs) directly applied to the documents themselves, or on\nportions of documents through a process of Retrieval-Augmented Generation\n(RAG), fail to provide high accuracy query results, and in the LLM-only case,\nadditionally incur high costs.
Since many unstructured documents in a\ncollection often follow similar templates that impart a common semantic\nstructure, we introduce ZenDB, a document analytics system that leverages this\nsemantic structure, coupled with LLMs, to answer ad-hoc SQL queries on document\ncollections. ZenDB efficiently extracts semantic hierarchical structures from\nsuch templatized documents, and introduces a novel query engine that leverages\nthese structures for accurate and cost-effective query execution. Users can\nimpose a schema on their documents, and query it, all via SQL. Extensive\nexperiments on three real-world document collections demonstrate ZenDB's\nbenefits, achieving up to 30% cost savings compared to LLM-based baselines,\nwhile maintaining or improving accuracy, and surpassing RAG-based baselines by\nup to 61% in precision and 80% in recall, at a marginally higher cost.", + "authors": "Yiming Lin, Madelon Hulsebos, Ruiying Ma, Shreya Shankar, Sepanta Zeigham, Aditya G. Parameswaran, Eugene Wu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Unstructured data formats account for over 80% of the data currently stored,\nand extracting value from such formats remains a considerable challenge. In\nparticular, current approaches for managing unstructured documents do not\nsupport ad-hoc analytical queries on document collections. Moreover, Large\nLanguage Models (LLMs) directly applied to the documents themselves, or on\nportions of documents through a process of Retrieval-Augmented Generation\n(RAG), fail to provide high accuracy query results, and in the LLM-only case,\nadditionally incur high costs. 
Since many unstructured documents in a\ncollection often follow similar templates that impart a common semantic\nstructure, we introduce ZenDB, a document analytics system that leverages this\nsemantic structure, coupled with LLMs, to answer ad-hoc SQL queries on document\ncollections. ZenDB efficiently extracts semantic hierarchical structures from\nsuch templatized documents, and introduces a novel query engine that leverages\nthese structures for accurate and cost-effective query execution. Users can\nimpose a schema on their documents, and query it, all via SQL. Extensive\nexperiments on three real-world document collections demonstrate ZenDB's\nbenefits, achieving up to 30% cost savings compared to LLM-based baselines,\nwhile maintaining or improving accuracy, and surpassing RAG-based baselines by\nup to 61% in precision and 80% in recall, at a marginally higher cost.", + "main_content": "INTRODUCTION The vast majority\u2014over 80%\u2014of data today exists in unstructured formats such as text, PDF, video, and audio, and is continuing to grow at the rate of over 50% annually [2, 8]. In fact, an overwhelming 95% of businesses have recognized management of this unstructured data as a significant problem [1]. Consider unstructured text documents, such as Word or PDF documents, with a rich treasure trove of untapped information. Due to the inherently free-form nature of natural language, coupled with visual formatting, real-world unstructured documents pose a particularly difficult challenge for data management. Is there any hope for successfully querying or extracting value from unstructured documents? Example 1.1 (Civic Agenda Report: Vanilla LLMs and RAG). Our journalism collaborators at Big Local News at Stanford have collected large tranches of civic meeting agenda PDF reports for various US counties as part of their agenda watch project, as in Figure 1-a, and want to analyze these reports. 
One such query could be to count the number of construction projects of a certain type, across meetings. To do so, one could use Large Language Models (LLMs). However, even advanced LLMs, such as GPT-4, struggle with queries issued on such reports (e.g., \ud835\udc441 in Figure 1-d), especially when these queries involve aggregations and/or multiple filters on long documents.
Figure 1: Civic Agenda Document and Semantic Structures. a) Civic Project Agenda Report; b) Semantic Structure; c) Semantic Hierarchical Tree; d) Natural Language Query and corresponding SQL Query, with Q1: \u2018What is the number of Capital Improvement projects that start after 2022\u2019 and Q2: SELECT COUNT(Projects.name) FROM Projects WHERE Projects.type = \u2018Capital Improvement\u2019 AND Projects.begin_time > \u20182022\u2019.
The error-prone nature of LLMs is not surprising given that LLMs can\u2019t effectively handle large contexts [19, 45], or complex data processing tasks [48, 49]. The costs of processing all documents in a collection via LLMs (e.g., through OpenAI APIs) are also high. Another strategy, Retrieval-Augmented Generation (RAG) [39, 41], identifies one or more text segments within each document that are most relevant (e.g., via embedding distance) to the given query, incorporating these segments into prompts and reducing the cost. However, RAG struggles to identify the appropriate text segments, even for simple queries. Suppose we want to identify the capital improvement projects. RAG retrieves the segments that most closely match "capital improvement projects" within the document, such as the red box in Figure 1-a. However, it fails to capture over 20 additional projects in subsequent pages, such as the "PCH Median Improvement Project" (B2 in Figure 1-b) belonging to "Capital Improvement Projects" (A1).
Overall, both the vanilla LLM approach and RAG are unsuitable: both have low accuracy, while the LLM approach additionally has high cost. Leveraging Semantic Structure Helps. The reason RAG didn\u2019t perform well above is that the text segment provided to the LLM did not leverage the semantic structure underlying the document. Instead, if we are aware of this semantic structure, we can identify the capital improvement projects (A1 in Figure 1-b) by checking all of the subportions (e.g., B1, B2) under it, where each one corresponds to the description of such a project, and provide this to an LLM to interpret. By doing so, we provide all of the pertinent information to an LLM, unlike RAG, while also not overwhelming it with too much information. Indeed, when we leverage semantic structure for a group of sample queries on GPT-4-32k, as in our system ZenDB, described next, we surpass the vanilla LLM and RAG approaches by 25% and 48% in accuracy, while incurring only 7% of the cost of the LLM approach, as detailed in Figure 2.
Figure 2: Understanding the differences between ZenDB, LLMs and RAG (a cost-accuracy comparison; annotations in the figure: 93% lower cost, 97% lower cost, 25% higher accuracy, 48% higher accuracy).
Figure 3: Templatized Documents: Scientific Papers, Notice of Violations, Job Descriptions. a) Scientific Papers; b) Notice of Violations; c) Employee Job Descriptions.
Templatized Documents Provide Semantic Structure. Given that semantic structure is helpful, how do we extract this semantic structure within unstructured documents? It turns out that, while unstructured documents vary considerably in format, many documents that are part of collections are created using templates, which we call templatized documents. Templatized documents are observed across domains, including civic agenda reports, scientific papers, employee job descriptions, and notices of violations, as listed in Figure 1 and Figure 3.
For instance, two scientific papers from the same venue use similar templates, just as civic documents for the same purpose from the same local county often adhere to a uniform template. Templatized documents often exhibit consistent visual patterns in headers (e.g., font size and type) when describing content corresponding to the same semantic \u201clevel\u201d (e.g., section headers in a paper often follow the same visual pattern). We highlight the \u201ctemplates\u201d using blue boxes in Figure 3. Thus, templatized documents often have a discernible hierarchical structure that reflects different semantic levels within the document. For example, a 9-page complex civic agenda report (such as Figure 1-a) can be broken down into portions (e.g., A1, A2, A3 in Figure 1-b) and further into subportions (e.g., B2), indicating a possible semantic hierarchy, such as Figure 1-c, across the documents following the same template. Leveraging Semantic Structure: Challenges. Unfortunately, the semantic structure of the templates isn\u2019t known\u2014and neither do we expect these templates to be rigidly adhered to, nor do we expect there to just be one template across the collection of documents from a specific domain. Uncovering possible common semantic structures across documents is a challenge. In addition, to support queries over unstructured data where there isn\u2019t a predefined schema, it\u2019s not entirely clear what the data model or query interface should look like. Furthermore, using LLMs for query evaluation incurs high monetary costs and latencies; it\u2019s not obvious how we can leverage the semantic structures across documents to enable accurate query execution with low cost and latency.
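To make the idea concrete, here is a toy sketch of how consistent header styles can induce a hierarchy (a heuristic illustration, not ZenDB's extraction algorithm): treat blocks with larger font sizes as higher semantic levels, and nest each block under the most recent block with a strictly larger font.

```python
def build_hierarchy(blocks):
    """Nest (font_size, text) blocks into a tree: each block becomes a child of
    the most recent block with a strictly larger font size, so body text
    attaches to its nearest header. Returns a list of (text, children) pairs."""
    root = ("ROOT", [])
    stack = [(float("inf"), root)]          # (font_size, node), outermost first
    for size, text in blocks:
        while stack[-1][0] <= size:         # pop siblings and deeper levels
            stack.pop()
        node = (text, [])
        stack[-1][1][1].append(node)        # attach to the enclosing node's children
        stack.append((size, node))
    return root[1]
```

On the civic-report example of Figure 1, large headers would become portions (A1, A2, ...), smaller headers their subportions (B1, B2, ...), and body text the leaves. Real documents need more signals than font size alone (boldness, numbering, and the minimal LLM checks ZenDB uses), but the nesting principle is the same.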
First, we introduce the notion of Semantic Hierarchical Trees (SHTs) that represent the semantic structure for a given document, and effectively act as an index to retrieve only portions of the document that are pertinent to a given query. We build SHTs across documents by leveraging the uniform visual patterns in the document templates. We cluster the visual patterns found across documents to extract and detect various template instantiations, coupled with minimal LLM calls for this purpose. We show that if documents obey a property we term well-formattedness, then our procedure correctly recovers their semantic structure. Second, we introduce an extension to SQL to query unstructured documents (e.g., \ud835\udc441 in Figure 1 could be expressed as a SQL query \ud835\udc442.) Users can easily impose a schema on a collection of documents by simply listing a table name as well as a description for the entities in the table, without listing the attributes, which can then be lazily defined and populated in response to queries. Finally, we introduce a novel tree search algorithm that leverages SHTs to minimize cost and latency while answering queries without compromising on quality. Specifically, we propose a summarization technique to create summary sketches for each node within the tree. ZenDB can navigate through the tree, identifying the appropriate node to answer a given query by examining these sketches, akin to how a person might use a table of contents to find the right chapter for a specific task. Other Related Work. Supporting queries on non-relational data isn\u2019t new. For unstructured data, the field of Information Retrieval (IR) [37, 53] investigates the retrieval of documents via keyword search queries, but doesn\u2019t consider advanced analytical queries. 
For semi-structured data [15, 16, 47], query languages like XQuery or XPath, as well as extensions to relational databases for querying XML and JSON, help query hierarchically organized data, as in our SHTs, but there, the hierarchy is explicit rather than implicit as in our setting. Recent efforts have sought to bridge the gap between structured queries, like SQL, and unstructured documents. One line of work [58, 61] has explored the upfront transformation of text documents into tables. Doing this ETL process with Large Language Models (LLMs) like GPT-4 on entire documents is expensive and error-prone relative to approaches that focus the LLM\u2019s attention on specific semantic portions, as we saw above.
Figure 4: User Workflow with ZenDB. Unstructured Documents \u2192 Document Ingestion (Section 3) \u2192 Schema & Query Spec. (Section 4) \u2192 Data Population (Section 5) \u2192 Query Execution (Section 6) \u2192 Query Results.
Others [24, 55, 56] have explored writing SQL queries directly on text data, as part of multi-modal databases. Most work there boils down to applying LLMs to the entire document, and only works well on simple, small documents. However, using these methods on the complex, large documents we saw above leads to high costs and reduced accuracy. None of the approaches above have explored the use of semantic structure to reduce cost and improve accuracy when querying documents. We cover this and other related work in Section 8. We make the following contributions in this paper, as part of building ZenDB, our document analytics system.
\u2022 We identify that we can leverage templates within document collections to support ad-hoc analytical queries.
\u2022 We introduce the notion of Semantic Hierarchical Trees (SHTs), which represent a concrete instantiation of a template for a specific document, as well as novel methods to efficiently extract SHTs from an array of templatized documents.
• We develop a simple extension to SQL to declare a schema, specify attributes on-demand, and perform analytical queries.
• We design a query engine that leverages SHTs, facilitating query execution in a cost-effective, efficient, and accurate manner.
• We implement all of these techniques within ZenDB and evaluate its performance on three real-world datasets, demonstrating substantial benefits over other techniques.

2 USER WORKFLOW WITH ZENDB

In this section, we present an overview of user workflows with ZenDB, as illustrated in Figure 4. First, (1) document collections are ingested into the system by understanding their common semantic structure (Section 3). Then, (2) users (typically database administrators) can specify a schema for these documents, including tables and lazily-specified attributes, followed by queries that reference this schema, either specified by end-users who know SQL or generated by applications (Section 4). ZenDB also populates, upfront, a set of system-defined tables/attributes to help capture the mapping between tuples and the documents (Section 5). Finally, (3) given queries on these documents, either generated by applications or by end-users directly, ZenDB executes them efficiently, leveraging the semantic structure (Section 6).

(1) Semantic Structure Extraction. Given a collection of templatized documents that adhere to one or more predefined semantic structures, the first step within ZenDB involves extracting this structure in the form of Semantic Hierarchical Trees (SHTs), one per document, so that they can be used downstream for query execution. This is broken down into two sub-problems: First, how do we extract an SHT from a single document? Second, how do we leverage common semantic structure across documents to scale up SHT extraction?
Since templatized documents typically display consistent visual patterns in the headers of similar semantic content, we cluster phrases based on such visual patterns, coupled with minimal LLM invocations, to construct a single SHT (Section 3.2). Then, we use a visual pattern detection approach to determine whether we can reuse a previously identified semantic structure in the form of a template, synthesized from a concrete SHT, or must extract a new one (when there are multiple templates in a collection), all without using LLMs (Section 3.3).

[Figure 5: Creating the Projects Table and Adding Attributes.
  CREATE TABLE Projects WITH DESCRIPTION
    "The projects table contains the description for a set of civic agenda projects.";
  ALTER TABLE Projects
    ADD name TEXT WITH DESCRIPTION "Name of Project",
    ADD type TEXT WITH DESCRIPTION "Type of Project",
    ADD begin_time DATE WITH DESCRIPTION "Begin time of Project";]

(2) Schema/Query Specification and Table Population. Given one SHT per document, ZenDB then enables users to specify a schema across the documents in a collection, followed by issuing queries on that schema. Schema definition happens via an extension of standard SQL DDL: users (typically database administrators) provide a name and description for each table (which we call document tables), along with names, types, and descriptions for any attributes; the attributes can be lazily added at any point after the table is created (Section 4.1). For example, Figure 5 shows the query used to create a "Projects" table along with attributes (e.g., name). Subsequently, other users can write queries that reference such tables and attributes (e.g., Q2 in Figure 1), as in standard SQL (Section 4.2); these queries could also be generated by applications (including form-based or GUI-based applications), or by translating natural language queries into SQL. We still concretize the query in SQL to provide well-defined semantics.
While attributes are added lazily, and attribute values are computed or materialized in response to queries, we proactively identify mappings between tuples and documents during schema specification (Section 5). Specifically, we identify the SHT node that represents the portion of the document capturing all of the relevant tuples in a given user-specified table, as well as the mapping from tuples to individual SHT nodes, if they exist, using a combination of minimal LLM invocations and automated rules. These are then stored in our data model as hidden system-defined attributes, such as the span of the text that corresponds to a given tuple, leveraging the nodes in the SHTs built earlier. These system-defined attributes allow LLMs to extract the user-defined attribute values per tuple as needed, reducing costs while also leveraging the shared semantic structure across documents.

(3) Query Execution. Finally, ZenDB executes the user-specified SQL queries using the pre-constructed SHTs per document, while minimizing cost and latency and maximizing accuracy. Unlike traditional relational databases, where I/O and sometimes computation are often the bottleneck, here the LLM calls invoked by ZenDB become both the cost and the latency bottleneck. Therefore, ZenDB aims to minimize such calls, while still extracting attribute values as needed to answer queries, by using a combination of predicate pushdown and projection pull-up. We additionally develop a cost model for ZenDB, focusing on monetary cost (Section 6.1). Our cost model design is flexible and can be adapted to optimize for latency instead, e.g., if we use an open-source LLM on-prem. Furthermore, we design novel physical implementations that leverage SHTs (Section 6.2).
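To make the predicate-pushdown/projection-pull-up idea concrete, here is a minimal Python sketch. This is not ZenDB's optimizer: extract(), the attribute names, and the "attr=value" text encoding are all hypothetical stand-ins for LLM-based extraction over a tuple's text span. The point is simply that evaluating the predicate first, and projecting only for surviving tuples, bounds the number of costly extraction calls.

```python
# Minimal sketch (not ZenDB's code): each attribute extraction stands in for
# one costly LLM call; predicates are evaluated before projected attributes
# are extracted, so failing tuples incur only one call each.

calls = 0  # count of simulated LLM extraction calls

def extract(tuple_text, attr):
    """Hypothetical stand-in for LLM extraction: reads 'attr=value' pairs."""
    global calls
    calls += 1
    for part in tuple_text.split(";"):
        k, _, v = part.partition("=")
        if k.strip() == attr:
            return v.strip()
    return None

def run_query(tuples, predicate_attr, predicate, project_attrs):
    """Predicate pushdown + projection pull-up: filter first, project later."""
    results = []
    for t in tuples:
        if predicate(extract(t, predicate_attr)):  # 1 call per tuple
            # only surviving tuples pay for the projected attributes
            results.append({a: extract(t, a) for a in project_attrs})
    return results

# Toy "tuples" (illustrative strings, not real document spans)
docs = ["name=Marie Canyon; type=Green Street; status=active",
        "name=PCH Median; type=Roadway; status=done",
        "name=Sea Wall; type=Coastal; status=done"]
out = run_query(docs, "status", lambda v: v == "done", ["name", "type"])
```

Here, 3 predicate calls plus 2 × 2 projection calls are made (7 total), rather than 3 × 3 = 9 if every attribute of every tuple were extracted upfront.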
In particular, we maintain a sketch for each node in each SHT, and leverage this sketch as part of a tree search to identify the appropriate text span to evaluate a given query, akin to how a person would use a table of contents to find the right chapter. Finally, we maintain provenance (i.e., the specific document text span) for query answers, ensuring that users can verify the source of the information and can trust the system outputs.

[Figure 6: SHT Construction in Civic Agenda Report. (a) Phrase Clustering Based on Visual Patterns; (b) SHT.]

3 SEMANTIC HIERARCHICAL TREE

In this section, we describe our process for recovering structure from documents in the form of Semantic Hierarchical Trees (SHTs), which then act as an index for subsequent querying. We start by formalizing the notion of SHTs and templates, then describe how to extract an SHT for a single document, followed by extracting SHTs across collections by leveraging shared templates.

3.1 Preliminaries

We focus on rich text documents, such as PDF and Word documents, that include visual formatting information (e.g., multiple font types and sizes), as shown in Figure 3.

Documents, Words, and Phrases. Consider a set of documents D = {D_1, D_2, ..., D_l}. For each document D ∈ D, which may be a PDF or Word document, we often instead operate on a plain-text serialized representation, extracted as a preprocessing step. To generate this representation for a document D, we use an extraction tool such as pdfplumber [13], which produces a sequence of words W_D = [w_1, ..., w_m], each with formatting and location features (e.g., font name/size/bounding boxes). For simplicity, we ignore images, but they can be treated as a special kind of word.
For any two consecutive words w_i and w_{i+1} with the same formatting features (font size, name (e.g., Times New Roman), and type (e.g., bold or underlined)), we group them into a phrase s. We let S_D = [s_1, ..., s_n] be the sequence of phrases corresponding to D; we often operate on S_D instead of the document directly.

Visual Patterns. For each phrase s ∈ S_D, we further define a visual pattern, p(s), as a vector of visual formatting features; we currently use

p(s) = [size, name, type, all_cap, num_st, alpha_st, center]

but other features may be included. Here, the first three features correspond to the font, as in the word-level features we had previously, and the remaining four are phrase-level features: all_cap is a Boolean value that denotes whether the phrase s is capitalized, num_st and alpha_st indicate whether the phrase starts with a number (e.g., 1) or a letter (e.g., A), and center indicates whether the phrase is in the center of a line.

Candidate SHTs. We are now in a position to define SHTs.
We define a candidate SHT for a document D to be a single-rooted, ordered, fully connected, directed tree T = (V, E), where each v ∈ V corresponds to a single distinct phrase s_i ∈ S_D, denoted ind(v) = i, the phrase index for v, satisfying (1) ind(v) < ind(v') for any child v' of v, and (2) ind(v) < ind(v') for any right sibling v' of v. These two properties together imply that a pre-order traversal of T visits nodes in increasing phrase-index order. A candidate SHT for Figure 1a is shown in Figure 6b. Node A1 represents the phrase (and section header) "Capital Improvement and Disaster Recovery Projects (Design)", while B2 represents the phrase (and subsection header) "PCH Median Improvement Project". The phrase index for each node is shown in parentheses, e.g., ind(A1) = 10; i.e., A1 corresponds to s_10; ignore the p_i (in red) for now. This SHT obeys the two conditions listed, e.g., A1 (with phrase index 10) has children (11 and 22) and a sibling (63) with larger phrase indexes. Note, however, that not all phrases in S_D are found in the SHT; this is by design: the SHT represents only the phrases corresponding to the headers of the document, while those that correspond to the content are omitted. For example, Figure 6b omits phrases s_2, ..., s_9. However, in certain cases, it may be convenient to refer to headers and content together.
For this, we define a text span, ts, to be a sequence of phrases s_i, ..., s_{i+k} ∈ S_D, or equivalently [i, i+k]. We define next(v) for a given node v to be the phrase index corresponding to its sibling to the immediate right, if available, or, if not, to the sibling to the immediate right of the closest ancestor that has one. If none of the ancestors of v have a right sibling, next(v) = n + 1, where n is the total number of phrases in S_D. To illustrate, next(A1) = next(B2) = 63 (i.e., A2), while next(A2) = next(R) = 101, assuming s_100 is the final phrase in our document. A given node v ∈ V has a text span ts(v) = [ind(v), next(v) − 1], i.e., v "covers" all of the phrases before the next node, with phrase index next(v). Thus, ts(R) is [1, 100], while ts(B2) is [22, 62]. That is, B2 "covers" both the header, s_22, as well as the content s_23, ..., s_62, until the next header, A2.
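The definitions of next(v) and ts(v) can be sketched in Python as follows. This is an illustrative toy reconstruction, not ZenDB's code; node names and phrase indices follow the paper's running example (R, A1, B1, B2, A2), and the fallback next(v) = n + 1 for nodes with no right sibling anywhere up the ancestor chain is an assumption chosen so that the root's span covers the whole document.

```python
# Sketch (not ZenDB's implementation): computing next(v) and ts(v) for an SHT.

class Node:
    def __init__(self, name, ind, children=None):
        self.name, self.ind = name, ind
        self.children = children or []
        self.parent = None

def link(root):
    # record parent pointers so we can walk up the ancestor chain
    for child in root.children:
        child.parent = root
        link(child)

def next_index(v, n):
    """Phrase index of v's immediate right sibling, or the right sibling of
    the closest ancestor that has one; n + 1 if no such sibling exists."""
    while v is not None:
        if v.parent is not None:
            sibs = v.parent.children
            pos = sibs.index(v)
            if pos + 1 < len(sibs):
                return sibs[pos + 1].ind
        v = v.parent
    return n + 1

def text_span(v, n):
    # ts(v) = [ind(v), next(v) - 1]: v covers its header and all content
    # phrases up to (but excluding) the next header.
    return (v.ind, next_index(v, n) - 1)

# Running example: R covers A1 (children B1, B2) and A2; n = 100 phrases.
b1, b2 = Node("B1", 11), Node("B2", 22)
a1, a2 = Node("A1", 10, [b1, b2]), Node("A2", 63)
r = Node("R", 1, [a1, a2])
link(r)
```

With this setup, next_index(b2, 100) walks up to A1 and returns A2's index, 63, so text_span(b2, 100) is (22, 62), matching the example.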
In the following, we refer interchangeably to a node v, its header phrase s_{ind(v)} (i.e., the header corresponding to v), or its text span ts(v) (i.e., the header and content contained within v). We finally introduce the notion of the granularity of a node v, which is simply the depth of v in the SHT; in our example, the granularity of R is 1, and that of A1 is 2.

3.2 SHT Construction on a Single Document

Given a document D with phrases S_D, there are exponentially many candidate SHTs; our goal is to identify the true SHT, i.e., the one that correctly reflects the semantic structure of the document. To do so, our procedure, oracle_gen(D), first identifies which phrases are header phrases (and therefore correspond to SHT nodes). We then assemble these phrases into a tree, ensuring that it is a candidate SHT.

Header Phrase Identification. To identify whether a phrase s ∈ S_D is a header phrase, we make use of its visual pattern p(s). We cluster the phrases in S_D based on their visual patterns. For our running example, the clusters that emerge are shown in Figure 6a, each labeled with its visual pattern (in red). Here, the majority of the phrases end up in the cluster with pattern p_5; this cluster corresponds to the content phrases in the document (e.g., C1 in Figure 1a is a paragraph). To remove clusters whose phrases do not correspond to header phrases, we use LLMs as an oracle. We randomly sample min(|C|, k) phrases in each cluster C (k is a predefined threshold). For each sampled phrase s ∈ C, we construct the LLM prompt "Is the phrase [s] a header in the document?".
If over half of the sampled phrases in C are non-headers, then C is pruned (e.g., the cluster containing C1 is dropped, since C1 is a paragraph). To verify whether GPT-4 is effective at disambiguating headers from non-headers, we carefully examined over 200 documents from 16 datasets, covering six diverse domains. In our testing, with k = 10, GPT-4 effectively removes non-header clusters on 97% of the documents, at a total cost of $0.37. Still, since this cost is non-zero, we would want to minimize it when working on a large collection of documents; as we illustrate in the next section, we only invoke LLMs for a small subset of documents, each corresponding to a different template.

Tree Construction. Given the header phrases across the remaining clusters, we assemble the corresponding nodes into a tree. We proceed top-down, operating on one cluster at a time, adding the entire cluster to the partially constructed SHT. At each step, we pick the cluster C that contains the phrase with the lowest index. For each phrase s_i in this cluster C, we create a corresponding node v_i and add it to the partially constructed SHT, in increasing phrase-index order. For each such node v_i, we examine the text spans of all existing nodes in the partially constructed SHT, and pick its parent to be the node v_j such that ind(v_i) ∈ ts(v_j) and there is no other v_k > v_j such that ind(v_i) ∈ ts(v_k). This condition ensures that v_i is added under the most specific node v_j that can accommodate it.
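The parent-selection rule just described can be sketched as follows (an illustrative reconstruction, not ZenDB's code; nodes are (name, index, span) triples, with text spans assumed precomputed as in the definitions above):

```python
# Sketch of the parent-selection rule: among nodes already in the partially
# constructed SHT, the parent of a new node is the covering node with the
# largest phrase index, i.e., the most specific node whose text span
# contains the new node's phrase index.

def pick_parent(new_ind, existing):
    """existing: list of (name, ind, (lo, hi)) for nodes already in the tree."""
    covering = [nd for nd in existing if nd[2][0] <= new_ind <= nd[2][1]]
    # the most specific covering node is the one with the largest index
    return max(covering, key=lambda nd: nd[1])[0] if covering else None

# Running example: R spans [1, 100]; A1 spans [10, 62]; A2 spans [63, 100].
tree = [("R", 1, (1, 100)), ("A1", 10, (10, 62)), ("A2", 63, (63, 100))]
```

For instance, a new node at phrase index 11 (B1) is covered by both R and A1, and A1 wins as the more specific node.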
Once we've identified the appropriate parent for each node in the cluster, we add all of these nodes together. The root (usually corresponding to s_1) merits special treatment: if there is no cluster that contains s_1, we create a node corresponding to s_1; otherwise we start with the cluster that contains s_1. Usually this cluster contains just s_1; if it contains other phrases, we create an artificial root node corresponding to an empty phrase s_0, deem it to be the root, and then process the cluster containing s_1 along with the other phrases. Returning to our example, the cluster corresponding to visual pattern p_1, containing phrase s_1, is processed first, allowing R to be added to the tree. The cluster corresponding to p_2 is processed next, as it has the lowest remaining phrase index, 10, with A1 and A2 added to the tree together, both with R as parent. Then, the cluster corresponding to p_3 is processed, with B1 and B2 added as children of A1, and so on.

Correctness for Well-Formatted SHTs. Next, we show that if the true SHT for a document has a property we call well-formattedness, then oracle_gen(D) correctly outputs the true SHT. Given an SHT T, the visual prefix vispre(v) for a node v is defined to be the sequence of visual patterns along the path from the root to v, excluding v itself. In our example, vispre(B1) = p_1 p_2. We extend the definition to a set of nodes in the natural way, e.g., vispre({B2, A1}) = {p_1, p_1 p_2}. Let pset(p) be a function that accepts a visual pattern and returns all the nodes that obey that pattern.
For example, pset(p_2) = {A1, A2}. Then, an SHT T = (V, E) is said to be well-formatted if (1) for any two siblings v_i, v_j, p(v_i) = p(v_j); and (2) for every visual pattern p, vispre(pset(p)) is unique, i.e., a singleton set. The first condition mandates that sibling nodes, such as B1 and B2, must share the same visual pattern. However, it does not require that all nodes at the same depth, such as B2 and E1, have identical visual patterns. In our agenda watch dataset, subsection headers within a section often have similar formatting, but this need not hold across sections, i.e., different sections may use different formatting. The second condition states that nodes sharing the same visual pattern must have identical visual prefixes. For example, B1 and B2 both have the visual prefix p_1 p_2. Thus, a visual pattern signifies a certain "semantic level" within the SHT, following a consistent path to the root.

Theorem 3.1. If the true SHT for a document D is well-formatted, and if an LLM can correctly identify non-headers, then oracle_gen(D) outputs the true SHT.

Proof. Let T and GT be the SHT returned by oracle_gen and the ground-truth SHT, respectively. We prove T = GT when GT is well-formatted, by induction. Let v_i be the i-th node added by oracle_gen, and let N_{i−1} = {v_1, v_2, ..., v_{i−1}} be the first i−1 nodes added by oracle_gen.
Let T_{i−1} and GT_{i−1} be the subgraphs of T and GT induced by the set of nodes N_{i−1}. By induction, we assume that oracle_gen is correct after adding the first i−1 nodes, i.e., T_{i−1} = GT_{i−1}, and we further prove that, after adding v_i, T_i = GT_i. Let v_j and v'_j be the parent node of v_i in T_i and GT_i, respectively. We prove that v_j = v'_j by considering two cases: one where there exists a node v_k ∈ T_{i−1} (and v_k ∈ GT_{i−1}, since T_{i−1} = GT_{i−1}) that shares the same visual pattern as v_i, i.e., p(v_k) = p(v_i), and one where there does not. Let g(v) be the granularity (i.e., depth) of v in T_i, and let path(v_i) be the sequence of nodes from the root to v_i in T_i. For the first case, assume ∃ v_k ∈ T_{i−1} (and v_k ∈ GT_{i−1}) s.t. p(v_k) = p(v_i). Since v_j is the parent node of v_i in T_i, g(v_j) = g(v_i) − 1.
By definition, ∀ v ∈ path(v_i), v ≠ v_i, we have ind(v) < ind(v_i) and ind(v_i) ∈ ts(v). We call each v ∈ path(v_i), v ≠ v_i, a candidate parent node of v_i, since adding an edge from v to v_i keeps T_i a valid candidate SHT. Thus v'_j ∈ path(v_i), since GT must be at least a valid candidate SHT and there is no other v_m > v_j such that ind(v_i) ∈ ts(v_m). If g(v'_j) ≠ g(v_j), there exists at least one node v_l ∈ path(v_i) that is a child node of v'_j, s.t. p(v_l) = p(v_i), since GT_i is a well-formatted SHT and the sibling nodes v_l and v_i, belonging to the same parent v'_j, must have the same visual pattern.
By p(v_k) = p(v_i), we have p(v_l) = p(v_k), and thus g(v_l) = g(v_k), since vispre(v_l) = vispre(v_k). But g(v_l) = g(v_k) implies g(v'_j) = g(v_j), which contradicts g(v'_j) ≠ g(v_j). By contradiction, we have g(v'_j) = g(v_j), and further v_j = v'_j, since both v_j and v'_j are in path(v_i). For the second case, assume ∄ v_k ∈ T_{i−1} (equivalently GT_{i−1}) s.t. p(v_k) = p(v_i). We similarly show v_j = v'_j by contradiction in this case.
Assuming v_j ≠ v'_j, there exists at least one node v_l ∈ path(v_i), v_l ≠ v_i, that is a child of v'_j, s.t. p(v_l) = p(v_i), since v'_j ∈ path(v_i) and GT_i is a well-formatted SHT. However, since v_l ∈ T_{i−1}, this contradicts the assumption that no node in T_{i−1} shares v_i's visual pattern. By contradiction, we have v_j = v'_j, which concludes the proof. □

3.3 SHT Construction across Documents

Given a set of documents D = {D_1, ..., D_l}, applying oracle_gen(D_i) to each D_i can be costly when l is large. Here, we leverage the fact that, in addition to being well-formatted, the documents share common templates; we define the notion of a template below. We process each document D_i in turn, attempting to match it against one of the existing templates tp ∈ TP via a function template_gen(tp, D_i); if a match is successful, an SHT for D_i is returned, without any LLM calls. Otherwise, we call oracle_gen(D_i), and the template tp corresponding to the returned SHT is added to TP. If there are multiple successful matches in TP, we return the largest resulting SHT; the rationale here is that we want to capture as much of the header information as possible as part of the SHT.

Template. We now define the notion of a template associated with an SHT.
The template for an SHT T, tp = {g: {p}}, is a sorted dictionary that captures the mapping between the granularities g of nodes and the set {p} of visual patterns found at each granularity. This dictionary is additionally sorted by granularity in increasing order.

[Figure 7: SHT construction by Pattern Matching; the documents represented by (b) and (c) are matches to tp(SHT1), but not (d). (a) SHT1, with tp(SHT1) = {1: {p1}, 2: {p2}, 3: {p3, p4}}; (b) SHT2; (c) SHT3; (d) SHT4.]

tp(SHT1) is the template of SHT1 shown in Figure 7a. We let tp.g and tp.p be the granularities and visual patterns in tp. For SHT1 in Figure 7a, tp.g = {1, 2, 3} and tp.p = {p_1, p_2, p_3, p_4}. Let tp.g(p) be the granularity of a visual pattern p in tp, e.g., tp.g(p_1) = 1 for SHT1. (This value is unique by construction from Section 3.2.) If p ∉ tp.p, tp.g(p) = −1.

Template Matching and Generation. We say a document D matches a template tp if the visual patterns contained amongst the phrases S_D cover each granularity 1...i, for some i that forms a prefix of tp. For instance, document D_1, with true SHT SHT1, has the corresponding template tp(SHT1) in Figure 7a, and document D_2 has true SHT SHT2 (Figure 7b).
Since D_2 includes patterns {p_1, p_2, p_3}, it covers every granularity of the template of D_1, and therefore matches the template. Additionally, document D_3, with true SHT SHT3 in Figure 7c, which includes patterns {p_1, p_2}, also matches the template, since it covers a prefix of the granularities in the template (namely 1 and 2), even though it lacks patterns {p_3, p_4}. On the other hand, document D_4, with true SHT SHT4 in Figure 7d, does not contain a match for p_1, thereby failing the prefix constraint and not matching the template. Our rationale for admitting prefix matches is the observation that, as the granularity of a header becomes more fine-grained, its visual pattern tends to be more varied. For example, for two scientific papers obeying the same template, the visual patterns of sections remain consistent, but within each section the visual patterns used may vary depending on individual preferences. Note that in our implementation, we allow any non-zero prefix as a match; for more constrained document collections, a user may set a prefix threshold, e.g., requiring that at least three levels of the template be covered. Armed with templates and matches to a template, we can now describe our template_gen(tp, D) procedure, listed in Algorithm 1. We proceed in two phases. First, we identify all of the phrases s ∈ S_D whose visual patterns match those in tp.p, and add these phrases as nodes to V for our yet-to-be-constructed SHT (Lines 3-5). Given these phrases, we check whether there is a match for the template tp, where a match is defined, as above, to be a prefix of the template.
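The prefix-match test just described can be sketched as follows (an illustrative reconstruction, not ZenDB's code; the template is represented as a dict from granularity to the set of visual patterns at that granularity, as in tp(SHT1) from Figure 7a):

```python
# Sketch of the template prefix-match test: a document matches a template if
# the granularities covered by its visual patterns form a non-empty,
# gap-free prefix 1..i of the template's granularities.

def matches(template, doc_patterns):
    """template: dict granularity -> set of visual patterns;
    doc_patterns: set of visual patterns among the document's phrases."""
    covered = {g for g, pats in template.items() if pats & doc_patterns}
    if not covered:
        return False
    # covered granularities must be exactly {1, 2, ..., max(covered)}
    return covered == set(range(1, max(covered) + 1))

# tp(SHT1) from Figure 7a
tp1 = {1: {"p1"}, 2: {"p2"}, 3: {"p3", "p4"}}
```

On the examples of Figure 7: SHT2's patterns {p1, p2, p3} and SHT3's {p1, p2} match (full cover and prefix {1, 2}, respectively), while SHT4's {p2, p3, p4} does not, since granularity 1 (p1) is missing.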
If no match is found, an empty result is returned (Lines 6-7); otherwise, we assemble the nodes in V into an SHT, using a tree construction procedure similar to that of the previous section, operating on the phrases found in the first step, clustered based on visual pattern (Lines 8-10).

Algorithm 1: template_gen(tp, D)
1  SHT_D = (V, E), V = ∅, E = ∅
2  G = {}
3  for s_i ∈ S_D do
4    if p(s_i) ∈ tp.p then
5      V = V ∪ {s_i}, G = G ∪ {tp.g(p(s_i))}
6  if G = ∅ or ∃ i ∈ G, i > 1, (i − 1) ∉ G then
7    return {}
8  for v_i ∈ V, v_j ∈ V do
9    if ind(v_j) ∈ ts(v_i) and ∄ v_k ∈ V, ind(v_k) > ind(v_i), s.t. ind(v_j) ∈ ts(v_k) then
10     E = E ∪ {(v_i, v_j)}
11 return SHT_D

4 DATA MODEL AND QUERY LANGUAGE

In the previous section, we described how we can extract SHTs for each document in a collection as part of document ingestion. Here, we define the data model used by ZenDB to represent the SHTs as well as other system-specific information, along with user-defined tables that we call DTables, short for
Document Tables.

4.1 Data Model Definition
In addition to traditional relational tables that we call base tables, ZenDB supports three new types of tables that respectively (i) represent the SHTs per document collection; (ii) let users specify one or more structured relations over the documents, called DTables, to be used within queries; and (iii) maintain system metadata associated with the user-defined tables. We describe each one in turn.

4.1.1 SHT Table. The SHT table, shown in Figure 8-c, is a system-defined and maintained table that represents the SHTs in a document collection. Each row captures information about an SHT node, and is populated as described subsequently in Section 5. Its main attributes are:
• doc_id, node_id identify the node in a given document.
• name represents the header phrase s corresponding to the node.
• granularity represents the depth of the node in the tree.
• context, summary, size correspond to the entire sequence of phrases in the text span, a short summary of the text span, and the number of tokens in the text span.
• st_page and ed_page list the start and end pages for the text span.
• child_ids and ancestor_ids hold the IDs for the children and the entire sequence of ancestors.
We note that summary, size, st/ed_page, and ancestor_ids can be derived from the other attributes, but we store them explicitly for convenience. These attributes are all used during query processing.

4.1.2 User-defined DTables. Users can use SQL to define DTables, with those tables being used in subsequent queries (Figure 5). We use a special keyword DESCRIPTION both to designate the fact that this is not an ordinary table, and to allow natural language to be provided that may be used in LLM prompts. To define such a table, the user can say:

CREATE TABLE [name] (...) WITH DESCRIPTION [description]

Here, the user provides a natural language description for the table.
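For illustration, the statement above can be assembled mechanically; the helper below is our own sketch (the WITH DESCRIPTION clause is ZenDB's extension, not standard SQL, and the function name is hypothetical).

```python
def create_dtable_sql(table, description, columns=""):
    """Render ZenDB's extended DDL: a CREATE TABLE statement carrying a
    natural language table description for later use in LLM prompts.
    (Hypothetical helper; not part of ZenDB's actual API.)"""
    return f"CREATE TABLE {table} ({columns}) WITH DESCRIPTION '{description}'"
```

For example, create_dtable_sql("Projects", "Projects table contains a set of projects in public agenda report") yields the corresponding CREATE TABLE statement for the Projects DTable of Figure 8.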
Attributes may be provided during table creation in parentheses (or omitted), and/or could be added afterwards via the standard approach to altering schemas:

ALTER TABLE [name] ADD [name] [type] WITH DESCRIPTION [description], ... ;

Again, a natural language description for each attribute is provided when it is added. As we will discuss in Section 5, when the user creates a DTable, ZenDB populates it offline with rows that correspond to tuples.

Figure 8: Data Model: User-Defined Tables and System-Defined Tables.
Each tuple represents one entity that can be found in a document. User-defined attributes for these tuples are populated with NULL, and are filled in on demand at query time, as shown in Figure 8a. Here, the Projects DTable contains user-defined attributes name, type, and begin_time. ZenDB also maintains three hidden system-defined attributes per DTable: the document id, the text span used to extract the tuple, and the SHT node(s) used in the derivation. These attributes track how each tuple was derived, to provide context when extracting tuple attributes later on, and for debugging and provenance purposes. For instance, B1 corresponds to the "Marie Canyon Green Street" project tuple, and the tuple's text span may be the same as that of B1 or a subset of it (Figure 8c). The user-defined attributes represent the result of a read operation over each attribute. In addition, every expression implicitly defines additional attributes in this table. For instance, if a query evaluates Projects.name = "Capital Improvement" directly using an LLM call, then the attribute [Projects.name|eq|Capital Improvement] is instantiated and populated with the LLM response. Note that we chose to represent these user-specified DTables as regular tables as opposed to views or materialized views, though they could also be represented as such.

4.1.3 System-Defined Tables. In addition to the SHT table, ZenDB maintains two system-defined tables: Table Catalog and Attribute Catalog, which store metadata related to tables and attributes respectively (Figure 8d,e). In addition to names and descriptions, Table Catalog tracks the text span and SHT node(s) used to identify the contents of the table (since a table may be a small portion of the document), used to localize search when extracting tuples, thereby reducing cost during query processing. The attribute t_range refers to the min/max granularities of the nodes used to extract tuples in the table.
For example, all Projects tuples extracted so far have granularity 3, thus t_range = [3,3]; this is the setting where tuples correspond to nodes (of some granularity) within the SHT. Finally, to handle the special case where the table is extracted from a leaf node in the SHT, i.e., there are multiple tuples corresponding to a single node that has no finer granularity node below it, we mark this by setting multi_tuple to True. For instance, consider the scenario where users want to create a table called "References" and each tuple corresponds to a reference in a published paper.

4.2 Query Language
ZenDB currently supports a subset of SQL, corresponding to simple non-nested queries on one or more DTables with optional aggregation, as represented by the following template:

SELECT [attr] | agg(attr)
FROM [ST]+
WHERE [predicate]
GROUP BY [attr]

where [..] denotes a list of elements, attr refers to an expression over an attribute, ST refers to one or more DTables, and agg() includes SUM, COUNT, AVG, MAX, and MIN[1]. A predicate has the form attr op operand, where the operators include >, ≥, <, ≤, =, LIKE, and IN, and operand is one or more constants. LIKE is used for fuzzy matching, where either string similarity or semantic similarity could be used[2]. We add a restriction that if multiple DTables are listed in the FROM clause, then the WHERE clause must include a predicate specifying that the tuples are equi-joined on doc_id. We add this restriction for now to allow only within-document joins, but we plan to relax this in future work. Figure 9 shows a query where, for each document whose meeting time is before "2023 October", we count the "Capital Improvement" projects starting after "2022-06-01"; here, we make use of the within-document join across two tables. The query semantics are defined as fully populating the user-defined DTables with the LLM results of all attribute reads and expressions, and then executing the SQL query as normal.
We follow these semantics because they keep query evaluation consistent. Specifically, under an oracle LLM that always returns complete and correct responses, the contents of the attribute reads and expressions will always be consistent (e.g., type is ['A', 'B'], and type = 'A' is true). However, modern LLMs are imperfect and sensitive to the input prompt and context formulation, so the extracted attribute values and the expressions over the attributes may be inconsistent (e.g., extracted type is 'B', but type = 'A' is true). Better understanding and reconciling these potential inconsistencies is outside the scope of this paper, and is important future work.
[1] Text attributes only support COUNT; date attributes only support COUNT, MAX, MIN. [2] In ZenDB we use Jaccard similarity with a 0.9 threshold by default.

5 TABLE POPULATION
We next describe how we populate the system-defined tables and attributes described above. Populating the SHT table is straightforward and therefore omitted; we will describe how the summary field is populated in Section 6.

Populating Tables Overview. When a user defines a new DTable T, updating Attribute Catalog (Figure 8e) and table_name, table_descr in Table Catalog (Figure 8d) is easy. However, ZenDB must process the document collection D to fill in the system-defined attributes (SDAs) in Table Catalog and T, and populate T with tuples. While ZenDB proactively identifies tuples for T, it doesn't populate any user-defined attributes until query time. Consider a partitioning of D = ⋃ D_i, where D_i is a set of documents sharing the same template, as identified during SHT construction. For each D_i, ZenDB picks a document D ∈ D_i and uses an LLM to populate T with tuples, and fill in the SDAs.
ZenDB then uses a rule-based approach to extract tuples from the remaining documents D′ ∈ D_i − {D} without invoking LLMs. We describe single-document and multi-document extraction next.

Single Document Extraction. To populate the SDAs of D for a given DTable T, we first identify the node in the SHT for D that captures all of the entities for T; we call this the table node. We then identify nodes that correspond to tuples that lie underneath this node. We use two prompts, table_oracle and tuple_oracle, to identify whether a given node corresponds to a table or a tuple, respectively.

table_oracle: If the following text describes [table_name], [table_descr], return true. Otherwise, return false. [node_context].

tuple_oracle: If the following text describes one [tuple_descr] in [table_name], [table_descr], return true. Otherwise, return false. [node_context].

In these prompts, [] is a placeholder. [table_name], [table_descr], and [tuple_descr] correspond to the table name and description, and the tuple description in Table Catalog (e.g., Figure 8d). [node_context] provides the entire text span corresponding to the node from the SHT table (e.g., in Figure 8c). To identify the table node, ZenDB walks the SHT top-down and submits table_oracle to the LLM for each node. If the responses for all of a node v's children are true, then we add v as a candidate table node and stop descending into v's children. Finally, ZenDB fills in the Least Common Ancestor (LCA) of the candidate table nodes as table_node in Table Catalog. Once the table_node is found, ZenDB attempts to populate T with tuples. Once again, ZenDB performs a top-down traversal starting from table_node and evaluates tuple_oracle on each node. If a node v evaluates to true, it means the node corresponds to an entity.
We insert a new tuple into T, assign its node and text span to those of v, and stop traversing v's descendants. If no nodes evaluate to true, it implies a leaf node contains multiple tuples, and so we flag multi_tuple as true in Table Catalog without populating T. We handle this case separately in Section 6.

Multi-document Extraction. Repeated LLM calls for extracting tuple boundaries from every document are too expensive, so we use a rule-based approach to populate tuples (and other SDAs) from the rest of the documents that share the same template. Consider populating table_node for document D′ ∈ D_i, D′ ≠ D, where tuples from D were populated as described previously. Let the table_node (i.e., the finest granularity node below which all the tuples are found) and t_range (i.e., tuple granularity range) of the table T in document D (that has already been populated) be v_tn and [l, r], respectively. For D′, if there exists a node v in its SHT such that v's granularity matches that of v_tn and the textual similarity between v's phrase and that of v_tn is greater than a threshold, then we set v to be the table node for D′; if no such v exists, the root is set to be the table node. Now, to populate tuples, suppose that for the tuple range [l, r] in D, l = r = x. In this easy case, there is a well-defined granularity in the SHT where tuples are found. Then, we add all nodes at granularity x from D′ as candidate tuples to T (assuming there is a non-zero number of them).
If l ≠ r, or if the SHT for D′ has a maximum height < x, then we simply set multi_tuple to true; in this case, the granularity for tuples is ambiguous, and so we treat it similarly to the case where there may be multiple tuples at a given node.

Multi-document Extraction Rules. In more detail, we define the following two rules. For each node v in an SHT, we use v.attr to denote any attribute attr belonging to v in the SHT table (e.g., v.granularity). For the document D′ ∈ D_i, let V_D′ be the set of nodes corresponding to D′ in the SHT table, and D′.table_node be the table_node of T in document D′ in Table Catalog.

Rule 1: ∀v_i ∈ V_D′, if v_i.granularity = v_tn.granularity as well as Sim(v_i.name, v_tn.name) > θ, then D′.table_node = v_i. Else, D′.table_node = root.

If the rule is unsatisfied, we set the table_node to be the root node of the SHT corresponding to D′.
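Rule 1 can be sketched as follows, with difflib's ratio standing in for the unspecified similarity function Sim; all names here are illustrative rather than ZenDB's actual code.

```python
from difflib import SequenceMatcher

def propagate_table_node(nodes, seed_name, seed_granularity, theta=0.8):
    """Rule 1 sketch: in document D', find a node whose granularity equals
    the seed document's table-node granularity and whose header phrase is
    textually similar; fall back to the root otherwise.
    `nodes` maps node id -> (name, granularity)."""
    for nid, (name, granularity) in nodes.items():
        if granularity == seed_granularity and \
                SequenceMatcher(None, name, seed_name).ratio() > theta:
            return nid
    return "root"
```

A document whose section header closely matches the seed document's table-node header at the same depth inherits that node as its table node; otherwise the rule falls back to the root.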
To populate the nodes corresponding to tuples, we first populate the granularity range of tuples, t_range.

Rule 2: If ∃v_j ∈ D′.table_node.child_ids, l ≤ v_j.granularity ≤ r, then D′.t_range = [l, r]. Else, multi_tuple = true.

If the granularities of tuples of T in document D′ are consistent, i.e., l = r in D′.t_range, then we create a set of nodes V, where for each v ∈ V, v.granularity = l and D′.table_node ∈ v.ancestor_ids. V is further converted to a set of tuples whose text_span = v.context and nodes = {v}. These tuples are inserted into the table T. If Rule 2 is violated, we set multi_tuple to true to denote that we do not have a one-to-one mapping between the set of nodes and tuples when populating the table for D′. Note that doing so might introduce false positives instead of false negatives. False positives are acceptable since they do not lose the context of where the answers may be present, and in Section 6 we will discuss how to reduce false positives during query execution.
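A minimal sketch of Rule 2 follows, assuming each node carries its granularity, ancestor list, and context; testing ancestor membership is a crude stand-in for the child_ids check, and all names are our own.

```python
def apply_rule2(nodes, table_node, l, r):
    """Rule 2 sketch (hypothetical names): returns (t_range, multi_tuple,
    tuples). `nodes` maps node id -> (granularity, ancestor_ids, context)."""
    in_range = any(
        l <= g <= r and table_node in ancestors  # stand-in for child_ids test
        for g, ancestors, _ in nodes.values()
    )
    if not in_range:
        return None, True, []  # Rule 2 violated: flag multi_tuple
    tuples = []
    if l == r:
        # Consistent granularity: one tuple per node at depth l under table_node.
        tuples = [
            {"text_span": ctx, "nodes": [nid]}
            for nid, (g, ancestors, ctx) in nodes.items()
            if g == l and table_node in ancestors
        ]
    return (l, r), False, tuples
```

With two depth-2 nodes under table node A1 and t_range [2,2], the sketch emits one tuple per node; with an out-of-range t_range it flags multi_tuple instead.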
When multi_tuple in D is true, we do not populate t_range but set multi_tuple to true for D′. Overall, when the number of distinct templates (i.e., |D_i|) in documents D is small, the cost incurred by LLMs to populate the SDAs is minimal, since we only invoke LLMs on a single document per cluster.

6 QUERY ENGINE
We discuss how ZenDB generates a query plan for a given query Q in Section 6.1, and then describe our physical operator implementations that leverage SHTs in Section 6.2.

SELECT Agenda_Meeting.doc_id, COUNT(Projects.name)
FROM Projects, Agenda_Meeting
WHERE Projects.type = 'Capital Improvement'
  AND Projects.begin_time > '2022-06-01'
  AND Agenda_Meeting.meeting_time < '2023 October'
  AND Projects.doc_id = Agenda_Meeting.doc_id
GROUP BY Agenda_Meeting.doc_id

Figure 9: A Query on Civic Agenda Documents.
Figure 10: A Query Plan for the Query in Figure 9.

6.1 Logical Query Plan
Unlike traditional settings where I/O and computation costs dominate, here, LLM invocations add to monetary cost[3] and/or latency, and thus must be minimized if possible. Keeping this guideline in mind, when generating a logical query plan for a given query Q, ZenDB first parses the SQL query into a parse tree of relational operators. Subsequently, predicates are pushed down to reduce intermediate result sizes and thereby downstream LLM invocations, while also taking into account the fact that predicate evaluations that rely on LLMs can be expensive. ZenDB relies on the standard approach from prior work [32] for expensive predicate reordering that takes into account both selectivity and cost. Specifically, we define a metric f(o) for each selection operator o.
Let s_o be the selectivity of o, computed as s_o = |T_s| / |T_c|, where T_c (T_s) is the set of tuples that are processed by (respectively, satisfy) the predicate associated with o. Let e_o be the average cost of evaluating a tuple using operator o, which is estimated adaptively during query execution as more tuples are processed by o. The goodness of a selection operator o is then defined as f_o = e_o × s_o. Intuitively, if an operator o has lower cost e_o and selectivity s_o, o is preferred to be executed earlier. ZenDB sorts the set of selection operators on the same table in increasing order of f(o). Projections, on the other hand, are pulled up, to avoid having to populate attributes through LLM calls for tuples that may get discarded. Until a selection or projection is encountered that requires a specific attribute for a tuple, that attribute stays uninterpreted, and therefore NULL. From a join order standpoint, ZenDB adopts a greedy algorithm to generate a left-deep tree, in an approach akin to standard relational query optimization techniques. Here, instead of optimizing for reducing the sizes of intermediate results, we focus on reducing the LLM invocation cost. Let E(T) be the cost (in terms of dollars or latency) of evaluating all of the predicates in Q corresponding only to table T on all of the tuples of T. ZenDB ranks the tables in Q as T1, T2, ...
based on their E(T_i) in increasing order, forming a left-deep tree with T1 as the driving table, followed by T2 to form T1 ⋈ T2, with the remaining tables being selected based on E(·).
[3] This is common for several commercial LLMs like OpenAI, Claude-3 [7], Google Gemini [3].

Algorithm 2: tree_evaluate(SHT, tuple, e)
1  CurrentNodes = {tuple.node}
2  Ans = ∅
3  T = getTree(SHT, node)
4  /* Refine candidate nodes */
5  while stop_condition(T) = False do
6      CNs = ∅
7      for n ∈ CurrentNodes do
8          if search_oracle(n, e) = True then
9              CNs = CNs ∪ n
10     CurrentNodes = CNs.child_ids
11 if e.type = predicate then
12     /* Evaluating a predicate */
13     for node ∈ CNs do
14         if evaluate_oracle(node.summary, e) = True then
15             Ans = Ans ∪ node
16     Return Ans
17 if e.type = attribute then
18     /* Extracting attribute values */
19     for node ∈ CNs do
20         Ans = Ans ∪ extract_oracle(node.summary, e)
21     Return Ans

When multi_tuple is false, implying that in table T we have pre-populated potential tuples, and therefore have a more precise estimate, E(T) = |T| × e is estimated at query time, where |T| is the number of tuples in T, and e denotes the average cost of evaluating a single tuple.
Initially, E(T) is set to |T| to prioritize evaluating the table with the smaller number of tuples, and e is estimated adaptively as more tuples are processed during query execution. One logical plan for the query in Figure 9 is shown in Figure 10, where Agenda_Meeting has only one tuple, compared to the Projects table with more than 40 tuples, and is thus evaluated first. The estimation of E(T) when multi_tuple is true will be described in Section 6.2.

6.2 Physical Query Plan
During query execution, each tuple in the user-defined DTables has attribute values that begin as NULL, as in Figure 8a, but some attributes will get populated through selections or projections. When multi_tuple is true, ZenDB leverages LLMs to create a set of tuples satisfying the corresponding predicates, with the attributes listed in the projections to be computed, as will be discussed shortly. We now discuss our implementations of various operators.

Scan. As part of our scan operator, ZenDB executes the query document by document (which explains the restriction to joins on doc_id in Section 4.2). This operator first retrieves the tuples in the first document as a batch, followed by tuples in the second document; thus only one SHT is processed at a time.

Selections and Projections. Consider a predicate pred or a projection proj on table T; a similar procedure is followed in either case. Say multi_tuple is false, so each row in T corresponds to a single potential tuple.
ZenDB then calls a function tree_evaluate(SHT, tuple, e), listed in Algorithm 2, with e set to pred (respectively, proj), to evaluate whether tuple satisfies pred, returning it if so (respectively, returning the value of the attribute in proj). This function implements a tree search on the SHT, leveraging the summaries for each node, as defined in Section 4.1. We next describe how we populate this summary per node in the SHT table (Figure 8c).

Summary Creation. Given the SHT for a document D and the expression e, the summary S(v) for a node v comprises the following:
(1) The phrase(s) corresponding to both v and its ancestors.
(2) An extractive summary of the text span of v, which is a set of important sentences determined using standard (non-LLM) NLP tools like NLTK [10].
(3) The top-1 sentence in the text span of v with the highest semantic similarity (e.g., cosine similarity) to e.
Parts (1) and (2) are prepared offline when the SHT is built. Part (3) is added during query processing. Including phrases (i.e., headers) of ancestors in (1) often helps enhance accuracy by providing additional background for interpreting v's text span.
For example, in Figure 1, the summary of node B2 contains the header phrase of its parent, "Capital Improvement Projects (Design)", helping us identify B2 as a candidate node when evaluating a predicate such as type = Capital Improvement.

Tree Search Algorithm. Given a document D with its SHT, a tuple node node, and an expression e (either a predicate or a projection), Algorithm 2 first identifies the sub-tree T of the SHT with node as the root (Line 3), and searches T top-down. For each node n in a layer, it calls search_oracle(n, e) to check whether n's summary contains the right information to evaluate expression e. It then adds all the nodes that pass search_oracle into a candidate set CNs (Lines 6-10), and recursively searches their children until a stopping condition is met (Line 5). The condition is that (1) a leaf node is reached, or (2) the number of tokens in the summary of the node is larger than that of its context (i.e., text span).

search_oracle(node, e): If the following text contains the information that describes [e.descr], return True; otherwise, return False. The context is [node.summary].
Example: [e.descr] = 'the type of project is Capital Improvement'

For each candidate node n ∈ CNs, if the expression e is a predicate, then an LLM call with prompt evaluate_oracle(node.summary, e) is issued to evaluate whether the summary of the node satisfies the predicate. This step stops early when there exists one node that passes evaluate_oracle (Lines 11-16). When e is a projected attribute, extract_oracle(node.summary, e) is instead used to extract the value of the projected attribute (Lines 17-21).

evaluate_oracle(context, e): Return True if [e.descr] based on the following context [context]. Otherwise, return False.
Example: [e.descr] = 'type of project is Capital Improvement'

extract_oracle(context, e): Return [e.descr] based on the following context [context].
Example: [e.descr] = 'name of project'

Each selection operator o returns the set of tuples in table T satisfying the predicate associated with o to downstream operators. We handle the case where multi_tuple is true for table T in Section 6.3.
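The layered refinement of Algorithm 2 (Lines 5-10) can be simulated with the LLM search oracle stubbed as an ordinary predicate; this simplified sketch stops only at leaves, omitting the summary-size condition, and uses our own names rather than ZenDB's.

```python
def refine_candidates(root, children, search_oracle):
    """Descend layer by layer from the tuple's node: keep the nodes whose
    summaries pass the (stubbed) search oracle, collect those that are
    leaves, and recurse into the children of the rest."""
    frontier, candidates = [root], []
    while frontier:
        passing = [n for n in frontier if search_oracle(n)]
        candidates.extend(n for n in passing if not children.get(n))
        frontier = [c for n in passing for c in children.get(n, [])]
    return candidates
```

On a tree R → {A1, A2}, A1 → {B1, B2} with an oracle that passes R, A1, and B2, the search descends through R and A1 and returns the leaf B2 as the sole candidate.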
Even though executing a tree search procedure by exposing node summaries to LLMs incurs additional cost, it is minimal in practice, since the height of the tree is often small (and thus the number of iterations is small), and the size of the summary is small and controllable. In Section 7 we show that the benefit introduced by summaries, namely better accuracy and lower cost, outweighs this additional cost.

Other Operators. We use a nested loop join as our join algorithm. As mentioned earlier, even if we consider latency to be the primary optimization criterion, the evaluation of predicates and projections through LLM invocations would dominate overall latency, and the number of intermediate tuples to be processed during query execution is often small. If we further treat monetary cost as the primary criterion, then joins are effectively free. Thus, a simple nested loop join suffices. Similarly, other operators like aggregation and group-by use simple relational variants.

Provenance of Query Answers. ZenDB maintains provenance in the form of the corresponding text span(s) for the returned query answers, in a manner analogous to classical relational provenance [30]. During query processing, we keep track of the sequence of text spans consulted to populate attributes or verify predicates as an additional metadata attribute, per tuple. These text spans are combined into an array during joins. While we could apply the same idea to aggregations and capture the provenance of contributing tuples in an array, this representation is unwieldy. Determining how best to show all of this provenance to end-users to ensure trust in query answers is an important topic for future work.

6.3 Operators for the Multiple Tuple Case
When multi_tuple is true for table T, there are no tuples in T after the population step of Section 5, and the context of table_node may contain multiple tuples.
Let pred(T) and proj(T) be the sets of predicates and projected attributes associated with table T in a given query Q. In this case, ZenDB searches the text span corresponding to the table_node of T, and creates a set of tuples satisfying pred(T), with proj(T) populated by LLMs. When table_node is a leaf node in its SHT, ZenDB submits the prompt multi_tuple_oracle(table_node, pred(T), proj(T)) to LLMs to extract the projected values for the tuples that satisfy the given predicates pred(T).

multi_tuple_oracle(node, pred(T), proj(T)): The following text describes one or more [tuple_descr]. For each [tuple_descr], if pred(T), then return [proj(T)] based on the following context [node.context]. Example: [tuple_descr] = 'paper', [pred(T)] = 'publication year is greater than 2009 and conference is VLDB', [proj(T)] = 'name of paper, authors of paper'

As an example, consider a publication document D, where users want to create a table called Reference with the schema {name, year}, whose text span corresponds to the references section of a paper. Assume that in the SHT of D, the references section is a leaf node. In this case, ZenDB will not further parse the references section into individual references, but will call multi_tuple_oracle() to extract, per reference, the paper name and authors for VLDB papers whose publication year is later than 2009, directly over the references section.
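The multi-tuple prompt can likewise be sketched as a template filler (the exact wording is an assumption); note that a single call covers every qualifying tuple in the node's context:

```python
def multi_tuple_oracle_prompt(tuple_descr: str, pred: str, proj: str, context: str) -> str:
    # One LLM call extracts all tuples in the context that satisfy the predicate.
    return (f"The following text describes one or more {tuple_descr}. "
            f"For each {tuple_descr}, if {pred}, then return {proj} "
            f"based on the following context: {context}")

prompt = multi_tuple_oracle_prompt(
    "paper",
    "publication year is greater than 2009 and conference is VLDB",
    "name of paper, authors of paper",
    "[1] A. Smith et al. Query Processing over Documents. VLDB 2012.")
```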
When table_node is not a leaf node in its SHT of document D, let D' be a document sharing the same template with D, which populated its system-defined attributes via D in Section 5. Let stop_granularity be the granularity at which the search in Algorithm 3 stops, and set stop_granularity = D.tuple_range.l, i.e., the smallest granularity of tuples in D. Note that this may introduce false positives (one node might correspond to multiple tuples) but avoids false negatives (there will not exist nodes that correspond to only portions of a tuple).

Algorithm 3: tree_evaluate_multi_tuple
Input: SHT, table_node, pred(T), proj(T), stop_granularity
1  CurrentNodes = {table_node}
2  Tuples = \u2205
3  granularity = table_node.granularity
4  T = getTree(SHT, table_node)
5  /* Refine candidate nodes */
6  while granularity \u2264 stop_granularity do
7      CNs = \u2205
8      for n \u2208 CurrentNodes do
9          if search_oracle(n, e) = True then
10             CNs = CNs \u222a {n}
11     granularity = granularity + 1
12     CurrentNodes = CNs.childs_id
13 Ans = \u2205
14 for n \u2208 CNs do
15     Ans = Ans \u222a multi_tuple_oracle(n, pred(T), proj(T))
16 Return Ans

Table 1: Characteristics of Datasets.
Datasets     | # of Documents | Avg # Pages | Avg # Tokens
Publication  | 100            | 11.5        | 13230
Civic Agenda | 41             | 8.7         | 3185
Notice       | 80             | 7.1         | 3719

ZenDB executes tree_evaluate_multi_tuple in Algorithm 3. ZenDB starts by searching the subtree of the SHT with table_node as the root (Line 4). We use the same summary-based search as in tree_evaluate in Algorithm 2 to refine the nodes related to the given query, top-down layer by layer, and stop the search when the granularity of the current layer exceeds stop_granularity (Lines 6-12). For each node n \u2208 CNs that is related to the query and might contain multiple tuples, we call multi_tuple_oracle to extract the corresponding tuples (Lines 13-15).

7 EVALUATION
In this section, we evaluate ZenDB over three real document collections on accuracy, latency, and cost.

7.1 Methodology
7.1.1 Data & Query Sets.
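Algorithm 3's top-down refinement can be sketched in runnable form as follows; the dict-based node layout, the integer granularity encoding, and the oracle stubs are assumptions for illustration:

```python
def tree_evaluate_multi_tuple(table_node, stop_granularity, search_oracle, multi_tuple_oracle):
    """Refine candidate nodes layer by layer, then extract tuples per surviving node."""
    current = [table_node]
    cns = list(current)
    granularity = table_node["granularity"]
    while granularity <= stop_granularity:
        # Keep only nodes the (LLM-backed) search oracle deems relevant to the query.
        cns = [n for n in current if search_oracle(n)]
        granularity += 1
        current = [c for n in cns for c in n.get("children", [])]
    answers = []
    for n in cns:
        # One multi-tuple extraction call per surviving node.
        answers.extend(multi_tuple_oracle(n))
    return answers

leaf1 = {"granularity": 2, "children": [], "text": "relevant"}
leaf2 = {"granularity": 2, "children": [], "text": "other"}
root = {"granularity": 1, "children": [leaf1, leaf2], "text": "root"}
found = tree_evaluate_multi_tuple(
    root, stop_granularity=2,
    search_oracle=lambda n: "relevant" in n["text"] or n["text"] == "root",
    multi_tuple_oracle=lambda n: [("tuple-from", n["text"])])
```

In the real system both oracles are LLM prompts over node summaries and contexts; here they are stubbed with lambdas so the control flow can be tested in isolation.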
We collected three real-world datasets (i.e., document collections): scientific publications, civic agenda reports, and notices of violations; details are displayed in Table 1.

Scientific Publications. This dataset was collected from a systematic review study at UC Irvine that examined research questions in the field of personal data management [11]. The study analyzed over 500 publications; we randomly selected 100 papers for our dataset. The study explored 20 research questions, with human-labeled answers for all of the publications.

Civic Agenda Reports. This dataset, from our collaborators at Big Local News, comprises 41 civic agenda reports from 2022 to 2024 in the City of Malibu [14]. Each report details a series of government projects, including their status, updates, decisions, and timelines for beginning, ending, and expected construction.

Notice of Violations. This dataset, also from Big Local News, comprises 80 documents describing notices of violations issued by the US Dept. of Transportation from 2023 to 2024 [12]. Each document concerns potential violations detailed by the Hazardous Materials Safety Administration, including detailed violation orders and descriptions, penalty decisions, and proposed compliance orders.

Query Workload. For each dataset, we devise a query workload comprising 9 SQL queries, informed by the needs of our collaborators. These 9 queries are divided into three groups, QG1, QG2, and QG3, varying in the number of predicates from one to three, respectively. To generate these queries, we first define tables along with a set of attributes per dataset. Then we randomly select i attributes to create i predicates for the queries in group QGi, and in the SELECT clause we additionally include one attribute that is not used in the predicates, as well as doc_id. When we end up sampling attributes across multiple relations, we list both relations in the FROM clause and additionally add an equijoin condition on doc_id.
So, overall, our queries include selections, projections, and joins. We omit aggregations from our workload since we use relational versions of those operators, evaluated after the corresponding attribute values are extracted; thus the performance on such queries would be similar to that on queries without them.

7.1.2 Strategies Compared and Evaluation Metrics. We compare ZenDB with four baselines: GPT_single, GPT_merge, RAG_seq, and RAG_tree. The first two operate on an entire document at a time. GPT_single uses a separate LLM call per predicate and projection by constructing a corresponding prompt and appending the entire document as context. GPT_merge combines all of the predicates and projections into a single LLM call alongside the entire document. RAG_seq and RAG_tree refer to RAG-based techniques in two variants implemented by LlamaIndex [9], a state-of-the-art open-source RAG framework: sequential chunking and tree-style chunking, respectively. In RAG_seq, we set the chunk size to 128 tokens and selected the top-k chunks, where k = max(1, 5% \u00d7 doc_size / 128). That is, we retrieve at least one chunk, but no more than 5% of the document. RAG_tree constructs a hierarchical tree from the document without leveraging semantic structure. This tree is constructed by first chunking the leaves at a fixed granularity; nodes higher up in the hierarchy are formed by recursively summarizing the nodes below. Subsequently, a path from the root to a leaf is retrieved, instead of just one leaf. GPT-4-32k is used to evaluate the queries for all strategies. We use precision and recall to measure the quality of query answers. Given a query Q, let T_truth(Q) and T_pre(Q) be the sets of tuples in the ground truth and predicted by an approach, respectively.
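The chunk-budget rule used for RAG_seq reduces to a one-line computation; a sketch (function name is an assumption):

```python
def num_chunks(doc_size_tokens: int, chunk_size: int = 128, frac: float = 0.05) -> int:
    # Retrieve at least one chunk, but no more than 5% of the document's chunks.
    return max(1, int(frac * doc_size_tokens / chunk_size))

k_small = num_chunks(1000)    # 5% of 1000 tokens is under one 128-token chunk, so clamp to 1
k_large = num_chunks(13230)   # roughly the average publication-document size from Table 1
```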
Precision is measured as |T_truth(Q) \u2229 T_pre(Q)| / |T_pre(Q)|, and recall as |T_truth(Q) \u2229 T_pre(Q)| / |T_truth(Q)|. We count the number of input and output tokens to measure the cost of LLM invocations [6]. Finally, we measure the latency of query execution by taking three runs and reporting the average.

7.2 Experimental Results

Experiment 1: ZenDB vs. GPT-only Strategies. We first compare ZenDB with GPT_single and GPT_merge, both operating on an entire document at a time. Table 2 reports our metrics of interest on the three datasets, while Table 3, Table 4, and Table 5 provide a breakdown per dataset.

Table 2: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Publication (PUB), Civic Agenda (CIVIC), and Notice of Violation (NOTICE) Datasets. (GPT-4-32k is Used.)
           | Precision          | Recall             | Cost ($) / Tokens (\u00d7 1000)     | Latency (Seconds)
Strategy   | PUB  CIVIC NOTICE  | PUB  CIVIC NOTICE  | PUB       CIVIC     NOTICE     | PUB  CIVIC NOTICE
GPT_single | 0.74 0.45  0.71    | 0.38 0.45  0.77    | 0.98/16.2 0.33/5.4  0.3/5.3    | 14.6 15.3  6.1
GPT_merge  | 0.63 0.34  0.66    | 0.4  0.45  0.72    | 0.8/13.2  0.2/3.2   0.2/3.7    | 12.9 7.4   5
RAG_seq    | 0.51 0.12  0.36    | 0.38 0.13  0.38    | 0.02/0.4  0.02/0.29 0.01/0.18  | 3.76 5.1   1.3
RAG_tree   | 0.51 0.2   0.2     | 0.38 0.04  0.17    | 0.07/1.2  0.04/0.66 0.02/0.35  | 10   8.9   1.3
ZenDB      | 0.72 0.73  0.73    | 0.53 0.84  0.74    | 0.03/0.56 0.03/0.53 0.02/0.25  | 4.8  7     1.7

Table 3: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Publication Dataset. (GPT-4-32k is Used.) Columns per metric: QG1 QG2 QG3 Avg.
           | Precision           | Recall              | Cost ($) / Tokens (\u00d7 1000)             | Latency (Seconds)
GPT_single | 0.94 0.66 0.62 0.74 | 0.65 0.16 0.32 0.38 | 0.8/13.2 1/16.6 1.1/18.9 0.98/16.2      | 12.8 14.1 16.9 14.6
GPT_merge  | 0.94 0.41 0.63 0.63 | 0.65 0.13 0.41 0.4  | 0.8/13.2 0.8/13.2 0.8/13.2 0.8/13.2     | 12.8 12.9 13.1 12.9
RAG_seq    | 0.73 0.4  0.39 0.51 | 0.6  0.23 0.31 0.38 | 0.01/0.23 0.02/0.38 0.03/0.59 0.02/0.4  | 2.7 3.9 4.5 3.76
RAG_tree   | 0.79 0.33 0.42 0.51 | 0.68 0.19 0.27 0.38 | 0.05/0.82 0.08/1.3 0.1/1.6 0.07/1.2     | 8.4 10.4 11.2 10
ZenDB      | 0.93 0.64 0.6  0.72 | 0.7  0.54 0.34 0.53 | 0.02/0.41 0.03/0.56 0.04/0.71 0.03/0.56 | 3.9 5.2 5.4 4.8

Table 4: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Civic Dataset. (GPT-4-32k is Used.) Columns per metric: QG1 QG2 QG3 Avg.
           | Precision           | Recall              | Cost ($) / Tokens (\u00d7 1000)             | Latency (Seconds)
GPT_single | 0.64 0.36 0.36 0.45 | 0.73 0.37 0.24 0.45 | 0.2/3.2 0.33/5.4 0.47/7.6 0.33/5.4      | 7.5 15.3 23.1 15.3
GPT_merge  | 0.64 0.22 0.16 0.34 | 0.73 0.32 0.29 0.45 | 0.2/3.2 0.2/3.2 0.2/3.2 0.2/3.2         | 7.3 6.9 7.5 7.4
RAG_seq    | 0.25 0.11 0    0.12 | 0.36 0.04 0    0.13 | 0.01/0.14 0.02/0.3 0.03/0.43 0.02/0.29  | 3.3 5.2 6.9 5.1
RAG_tree   | 0.36 0.23 0    0.2  | 0.12 0.01 0    0.04 | 0.03/0.49 0.04/0.6 0.05/0.88 0.04/0.66  | 5.9 8.9 12.3 8.9
ZenDB      | 0.89 0.72 0.61 0.73 | 0.86 0.79 0.83 0.84 | 0.02/0.43 0.04/0.59 0.04/0.68 0.03/0.53 | 5.1 7.2 8.8 7

Table 5: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Notice Violation Dataset. (GPT-4-32k is Used.) Columns per metric: QG1 QG2 QG3 Avg.
           | Precision           | Recall              | Cost ($) / Tokens (\u00d7 1000)             | Latency (Seconds)
GPT_single | 0.71 0.65 0.76 0.71 | 0.9  0.67 0.75 0.77 | 0.2/3.7 0.31/5.2 0.43/7.1 0.3/5.3       | 4.9 6.2 7.3 6.1
GPT_merge  | 0.7  0.56 0.62 0.66 | 0.8  0.6  0.77 0.72 | 0.2/3.7 0.2/3.7 0.2/3.7 0.2/3.7         | 4.8 5 5.1 5
RAG_seq    | 0.61 0.31 0.17 0.36 | 0.67 0.22 0.26 0.38 | 0.01/0.12 0.01/0.19 0.01/0.23 0.01/0.18 | 0.9 1.3 1.7 1.3
RAG_tree   | 0.58 0.36 0.24 0.2  | 0.39 0.5  0.17 0.17 | 0.02/0.25 0.02/0.38 0.03/0.41 0.02/0.35 | 2.1 2.7 3.1 2.6
ZenDB      | 0.79 0.67 0.72 0.73 | 0.87 0.62 0.73 0.74 | 0.01/0.19 0.02/0.26 0.02/0.3 0.02/0.25  | 1.4 1.7 2.1 1.7

We first note that ZenDB achieves precision and recall comparable to GPT_single on the publication and notice datasets. Notably, ZenDB surpasses GPT_single on the civic dataset, improving precision by 28% and recall by 39%, due to this dataset's complex semantic structure, which poses challenges for GPT_single in generating high-quality responses. ZenDB's approach of querying based on SHTs focuses LLM attention on portions of documents at a time, thereby enhancing performance. We also observe that combining multiple predicates into a single prompt makes it more difficult for the LLM to provide the correct answer, resulting in performance degradation. On the cost and latency front, ZenDB significantly reduces both relative to GPT_single and GPT_merge. Specifically, ZenDB achieves cost savings of approximately 29\u00d7, 10\u00d7, and 4\u00d7 for the publication, civic, and notice datasets, respectively.
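The precision and recall measures used throughout Tables 2-5 reduce to simple set arithmetic over ground-truth and predicted tuple sets; a sketch (the tuple encoding is an assumption):

```python
def precision_recall(truth: set, pred: set):
    # |truth ∩ pred| / |pred| and |truth ∩ pred| / |truth|, guarding empty sets.
    inter = len(truth & pred)
    precision = inter / len(pred) if pred else 0.0
    recall = inter / len(truth) if truth else 0.0
    return precision, recall

p, r = precision_recall(
    truth={("d1", "x"), ("d2", "y")},
    pred={("d1", "x"), ("d3", "z")})
```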
It is noteworthy that ZenDB's cost savings increase with document size, since the number of tokens it uses is largely independent of document size; instead, it depends on the size of the summaries and the number of SHT levels explored during execution, both of which are controllable factors. Accordingly, we observe varying levels of latency savings with ZenDB, up to a 4\u00d7 reduction across datasets.

Experiment 2: ZenDB vs. RAG-only Strategies. When compared with RAG_seq and RAG_tree, we observe that RAG_seq achieves significant cost and latency savings relative to GPT-only strategies. However, relying solely on retrieving physical chunks based on embedding similarity, as RAG does, fails to accurately identify the text spans related to the queries, leading to a substantial degradation in precision and recall. While ZenDB incurs a slightly higher cost, it offers substantial advantages over RAG-based approaches thanks to its use of semantic structure, with increases in precision of up to 61% and recall of up to 80%. RAG_tree generally shows slight improvements in precision and recall over RAG_seq, but it similarly falls short of ZenDB for a similar reason: its use of tree-style physical chunking often fails to accurately identify the appropriate text spans. Moreover, the exhaustive summary construction and usage in RAG_tree results in higher cost and latency compared to ZenDB.

[Figure 11: The Effect of Summary Construction on the Performance of ZenDB on Real Datasets. Panels compare ZenDB, no-ES, no-node-name, and no-DS on each dataset: (a) Average Precision, (b) Average Recall, (c) Average # of Tokens (\u00d7 1000), (d) Average Latency (Seconds).]

Table 6: SHT Construction. (GPT-4 is Used.)
Datasets (# of Docs) | # of Nodes | # of Layers | Cost ($) / Tokens | Latency
Publication (100)    | 13.4       | 2.8         | 0.05 / 1.8k       | 6 min
Civic (41)           | 32.1       | 2.9         | 0.01 / 0.36k      | 1 min
Violation (80)       | 8.9        | 2.2         | 0.01 / 0.32k      | 1 min

Table 7: Table Population. (GPT-3.5-Turbo is Used.)
Datasets (# of Docs) | Cost ($) / Tokens | FP   | FN | Latency
Publication (100)    | 0.048 / 95.4k     | 0    | 0  | 7 min
Civic (41)           | 0.005 / 10.1k     | 0.08 | 0  | 3 min
Violation (80)       | 0.005 / 8.9k      | 0.04 | 0  | 2 min

[Figure 12: ZenDB vs. ZenDB-light: Precision and Recall; panels (a) Precision, (b) Recall.]

Experiment 3: Data Preparation. Next, we examine the two phases within ZenDB that happen prior to queries, SHT construction and table population, and compare their costs to those of online queries.

Experiment 3.1: SHT Construction. We present the average number of nodes and layers per SHT, and the total cost, number of tokens, and latency on the three datasets, in Table 6. SHT construction is an offline process, making latency at the level of minutes unproblematic. The cost is affected by the number of distinct templates in the datasets: ZenDB uses LLMs to verify headers for SHT generation for one document per template, with the remaining SHTs created through visual pattern matching. The cost is further reduced by sampling the phrase clusters. In the publication dataset, the publications originate from 6 conferences, whereas the other two datasets each follow a consistent template; therefore, the publication dataset has a higher cost than the others, although all costs are minimal.

Experiment 3.2: Table Population. When users define a DTable, ZenDB populates the system-defined attributes using LLM-based and rule-based approaches.
Table 7 presents the total cost and number of tokens (we use GPT-3.5-Turbo; when the context size of a node exceeds the token limit, e.g., the root node in the publication dataset, we use NLTK [10] to summarize the context and adjust the summary size to approximately match the token limit of a prompt), along with latency and quality results.

[Figure 13: # of Queries on 1 Document by 1 $.]
[Figure 14: ZenDB vs. ZenDB-light: Latency.]

In particular, to show the quality of table population, let ts_g(e) and ts_p(e) be the text span of an entity e (a table or a tuple) in the ground truth and as predicted by ZenDB, respectively. We label ts_g(e) \u2282 ts_p(e) as a false positive (FP), indicating that the predicted text span contains the true text span but is larger; this is acceptable since it does not miss the correct answers and will be refined by the tree-search algorithm. In contrast, (ts_p(e) \u2282 ts_g(e)) \u2228 (ts_p(e) \u2229 ts_g(e) = \u2205) is considered a false negative (FN), because the predicted text span does not encompass all of the true text span, potentially resulting in missed answers. Notably, ZenDB demonstrates a low FP rate on the violation and civic agenda datasets, showcasing the effectiveness of the approach. The cost incurred in this step is minimal, thanks to the use of the affordable LLM GPT-3.5-Turbo (around 100x cheaper than GPT-4).

End-to-end cost comparison: ZenDB vs. Others.
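The FP/FN labeling of predicted text spans can be sketched with character-offset intervals; the interval encoding and label names for the remaining cases are assumptions (the paper defines the labels set-theoretically over text spans):

```python
def label_span(ts_g, ts_p):
    """Classify a predicted text span against the ground-truth span.
    Spans are (start, end) character offsets."""
    g0, g1 = ts_g
    p0, p1 = ts_p
    contains_truth = p0 <= g0 and g1 <= p1        # prediction covers the truth
    overlaps = max(g0, p0) < min(g1, p1)
    if contains_truth and (p0, p1) != (g0, g1):
        return "FP"   # larger than truth: acceptable, refined later by tree search
    if (g0 <= p0 and p1 <= g1 and (p0, p1) != (g0, g1)) or not overlaps:
        return "FN"   # misses part (or all) of the true span
    return "exact" if (p0, p1) == (g0, g1) else "partial"
```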
Although ZenDB incurs costs to construct SHTs and populate tables before a query arrives, these costs are minimal, totaling 0.1, 0.015, and 0.015 dollars for the publication, civic agenda, and notice of violations datasets, respectively. Even if we ran just a single query subsequently, the end-to-end costs for ZenDB would be lower than those of GPT_single and GPT_merge, with the loading costs getting amortized across queries.

Experiment 4: The Effect of Summary Construction in ZenDB. We examine the effect of summary construction on ZenDB's performance in Figure 11. Recall from Section 6.2 that the summary of each node v in an SHT consists of three components: an extractive summary (ES), the phrases of v and its ancestors (node-name), and the top-1 sentence related to a given query predicate or projection within the text span of v (DS, i.e., Dynamic Summary). We explore three variations of ZenDB by removing one component at a time from the summary: no-ES, no-node-name, and no-DS (e.g., no-ES refers to the strategy that excludes the extractive summary from the summary of the node). We observe that the extractive summary impacts the quality of query answers (i.e., precision and recall) the least, while both dynamic summaries and node names (i.e., the header phrases) affect performance more significantly. Node names provide useful metadata that adds context for the LLM, helping refine the search space. The dynamic summary plays a critical role in summary construction by not only identifying the relevant nodes but also retrieving the text span most related to the given query. We also note that storing node names has a minimal impact on cost and latency due to their compact size. In contrast, both extractive and dynamic summaries are larger, though they still represent a relatively small portion of the overall cost and latency.

Experiment 5: ZenDB Driven by a Cheaper LLM: GPT-3.5-Turbo.
We next study the impact of replacing the more expensive LLM used in ZenDB, GPT-4-32k, with an almost 100\u00d7 cheaper LLM, GPT-3.5-Turbo, when evaluating queries. We denote this version as ZenDB-light. In Figure 12, ZenDB-light exhibits approximately a 7% decrease in precision and a 3% decrease in recall compared to ZenDB, at 100\u00d7 lower cost. This demonstrates that by refining the text span that ZenDB uses for evaluating queries, as opposed to using the entire complex document, ZenDB provides a much simpler and more precise context for LLMs to evaluate. This makes it easier for less-advanced but cheaper models like GPT-3.5-Turbo not just to process the entire text span, but also to answer the query accurately. We report the average number of SQL queries that can be executed on a single document by spending 1 dollar using ZenDB-light in Figure 13. ZenDB-light can run approximately 3.5k, 3.7k, and 8k SQL queries (with 2 predicates and one projection on average) on one document within budget for the publication, civic agenda reports, and notices of violations datasets, respectively, demonstrating the practicality of ZenDB-light.

8 RELATED WORK
We now survey related work on querying unstructured data.

Text-to-Table Extraction. One approach to querying unstructured data is to simply extract the unstructured data into tables, after which they are queried as usual. This approach is followed by Google DocumentAI [4] and Azure Document Intelligence [5], as well as by approaches such as text-to-table [61]. Using an LLM to populate entire tables upfront can be expensive and error-prone on large and complex document collections such as ours. Evaporate [18] uses an LLM to infer a schema and then populate tables, using synthesized rules if possible. Simple extraction rules, such as the ones generated by Evaporate, are not applicable in our setting.

Retrieval-Augmented Generation (RAG).
RAG techniques [20, 34, 41, 60] help identify the smaller text portions that are most relevant to a given query, in order to fit into finite context windows, reduce cost, and in some cases improve accuracy. Most techniques use fixed-granularity chunking policies and don't account for semantic structure, while recent extensions rely on potentially expensive recursive summarization to build a hierarchy [9, 52]. We showed that this RAG_tree approach suffers from the same issues as vanilla RAG: the leaf nodes still use fixed-size chunks that are divorced from semantics, and thus fail to find relevant text segments. In comparison, ZenDB leverages semantic structure to boost precision and recall by up to 61% and 80%.

Multi-Modal Databases. Recent work creates multi-modal databases [24, 35, 55, 56, 58] that support SQL-like interfaces over text, images, and/or video. However, they all apply LLMs or other pre-trained models to entire documents at a time, and are thus limited to simple, small documents. This is equivalent to our vanilla LLM approach, which is expensive and not very accurate. Other work [31] has used interactive query processing to improve query results through user feedback. None of these approaches have explored the use of semantic structure to reduce cost and improve accuracy.

Natural Language Interfaces to Data. Supporting natural language querying over structured data is a long-standing question in the database community; a recent survey is the one by Quamar et al. [50]. While the database community has been working on this problem for over a decade, e.g., [40], LLMs have dominated recent benchmarks [21, 42]. In our work, we instead focus on the inverse problem of structured (SQL) queries over unstructured data\u2014but this line of work could aid the first step of SQL query construction.

LLMs meet Data Management. LLMs potentially disrupt the field of data management [29], but the first step is to actually understand tables.
Recent work [25, 28, 62] explores how well LLMs understand tabular data, as well as representing knowledge learned by the LLM as structured data [51, 59]. Many data management problems have been revisited, including query rewriting [44], database tuning [57], data preprocessing [63], data and join discovery [26, 27, 36], data profiling [33], and data wrangling [23, 43, 48]. Some recent work has also explored how well LLMs can generate tables [54]. ZenDB also uses LLMs, but applies them to a new setting: document analytics.

Structured Extraction. Structured extraction from web pages, PDFs, and images has a long history of work. For instance, Snowball [17] proposed structured extraction over the open web, leveraging common techniques such as wrapper induction [38, 46], which also exploits the hierarchical structure of HTML documents and headings. In contrast, ZenDB takes as input PDFs, which are often not hierarchically encoded. Other works, such as Shreddr [22], extract from images of forms where the templates are identical, and focus on efficient use of crowd workers. These are also relevant due to the similarities between LLMs and crowdsourcing [49]. 9" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04682v1.json b/abs_9K/test_abstract_short_2405.04682v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3b15cacf581a9bff7eebec585794315c52b8cb24 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04682v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.04682v1", + "title": "TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation", + "abstract": "Recent advances in diffusion-based generative modeling have led to the\ndevelopment of text-to-video (T2V) models that can generate high-quality videos\nconditioned on a text prompt. Most of these T2V models often produce\nsingle-scene video clips that depict an entity performing a particular action\n(e.g., `a red panda climbing a tree'). 
However, it is pertinent to generate\nmulti-scene videos since they are ubiquitous in the real-world (e.g., `a red\npanda climbing a tree' followed by `the red panda sleeps on the top of the\ntree'). To generate multi-scene videos from the pretrained T2V model, we\nintroduce Time-Aligned Captions (TALC) framework. Specifically, we enhance the\ntext-conditioning mechanism in the T2V architecture to recognize the temporal\nalignment between the video scenes and scene descriptions. For instance, we\ncondition the visual features of the earlier and later scenes of the generated\nvideo with the representations of the first scene description (e.g., `a red\npanda climbing a tree') and second scene description (e.g., `the red panda\nsleeps on the top of the tree'), respectively. As a result, we show that the\nT2V model can generate multi-scene videos that adhere to the multi-scene text\ndescriptions and be visually consistent (e.g., entity and background). Further,\nwe finetune the pretrained T2V model with multi-scene video-text data using the\nTALC framework. We show that the TALC-finetuned model outperforms the baseline\nmethods by 15.5 points in the overall score, which averages visual consistency\nand text adherence using human evaluation. The project website is\nhttps://talc-mst2v.github.io/.", + "authors": "Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Recent advances in diffusion-based generative modeling have led to the\ndevelopment of text-to-video (T2V) models that can generate high-quality videos\nconditioned on a text prompt. Most of these T2V models often produce\nsingle-scene video clips that depict an entity performing a particular action\n(e.g., `a red panda climbing a tree'). 
However, it is pertinent to generate\nmulti-scene videos since they are ubiquitous in the real-world (e.g., `a red\npanda climbing a tree' followed by `the red panda sleeps on the top of the\ntree'). To generate multi-scene videos from the pretrained T2V model, we\nintroduce Time-Aligned Captions (TALC) framework. Specifically, we enhance the\ntext-conditioning mechanism in the T2V architecture to recognize the temporal\nalignment between the video scenes and scene descriptions. For instance, we\ncondition the visual features of the earlier and later scenes of the generated\nvideo with the representations of the first scene description (e.g., `a red\npanda climbing a tree') and second scene description (e.g., `the red panda\nsleeps on the top of the tree'), respectively. As a result, we show that the\nT2V model can generate multi-scene videos that adhere to the multi-scene text\ndescriptions and be visually consistent (e.g., entity and background). Further,\nwe finetune the pretrained T2V model with multi-scene video-text data using the\nTALC framework. We show that the TALC-finetuned model outperforms the baseline\nmethods by 15.5 points in the overall score, which averages visual consistency\nand text adherence using human evaluation. The project website is\nhttps://talc-mst2v.github.io/.", + "main_content": "Introduction The ability to generate videos that simulate the physical world has been a long-standing goal of artificial intelligence [1, 2, 3, 4]. In this regard, text-to-video (T2V) models have seen rapid advancements by pretraining on internet-scale datasets of images, videos, and texts [5, 6]. Previous works [7, 8, 9, 10, 11, 12] primarily focus on training conditional denoising diffusion probabilistic models [13] on paired video-text data [14, 15]. After training, these models allow for video generation by sampling from the trained diffusion model, conditioned on a text prompt. 
Most of the open-source models, such as ModelScope [10], VideoCrafter [16, 17], and OpenSora [18], are trained on single-scene video-text datasets [14, 19], which are widely available and easy to acquire. However, real-world scenarios often require the generation of multi-scene videos from multi-scene descriptions (e.g., Scene1: \u2018A koala is napping on a tree.\u2019 Scene2: \u2018The koala eats leaves on the tree.\u2019). In such cases, the generated video should accurately depict the events in their temporal order (e.g., Scene2 follows Scene1) while maintaining visual consistency, meaning that backgrounds and entities should remain consistent across scenes. While high-performance text-to-video models such as Sora [4] might be able to generate multi-scene videos, we point out that they are closed-source models trained with massive compute resources and lack sufficient details on the model design, training protocol, and datasets. In this work, we present a complementary approach and tackle the challenge of effectively leveraging the capabilities of base T2V models for multi-scene video generation.
\u2020 Equal Contribution. \u2217 Equal Advising. Contact hbansal@ucla.edu, yonatanbitton1@gmail.com.
Figure 1: Multi-scene video generation methods, illustrated with Scene 1: \u201cA red panda climbing a tree\u201d and Scene 2: \u201cThe red panda sleeps on the top of the tree\u201d. (a) Merging captions: a single video is generated from the merged prompt \u201c{Scene 1} then {Scene 2}\u201d. (b) Merging videos: the resulting video is composed of the video generated from the Scene 1 description and the video generated from the Scene 2 description. (c) In our method (TALC), the generated video is conditioned on the Scene 1 description for the first half of the video frames and on the Scene 2 description for the later video frames.
Multi-scene text-to-video generation differs from long video synthesis, where the goal is either to interpolate (few frames to many frames) [8] or to create continuing patterns of a single event in the generated video [11]. Prior works [20, 9] use transformers [21, 22] to generate video frames for a given scene autoregressively. However, it is hard for such models to generate multiple scenes reliably, as the context length increases with the history of text descriptions and visual tokens [23] of the previously generated videos (e.g., generating Scene 4 conditioned on the videos and descriptions of Scenes 1, 2, and 3). Other works [24] utilize a latent diffusion model [25] to generate video frames autoregressively by conditioning on the entire history of generated videos and scene descriptions. However, this approach is (a) slow due to repeated sampling, (b) generates only one frame per scene description, and (c) shown to work only with a limited set of cartoon characters [26, 27] rather than the wide range of visual concepts in the real world. In this work, our goal is to generate multi-scene videos in an end-to-end manner, using a diffusion text-to-video generative model that is capable of producing content for a wide range of visual entities and actions. As shown in Figure 1(a), a naive approach to generating a multi-scene video for the scene descriptions (T\u2032_1, T\u2032_2) would condition the T2V generative model on the merged descriptions. In this setup, the diffusion model processes the entire scene description at once and lacks any information regarding the expected temporal order of events in the generated videos. As a result, we find that this approach leads to poor text-video alignment. As shown in Figure 1(b), an alternative approach generates videos for the individual text descriptions independently and concatenates them in the raw input space along the temporal dimension.
While this approach achieves good alignment between the scene description and the scene-specific video segment, the resulting video lacks visual consistency in terms of entity and background appearances. Prior work [28, 29] generates multi-scene videos by utilizing knowledge of the entity, background, and their movements from large language models [30]. However, these videos are generated independently for each scene before being merged. Moreover, these methods do not offer a way to learn from real-world multi-scene video-text data. To remedy these challenges, we propose TALC (Time-ALigned Captions), a simple and effective framework to generate consistent and faithful multi-scene videos. As shown in Figure 1(c), our approach conditions the T2V generative model with the knowledge of the temporal alignment between the parts of the multi-scene video and the multi-scene descriptions.
Figure 2: Examples of multi-scene video generation baselines, for the prompts \u201cA grizzly bear catches a fish in a rushing river\u201d and \u201cThe grizzly bear looks over its territory.\u201d (a) Generating a video from the merged descriptions leads to poor text-video alignment. (b) Generating videos for the individual text descriptions and concatenating them temporally leads to a lack of background consistency. (c) Our approach (TALC) enhances the scene-level text-video alignment and maintains background consistency.
Specifically, TALC conditions the visual representations of earlier video frames on the embeddings of the earlier scene description and, likewise, conditions the representations of later video frames on the embeddings of the later scene description in the temporal dimension.
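This frame-to-scene assignment can be sketched in plain Python. This is a schematic only: `talc_condition` and `attend` are hypothetical stand-ins for the cross-attention inside the denoiser, and real frame features are tensors rather than integers.

```python
def talc_condition(frame_feats, scene_embs, attend):
    # Partition the L frame features into n equal chunks and condition
    # chunk j only on the embedding of scene description j, then
    # concatenate the chunks back along the temporal dimension.
    L, n = len(frame_feats), len(scene_embs)
    assert L % n == 0, "frames must divide evenly among scenes"
    per = L // n
    out = []
    for j, emb in enumerate(scene_embs):
        chunk = frame_feats[j * per:(j + 1) * per]
        out.extend(attend(chunk, emb))
    return out

frames = list(range(8))  # 8 toy frames, 2 scenes -> 4 frames per scene
scenes = ["a red panda climbing a tree",
          "the red panda sleeps on the top of the tree"]
# `attend` here merely tags each frame with its scene description.
tagged = talc_condition(frames, scenes,
                        lambda chunk, emb: [(f, emb) for f in chunk])
```

Here the first four frames attend only to the first scene description and the last four only to the second; in the actual model, the temporal blocks then share information across the two halves to keep the video visually consistent.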
Additionally, the temporal modules in the T2V diffusion architecture allow information sharing between video frames (the first half and the second half) to maintain visual consistency. Thus, TALC enhances the scene-level text-video alignment while providing all the scene descriptions to the diffusion model at once. Further, our TALC framework can enhance the multi-scene text-to-video generation capabilities with real-world multi-scene data (\u00a73.3). In our experiments, we assess the visual consistency (background and entity consistency) and multi-scene script adherence of the generated videos from ModelScope [10] and Lumiere [6]. Through our automatic and human evaluation, we find that merging scene descriptions leads to high visual consistency but poor text adherence. On the other hand, we observe that merging videos independently achieves the highest text adherence while the visual consistency is compromised. Interestingly, switching to TALC strikes an effective balance between visual consistency and text adherence, outperforming the baseline methods by 11.1 points on the overall score. This score represents the average of visual consistency and text adherence scores, as determined by human evaluation. Furthermore, we construct a multi-scene text-video dataset from real-world videos and fine-tune the T2V generative model using TALC. In our human evaluation, the generated videos from the TALC-finetuned model exhibit higher text adherence than the base model in multi-scene scenarios. Specifically, it outperforms the baseline methods by 15.5 points on the overall score. In summary, our contributions are:
2 Preliminaries
In this work, we focus on generating multi-scene videos from scene descriptions using a diffusion-based Text-to-Video (T2V) generative model. The initial step is to equip the generative model with the knowledge of a wide range of visual concepts and actions. This is achieved during the pretraining stage (\u00a72.1).
Subsequently, we aim to utilize the base model for the multi-scene text-to-video generation task, which we formalize in \u00a72.3. In \u00a73, we propose our TALC framework and discuss the collection of real-world multi-scene text-video data for finetuning the base T2V model.
2.1 Diffusion Models for Text-to-Video Generation
Diffusion models [13, 31] p_\u03b8(x) are a class of generative models that learn a data distribution p_data(x). Due to their flexible design, we can train their class-conditional versions to learn class-conditional data distributions p_data(x|y), where y is the conditioning variable that can take various forms, such as labels from a dataset or a text description accompanying a video [32]. We assume a dataset S \u2282 V \u00d7 T consisting of pairs (V_j, T_j), where V_j \u2208 R^{L\u00d73\u00d7H\u00d7W} is a raw video consisting of 3 RGB channels, L frames, height H, and width W, and T_j is a text caption. We use V and T to denote the domains of videos and text, respectively. The aim of T2V generative modeling is to learn the conditional distribution of the videos conditioned on the text, p_S(V_j|T_j). In this work, we consider diffusion-based generative models that learn the data distribution via iterative denoising of the input video z_j \u2208 R^{L\u00d7C\u00d7H\u2032\u00d7W\u2032}. Here, z_j can either represent the input video in the raw pixel space V_j [6], or it can represent the latent representation of the video, z_j = E(V_j), for latent diffusion models [25], where E is an encoder network such as a VAE [33]. Given z_j, diffused variables z_{\u03c4,j} = \u03b1_\u03c4 z_j + \u03b2_\u03c4 \u03f5 are constructed, where \u03f5 \u223c N(0, I) and \u03b1_\u03c4, \u03b2_\u03c4 are sampled from the noise scheduler p_\u03c4 [34], which defines the noise levels the model is trained on.
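As a toy illustration of this forward process, one can construct z_{\u03c4,j} = \u03b1_\u03c4 z_j + \u03b2_\u03c4 \u03f5 in plain Python. The cosine-style, variance-preserving schedule below is a hypothetical stand-in, since the actual scheduler p_\u03c4 is defined in [34] and not reproduced here.

```python
import math
import random

def diffuse(z, tau, num_steps=1000):
    """Construct z_tau = alpha_tau * z + beta_tau * eps with eps ~ N(0, I).

    The cosine schedule here is an illustrative assumption satisfying
    alpha^2 + beta^2 = 1 (variance preserving), not the paper's scheduler.
    """
    alpha = math.cos(0.5 * math.pi * tau / num_steps)
    beta = math.sin(0.5 * math.pi * tau / num_steps)
    eps = [random.gauss(0.0, 1.0) for _ in z]
    z_tau = [alpha * zi + beta * ei for zi, ei in zip(z, eps)]
    return z_tau, eps

z = [0.5, -1.0, 2.0]              # a toy flattened video latent z_j
z_mid, eps = diffuse(z, tau=500)  # halfway: equal parts signal and noise
z_clean, _ = diffuse(z, tau=0)    # tau = 0 adds no noise
```

At tau = 0 the diffused variable equals the clean latent, and as tau grows the sample drifts toward pure Gaussian noise; the denoiser is trained across this whole range of noise levels.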
Finally, we train a denoiser network f_\u03b8 [35, 36] that takes as input the diffused variable z_\u03c4 and the embeddings of the text caption to predict a target vector y, where y can be the original noise \u03f5, by minimizing the denoising score matching objective [13]:

E_{(V_j, T_j) \u2208 S, \u03c4 \u223c p_\u03c4, \u03f5 \u223c N(0, I)} [ ||\u03f5 \u2212 f_\u03b8(\u03c4, z_{\u03c4,j}, h_j)||_2^2 ]   (1)

where h_j = H(T_j) \u2208 R^d is the embedding of the text caption T_j, H is the text embedding model [37], and d is the embedding dimension.
2.2 Text Conditioning Mechanism
To ensure effective textual controllability of video generation, the denoiser network is equipped with a cross-attention mechanism [10, 8]. Specifically, it conditions the visual content z_\u03c4 \u2208 R^{L\u00d7C\u00d7H\u2032\u00d7W\u2032} on the text. To do so, we first repeat the text embedding of the caption, r_j = R(h_j) \u2208 R^{L\u00d7d}, where R is a function that repeats the input text embedding h_j L times in the temporal dimension. Intuitively, the repeat operation represents that all L frames of the video z_j are semantically aligned with the textual description T_j, or equivalently its text embedding r_j. In \u00a73, we will manipulate this operation to make the model architecture aware of the video-text alignment in the multi-scene scenario. These repeated text embeddings r_j are input to the spatial attention block as the key and value in the multi-head attention block. The cross-attention enables the intermediate visual features to capture the semantic information that facilitates an alignment between the language and vision embeddings. Formally,

z\u2032_{\u03c4,j} = CA_{f_\u03b8}(Q = z_{\u03c4,j}; K = r_j; V = r_j)   (2)

where CA_{f_\u03b8} is the cross-attention mechanism with Q, K, and V as the query, key, and value, respectively, in the spatial blocks of the denoiser network, and z\u2032_{\u03c4,j} is the intermediate representation that is informed by both the visual and textual content of the data.
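A minimal single-head sketch of this text conditioning (Eq. 2), with the learned projection matrices omitted and toy 2-dimensional features; `cross_attention` and the shapes are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    # Each frame feature (query) attends over the text embeddings
    # (keys/values) and aggregates them into a text-informed feature.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

L, h = 4, [1.0, 0.0]   # caption embedding h_j, to be repeated over L frames
r = [h] * L            # r_j = R(h_j)
z = [[0.2, 0.1], [0.3, -0.1], [0.0, 0.5], [0.4, 0.4]]  # toy frame features
z_prime = cross_attention(z, r, r)
```

Because every key/value row here is the same repeated caption embedding, each frame aggregates exactly that embedding; with distinct keys (as in TALC's scene-wise conditioning) different frames would attend to different text.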
In addition to the spatial blocks, the denoiser network also consists of temporal blocks that aggregate features across video frames, which are useful for maintaining visual consistency in the generated video.
2.3 Multi-Scene Text-to-Video Generation
In many real-world scenarios, such as movies, stories, and instructional videos [38], a video may depict multiple transitions with the same or changing entities, as well as multiple actions or events. In addition, the different video segments often share contextual information such as the background or location. These videos are considered multi-scene videos. In this work, we aim to generate a multi-scene video X = {x_1, x_2, . . . , x_n} from multi-scene descriptions Y = {y_1, y_2, . . . , y_n}, where n is the number of sentences and each sentence y_j is the description for scene j. Additionally, the index j also defines the temporal order of events in the multi-scene script, i.e., we want the events described in scene j to be depicted earlier than the events described in scene k, where k > j.
Figure 3: The architecture of Time-Aligned Captions (TALC). During the generation process of the video, the initial half of the video frames is conditioned on the embeddings of the description of scene 1 (r_{y_1}), while the subsequent video frames are conditioned on the embeddings of the description of scene 2 (r_{y_2}).
Further, we want the parts of the entire generated video X, given by x_j, to have high video-text semantic alignment with the corresponding scene description y_j, also referred to as text adherence. For instance, consider a two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019}.
Here, we need the T2V generative model to synthesize the appearance of the red panda (an entity) so that it remains consistent throughout the generated video, also referred to as entity consistency. In addition, we expect the context of the multi-scene video, a forest (a background), to remain consistent, also referred to as background consistency.
3 Method
3.1 TALC: Time-Aligned Captions for Multi-Scene T2V Generation
Most of the existing T2V generative models [10, 16, 6] are trained with large-scale datasets of short video-text pairs (videos of 10\u201330 seconds), such as WebVid-10M [14]. Here, each instance of the dataset consists of a video and a human-written video description. These videos either lack the depiction of multiple events, or the video descriptions do not cover the broad set of events in the video, instead focusing on the major event shown. As a result, the pretrained T2V generative models only synthesize single video scenes depicting individual events. We introduce TALC, a novel and effective framework to generate multi-scene videos from diffusion T2V generative models based on the scene descriptions. Our approach focuses on the role of the text-conditioning mechanism that is widely used in modern T2V generative models (\u00a72.2). Specifically, we take inspiration from the fact that the parts of the generated video x_j should depict the events described in the scene description y_j. To achieve this, we ensure that the representations for each part of the generated video aggregate language features from its scene description y_j. Consider that we want to generate a multi-scene video X \u2208 R^{L\u00d73\u00d7H\u00d7W} from the scene descriptions y_j \u2208 Y, using a T2V generative model f_\u03b8. Furthermore, we assume that individual video segments x_j are allocated L/n frames within the entire video X. Let z_X = [z_{x_1}; z_{x_2}; . . .
; z_{x_n}] \u2208 R^{L\u00d7C\u00d7H\u2032\u00d7W\u2032} denote the representation of the entire video X, where z_{x_j} \u2208 R^{(L/n)\u00d7C\u00d7H\u2032\u00d7W\u2032} is the representation of the j-th part of the video, and the parts are concatenated in the temporal dimension. In addition, let r_Y = {r_{y_1}, . . . , r_{y_n}} be the set of text embeddings for the multi-scene description Y, where y_j is an individual scene description. In the TALC framework, Eq. 2 is changed to:

z\u2032_{\u03c4,x_j} = CA_{f_\u03b8}(Q = z_{\u03c4,x_j}, K = r_{y_j}, V = r_{y_j})   (3)
z\u2032_{\u03c4,X} = [z\u2032_{\u03c4,x_1}; z\u2032_{\u03c4,x_2}; . . . ; z\u2032_{\u03c4,x_n}]   (4)

Here, \u03c4 represents the timestep in the diffusion modeling setup, and the conditioning is applied during training as well as inference. We illustrate the framework in Figure 3. While TALC aims to equip the generative model with the ability to depict all the events in the multi-scene descriptions, visual consistency is ensured by the temporal modules (attention and convolution blocks) in the denoiser network. By design, our approach can be applied to the pretrained T2V model during inference.
3.2 Baselines
Here, we describe the baseline methods that could be used to generate videos for multi-scene descriptions from a given diffusion text-to-video generative model.
3.2.1 Merging Captions
In this setup, we create a single caption by merging all the scene descriptions. Specifically, the multi-scene descriptions Y = {y_1, y_2, . . . , y_n} are written as a single prompt P = \u2018y_1. Then, y_2. . . . Then, y_n.\u2019 For instance, the two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019} becomes P = \u2018A red panda climbs on a bamboo forest. Then, the red panda sleeps peacefully in the treetop.\u2019 Subsequently, we generate a video from the T2V model f_\u03b8 by conditioning it on P.
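The prompt construction for this baseline can be sketched as follows; `merge_captions` is a hypothetical helper reproducing the \u2018y_1. Then, y_2. ...\u2019 format above:

```python
def merge_captions(scene_descriptions):
    # Fold the scene descriptions into one prompt:
    # "y1. Then, y2. ... Then, yn."
    first, *rest = scene_descriptions
    prompt = first
    for y in rest:
        # lower-case the continuation so "Then, the red panda ..." reads naturally
        prompt += " Then, " + y[0].lower() + y[1:]
    return prompt

p = merge_captions(["A red panda climbs on a bamboo forest.",
                    "The red panda sleeps peacefully in the treetop."])
```

The merged prompt carries the order of events only implicitly through the word "Then", which is why, as discussed next, the model has no explicit signal about where one scene should end and the next begin.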
While this approach mentions the temporal sequence of the events in a single prompt, the T2V model does not understand the temporal boundaries between the two events. Specifically, Eq. 2 implies that the visual features for all video frames aggregate information from the entire multi-scene description at once, without any knowledge of the alignment between each scene description and its expected appearance in the generated video.
3.2.2 Merging Videos
In this setup, we generate videos for each scene description individually and merge them in the raw input space. Formally, each individual scene description y_j conditions the T2V model f_\u03b8 to generate the corresponding part of the multi-scene video x_j. Finally, we stitch the individual videos together to synthesize the entire video X = [x_1; x_2; . . . ; x_n]. In this process, the parts of the multi-scene video closely adhere to the scene descriptions, leading to high text fidelity. However, since the generated videos do not have access to all the multi-scene descriptions (e.g., the video for Scene 2 is not informed about Scene 1), the visual consistency across the entire video is quite poor.
3.3 Multi-Scene Video-Text Data Generation
Figure 4: Our approach for generating time-aligned video captions. The process begins with PyScene cuts identifying the boundaries of distinct scenes within a video. Keyframes are then selected from the median of each scene. These frames are processed collectively through the Gemini model to produce multi-image captions that maintain narrative continuity by contextualizing each scene within the video\u2019s overall sequence.
While our approach generates better multi-scene videos, the text adherence capabilities of the pretrained T2V generative model are limited. This is due to the lack of multi-scene video-text data during its pretraining. Unlike single-scene video-text datasets, multi-scene video-text datasets are not widely available and are hard to curate for model training. This is attributed to the fact that high-quality caption generation requires a lot of human labor, which is time-consuming and expensive. Prior work such as ActivityNet [39] has curated human captions for specific video scenes depicting useful actions in long videos. However, the video scenes are either overlapping or have a large temporal gap between them, which is harmful for natural and smooth transitions between the generated multi-scene videos. Hence, the absence of high-quality captions for continuous video scenes makes such datasets unsuitable for T2V generative training. To this end, we aim to create a real-world multi-scene video-text dataset to allow further training of the pretrained T2V models. Specifically, we leverage the capability of the multimodal foundation model Gemini-Pro-Vision [40] to generate high-quality synthetic data for enhanced video-text training [41]. Formally, we start with a video-text dataset M \u2282 A \u00d7 B consisting of pairs (A_i, B_i), where A_i is a raw video and B_i is the corresponding video description from the dataset. Subsequently, we utilize the PySceneDetect library\u00b9 to generate continuous video scenes A_i = {A_{i,1}, A_{i,2}, . . . , A_{i,m}}, where m is the number of scene cuts in the video. A similar approach was used in prior work [12] to detect scene changes in the video data. Then, we sample the middle video frame F_{i,j} as a representative of the semantic content in the video scene A_{i,j}. Finally, we input all the video frames F_i = {F_{i,1}, . . . , F_{i,m}} for a single video A_i, along with the entire video caption B_i, to a large multimodal model [40].
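The keyframe selection step of this pipeline can be sketched as follows, assuming scene boundaries (frame indices) have already been produced by a scene detector such as PySceneDetect; `median_keyframes` and the boundary values are illustrative, not the paper's code:

```python
def median_keyframes(num_frames, scene_cuts, max_scenes=4):
    # Turn cut positions into (start, end) scene spans, keep at most
    # `max_scenes` spans, and return the middle frame index F_{i,j} of each.
    spans, prev = [], 0
    for cut in list(scene_cuts) + [num_frames]:
        spans.append((prev, cut))
        prev = cut
    spans = spans[:max_scenes]  # discard extra segments, as in Sec. 3.3
    return [(lo + hi) // 2 for lo, hi in spans]

# hypothetical 100-frame video with detected cuts at frames 30 and 70
keys = median_keyframes(100, [30, 70])
```

Each selected index stands in for one scene; the corresponding frames, together with the overall caption, are what the multimodal model sees when writing per-scene captions.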
Specifically, the model is prompted to generate high-quality captions for each of the frames F_{i,j} such that they form a coherent narrative guided by the common caption B_i. We provide the prompt given to the multimodal model in Appendix \u00a7A. Figure 4 shows an instance of the multi-scene video-text data generation. Datasets. To construct a multi-scene video-text dataset, we utilize existing datasets that include natural (real) videos and associated high-quality human-written captions summarizing the entire video. Specifically, we choose MSR-VTT [42] and VaTeX [43]. Most of the videos in MSR-VTT are 10\u201330 seconds long, while VaTeX consists of 10-second-long videos. In addition, each video in MSR-VTT and VaTeX comes with 20 and 10 captions, respectively, out of which one is selected at random for multi-scene data generation. As described above, a single video is cut into multiple video segments using the PySceneDetect library. In our experiments, we retain the first four video segments and discard any additional segments if the library generates more than four. Since the accuracy of the multi-scene captioning and the computational demands during finetuning are influenced by the number of scenes, we opt to limit the scene count to four in our experiments. However, future work could employ similar methodologies to scale the number of scenes, given more computing power and advanced multi-scene captioning models. We provide the data statistics for the final multi-scene data in Appendix \u00a7G.
4 Evaluation
In this section, we describe the evaluation scheme for videos generated from multi-scene text descriptions. First, we describe the evaluation metrics that we aim to assess in this work (\u00a74.1). Then, we generate multi-scene descriptions for a diverse set of tasks (\u00a74.2). Finally, we present the details of the automatic and human evaluation of the generated videos (\u00a74.3).
4.1 Metrics
The ability to assess the quality of generated multi-scene videos is a challenging task in itself. As humans, we can judge multi-scene videos across diverse perceptual dimensions [44] that existing automatic methods often fail to capture [45]. Following [28], we focus on the visual consistency of the generated video, the text adherence capabilities of the T2V models, and the visual quality of the video. Here, we present the metrics along with the aspects they intend to assess in a video generated for a multi-scene text description. Visual Consistency. This metric aims to assess the (entity or background) consistency between the frames of the multi-scene videos. Here, entity consistency tests whether the entities in the multi-scene video are consistent across the video frames. For instance, the appearance of an animal should not change without a change described in the text description. In addition, background consistency tests whether the background of the multi-scene video remains consistent across the video frames. For instance, the room should not change without a change described in the text.
\u00b9https://github.com/Breakthrough/PySceneDetect
Text Adherence. This metric aims to test whether the generated video adheres to the multi-scene text description. For instance, the events and actions described in the text script should be presented in the video accurately, and in the correct temporal order. In our experiments, we compute the visual consistency and text adherence with both automatic and human evaluators. Further, we compute the overall score, which is the average of the visual consistency and text adherence scores. In addition, we also assess the visual quality of the generated videos using human evaluation, to understand whether the video contains any flimsy frames, shaky images, or undesirable artifacts (Table 1).
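The overall-score computation can be sketched as follows; `overall_score` is a hypothetical helper, and the answers use the {yes = 1, partial = 0.5, no = 0} mapping that the automatic evaluation in \u00a74.3 assigns to each metric:

```python
SCORE = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def overall_score(consistency_answers, adherence_answers):
    # Map evaluator answers to numeric scores, average each metric,
    # then average visual consistency and text adherence.
    def mean(answers):
        return sum(SCORE[a] for a in answers) / len(answers)
    vc = mean(consistency_answers)
    ta = mean(adherence_answers)
    return (vc + ta) / 2

s = overall_score(["yes", "partial"], ["yes", "no"])  # vc = 0.75, ta = 0.5
```

A video that is perfectly consistent but ignores the script, or vice versa, is capped at 0.5 overall, which is why the overall score rewards methods that balance both aspects.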
4.2 Task Prompts
Here, we curate a set of task prompts for diverse scenarios, aiming to holistically assess the quality of the generated videos. Single character in multiple visual contexts (S1). In this scenario, we instruct an LLM, GPT-4, to create a coherent script consisting of four scenes, featuring a specific animal character performing a different activity in each scene. This task assesses the capability of the T2V model to generate a consistent appearance of the entity and its background while adhering to the different actions (or events) described in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A red panda is climbing a tree. Scene 2: The red panda eats the leaves on the tree. Scene 3: The red panda lies down on the branch of the tree. Scene 4: The red panda sleeps on the branch\u2019. In total, we generate 100 prompts in this scenario. Different characters in a specific visual context (S2). In this scenario, we instruct a language model, GPT-4, to create a coherent script consisting of four scenes, with each scene featuring a different animal character engaging in the same activity [20]. This task assesses the capability of the T2V model to generate a consistent appearance of the background while adhering to the appearance of the different characters in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A cat leaps onto countertop. Scene 2: A dog leaps onto the same countertop. Scene 3: A rabbit leaps onto the same countertop. Scene 4: A raccoon leaps onto the same countertop\u2019. In total, we generate 100 prompts in this scenario. Multi-scene captions from real videos (S3). Here, we aim to assess the ability of the model to generate multi-scene videos for open-ended prompts that are derived from real-world videos.
This task also assesses the ability of the T2V model to generate consistent appearances of the entity and its background while adhering to multi-scene descriptions. Specifically, we use our multi-scene video-text data generation pipeline (\u00a73.3) to create such prompts for real videos from the test splits of the video-text datasets. For example, a multi-scene text script could be \u2018Scene 1: A beauty vlogger introduces her skincare routine. Scene 2: She applies a serum to her face, smoothing it in\u2019. We present a sample of the various task prompts in Appendix \u00a7B. In total, we generate 100 prompts in this scenario.
4.3 Evaluator
In this work, we devise an automatic evaluation framework and perform a human evaluation to assess the quality of the generated multi-scene videos. Automatic Evaluation. Here, we utilize the capability of a large multimodal model, GPT-4-Vision [46], to reason over multiple image sequences. First, we sample four video frames, uniformly, from each scene in the generated video (e.g., 8 video frames for a two-scene video). Then, we prompt the multimodal model with the temporal sequence of video frames from the different scenes and the multi-scene text description. Specifically, we instruct the multimodal model to judge the quality of the generated video across various metrics, including entity consistency, background consistency, and text adherence. For each metric, the multimodal model assigns one of three possible responses {yes = 1, partial = 0.5, no = 0}. For instance, yes for the entity consistency metric implies that the video frames sampled from the generated video have a consistent appearance of the entity described in the multi-scene script. In this work, we do not utilize any existing video-text alignment models [47, 41] for evaluating text adherence, as they are trained on single-scene video-text datasets. We present the automatic evaluation prompt in Appendix \u00a7C. Human Evaluation.
We also conduct a human evaluation to assess the generated multi-scene videos along the dimensions of visual consistency, text adherence, and visual quality. Specifically, we ask annotators from Amazon Mechanical Turk (AMT) to choose one of three options for each metric, {yes, partial, no}, similar to the automatic evaluation. In addition, we only use annotators who pass a preliminary qualification exam. We present a screenshot of the UI in Appendix \u00a7D.
4.4 Evaluation Setup
Since the merging-captions (\u00a73.2) and TALC (\u00a73.1) methods input the entire multi-scene text description at once, the quality of the video generated by these methods is influenced by the number of scenes described in the text script. Hence, we calculate the performance of the baselines and TALC by averaging the scores assigned to videos generated for two, three, and four scenes. Additionally, we report visual consistency by averaging the performance across the entity and background consistency metrics. Here, the entity consistency scores are calculated for the task prompts S1 and S3 (since S2 aims to change the characters across scenes), while the background consistency and text adherence scores are computed for all the task prompts. We also evaluate the impact of TALC-based finetuning on single-scene generation in Appendix \u00a7I.
5 Experiments
5.1 Text-to-Video Generative Models
In this work, we utilize the ModelScope [10] and Lumiere [6] T2V models for multi-scene video generation. Here, ModelScope is an open-source T2V model with 1.7 billion parameters, including the video encoder, text encoder, and denoising U-Net network. Specifically, it is trained to generate 16 video frames on a mix of the WebVid [14] video-text dataset and the LAION [48] image-text dataset. We perform most of our experiments on ModelScope due to its ease of access and adoption in prior works [28].
In addition, we also include Lumiere-T2V, a model that leverages space-time U-Net denoising networks to generate high-quality videos. In this work, we include early experiments with Lumiere to showcase the flexibility of the TALC approach for multi-scene video generation. Base model with TALC. As described in \u00a73.1, our approach modifies the traditional text-conditioning mechanism to be aware of the alignment between text descriptions and individual video scenes. By design, the TALC framework can be applied to the base T2V model during inference, without any multi-scene finetuning. Thus, we compare the performance of the multi-scene videos generated from the ModelScope and Lumiere T2V base models under three settings: merging captions, merging videos, and TALC. In this setting, we generate 16 frames per scene from ModelScope and 80 frames per scene from Lumiere. We provide more details on the inference in Appendix \u00a7F. Finetuning with TALC. Since the base model is pretrained with single-scene data, we aim to show the usefulness of the TALC framework when we have access to multi-scene video-text data. To this end, we finetune ModelScope on the multi-scene video-text data (\u00a73.3) with the TALC framework. As a pertinent baseline, we also finetune ModelScope without the TALC framework by naively merging the scene-specific captions in the raw text space. In this setting, we finetune the T2V model with 8 frames per scene, and the maximum number of scenes in an instance is set to 4. We provide further details on the finetuning setup in Appendix \u00a7H. The inference settings are identical to those used for generating videos from the base model without finetuning. In this section, we present the results for the baselines and TALC framework averaged over diverse task prompts and multiple scenes using automatic evaluation (\u00a75.2) and human evaluation (\u00a75.3).
Finally, we provide qualitative examples for the multi-scene generated videos to showcase the usefulness of our approach (\u00a75.4). 5.2 Automatic Evaluation We compare the performance of the baselines (e.g., merging captions and merging videos) with the TALC framework for ModelScope and Lumiere using the automatic evaluation in Figure 5. [Figure 5: Automatic evaluation results for (a) ModelScope and (b) Lumiere T2V models, comparing Merging Captions (Base), Merging Videos (Base), TALC (Base), Merging Captions (F.T.), and TALC (F.T.) on visual consistency, text adherence, and overall score. In (a), we observe that the TALC-finetuned ModelScope model achieves the highest overall score, that is, the average of the visual consistency and text adherence scores. In (b), we find that the TALC framework with the Lumiere base model outperforms merging captions and merging videos on the overall score. We report the average performance across the diverse multi-scene prompts and the number of generated scenes.] TALC outperforms the baselines without any finetuning. In Figure 5(a), we find that the overall score, the average of visual consistency and text adherence, of the multi-scene videos generated using the base ModelScope with TALC (68.6 points) outperforms the overall score achieved by the videos generated using merging captions (61.7 points) and merging videos (67.5 points) with the base ModelScope.
Specifically, we observe that the visual consistency of the generated video is high for merging captions (91 points) and TALC (89.9 points) while it is low for merging videos (65 points). This indicates that merging videos independently for the individual scene descriptions does not preserve the background and entity appearances across the different frames. In addition, we observe that the text adherence using TALC outperforms merging captions by 14.8 points, while the text adherence is the highest with a score of 70 points using merging videos. This can be attributed to the design of the merging videos baseline where individual video scenes adhere to the scene-specific descriptions well. Hence, the merging videos approach can be viewed as an upper bound on the text adherence metric. In Figure 5(b), we observe similar trends for the Lumiere T2V generative model. Specifically, we find that the overall score for TALC outperforms merging captions and merging videos by 4 points and 2 points, respectively. In addition, we observe that merging captions and TALC achieve a high visual consistency score while merging videos independently has poor visual consistency. Further, we find that TALC outperforms merging captions by 5 points on text adherence, while merging videos achieves the highest text adherence of 65 points. This highlights that the model more easily generates multi-scene videos that adhere to individual text scripts, whereas adherence to the text diminishes when the model is given descriptions of multiple scenes all at once. Finetuning with TALC achieves the best performance. Earlier, we evaluated the usefulness of the TALC framework with the base model. However, the base models are trained with single-scene video-text data that might limit their capability for multi-scene video generation. To alleviate this issue, we finetune the ModelScope T2V model on the multi-scene video-text data (\u00a73.3).
Specifically, we finetune the model using the merging captions method and TALC framework, independently. In Figure 5(a), we find that finetuning with TALC achieves the highest overall score of 75.6 points in comparison to all the baselines. Specifically, we observe that the visual consistency does not change much with finetuning using the TALC method (89.9 points vs 89 points). Interestingly, we observe that finetuning with merging captions reduces the visual consistency by a large margin of 14 points. This can be attributed to the lack of knowledge about the natural alignment between video scenes and individual scene descriptions, which gets lost during the merging of captions. Additionally, we find that the text adherence of the TALC-finetuned model is 15.1 points more than the text adherence of the TALC-base model. Similarly, we find that the text adherence of the merging captions-finetuned model is 5.1 points more than the text adherence of the merging captions-base model. This highlights that finetuning a T2V model with multi-scene video-text data helps the most with enhancing its text adherence capability. Fine-grained Results. To perform fine-grained analysis of the performance, we assess the visual consistency and text adherence scores for the baselines and TALC framework across diverse task prompts and numbers of scenes on ModelScope. We present their results in Appendix \u00a7E. In our analysis, we find that finetuning with TALC achieves the highest overall score over the baselines across all the scenarios. In addition, we notice that the highest performance is achieved in the scenario that consists of different entities in a specific visual context. Further, we observe that the performance of all the methods reduces when the task prompts get more complex, i.e., multi-scene captions from real videos. In addition, we observe that finetuning with TALC achieves the highest overall score over the baselines across all numbers of scenes.
Specifically, we observe that the performance of the merging captions and TALC framework reduces as the number of scenes being generated increases. Overall, we show that TALC strikes a good balance between visual consistency and text adherence to generate high-quality multi-scene videos. 5.3 Human Evaluation [Table 1: Human evaluation results on the visual quality of the generated videos from ModelScope. We observe that the visual quality of the generated videos is similar across methods for the base model. However, finetuning the model with merging captions reduces the video quality by a large margin while the TALC-finetuned model retains the video quality. Method (Quality): Merging Captions (Base) 80.5; Merging Videos (Base) 86.5; TALC (Base) 84.5; Merging Captions (F.T.) 63.4; TALC (F.T.) 83.3.] TALC achieves the best performance in human evaluation. We compare the performance of the baselines and TALC framework for ModelScope using human evaluation in Figure 6. We find that the TALC-finetuned model outperforms the merging captions and merging videos methods with the base model by 12 points and 15.5 points, respectively, on the overall score. In addition, we find that using the TALC framework in the base model outperforms the merging captions and merging videos methods with the base model by 7.6 points and 11.1 points, respectively, on the overall score. Further, we observe that the merging captions method with the base model achieves the highest visual consistency score of 96.5 points while it is the lowest for merging videos generated from the base model. In addition, we find that the text adherence of the TALC-finetuned and TALC-base models is better than that of the merging captions-finetuned and merging captions-base models, respectively. Our results highlight the benefit of including the inductive bias of temporal alignment between the video scenes and their scene descriptions for multi-scene video generation. Visual quality of the generated videos.
We compare the visual quality of the generated videos using human evaluation in Table 1. [Figure 6: Human evaluation results for the ModelScope model, comparing Merging Captions (Base), Merging Videos (Base), TALC (Base), Merging Captions (F.T.), and TALC (F.T.) on visual consistency, text adherence, and overall score. We observe that the base model using the TALC framework outperforms the merging captions and merging videos baselines on the overall score. In addition, the TALC-finetuned model enhances the text adherence and achieves the highest overall score. We report the average performance across the diverse multi-scene prompts and the number of generated scenes.] We find that the visual quality of videos generated from the base model ranges from 80.5 to 86.5 using the baselines and TALC framework. However, we observe that the visual quality of generated videos is quite poor for the model finetuned with merging captions with a score of 63.4 points. This highlights that finetuning a T2V model with multi-scene video-text data by naively merging the scene-specific descriptions in the raw text space leads to undesirable artifacts in the generated video. Finally, we find that the TALC-finetuned model (83.3) achieves a video quality score similar to that of the TALC-base model (84.5), indicating that our finetuning data preserves the visual quality observed during the model\u2019s pretraining. While our work is centered around multi-scene evaluation, we also perform single-scene evaluation in Appendix \u00a7I. 5.4 Qualitative Analysis We provide qualitative examples of generating multi-scene videos using ModelScope (fine-tuned with TALC) and Lumiere (base model with TALC) for diverse scenarios in Figure 12.
Our analysis reveals that both ModelScope and Lumiere are capable of producing multi-scene videos that exhibit high text adherence and visual consistency. Consider the case of the same animal engaging in multiple actions (referred to as \"one character multiple contexts\"): the videos generated by ModelScope successfully maintained the same animal while varying the background and action between the scenes. Conversely, the videos generated by Lumiere displayed the same animal performing different actions with minimal background alterations. We believe that this distinction is attributed to ModelScope\u2019s fine-tuning with TALC. Considering different animals within a particular visual setting (referred to as \"multiple-characters same context\"), both ModelScope and Lumiere demonstrated impressive abilities in preserving the consistency of the background across the videos and adhering closely to the provided text. During our analysis, we noticed that the multi-scene captions derived from real videos (referred to as \"open-ended captions\") exhibited a substantial number of changes between the various scenes. In this scenario, Lumiere, when employed without fine-tuning, displayed challenges in adhering to the text, while ModelScope achieved a higher degree of text adherence but was also prone to visual artifacts. 6 Related Work Text-to-Video Generative Modeling. The field of text-to-video (T2V) synthesis has significantly evolved from its inception with models like VGAN [2] and MoCoGAN [49], leveraging the foundational technologies of GANs [50] and VAEs [51] to produce concise, single-scene videos. The narrative depth was further expanded through transformer-based architectures such as CogVideo [52] and VideoGPT [53], enhancing the complexity of video content yet remaining within the confines of single scenes. The advent of diffusion models, exemplified by Imagen Video [54], marked a notable advancement in T2V synthesis.
Despite these strides, the challenge of creating multi-scene videos that reflect the complexity of the physical world [1, 2, 3] remains. Our work, TALC, extends the capabilities of T2V models to multi-scene storytelling, filling a crucial gap in the synthesis landscape. Image-to-Video Animation. In the exploration of multi-scene video generation, innovative methods such as Lumiere [6] and Make-a-Video [55] have employed a two-step process, transforming text to images and then animating these images into videos. While these approaches have advanced visual quality, they often fall short in weaving seamless multi-scene narratives. This limitation is echoed in the work of Emu Video [8], which underscores the difficulty of achieving narrative coherence across multiple scenes. TALC focuses on the direct generation of multi-scene narratives from textual prompts, aiming for narrative flow and visual consistency across scenes. Multi-Scene Video Generation. The pursuit of multi-scene T2V synthesis has been furthered by recent innovations like Phenaki [20] and Stable Video Diffusion [12], which have explored new frontiers in video generation from textual prompts and the scaling of latent diffusion models, respectively. Additionally, Dreamix [56] and Pix2Video [57] have broadened the scope of diffusion models, applying them to video editing and animation. Despite these advancements, the task of generating videos that convey coherent narratives across multiple scenes remains formidable, highlighted by recent works such as VideoPoet [9], ModelScope [10] and Make-A-Scene [58]. TALC tackles this task and offers a framework that produces videos spanning multiple scenes. We also introduce a nuanced evaluation approach.
This approach integrates both automated assessments and human evaluations to rigorously gauge the quality and narrative coherence of the generated content, evaluating text adherence, object consistency and background consistency, contributing to the ongoing refinement of T2V synthesis. 7" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04700v1.json b/abs_9K/test_abstract_short_2405.04700v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b5832554746d38c2a7d4a4b2c0e86824a5152588 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04700v1.json @@ -0,0 +1,19 @@ +{ + "url": "http://arxiv.org/abs/2405.04700v1", + "title": "Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures", + "abstract": "Large Language Models (LLMs) deployed on edge devices learn through\nfine-tuning and updating a certain portion of their parameters. Although such\nlearning methods can be optimized to reduce resource utilization, the overall\nrequired resources remain a heavy burden on edge devices. Instead,\nRetrieval-Augmented Generation (RAG), a resource-efficient LLM learning method,\ncan improve the quality of the LLM-generated content without updating model\nparameters. However, the RAG-based LLM may involve repetitive searches on the\nprofile data in every user-LLM interaction. This search can lead to significant\nlatency along with the accumulation of user data. Conventional efforts to\ndecrease latency result in restricting the size of saved user data, thus\nreducing the scalability of RAG as user data continuously grows. It remains an\nopen question: how to free RAG from the constraints of latency and scalability\non edge devices? In this paper, we propose a novel framework to accelerate RAG\nvia Computing-in-Memory (CiM) architectures. 
It accelerates matrix\nmultiplications by performing in-situ computation inside the memory while\navoiding the expensive data transfer between the computing unit and memory. Our\nframework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive\nlearning-based training method and noise-aware training, can enable RAG to\nefficiently search profile data with CiM. To the best of our knowledge, this is\nthe first work utilizing CiM to accelerate RAG.", + "authors": "Ruiyang Qin, Zheyu Yan, Dewen Zeng, Zhenge Jia, Dancheng Liu, Jianbo Liu, Zhi Zheng, Ningyuan Cao, Kai Ni, Jinjun Xiong, Yiyu Shi", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Large Language Models (LLMs) deployed on edge devices learn through\nfine-tuning and updating a certain portion of their parameters. Although such\nlearning methods can be optimized to reduce resource utilization, the overall\nrequired resources remain a heavy burden on edge devices. Instead,\nRetrieval-Augmented Generation (RAG), a resource-efficient LLM learning method,\ncan improve the quality of the LLM-generated content without updating model\nparameters. However, the RAG-based LLM may involve repetitive searches on the\nprofile data in every user-LLM interaction. This search can lead to significant\nlatency along with the accumulation of user data. Conventional efforts to\ndecrease latency result in restricting the size of saved user data, thus\nreducing the scalability of RAG as user data continuously grows. It remains an\nopen question: how to free RAG from the constraints of latency and scalability\non edge devices? In this paper, we propose a novel framework to accelerate RAG\nvia Computing-in-Memory (CiM) architectures. 
It accelerates matrix\nmultiplications by performing in-situ computation inside the memory while\navoiding the expensive data transfer between the computing unit and memory. Our\nframework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive\nlearning-based training method and noise-aware training, can enable RAG to\nefficiently search profile data with CiM. To the best of our knowledge, this is\nthe first work utilizing CiM to accelerate RAG.", + "main_content": "INTRODUCTION The emerging Large Language Models (LLMs) are deployed primarily on centralized cloud platforms [1, 2] (Cloud LLMs), raising concerns about user privacy and trustworthiness issues [3]. These issues become even more prominent in areas such as healthcare [4], companionship [5], and personal assistance [6], where the user privacy and trustworthiness of LLMs are crucial. To address these issues, the cloud LLMs will eventually transform into personalized LLMs, capable of generating personalized responses, deployed on edge devices (Edge LLMs), where users can keep all their private data and the model learns from those data locally. To better suit the needs of individual users, Edge LLMs must learn from user interactions. However, their capability of learning is constrained by their limited RAM and computational power. Similar to Cloud LLMs, the Edge LLMs primarily learn by finetuning their model parameters. Yet, given that these models often contain over 3 billion parameters, updates can be challenging, even with numerous efforts to accelerate them [7\u20139]. For example, using an experimental high-performance embedded system like the NVIDIA AGX, the PockEngine method [9] can still take 90 hours to learn from a mid-sized dataset like Alpaca with only 52k documents, making this option impractical for normal users.
[Figure 1: The workflow of RAG on edge-based CiM. A sentence embedding model converts the user query x (e.g., \u201cI am sick?\u201d) and the profile documents into embeddings; CiM performs max inner product search (MIPS) to retrieve the top-ranked documents, concatenating them with the user query to allow the LLM to generate personalized responses.] Retrieval-augmented generation (RAG), on the other hand, is a more resource-efficient choice [10], and has hence become the de facto learning method for Edge LLMs. A typical RAG system consists of a retriever and a generator. The retriever is commonly backed by max inner product search (MIPS). When the retriever receives a user query, it will retrieve the most relevant document from profile data, as shown in Figure 1. The profile data has many documents, and each document \ud835\udc51\ud835\udc56 contains specific information that may be relevant to user queries. The generator can be seen as an LLM, which takes the user query \ud835\udc65 and retriever-obtained documents as a prompt and generates a corresponding response. For every document \ud835\udc51\ud835\udc56 and the user query \ud835\udc65, RAG utilizes a sentence embedding model shown in Figure 1 to convert them into vectors (i.e., \ud835\udc38(\ud835\udc51\ud835\udc56) and \ud835\udc38(\ud835\udc65), respectively). The vectors for documents are named document embeddings and stored as a matrix as shown in Figure 1. The vector for the user query, named the query embedding \ud835\udc38(\ud835\udc65), will be used in MIPS to perform an inner product with every document embedding.
The larger the product \ud835\udc43(\ud835\udc65,\ud835\udc51\ud835\udc56), the more semantically similar the user query and the document are. Using RAG, Edge LLMs can provide user-preferred responses by retrieving relevant documents from profile data, and the profile data can be incrementally updated with new documents. This is an efficient learning process that avoids costly updates of the model parameters via fine-tuning [11]. Other than the inevitable LLM inference cost, the primary computational cost of RAG is retrieval, which is more than ten times less than the cost of updating model parameters. While the computational cost of RAG is more edge-friendly, there still exist two issues impeding RAG from being deployed for real-time user interaction on Edge LLMs. Firstly, the stored profile data cannot grow without limit without affecting the access time. If the size of the profile data exceeds the RAM capacity, it will need to be offloaded into storage, such as a hard disk drive (HDD) or solid-state drive (SSD). Accessing data from an HDD or SSD will significantly increase the data transfer latency [12], rendering real-time user interaction impractical. Secondly, the core retrieval method of RAG, MIPS, may experience decreased efficiency as profile data grows, and it can become potentially prohibitive when dealing with overwhelmingly large datasets. For example, on a Raspberry Pi 4B, MIPS can take 5 minutes to find one appropriate document among 21M documents [10], which is even longer than the 2-minute inference time of an Edge LLM. Unfortunately, few efforts have been made to optimize RAG towards Edge LLMs. Thus, we propose to utilize the Computing-in-Memory (CiM) architecture to address this issue.
As shown in Figure 1, CiM architectures using memory arrays have shown substantial promise in accelerating matrix-vector multiplication [13], which is the key operation of MIPS. The CiM architectures often utilize massive parallel processing to perform computations directly within the memory array where the data is stored, such that they can minimize the data movement through in-situ data access and significantly increase the throughput [14]. Given the same number of documents, CiM can finish the computation within 50ms [15], which is negligible compared to the computation latency on normal edge devices. Furthermore, by incorporating non-volatile memory (NVM) devices, such as phase-change memories (PCMs), resistive random-access memories (RRAMs), and ferroelectric field-effect transistors (FeFETs), CiM can outperform conventional MOSFET-based designs in terms of energy efficiency [16]. [Figure 2: The impact on MIPS accuracy when the RAG\u2019s document embedding is perturbed by various levels of Gaussian noise caused by the device variations, shown for the Citation, Movie Rating, News, and DBLP datasets. An accurate retrieval means the document retrieved under the impact of the noise is the same as that retrieved without noise.] Unfortunately, simply changing the underlying hardware is not enough, as the non-idealities of the NVM devices in the CiM array could greatly deteriorate the RAG performance. First, the operations performed in CiM architectures are susceptible to various sources of noise, including electronic noise (thermal, shot, and flicker), device-to-device variability, and line noise from the supporting circuitry [17]. These noise sources can corrupt the computations, especially when the signal levels are close to the noise floor, which is a common scenario in high-precision tasks.
Such noise issues are critical in RAG applications where the accuracy and quality of the generated content heavily rely on the precision of the underlying computations. Additionally, the CiM architecture is primarily designed and optimized for low-resolution computation [18]. Moreover, CiM arrays are typically sized at a fixed dimension, such as 64x64 [19], which is different from the documents\u2019 embedding dimension (e.g., 128). Therefore, both RAG\u2019s data precision (typically FP32) and its embedding dimension need to be reduced to fit in the size of CiM\u2019s crossbar arrays. To illustrate the impact of these on RAG, as an example, we present a preliminary study on MIPS performance in Figure 2, where we use a simple yet representative Gaussian noise to simulate the noise from the device variations in CiM. As shown in Figure 2, as the noise level increases, MIPS accuracy (specified in section 4.1.3) drops dramatically, approaching random guessing. To address these issues, we further propose a novel optimization framework for CiM-backed RAG, called Robust CiM-backed RAG (RoCR). The framework consists of three parts. The first part is a contrastive learning method. We use it to optimize the document embedding model. The second part is a novel data construction method to generate both positively and negatively labeled data pairs for contrastive learning. For the profile data, they can be either labeled to indicate the explicit user-preferred response to certain input, or simply statements without explicit labels that only implicitly indicate user preferences. Our data construction method is capable of dealing with both types of profile data. The third part is a noise-aware training method. It goes in tandem with contrastive learning to obtain a sentence embedding model that can generate document and user query embeddings with high noise-resilient capability, while such embeddings can fit into CiM architectures under different designs and configurations. 
Our major contributions can be summarized as: \u2022 We propose the first work to harvest CiM advantages for RAG acceleration on the edge. We provide a pathway to utilize emerging CiM devices to expand the Edge LLMs\u2019 capability in terms of storing a high volume of profile data with fast MIPS computing. \u2022 We introduce noise-aware training to enhance the noise-resilient capabilities of RAG\u2019s document embedding. The resulting noise-resilient embeddings can be reused robustly, saving resources needed to calibrate and regenerate embeddings. \u2022 Our experiments on various datasets show that our proposed framework can improve the RAG performance on multiple CiM devices by up to 35%, approaching the theoretical RAG performance. Across a wide device variation (noise) range on a single CiM device, our proposed framework can still improve the RAG performance. 2 RELATED WORK 2.1 CiM Architectures and their NVMs As shown in the middle part of Figure 1, memory arrays are the key component for vector-matrix multiplication. In this array, matrix values are stored in NVM cells, built on emerging NVM technologies like PCMs, RRAMs, and FeFETs, at the cross-points of vertical and horizontal lines. Simultaneously, vector values flow along the horizontal lines of the array. Operations within the memory array take place in the analog domain by exploiting laws of physics directly. However, other essential functions, like shift-and-add for multiple bits and sorting to find the top-k ranked values, are done in the digital domain.
Thus, digital-to-analog and analog-to-digital converters (DACs and ADCs) are used to connect these different components. [Figure 3: Overview of the proposed Robust CiM-backed RAG framework (RoCR). It optimizes the sentence embedding model to adapt to different types of NVMs utilized by CiM.] CiM arrays suffer from various sources of variations and noise. Two major ones are spatial variations and temporal variations. Spatial variations result from fabrication defects and have both local and global correlations. FeFET devices also suffer from temporal variations due to the stochasticity in memory switching and also aging, which causes fluctuations in conductance when programmed at different times. Temporal variations are typically independent from device to device and are irrelevant to the value to be programmed [20]. In this work, as a proof of concept, we focus on the impact of temporal variations in the programming process on DNN performance. The proposed framework can also be extended to other sources of variations with modification. Measurement results [21, 22] show that the noise on DNN weights caused by device variations can be safely modeled as a Gaussian noise with zero mean, each with a standard deviation associated with the weight value. A detailed representation is given by: v = v0 + \u0394v, \u0394v \u223c N(0, \ud835\udf0e\ud835\udc63) (1), where v is the actual embedding deployed on the accelerators, v0 is the target embedding value, and \ud835\udf0e\ud835\udc63 is a value measured by the experiments.
We collect the measurement results from RRAM and FeFET devices; the specific values are discussed in Section 4.1.

2.2 Past Noise Mitigation Methods

Several strategies have been introduced to tackle the challenge of device variations in CiM accelerators. These methods can be separated into software-based and hardware-based techniques. The software-based techniques are generally developed to obtain more robust DNN models [19, 22-24] or recommendation systems [25], and are thus not suitable for generating more robust MIPS solutions. Among the hardware techniques, the write-verify procedure [26, 27] is one of the most commonly used approaches during programming. Initially, an NVM device is programmed to a set state via a designated pulse pattern. Subsequently, the device's value is verified to ascertain whether its conductance falls within a stipulated range of the desired value, essentially assessing its accuracy. If discrepancies arise, a supplemental update pulse is applied to bring the device conductance closer to the target. This loop persists until the disparity between the programmed device value and the target value diminishes to a satisfactory margin, typically taking a handful of cycles. Cutting-edge research suggests that by selectively applying write-verify to a subset of pivotal devices, one can uphold the average accuracy of a DNN [21]. Additionally, a variety of circuit design initiatives [18, 28] have been put forth to counteract device variations.

3 PROPOSED WORK

3.1 Framework Overview

As shown in Figure 3, our proposed framework, Robust CiM-backed RAG (RoCR), consists of three stages. In the first stage, we apply contrastive learning to optimize the sentence embedding model with the training data. To support that, in the second stage, we pass the profile data through a data construction module to obtain contrastive training data pairs, which are then used in the flexible noise-aware training module.
In the third stage, we obtain the constraints of the NVMs in CiM via profiling. These constraints are encoded into the flexible noise-aware training module and used to train the sentence embedding model so that it generates embeddings robust against the device variation of the target NVMs. After training, the training module yields a new sentence embedding model that generates CiM-friendly embeddings.

3.2 Contrastive Learning: Triplet Loss Function

When we apply RAG using CiM, we first need to store embeddings in the NVMs, as shown in Figure 1. These embeddings are generated by the sentence embedding model and are the numerical representations of the profile data. Each document in the profile data has its own embedding, which is a vector, and the embeddings stored in the NVMs form a matrix, shown as the orange blocks in Figure 1. Given a user query, which is also converted into an embedding, CiM can perform MIPS between the query embedding and all profile embeddings simultaneously via vector-matrix multiplication. The top-ranked values in the product are used as indices to retrieve the corresponding documents, shown as the pink block in Figure 1; the retrieved user-relevant document is the output of MIPS. However, as explained in Section 2.1, writing the document embeddings into the NVMs subjects them to temporal variations (device variations), so the NVM-stored embeddings differ from the embeddings originally generated by the sentence embedding model. As shown in Figure 4, the vanilla embedding model generates the desired embedding, which deviates to the noise embedding under device variation, such that the irrelevant embedding is ranked higher than the desired embedding due to its larger inner product. Contrastive learning can learn representations by pushing dissimilar examples apart and pulling similar examples together [29].
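The retrieval step described above — one analog vector-matrix multiply followed by a digital top-k — can be sketched as follows (a minimal illustration, not the CiM implementation; the function name is ours):

```python
import numpy as np

def mips_retrieve(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 1) -> np.ndarray:
    """Maximum inner product search: the vector-matrix multiply is what the
    CiM array computes in the analog domain; the top-k sort is digital."""
    scores = doc_embs @ query_emb          # inner product with every stored embedding
    return np.argsort(scores)[::-1][:k]    # indices of the k largest products
```

The returned indices play the role of the pink block in Figure 1: they select which profile documents are handed to the LLM.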
In particular, the contrastive loss function can be used to increase the distance between dissimilar examples. In our work, we propose to improve noise resilience through contrastive learning.

Figure 4: Improvement from our Robust CiM-backed RAG (left: Vanilla CiM-backed RAG; right: Robust CiM-backed RAG). Our framework generates noise-resilient embeddings, shown as the orange and blue points in the right subfigure.

By increasing the distance between dissimilar examples, as shown in the right subfigure of Figure 4, the deviated desired embedding will still have a larger inner product with the query than the irrelevant embedding. Our contrastive learning loss function is based on Weinberger et al. [30]. For each example x_i in a mini-batch of N anchor examples, our data construction method constructs K positive and K negative examples corresponding to x_i, giving {{(x_i, x_i^-, x_i^+)_k}_{i=1,...,N}}_{k=1,...,K}, in which x_i^- and x_i^+ are the negative and positive examples corresponding to x_i, and x_i is closer to x_i^+ than to x_i^-. Also, emb(x_i) denotes the learned embedding of x_i.
Then the loss function L is defined as:

L = Σ_{i=1}^{N} (1/K) Σ_{k=1}^{K} max(0, d(x_i, x_i^-(k)) − d(x_i, x_i^+(k)) + m),
d(x_a, x_b) = sim(emb(x_a), emb(x_b))   (2)

where m is the margin; d(x_a, x_b) is computed from the embeddings emb(x_a) and emb(x_b), with sim() calculating their semantic similarity.

3.3 Data Construction

To train the sentence embedding model via contrastive learning, it is critical to construct pairs of examples in which the positive and negative examples are distinct from each other [31]. In our work, since we use a triplet contrastive loss, we construct triplets rather than pairs, each containing an anchor, a positive, and a negative example. We use profile data to construct these triplets. Profile data is generated by the user during user-LLM interaction and contains user preference information. Two situations exist for such data. First, the profile data may contain explicit labels indicating the user's preferred response to the corresponding content. Second, the profile data may consist of statements containing user-related information but without explicit user preferences. As shown in Figure 5, to handle the two situations we propose two data construction methods: Constructing Data with Explicit labels (CDE) and Constructing Data with Implicit labels (CDI).
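Eq. (2) can be sketched for a single triplet as below. Note the paper writes d() but defines it through sim(); this sketch takes the similarity reading (inner product), under which the hinge penalizes any negative that scores within the margin m of the positive. The function name and margin value are our assumptions.

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, pos: np.ndarray, neg: np.ndarray,
                 margin: float = 0.5) -> float:
    """One term of Eq. (2), reading d() as the inner-product similarity sim():
    loss = max(0, sim(anchor, neg) - sim(anchor, pos) + margin)."""
    s_pos = float(anchor @ pos)
    s_neg = float(anchor @ neg)
    return max(0.0, s_neg - s_pos + margin)
```

The batch loss of Eq. (2) is then the sum of this term over the N anchors, averaged over the K constructed triplets per anchor.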
Figure 5: Examples of the two data construction methods. For data with explicit labels, CDE is used to construct the training data; for data without explicit labels (implicitly labeled data), CDI is used.

3.3.1 Constructing Triplets from Data with Explicit Labels (CDE). For data with explicit labels, each data point consists of textual content c and a corresponding label l indicating the user's preferred response to the content c. As shown in the CDE part of Figure 5, the explicit label is circled by a dashed line.
Using the profile data, we construct triplets in the format (x_i, x_i^-, x_i^+). Given a dataset D of n profile documents, each piece of data consists of content c_i and the corresponding label l_i, where i ∈ {1, 2, ..., n}. The anchor example x_i is constructed as:

x_i = c_i ⊕ l_i, for i = 1, 2, ..., n   (3)

where ⊕ denotes concatenation, used here to combine label and content. Negative examples x_i^- are constructed by concatenating c_i with a random label l_j different from l_i:

x_i^- = c_i ⊕ l_j, where l_i ≠ l_j.   (4)

Randomly assigning a different label ensures diversity in the negative examples while keeping the content of the anchor. Unlike anchors and negatives, positive examples are harder to construct, since it is more difficult to formalize semantically similar data than semantically dissimilar data. To construct positive examples, we follow the SimCSE method [32] and add a dropout rate r to the sentence embedding model M. The process involves two main steps. First, the textual positive example is formalized as:

x_i^+ = x_i, for i = 1, 2, ..., n   (5)

aligning each anchor with its corresponding positive example. This step effectively duplicates the anchor data as a starting point for generating the embeddings.
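Eqs. (3)-(5) amount to a small amount of string manipulation. The sketch below assumes the '"content" is "label"' template shown in the Figure 5 examples; the template and function name are illustrative, not the paper's exact implementation.

```python
import random

def cde_triplet(content: str, label: str, all_labels: list, rng=None):
    """One CDE triplet: anchor x_i = c_i ⊕ l_i (Eq. 3), negative
    x_i^- = c_i ⊕ l_j with l_j != l_i (Eq. 4), and positive x_i^+ = x_i
    (Eq. 5; its embedding is later diversified via dropout)."""
    rng = rng or random.Random(0)
    anchor = f'"{content}" is "{label}"'
    wrong = rng.choice([l for l in all_labels if l != label])
    negative = f'"{content}" is "{wrong}"'
    positive = anchor  # textual copy; dropout at embedding time differentiates it
    return anchor, negative, positive
```

Repeating the negative/positive construction K times per anchor yields the augmented triplet set of Eq. (7).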
Second, the embedding generation process varies with the dropout rate applied within the model M. When M generates embeddings for anchor and negative examples, the dropout rate is set to 0; for positive examples, a non-zero dropout rate r is used. The anchor, negative, and positive examples shown in Figure 5 are embedded as:

emb(x_i) = M(x_i, dropout = 0)
emb(x_i^-) = M(x_i^-, dropout = 0)
emb(x_i^+) = M(x_i^+, dropout = r)   (6)

The condition r ≠ 0 induces variation in the embeddings, enhancing the model's ability to recognize semantically similar yet variably expressed content. Given the construction factor K, the triplet data examples are:

D_triplet = ⋃_{i=1}^{N} {(x_i(k), x_i^-(k), x_i^+(k)) : k = 1, 2, ..., K}   (7)

For the triplet data examples D_triplet, their embeddings for each augmentation k are given by:

E = ⋃_{i=1}^{N} {(emb(x_i(k)), emb(x_i^-(k)), emb(x_i^+(k))) : k = 1, 2, ..., K}   (8)

As shown in Figure 5, for data with explicit labels, content c is concatenated with its corresponding label l to form the positive and anchor examples, and with other labels l′ to form negative examples. The positive example is finally obtained from the sentence embedding model with dropout rate r; the anchor and negative examples are obtained with r = 0.

3.3.2 Constructing Triplets from Data with Implicit Labels (CDI). For data with implicit labels, each data point consists solely of textual content c. As shown in the CDI part of Figure 5, there is no explicit label indicating user preferences; instead, the data can be seen as a statement containing user-related information. To construct the anchor and positive examples, we use exactly the same method as in CDE. Given a dataset D of n profile documents, each piece of data consists of content c_i. The anchor data x_i is constructed as:

x_i = c_i, for i = 1, 2, ..., n   (9)

For each anchor x_i, constructing the corresponding negative example is not as simple as concatenating the content c_i with a non-corresponding label l_k.
To construct negative examples, we employ a reciprocal approach to the positive examples, applying a similar method to both. We first initialize the negative and positive examples following Equation 5:

x_i^- = x_i^+ = x_i, for i = 1, 2, ..., n   (10)

The positive example x_i^+ is finalized by incorporating a dropout rate r into the sentence embedding model M, where a rate of 0 < r ≤ 0.2 generates a sentence embedding whose semantic representation is similar to that of x_i and ensures good training performance [32]. Increasing the dropout rate to a higher value, such as 0.5, distorts the semantic representation of x_i^+, making it dissimilar to that of x_i; training the model with such positive examples results in poorer performance. For positive examples, a high dropout rate acts more like noise than a data augmentation method. In our work, we train the sentence embedding model to generate embeddings that maintain their integrity under noisy conditions, such as when they are written into CiM. The noise can alter or fragment the original semantic representations. For instance, as illustrated in Figure 5, a high dropout rate r = 0.9 leads to a negative example with a corrupted representation. Although it may lack certain informative content, this negative example becomes semantically distinct from both the anchor and positive examples, effectively simulating the effect of CiM corruption. This approach not only differentiates the negative examples semantically but also aligns them with corrupted-data scenarios for noise-aware training.
Given the triplet examples (x_i, x_i^-, x_i^+), for i = 1, 2, ..., n as shown in Equation 10, we use a dropout rate r with 0 < r ≤ 0.2 to form the positive examples; correspondingly, the dropout rate used to form the negative examples is 1 − r. Given the sentence embedding model M, the anchor, positive, and negative examples are constructed as:

emb(x_i) = M(x_i, dropout = 0)
emb(x_i^-) = M(x_i^-, dropout = 1 − r)
emb(x_i^+) = M(x_i^+, dropout = r)   (11)

3.4 Flexible Noise-aware Training

In the previous two stages, we constructed the data to train the sentence embedding model via contrastive learning. The training is more effective when the simulated device variation is injected [33], so that the model is optimized with the device variation taken into account. Additionally, the sentence embedding model needs to produce embeddings that fit different CiMs, which may have various NVM designs; to do so, the model must reshape its output embeddings to a certain dimension and precision. Hence, we propose a flexible noise-aware training method that generates noise-resilient embeddings fitting various CiMs. As shown in Figure 3, in the flexible noise-aware training module, the embedding generated by the sentence embedding model is shaped according to the CiM's NVM constraints, with required dimension d and required precision p, and device variation is injected to form the final embeddings.
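The CDI scheme of Eq. (11) — mild dropout r for positives, heavy dropout 1 − r for negatives — can be sketched with a toy stand-in for M, where dropout is simulated as random masking of a fixed base embedding. Both functions are our illustrative stand-ins, not the paper's model.

```python
import numpy as np

def embed_with_dropout(x: np.ndarray, rate: float, rng) -> np.ndarray:
    """Toy stand-in for M(x, dropout=rate): zero out roughly a `rate`
    fraction of the representation. rate=0 reproduces the input exactly."""
    mask = rng.random(x.shape) >= rate
    return x * mask

def cdi_triplet_embeddings(base_emb: np.ndarray, r: float = 0.1, seed: int = 0):
    """Eq. (11): anchor with dropout 0, positive with a small rate r
    (SimCSE-style mild perturbation), negative with 1 - r (heavy corruption
    that mimics CiM-induced damage)."""
    rng = np.random.default_rng(seed)
    anchor = embed_with_dropout(base_emb, 0.0, rng)
    positive = embed_with_dropout(base_emb, r, rng)
    negative = embed_with_dropout(base_emb, 1.0 - r, rng)
    return anchor, positive, negative
```

With r = 0.1, the positive keeps about 90% of the representation while the negative keeps about 10%, matching the r/1 − r symmetry of Eq. (11).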
The reshape module shown in Figure 3, which can be seen as an autoencoder reconstructing its input embedding [34], is expressed as shp(), initialized by d and p, and takes the anchor embedding emb(x_i) as input, giving shp(emb(x_i)) = emb(x_i)^{d∗p}. Based on the device variation shown in Table 2, we have:

emb(x_i)^{d∗p}_σ = (e′ ∗ L0 + e′ ∗ L1 + e′ ∗ L2 + e′ ∗ L3) ∗ σ,   (12)

Table 1: Performance comparison between our framework and four baselines on five CiM devices, with device variation specified in Table 2, across five datasets. We evaluate our framework using CDE (RoCR-CDE) and CDI (RoCR-CDI) to optimize the performance of RAG, which utilizes Gemma-2B as its LLM.
CiM       Method        Citation         Movie            Rating           News                 DBLP
                        Acc↑    F1↑      Acc↑    F1↑      MAE↓    RMSE↓    ROUGE-1↑  ROUGE-L↑   ROUGE-1↑  ROUGE-L↑
Device-1  SWV           0.4208  0.3339   0.1305  0.1974   0.3850  0.8093   0.0754    0.0731     0.1709    0.1590
          CxDNN         0.4223  0.3576   0.1516  0.1762   0.4404  0.9135   0.0640    0.0632     0.1646    0.1449
          CorrectNet    0.4155  0.3791   0.0996  0.1305   0.3609  0.7071   0.0512    0.0764     0.1603    0.1538
          Vanilla RAG   0.4401  0.3476   0.1017  0.0838   0.3903  0.8944   0.0754    0.0731     0.1731    0.1473
          RoCR-CDE      0.5536  0.3956   0.2242  0.2303   0.3108  0.6856   0.1041    0.0987     0.2066    0.1924
          RoCR-CDI      0.5409  0.5117   0.2273  0.2487   0.2767  0.6083   0.0831    0.0808     0.2317    0.2176
Device-2  SWV           0.1831  0.1552   0.1992  0.1957   0.4205  0.8775   0.0296    0.0289     0.1968    0.1874
          CxDNN         0.4013  0.3557   0.2167  0.2019   0.4423  0.8367   0.0604    0.0791     0.1517    0.1401
          CorrectNet    0.3827  0.3209   0.1625  0.1909   0.3762  0.8062   0.0513    0.0505     0.2042    0.1945
          Vanilla RAG   0.4801  0.3462   0.1576  0.2079   0.4153  0.9354   0.0296    0.0289     0.1618    0.1353
          RoCR-CDE      0.5407  0.4396   0.2924  0.2509   0.2553  0.5385   0.1209    0.0946     0.2025    0.1906
          RoCR-CDI      0.5299  0.4591   0.2971  0.2386   0.2124  0.5763   0.0884    0.0853     0.2240    0.2098
Device-3  SWV           0.2450  0.2564   0.1695  0.1641   0.3460  0.7416   0.0725    0.0690     0.1018    0.0954
          CxDNN         0.4811  0.4006   0.2367  0.2113   0.2851  0.6928   0.0761    0.0707     0.1425    0.1111
          CorrectNet    0.4510  0.3918   0.0792  0.1029   0.3704  0.7937   0.0585    0.0555     0.1715    0.1346
          Vanilla RAG   0.4852  0.3618   0.1614  0.1636   0.3255  0.7649   0.0725    0.0690     0.1647    0.1437
          RoCR-CDE      0.5139  0.4116   0.2242  0.2215   0.3208  0.6481   0.0825    0.0805     0.1893    0.1754
          RoCR-CDI      0.5515  0.4984   0.2152  0.2131   0.2916  0.6245   0.1099    0.1049     0.2294    0.2140
Device-4  SWV           0.5135  0.4260   0.1271  0.1178   0.3610  0.8196   0.0259    0.0256     0.1871    0.1786
          CxDNN         0.4733  0.3964   0.1267  0.2158   0.3468  0.7616   0.0646    0.0634     0.1603    0.1538
          CorrectNet    0.4628  0.4019   0.1592  0.1847   0.4013  0.9274   0.0705    0.0750     0.1628    0.1292
          Vanilla RAG   0.2101  0.2401   0.1219  0.2019   0.4015  0.8544   0.0505    0.0489     0.1929    0.1814
          RoCR-CDE      0.5836  0.5555   0.1706  0.2817   0.3139  0.6856   0.0873    0.0851     0.1984    0.1882
          RoCR-CDI      0.5352  0.4289   0.1642  0.2445   0.2706  0.5916   0.1154    0.1128     0.2148    0.1978
Device-5  SWV           0.4320  0.3541   0.1250  0.1076   0.3652  0.7616   0.0434    0.0427     0.0985    0.0923
          CxDNN         0.4301  0.0538   0.0751  0.0458   0.3503  0.8185   0.0707    0.0682     0.2042    0.1945
          CorrectNet    0.4145  0.3926   0.1083  0.1395   0.5526  0.8185   0.0735    0.0776     0.2096    0.1879
          Vanilla RAG   0.4256  0.3522   0.0847  0.0863   0.3951  0.8515   0.0676    0.0653     0.2018    0.1846
          RoCR-CDE      0.5698  0.5223   0.2152  0.1669   0.2959  0.6245   0.0936    0.0891     0.1946    0.1844
          RoCR-CDI      0.5254  0.4504   0.2394  0.2458   0.2624  0.6325   0.0799    0.0764     0.2238    0.2095

where e′ = emb(x_i)^{d∗p}. The device variation, as noise, is injected into the embeddings to form emb(x_i)^{d∗p}_σ, which is used in contrastive learning to train the sentence embedding model, as shown in Figure 3.

4 EXPERIMENTAL EVALUATION

4.1 Experimental Setup

4.1.1 Datasets. To demonstrate our robust CiM-backed RAG, we employ five datasets spanning different tasks and domains: Citation Identification [35] (Citation), Movie Tagging [36] (Movie), Product Rating [37] (Rating), News Headline Generation [38] (News), and DBLP-Citation-network V14 [39] (DBLP). The data in each dataset consists of query data and profile data. In our evaluation, the profile data is used to form the user history, and the corresponding query data is used as the user input. The first three datasets contain binary, five-class, and fifteen-class classification tasks, respectively; the last two contain text generation tasks. In the Citation Identification dataset, every piece of query data consists of a paper title and two references, and the correct reference is provided.
RAG uses the profile data corresponding to the paper titles, with their detailed contents, to choose the appropriate reference. In the Movie Tagging dataset, each query data contains a description of a movie, and RAG uses a similar description and its corresponding tag in the profile data to tag the query data. The Product Rating dataset has a similar structure to the Movie Tagging dataset. In the News Headline Generation and DBLP datasets, each query data contains an abstract, which is to be summarized into a title; RAG uses a similar abstract and its corresponding title in the profile data to generate the title for the query data. All five datasets have labels in their query data.

4.1.2 Default Experimental Setting. Our framework uses all-MiniLM-L6-v2 [40] as the sentence embedding model. For each dataset, we randomly select 2000 documents from the profile data as anchor examples. To examine the CDE data construction method, we set the augmentation factor k = 5 to obtain 10000 negative and positive examples, with a dropout rate of 0.1 for positive examples and 0 for anchor and negative examples. To examine the CDI data construction method, we set the dropout rate to 0.1 for positive examples and 0.9 for negative examples; to align with the CDE experiments, we also set k = 5. We run each experiment five times and report the average. In all experiments, we set the device variation σ = 0.1 and shape embeddings to dimension 64 with int8 precision. The learning rate is 2e-5. Throughout, we adhere to the device variation model described previously.
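The output format of the reshape step — dimension 64, int8 precision — can be illustrated with a simple linear quantizer. This shows only the output constraint, not the learned autoencoder projection the paper trains; the function name and uint8 range (mentioned in Section 4.1.2's device-mapping example) are our assumptions.

```python
import numpy as np

def shape_for_cim(emb: np.ndarray, d: int = 64) -> np.ndarray:
    """Illustrative stand-in for the reshape module's output constraints:
    take the first d components, then linearly quantize to 8-bit unsigned
    integers so each element fits the CiM precision budget."""
    v = emb[:d]
    lo, hi = v.min(), v.max()
    scale = (hi - lo) or 1.0            # avoid division by zero for flat inputs
    return np.round((v - lo) / scale * 255).astype(np.uint8)
```

In the paper's pipeline the mapping to d dimensions is learned (an autoencoder-style reshape module); here truncation merely stands in for that projection.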
Figure 6: Performance comparison between our framework and four baselines on RAG utilizing the LLMs Gemma-2B, Phi-2, Mistral-7B, and Llama-2-3B, with device variation specified in Table 2, on the Citation (panels (a)-(d)) and Movie (panels (e)-(h)) datasets.

Table 2: Device non-ideality modeling for different real and synthesized devices. For devices with more than two levels, the device variation at each level is denoted L_x.

The specific parameters are abstracted and then simplified from three representative NVM devices, two of them
Name                # of Levels   Device Variations σ_v
                                  L0      L1      L2      L3
RRAM1  (Device-1)   1             0.0100  0.0100  0.0100  0.0100
FeFET2 (Device-2)   4             0.0067  0.0135  0.0135  0.0067
FeFET3 (Device-3)   4             0.0049  0.0146  0.0146  0.0049
RRAM4  (Device-4)   4             0.0038  0.0151  0.0151  0.0038
FeFET6 (Device-5)   4             0.0026  0.0155  0.0155  0.0026

are resistive random-access memory (RRAM) devices extracted from [27, 41], and the other is a ferroelectric field-effect transistor (FeFET) device extracted from [42]. We name them RRAM1, RRAM4, and FeFET2, respectively. We also extrapolate the modeling data to obtain two synthesized devices, FeFET3 and FeFET6. Detailed device modeling results are shown in Table 2. An x-level device can represent x distinct values, and σ_{L2} = 0.01 means the variation of the device is 0.01 when it represents level value 2. Using the device variations obtained from real CiM devices, we perform our experiments on a single Nvidia A10 GPU. Document embeddings are shaped according to the different CiM devices and stored as parallel arrays, similar to how they would be mapped to multiple NVM devices in practical scenarios. For example, if an embedding is shaped to contain all uint8 values, then when it is mapped to 4-level (2-bit) devices such as FeFET2, each element of the vector is represented by four devices.
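The final example above — one uint8 element spread across four 4-level devices — amounts to splitting the byte into four 2-bit fields. A minimal sketch (bit ordering is our assumption; the paper does not specify it):

```python
def to_2bit_levels(value: int) -> list:
    """Map one uint8 element onto four 4-level (2-bit) NVM devices,
    most-significant bit pair first."""
    assert 0 <= value <= 255
    return [(value >> shift) & 0b11 for shift in (6, 4, 2, 0)]
```

Each returned level (0-3) is what a single FeFET2-class cell would be programmed to, subject to the per-level variation σ_{Lx} in Table 2.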
4.1.3 Evaluation Methods. Our first three datasets examine classification capability, and the remaining two examine text generation capability. In particular, the Citation and Movie datasets have two and fifteen labels, respectively, letting us examine the binary and multiclass classification capabilities of the LLMs enhanced by our framework. We use accuracy to examine the ability of the models to correctly classify instances across classes, and the F1 score to examine the balance between precision and recall. For the Rating dataset, although it has five labels and also examines multiclass classification, we use mean absolute error (MAE) and root mean square error (RMSE) to evaluate it from a regression perspective [43]. MAE measures the average magnitude of prediction errors, providing a straightforward assessment of the model's overall accuracy in predicting rating values. RMSE captures the square root of the average squared difference between predicted and actual ratings, offering a metric sensitive to larger errors that can highlight significant discrepancies between the model's predictions and the true values. For the News and DBLP datasets, the labels are sentences, so these datasets examine text generation capabilities. We use ROUGE-1 and ROUGE-L to evaluate the overlap between generated and reference texts [44], capturing the precision and recall of individual words (ROUGE-1) and the longest matching sequence (ROUGE-L), ensuring a comprehensive evaluation of text generation quality.
For accuracy, F1, ROUGE-1, and ROUGE-L, higher values reflect better performance; for MAE and RMSE, lower values do. Additionally, we use accuracy to measure MIPS performance (MIPS accuracy), defined as the ratio of MIPS results under device variation that match the MIPS results without device variation (the references).

4.1.4 Baselines. As this is the first work to improve RAG robustness on edge-based CiM, there is no state-of-the-art for direct comparison. We therefore construct baselines from past noise mitigation methods originally designed to boost DNN robustness. The first baseline is selective write-verify [21] (SWV); while it originally utilizes the second derivative to evaluate the impact of device variation on neural network weights, we use the second derivative to measure the deviation between the ground-truth embedding and the embedding under device variation. The second baseline is CxDNN [45]; while it uses a compensation factor to improve the robustness of vector-matrix multiplication, we use the compensation factor to calibrate the embedding affected by device variation. The third baseline is CorrectNet [46], which utilizes cross-entropy loss and regularization to improve the robustness of neural networks in CiM; to use it as a baseline, we likewise use cross-entropy loss and regularization as the loss function to calibrate the device output embedding. Additionally, we examine Vanilla RAG, which contains no noise mitigation methods, as our fourth baseline. The baselines use the same experimental settings as our framework.

4.2 Results

RAG can be simplified as the combination of MIPS and an LLM, where MIPS as a retriever searches for the appropriate information and the LLM as a generator processes the retrieved results.
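The MIPS accuracy metric defined in Section 4.1.3 — the fraction of queries whose retrieval under device variation matches the noise-free reference — is straightforward to compute (a sketch; the function name is ours):

```python
def mips_accuracy(noisy_ids, clean_ids) -> float:
    """MIPS accuracy: fraction of queries whose retrieved document under
    device variation matches the noise-free reference retrieval."""
    assert len(noisy_ids) == len(clean_ids) and len(clean_ids) > 0
    hits = sum(n == c for n, c in zip(noisy_ids, clean_ids))
    return hits / len(clean_ids)
```

This is the metric reported in Table 3, with the noise-free retrievals serving as ground truth.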
Hence, in our experiments, we first evaluate the performance of MIPS under the device variation of Device-1. We take the MIPS results obtained without device variation as the references (i.e., ground truth). Using the MIPS accuracy metric, we examine how many MIPS results under device variation match the references. Since the quality of retrieved content largely depends on the base sentence embedding model, and we focus on mitigating the impact of device variation on the embedding model, we do not assess the quality of the references themselves. As shown in Table 3, our framework with either of the two data construction methods outperforms the four baselines across all five datasets, showing that our framework can mitigate the embedding perturbation due to device variation. These results also corroborate the preliminary study shown in Figure 2, where increasing σ in the naive Gaussian noise jeopardizes the MIPS performance.

Table 3: Performance (MIPS accuracy) comparison between our framework and baselines. Accuracy is computed based on MIPS-retrieved documents under the device variation of Device-1 and those retrieved without device variation.

Method | Citation | Movie | Rating | News | DBLP
SWV | 0.4200 | 0.1728 | 0.1050 | 0.0855 | 0.2295
CxDNN | 0.4401 | 0.2017 | 0.0503 | 0.0754 | 0.1681
CorrectNet | 0.4013 | 0.0699 | 0.0509 | 0.0533 | 0.1609
Vanilla RAG | 0.4547 | 0.1694 | 0.0933 | 0.0649 | 0.1747
RoCR-CDE | 0.9231 | 0.4639 | 0.1583 | 0.1921 | 0.2750
RoCR-CDI | 0.9344 | 0.4355 | 0.1266 | 0.1708 | 0.2905

After comparing the MIPS performance of our framework and the baselines, we further present a comprehensive evaluation of their end-to-end RAG performance. We use Gemma-2B as the LLM in RAG. Additionally, with Gemma-2B, we run RAG without device variation to observe its ideal performance: 0.5200 accuracy for Citation, 0.3728 accuracy for Movie, 0.3150 MAE for Rating, 0.0855 ROUGE-1 for News, and 0.2295 ROUGE-1 for DBLP.
On the five CiM devices whose device variations are shown in Table 2, we examine RAG with the five datasets. As shown in Table 1, given the same datasets, each device variation clearly compromises RAG robustness, whereas our framework can mitigate the different device variations. For example, the RAG performance on the Citation dataset on Device-2 can range from 0.18 to 0.48, while our framework boosts the accuracy on the Citation dataset above 0.5 for all five devices. Compared to the four baselines, whose performances fall well short of the ideal performance, our framework closely approaches and sometimes outperforms the ideal performance by generating better sentence embeddings. This is because RoCR also serves as a regularizer that improves the model's generalization. In addition, we evaluate the impact of different LLMs on the performance of our framework. As shown in Figure 1, the LLM takes the concatenation of the MIPS-retrieved data and the user query as input and generates a response to the user query. Since different LLMs may produce different responses given the same query, we select four emerging edge-friendly medium-size LLMs for our experiments. Gemma-2B [47] is a new SOTA open model introduced by Google, with 4.95G of model weights. According to Google, Gemma can outperform the same-sized Llama-2 in reasoning capabilities. Hence, we also use Llama2-3B [48], one of the earliest open LLMs introduced by Meta, with 6.85G of model weights. Similarly, Phi-2 [49], released by Microsoft, is a powerful small LLM with 5G of model weights. Additionally, Mistral7B-GPTQ [50], made by Mistral AI, is a well-performing LLM released after the Llama models. We select the Citation and Movie datasets.
We use the default experimental setting with σ = 0.1 and use CiM Device-1 as the experimental environment. The results are shown in Figure 6. It is evident that our framework outperforms each baseline across the five CiM devices. Besides, the performance of each baseline on the same dataset can differ largely across devices, while our framework produces more robust performance.

Figure 7: Performance comparison between our framework and the four baselines on CiM Device-1 with different device variation σ, given the DBLP dataset.

By default, we use σ = 0.1 to calculate the device variation of the five CiM devices. We also conduct an additional study to evaluate our framework under different σ values. Since we have already used the Citation and Movie datasets to study the performance of our framework, as seen in Figure 6, we choose a different dataset, DBLP, using ROUGE-1 as the metric. For the LLM in RAG, we choose Mistral-7B. We examine σ values higher and lower than 0.1: 0, 0.025, 0.05, 0.075, 0.125, and 0.15. The case of σ = 0 reflects the ideal performance. For the CiM device, we use CiM Device-1. As shown in Figure 7, our framework outperforms the baselines across the different device variation values. Finally, RoCR is a training method that generates more robust weights for the sentence embedding model. It does not change the model structure; thus, there is no hardware (e.g., energy and latency) overhead during inference.
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04781v1.json b/abs_9K/test_abstract_short_2405.04781v1.json new file mode 100644 index 0000000000000000000000000000000000000000..750b0c73c2dd41eb651676eec170f2a95555a9d1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04781v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04781v1", + "title": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization", + "abstract": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. 
Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.", + "authors": "Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. 
Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.", "main_content": "Introduction Large language models, such as ChatGPT [1], GPT4 [2], LLaMA [3], and ChatGLM [4], have demonstrated remarkable performance and generalization capabilities across various NLP tasks, significantly expanding the boundaries of language applications. With the increase in model parameters and pretraining corpus size, capabilities such as logical reasoning, instruction following, and In-Context Learning [5],[6],[7] have emerged. Based on these breakthroughs, the latest LLMs have shown profound understanding and professionalism in various fields, such as virtual assistants, text generation, and code annotation. Utilizing LLMs to disrupt industries has become an inevitable trend, including in the field of education [8],[9]. Recently, there has been a desire to leverage the extensive knowledge of large language models to construct domain-specific LLMs in various vertical fields, which require greater expertise and accuracy. To address the issue that general-purpose LLMs cannot meet specific domain requirements, a variety of methods have been proposed. For instance, steering foundation models through role-playing or prompt engineering has been used to tap into the knowledge learned during the pre-training phase, unleashing their deep-seated expert capabilities [10],[11]. Other approaches involve pre-training or continual pre-training with domain-specific corpora to incorporate domain-specific knowledge into large language models [8],[12],[13],[14]. In addition, to reduce hallucination during response generation, retrieval augmentation has also been applied to provide reliable references [8],[15].
*Xing Zhang is the corresponding author. arXiv:2405.04781v1 [cs.CL] 8 May 2024 \fBased on these approaches, successful implementations such as MedAgents [10], ChatLaw [15], EduChat [8], and FinGPT [16] have demonstrated the potential of LLMs to provide professional responses and insights in various vertical fields, including healthcare, law, finance, and education. However, constructing domain-specific large language models is still labor-intensive and expensive. To begin with, for closed-source large language models like ChatGPT, the high costs of text generation and fine-tuning services are often prohibitive. As for open-source LLMs, there is a significant gap in parameter size and pre-training corpus compared to closed-source LLMs, resulting in significantly weaker general capabilities such as reasoning and domain-specific knowledge extraction [9],[17],[18],[19]. Faced with complex professional terminology, open-source large language models often fail to meet user requirements for domain knowledge. In this context, a large in-domain pre-training corpus or expert-curated datasets are often required to enhance professionalism in vertical fields. Although various existing works have developed specialized datasets and evaluation criteria for fields such as philosophy, medicine, and law, as well as for scenarios including network operation and geospatial semantics [17],[18],[19],[20],[21], there is still a considerable demand for manual effort in constructing datasets for courses or privatized scenarios that are not covered by these datasets. This challenge is particularly pronounced when accessible corpora in the field are scarce, making it extremely difficult to construct tens of thousands of specialized instruction data. Furthermore, the majority of models are primarily pre-trained on English corpora, which may lead to a degradation in their performance in other languages [22],[23].
In addition to the challenges of constructing specialized corpora, the high cost of inference incurred by open-source large language models cannot be overlooked. Compared to the concise responses provided by humans, the responses generated by large language models, while more comprehensive, also include a significant amount of redundant information, resulting in unnecessary inference overhead. Typically, to further align the responses of large language models with specific preferences, methods such as RLHF (Reinforcement Learning from Human Feedback) [24] are introduced to fine-tune models. However, this approach still requires a substantial amount of human-labeled preference data. Consequently, promoting alignment between responses and human preferences, as well as reducing inference costs, is also a key factor in fostering the widespread adoption of open-source large models in specialized vertical domains. To address these issues, we propose CourseGPT-zh, an open-source education large language model, and design a pipeline for constructing high-quality question-answer pairs by mining textbook knowledge. Utilizing the constructed diverse question-answer pairs, we perform parameter-efficient fine-tuning on the open-source model to mitigate the resource constraints of deployment. In addition, in the data construction process, we incorporate LLM-as-Judge and utilize discrete prompt optimization to generate optimal prompts, steering ChatGPT to produce high-quality training data aligned with human preferences. Through this method, we ensure high-quality responses while reducing the deployment costs associated with response length. Our main contributions can be summarized as:
• In this paper, we propose CourseGPT-zh, an open-source education large language model, with a pipeline for constructing high-quality and diverse question-answer pairs.
Based on textbooks, we guide the model to conduct thorough exploration and questioning of the material, extracting knowledge from both closed-source large language models and specialized texts. Additionally, we employ a method inspired by self-instruct to guide the large language models in generating related questions, further enhancing diversity.
• Although large language models can generate comprehensive answers, some content may be redundant or incorrect. Therefore, we employ prompt engineering to guide ChatGPT in generating responses that align with human preferences. To obtain the optimal prompts, we have designed an iterative discrete prompt optimization framework, which incorporates LLM-as-Judge to facilitate automatic evaluation of the quality of responses guided by prompts. Furthermore, the optimized prompt allows the large language model to achieve a balance between the quality of responses and their length, achieving information compression in responses.
• Parameter-efficient fine-tuning of the ChatGLM3 model is conducted on the constructed high-quality question-answering data, resulting in CourseGPT-zh. Experiments show that CourseGPT-zh exhibits improved alignment with human responses and delivers more concise answers while maintaining a high level of response quality. On various NLP task evaluation metrics, CourseGPT-zh significantly outperforms other open-source large models.
\f2 Related-work With fierce competition and rapid development, large language models ranging from billions to trillions of parameters have achieved remarkable performance across various NLP tasks after being pre-trained on massive amounts of text. Represented by LLMs such as ChatGPT, GPT4, and GPT4-Turbo, the OpenAI model family has successively reset the benchmarks for NLP tasks, being regarded as one of the greatest inventions in history.
Concurrently, a multitude of open-source large language models, including llama-2-13b, ChatGLM3-6b, and Mistral-8x7B-MoE [25], have also shown astonishing improvements, even surpassing the level of ChatGPT on some dimensions. More importantly, they can be deployed on a single GPU to several GPUs and can be flexibly customized through fine-tuning. 2.1 Domain-specific LLMs Although general-purpose large language models have achieved exceptional performance on generic NLP tasks, they often fall short in vertical domains that necessitate extensive specialized knowledge and high accuracy. The performance of zero-shot large language models in these domains is typically inadequate, which has granted domain-specific LLMs significant attention. Closed-source large language models, while exhibiting superior performance across various capabilities, present challenges for continual pre-training and fine-tuning with private corpora. Therefore, the construction of domain-specific models based on closed-source LLMs frequently leverages role-playing or collaboration abilities to extract the specialized-field knowledge acquired during the pre-training phase. In contrast, open-source LLMs can be further pre-trained or fine-tuned with extensive high-quality domain-specific data, and they have achieved multiple successful applications in fields such as medicine, law, education, and finance. HuatuoGPT [26] employs a mixed dataset comprising data distilled from ChatGPT and real-world data from physicians' medical advice to fine-tune an open-source model. Furthermore, it aligns the model's responses with human preferences through RLAIF (Reinforcement Learning from Artificial Intelligence Feedback). By learning from the response styles of real-world doctor-patient interactions, the fine-tuned model can engage with users in a human-like manner and significantly surpasses other models of a similar level across various metrics.
MedChatZH [12] develops a dialogue model specifically designed for Traditional Chinese Medicine, incorporating extensive Chinese medical literature for continual pre-training. After fine-tuning on millions of question-answer pairs from the Internet and various Chinese hospitals, the model achieves state-of-the-art performance in the field of Chinese medicine. ChatLaw [15], targeting the legal domain, not only provides professional responses concerning legal knowledge but also acquires problem-solving abilities through training on multiple-choice question data. Furthermore, it employs a method combining vector database retrieval with keyword search, effectively reducing hallucination in responses. EduChat [8] offers a range of functionalities, including open-ended question answering, paper assessment, and Socratic teaching, enhancing various skills through fine-tuning and the integration of tools. The model gains interdisciplinary knowledge through continual pre-training and strengthens its question-answering and instruction-following capabilities with large-scale instruction and open-domain dialogue datasets. FinGPT [16] adopts a data-centric approach, focusing on automated data management pipelines and lightweight adaptive technologies, establishing a comprehensive framework from data processing to feature engineering and application, while also enhancing the transparency of the overall framework. One of its strengths lies in its ability to integrate seamlessly with both open-source and closed-source large language models without further training. 2.2 Discrete prompt engineering Prompt engineering aims to guide large language models to fully leverage their potential through the meticulous design of prompts. Extensive research has demonstrated that well-crafted prompts can significantly improve the performance of large language models across various NLP tasks [27],[28].
Prompt engineering encompasses continuous prompt learning and discrete prompt optimization. Continuous prompt learning aims to adapt large language models to various tasks by incorporating learnable parameters within the prompts [29], [30]. However, continuous prompt learning typically requires access to the gradient vectors of the LLMs, which restricts its application in closed-source models that are accessed only through APIs. For discrete prompts, traditional methods often rely on meticulous manual design, which not only demands considerable human effort but also may not necessarily maximize the model\u2019s performance. Consequently, numerous methods for automatically generating optimal discrete prompts have been explored, leveraging the large model itself as an optimizer to autonomously enhance its performance in NLP tasks. Recently, several leading automated discrete prompt optimization frameworks have been proposed. EVOPROMPT[31] draws on the principles of evolutionary algorithms (EAs) to iteratively guide LLMs to generate new prompts through evolutionary operators. It does not require any gradient information from LLMs and can achieve a balance between exploration and exploitation. Experiments on nine datasets have shown that optimized prompts can significantly improve task performance. 
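A minimal sketch of such an evolve-and-judge loop follows; the `mutate` and `judge` callables stand in for LLM calls (rewriting a prompt, and LLM-as-Judge scoring of its responses), and the toy implementations below are illustrative assumptions, not the EVOPROMPT or CourseGPT-zh implementation:

```python
def optimize_prompt(seed_prompts, mutate, judge, rounds=5):
    # Iterative discrete prompt optimization: propose variants of the
    # current prompts (exploration), score them with a judge, and keep
    # the top-scoring ones (exploitation).
    population = list(seed_prompts)
    for _ in range(rounds):
        candidates = population + [mutate(p) for p in population]
        candidates.sort(key=judge, reverse=True)
        population = candidates[: len(seed_prompts)]
    return population[0]

# Toy stand-ins for the LLM calls: the "judge" rewards prompts that
# request conciseness and penalizes prompt length.
toy_judge = lambda p: ("concise" in p) * 10 - len(p.split())
toy_mutate = lambda p: p if "concise" in p else p + ", be concise"

best = optimize_prompt(["answer the question fully"], toy_mutate, toy_judge)
```

In a real pipeline the judge score would come from an LLM grading sampled responses against criteria such as accuracy and response length, so the loop trades off quality against verbosity as described above.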
APE [32], inspired by program synthesis, represents discrete prompting optimization as \f[Figure: CourseGPT-zh framework, turning an open-source pre-trained model into a course-oriented chat model; judge criteria include factual accuracy, user satisfaction, clarity, and condensability, with reflection and resample steps]" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04795v1.json b/abs_9K/test_abstract_short_2405.04795v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3db6b18c8a268f7293f626b646121ef9150a3ed3 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04795v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04795v1", + "title": "Variational Schr\u00f6dinger Diffusion Models", + "abstract": "Schr\\\"odinger bridge (SB) has emerged as the go-to method for optimizing\ntransportation plans in diffusion models. However, SB requires estimating the\nintractable forward score functions, inevitably resulting in the costly\nimplicit training loss based on simulated trajectories. To improve the\nscalability while preserving efficient transportation plans, we leverage\nvariational inference to linearize the forward score functions (variational\nscores) of SB and restore simulation-free properties in training backward\nscores. We propose the variational Schr\\\"odinger diffusion model (VSDM), where\nthe forward process is a multivariate diffusion and the variational scores are\nadaptively optimized for efficient transport. Theoretically, we use stochastic\napproximation to prove the convergence of the variational scores and show the\nconvergence of the adaptively generated samples based on the optimal\nvariational scores.
Empirically, we test the algorithm in simulated examples\nand observe that VSDM is efficient in generations of anisotropic shapes and\nyields straighter sample trajectories compared to the single-variate diffusion.\nWe also verify the scalability of the algorithm in real-world data and achieve\ncompetitive unconditional generation performance in CIFAR10 and conditional\ngeneration in time series modeling. Notably, VSDM no longer depends on warm-up\ninitializations and has become tuning-friendly in training large-scale\nexperiments.", + "authors": "Wei Deng, Weijian Luo, Yixin Tan, Marin Bilo\u0161, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Schr\\\"odinger bridge (SB) has emerged as the go-to method for optimizing\ntransportation plans in diffusion models. However, SB requires estimating the\nintractable forward score functions, inevitably resulting in the costly\nimplicit training loss based on simulated trajectories. To improve the\nscalability while preserving efficient transportation plans, we leverage\nvariational inference to linearize the forward score functions (variational\nscores) of SB and restore simulation-free properties in training backward\nscores. We propose the variational Schr\\\"odinger diffusion model (VSDM), where\nthe forward process is a multivariate diffusion and the variational scores are\nadaptively optimized for efficient transport. Theoretically, we use stochastic\napproximation to prove the convergence of the variational scores and show the\nconvergence of the adaptively generated samples based on the optimal\nvariational scores. 
Empirically, we test the algorithm in simulated examples\nand observe that VSDM is efficient in generations of anisotropic shapes and\nyields straighter sample trajectories compared to the single-variate diffusion.\nWe also verify the scalability of the algorithm in real-world data and achieve\ncompetitive unconditional generation performance in CIFAR10 and conditional\ngeneration in time series modeling. Notably, VSDM no longer depends on warm-up\ninitializations and has become tuning-friendly in training large-scale\nexperiments.", "main_content": "Introduction Diffusion models have showcased remarkable proficiency across diverse domains, spanning large-scale generation of image, video, and audio, conditional text-to-image tasks, and adversarial defenses (Dhariwal & Nichol, 2022; Ho et al., 2022; Kong et al., 2021; Ramesh et al., 2022; Zhang et al., 2024). *Equal contribution (Alphabetical). 1Machine Learning Research, Morgan Stanley; 2Peking University; 3Duke University; 4Meta AI (FAIR). Correspondence to: Wei Deng. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). The key to their scalability lies in the closed-form updates of the forward process, highlighting both statistical efficiency (Koehler et al., 2023) and diminished dependence on dimensionality (Vono et al., 2022). Nevertheless, diffusion models lack a distinct guarantee of optimal transport (OT) properties (Lavenant & Santambrogio, 2022) and often necessitate costly evaluations to generate higher-fidelity content (Ho et al., 2020; Salimans & Ho, 2022; Lu et al., 2022; Xue et al., 2023; Luo, 2023).
Alternatively, the Schrödinger bridge (SB) problem (Léonard, 2014; Chen & Georgiou, 2016; Pavon et al., 2021; Caluya & Halder, 2022; De Bortoli et al., 2021), initially rooted in quantum mechanics (Léonard, 2014), proposes optimizing a stochastic control objective through the use of forward-backward stochastic differential equations (FBSDEs) (Chen et al., 2022b). The alternating solver gives rise to the iterative proportional fitting (IPF) algorithm (Kullback, 1968; Ruschendorf, 1995) in dynamic optimal transport (Villani, 2003; Peyré & Cuturi, 2019). Notably, the intractable forward score function plays a crucial role in providing theoretical guarantees in optimal transport (Chen et al., 2023c; Deng et al., 2024). However, it simultaneously sacrifices the simulation-free property and largely relies on warm-up checkpoints for conducting large-scale experiments (De Bortoli et al., 2021; Chen et al., 2022b). A natural follow-up question arises: Can we train diffusion models with efficient transport? To this end, we introduce the variational Schrödinger diffusion model (VSDM). Employing variational inference (Blei et al., 2017), we perform a locally linear approximation of the forward score function, and denote it by the variational score. The resulting linear forward stochastic differential equations (SDEs) naturally provide a closed-form update, significantly enhancing scalability. Compared to the single-variate score-based generative model (SGM), VSDM is a multivariate diffusion (Singhal et al., 2023). Moreover, hyperparameters are adaptively optimized for more efficient transportation plans within the Schrödinger bridge framework (Chen et al., 2022b). arXiv:2405.04795v1 [cs.LG] 8 May 2024 \fTheoretically, we leverage stochastic approximation (Robbins & Monro, 1951) to demonstrate the convergence of the variational score to the optimal local estimators.
Although the global transport optimality is compromised, the notable simulation-free speed-ups in training the backward score render the algorithm particularly attractive for training various generation tasks from scratch. Additionally, the efficiency of simulation-based training for the linearized variational score significantly improves owing to computational advancements in convex optimization. We validate the strength of VSDM through simulations, achieving compelling performance on standard image generation tasks. Our contributions unfold in four key aspects:
• We introduce the variational Schrödinger diffusion model (VSDM), a multivariate diffusion with optimal variational scores guided by optimal transport. Additionally, the training of backward scores is simulation-free and becomes much more scalable.
• We study the convergence of the variational score using stochastic approximation (SA) theory, which can be further generalized to a class of state space diffusion models for future developments.
• VSDM is effective in generating data of anisotropic shapes and motivates straighter transportation paths via the optimized transport.
• VSDM achieves competitive unconditional generation on CIFAR10 and conditional generation in time series modeling without reliance on warm-up initializations.
2. Related Works Flow Matching and Beyond Lipman et al. (2023) utilized the McCann displacement interpolation (McCann, 1997) to train simulation-free CNFs to encourage straight trajectories. Consequently, Pooladian et al. (2023); Tong et al. (2023) proposed straightening by using minibatch optimal transport solutions. Similar ideas were achieved by Liu (2022); Liu et al. (2023) to iteratively rectify the interpolation path. Albergo & Vanden-Eijnden (2023); Albergo et al. (2023) developed the stochastic interpolant approach to unify both flow and diffusion models.
However, "straighter" transport maps may not imply optimal transportation plans in general, and the couplings are still not effectively optimized. Dynamic Optimal Transport Finlay et al. (2020); Onken et al. (2021) introduced additional regularization through optimal transport to enforce straighter trajectories in CNFs and reduce the computational cost. De Bortoli et al. (2021); Chen et al. (2022b); Vargas et al. (2021) studied the dynamic Schrödinger bridge with guarantees in entropic optimal transport (EOT) (Chen et al., 2023c); Shi et al. (2023); Peluchetti (2023); Chen et al. (2023b) generalized bridge matching and flow matching based on EOT and obtained smoother trajectories; however, scalability remains a significant concern for Schrödinger-based diffusions. 3. Preliminaries 3.1. Diffusion Models The score-based generative models (SGMs) (Ho et al., 2020; Song et al., 2021b) first employ a forward process (1a) to map data to an approximate Gaussian and subsequently reverse the process in Eq. (1b) to recover the data distribution:

$d\overrightarrow{x}_t = f_t(\overrightarrow{x}_t)\,dt + \sqrt{\beta_t}\,d\overrightarrow{w}_t$, (1a)
$d\overleftarrow{x}_t = \big[f_t(\overleftarrow{x}_t) - \beta_t \nabla \log \rho_t(\overleftarrow{x}_t)\big]\,dt + \sqrt{\beta_t}\,d\overleftarrow{w}_t$, (1b)

where $\overleftarrow{x}_t, \overrightarrow{x}_t \in \mathbb{R}^d$; $\overrightarrow{x}_0 \sim \rho_{\mathrm{data}}$ and $\overleftarrow{x}_T \sim \rho_{\mathrm{prior}}$; $f_t$ denotes the vector field and is often set to $0$ (a.k.a. VE-SDE) or linear in $x$ (a.k.a. VP-SDE); $\beta_t > 0$ is the time-varying scalar; $\overrightarrow{w}_t$ is a forward Brownian motion for $t \in [0, T]$ with $\rho_T \approx \rho_{\mathrm{prior}}$; $\overleftarrow{w}_t$ is a backward Brownian motion from time $T$ to $0$. The marginal density $\rho_t$ of the forward process (1a) is essential for generating the data but remains inaccessible in practice due to intractable normalizing constants.
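For intuition, the forward process in Eq. (1a) can be simulated with a simple Euler-Maruyama discretization; the scalar state, constant $\beta_t = 0.5$, and VP-style linear drift below are illustrative choices for this sketch, not the paper's setup:

```python
import math
import random

def euler_maruyama_forward(x0, f, beta, T=1.0, n_steps=100, rng=None):
    # Euler-Maruyama discretization of the forward SDE (1a):
    # dx_t = f_t(x_t) dt + sqrt(beta_t) dw_t (scalar state for simplicity).
    rng = rng or random.Random(0)
    dt = T / n_steps
    x = x0
    for i in range(n_steps):
        t = i * dt
        x = x + f(t, x) * dt + math.sqrt(beta(t) * dt) * rng.gauss(0.0, 1.0)
    return x

# VP-style linear drift f_t(x) = -0.5 * beta_t * x with a constant beta_t,
# so the process contracts toward an approximately Gaussian marginal.
beta = lambda t: 0.5
drift = lambda t, x: -0.5 * beta(t) * x
samples = [euler_maruyama_forward(5.0, drift, beta, rng=random.Random(s))
           for s in range(200)]
mean_T = sum(samples) / len(samples)  # close to 5 * exp(-0.25)
```

The SB formulations that follow replace this fixed drift with an additional learned control, which is exactly what makes their training simulation-based rather than closed-form.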
Explicit Score Matching (ESM) Instead, the conditional score function \u2207log \u03c1t|0 (\u00b7) \u2261\u2207log \u03c1t \u0000\u00b7|\u2212 \u2192 x 0 \u0001 is estimated by minimizing a user-friendly ESM loss (weighted by \u03bb) between the score estimator st \u2261s\u03b8(\u00b7, t) and exact score (Song et al., 2021b) such that Et \u0002 \u03bbtE\u2212 \u2192 x 0E\u2212 \u2192 x t|\u2212 \u2192 x 0[\u2225st(\u2212 \u2192 x t) \u2212\u2207log \u03c1t|0 \u0000\u2212 \u2192 x t \u0001 \u22252 2] \u0003 . (2) Notably, both VPand VE-SDEs yield closed-form expressions for any \u2212 \u2192 x t given \u2212 \u2192 x 0 in the forward process (Song et al., 2021b), which is instrumental for the scalability of diffusion models in real-world large-scale generation tasks. Implicit Score Matching (ISM) By integration by parts, ESM is equivalent to the ISM loss (Hyv\u00a8 arinen, 2005; Huang et al., 2021; Luo et al., 2024b) and the evidence lower bound (ELBO) follows log \u03c10 (x0) \u2265E\u03c1T |0(\u00b7) \u0002 log \u03c1T |0 (xT ) \u0003 \u22121 2 Z T 0 E\u03c1t|0(\u00b7) h \u03b2t \u2225st\u22252 2 + 2\u2207\u00b7 (\u03b2tst \u2212f t) i dt. ISM is naturally connected to Song et al. (2020), which supports flexible marginals and nonlinear forward processes but becomes significantly less scalable compared to ESM. 3.2. Schr\u00a8 odinger Bridge The dynamic Schr\u00a8 odinger bridge aims to solve a full bridge inf P\u2208D(\u03c1data,\u03c1prior) KL(P|Q), (3) 2 \fVariational Schr\u00a8 odinger Diffusion Models where D(\u03c1data, \u03c1prior) is the family of path measures with marginals \u03c1data and \u03c1prior at t = 0 and t = T, respectively; Q is the prior process driven by dxt = f t(xt)dt+\u221a2\u03b2t\u03b5d\u2212 \u2192 wt. It also yields a stochastic control formulation (Chen et al., 2021; Pavon et al., 2021; Caluya & Halder, 2022). inf u\u2208U E \u001a Z T 0 1 2\u2225ut(\u2212 \u2192 x t)\u22252 2dt \u001b s.t. 
d\u2212 \u2192 x t = h f t(\u2212 \u2192 x ) + p \u03b2tut(\u2212 \u2192 x ) i dt + p 2\u03b2t\u03b5d\u2212 \u2192 wt (4) \u2212 \u2192 x 0 \u223c\u03c1data, \u2212 \u2192 x T \u223c\u03c1prior, where U is the family of controls. The expectation is taken w.r.t \u2212 \u2192 \u03c1 t(\u00b7), which denotes the PDF of the controlled diffusion (4); \u03b5 is the temperature of the diffusion and the regularizer in EOT (Chen et al., 2023c). Solving the underlying Hamilton\u2013Jacobi\u2013Bellman (HJB) equation and invoking the time reversal (Anderson, 1982) with \u03b5 = 1 2, Schr\u00a8 odinger system yields the desired forward-backward stochastic differential equations (FBSDEs) (Chen et al., 2022b): d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2t\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t) i dt + p \u03b2td\u2212 \u2192 wt, (5a) d\u2190 \u2212 x t = \u0002 f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207log \u2190 \u2212 \u03c6 t(\u2190 \u2212 x t) \u0003 dt + p \u03b2td\u2190 \u2212 wt, (5b) where \u2212 \u2192 \u03c8 t(\u00b7)\u2190 \u2212 \u03c6 t(\u00b7) = \u2212 \u2192 \u03c1 t(\u00b7), \u03c10(\u00b7) \u223c\u03c1data, \u03c1T (\u00b7) \u223c\u03c1prior. To solve the optimal controls (scores) (\u2207log \u2212 \u2192 \u03c8 , \u2207log \u2190 \u2212 \u03c6 ), a standard tool is to leverage the nonlinear Feynman-Kac formula (Ma & Yong, 2007; Karatzas & Shreve, 1998; Chen et al., 2022b) to learn a stochastic representation. Proposition 1 (Nonlinear Feynman-Kac representation). Assume Lipschitz smoothness and linear growth condition on the drift f and diffusion g in the FB-SDE (5). Define \u2212 \u2192 y t = log \u2212 \u2192 \u03c8 t(xt) and \u2190 \u2212 y t = log \u2190 \u2212 \u03c6 t(xt). 
Then the stochastic representation follows \u2190 \u2212 y s = E \u0014 \u2190 \u2212 y T \u2212 Z T s \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)dt \f \f \f \f\u2212 \u2192 x s = xs \u0015 , \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)\u22611 2\u2225\u2190 \u2212 z t\u22252 2 + \u2207\u00b7 \u0000p \u03b2t\u2190 \u2212 z t \u2212f t \u0001 + \u03b6\u27e8\u2190 \u2212 z t, \u2212 \u2192 z t\u27e9, (6) where \u2212 \u2192 z t = \u221a\u03b2t\u2207\u2212 \u2192 y t, \u2190 \u2212 z t = \u221a\u03b2t\u2207\u2190 \u2212 y t, and \u03b6 = 1. 4. Variational Schr\u00a8 odinger Diffusion Models SB outperforms SGMs in the theoretical potential of optimal transport and an intractable score function \u2207log \u2212 \u2192 \u03c8 t(xt) is exploited in the forward SDE for more efficient transportation plans. However, there is no free lunch in achieving such efficiency, and it comes with three notable downsides: \u2022 Solving \u2207log \u2212 \u2192 \u03c8 t in Eq.(5a) for optimal transport is prohibitively costly and may not be necessary (Marzouk et al., 2016; Liu et al., 2023). \u2022 The nonlinear diffusion no longer yields closed-form expression of \u2212 \u2192 x t given \u2212 \u2192 x 0 (Chen et al., 2022b). \u2022 The ISM loss is inevitable and the estimator suffers from a large variance issue (Hutchinson, 1989). 4.1. Variational Inference via Linear Approximation FB-SDEs naturally connect to the alternating-projection solver based on the IPF (a.k.a. Sinkhorn) algorithm, boiling down the full bridge (3) to a half-bridge solver (Pavon et al., 2021; De Bortoli et al., 2021; Vargas et al., 2021). With P1 given and k = 1, 2, ..., we have: P2k := arg min P\u2208D(\u03c1data, \u00b7) KL(P\u2225P2k\u22121), (7a) P2k+1 := arg min P\u2208D(\u00b7, \u03c1prior) KL(P\u2225P2k). (7b) More specifically, Chen et al. 
(2022b) proposed a neural network parameterization to model (\u2190 \u2212 z t, \u2212 \u2192 z t) using (\u2190 \u2212 z \u03b8 t , \u2212 \u2192 z \u03c9 t ), where \u03b8 and \u03c9 refer to the model parameters, respectively. Each stage of the half-bridge solver proposes to solve the models alternatingly as follows \u2190 \u2212 L (\u03b8) = \u2212 Z T 0 E\u2212 \u2192 x t\u223d(5a) \u0014 \u03931(\u2190 \u2212 z \u03b8 t ; \u2212 \u2192 z \u03c9 t )dt \f \f \f \f\u2212 \u2192 x 0 = x0 \u0015 (8a) \u2212 \u2192 L (\u03c9) = \u2212 Z T 0 E\u2190 \u2212 x t\u223d(5b) \u0014 \u03931(\u2212 \u2192 z \u03c9 t ; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (8b) where \u03931 is defined in Eq.(6) and \u223ddenotes the approximate simulation parametrized by neural networks * However, solving the backward score in Eq.(8a) through simulations, akin to the ISM loss, is computationally demanding and affects the scalability in generative models. To motivate simulation-free property, we leverage variational inference (Blei et al., 2017) and study a linear approximation of the forward score \u2207log \u2212 \u2192 \u03c8 (x, t) \u2248Atx with f t(\u2212 \u2192 x t) \u2261\u22121 2\u03b2t\u2212 \u2192 x t, which ends up with the variational FB-SDE (VFB-SDE): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2t\u2212 \u2192 x t + \u03b2tAt\u2212 \u2192 x t \u0015 dt + p \u03b2td\u2212 \u2192 wt, (9a) d\u2190 \u2212 x t = \u0014 \u22121 2\u03b2t\u2190 \u2212 x t \u2212\u03b2t\u2207log \u2212 \u2192 \u03c1 t(\u2190 \u2212 x t) \u0015 dt + p \u03b2td\u2190 \u2212 wt, (9b) where t \u2208[0, T] and \u2207log \u2212 \u2192 \u03c1 t is the score function of (9a) and the conditional version is to be derived in Eq.(15). The half-bridge solver is restricted to a class of OU processes OU(\u03c1data, \u00b7) with the initial marginal \u03c1data. 
arg min P\u2208D(\u03c1data,\u00b7) KL(P\u2225P2k\u22121) \u21d2 arg min b P\u2208OU(\u03c1data,\u00b7) KL(b P\u2225P2k\u22121). *\u223c(resp. \u223d) denotes the exact (resp. parametrized) simulation. 3 \fVariational Schr\u00a8 odinger Diffusion Models By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we can expect the optimizer b P to be a local estimator of the nonlinear solution in (7a). Additionally, the loss function (8b) to learn the variational score At, where t \u2208[0, T], can be simplified to \u2212 \u2192 L (A) = \u2212 Z T 0 Ext\u223d(9b) \u0014 \u0393\u03b6(Atxt; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (10) where \u0393\u03b6 is defined in Eq.(6). Since the structure property \u2212 \u2192 \u03c8 t\u2190 \u2212 \u03c6 t = \u2212 \u2192 \u03c1 t in Eq.(5) is compromised by the variational inference, we propose to tune \u03b6 in our experiments. 4.2. Closed-form Expression of Backward Score Assume a prior knowledge of At is given, we can rewrite the forward process (9a) in the VFB-SDE and derive a multivariate forward diffusion (Singhal et al., 2023): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2tI + \u03b2tAt \u0015 \u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt = \u22121 2Dt\u03b2t\u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt, (11) where Dt = I \u22122At \u2208Rd\u00d7d is a positive-definite matrix \u2020. Consider the multivariate OU process (11). The mean and covariance follow d\u00b5t|0 dt = \u22121 2\u03b2tDt\u00b5t|0 (12a) d\u03a3t|0 dt = \u22121 2\u03b2t \u0000Dt\u03a3t|0 + \u03a3t|0D\u22ba t \u0001 + \u03b2tI. (12b) Solving the differential equations with the help of integration factors, the mean process follows \u00b5t|0 = e\u22121 2 [\u03b2D]tx0, (13) where [\u03b2D]t = R t 0 \u03b2sDsds. 
By matrix decomposition \u03a3t|0 = CtH\u22121 t (S\u00a8 arkk\u00a8 a & Solin, 2019), the covariance process follows that: \u0012Ct Ht \u0013 = exp \" \u0012\u22121 2[\u03b2D]t [\u03b2I]t 0 1 2[\u03b2D\u22ba]t \u0013 # \u0012\u03a30 I \u0013 , (14) where the above matrix exponential can be easily computed through modern computing libraries. Further, to avoid computing the expensive matrix exponential for highdimensional problems, we can adopt a diagonal and timeinvariant Dt. Suppose \u03a3t|0 has the Cholesky decomposition \u03a3t|0 = LtL\u22ba t for some lower-triangular matrix Lt. We can have a closed-form update that resembles the SGM. \u2212 \u2192 x t = \u00b5t|0 + Lt\u03f5, \u2020Dt = \u22122At \u2208Rd\u00d7d when the forward SDE is VE-SDE. where \u00b5t|0 is defined in Eq.(13) and \u03f5 is the standard ddimensional Gaussian vector. The score function follows \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t) = \u22121 2\u2207[(\u2212 \u2192 x t \u2212\u00b5t)\u22ba\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t)] = \u2212\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t) (15) = \u2212L\u2212\u22ba t L\u22121 t Lt\u03f5 := \u2212L\u2212\u22ba t \u03f5. Invoking the ESM loss function in Eq.(2), we can learn the score function \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t|\u2212 \u2192 x 0) using a neural network parametrization st(\u00b7) and optimize the loss function: \u2207A\u2225L\u2212\u22ba t \u03f5 \u2212st(xt)\u22252 2. (16) One may further consider preconditioning techniques (Karras et al., 2022) or variance reduction (Singhal et al., 2023) to stabilize training and accelerate training speed. Speed-ups via time-invariant and diagonal Dt If we parametrize Dt as a time-invariant and diagonal positivedefinite matrix, the formula (14) has simpler explicit expressions that do not require calling matrix exponential operators. We present such a result in Corollary 1. 
For the image generation experiment in Section 7.3, we use such a diagonal parametrization when implementing the VSDM. Corollary 1. If Dt = \u039b := diag(\u03bb), where \u03bbi \u22650, \u22001 \u2264 i \u2264d. If we denote the \u03c32 t := R t 0 \u03b2sds, then matrices Ct and Ht has simpler expressions with Ct = \u039b\u22121\b exp(1 2\u03c32 t \u039b) \u2212exp(\u22121 2\u03c32 t \u039b) \t Ht = exp(1 2\u03c32 t \u039b), which leads to CtH\u22121 t = \u039b\u22121\b I \u2212exp(\u2212\u03c32 t \u039b) \t . As a result, the corresponding forward transition writes \u00b5t|0 = exp(\u22121 2\u03c32 t \u039b)x0, Lt = \u039b\u22121 2 q I \u2212exp(\u2212\u03c32 t \u039b). In Corrolary 1 detailed in Appendix A, since the matrix \u039b = diag(\u03bb) is diagonal and time-invariant, the matrix exponential and square root can be directly calculated elementwise on each diagonal elements \u03bbi independently. 4.2.1. BACKWARD SDE Taking the time reversal (Anderson, 1982) of the forward multivariate OU process (11), the backward SDE satisfies d\u2190 \u2212 x t = (\u22121 2Dt\u03b2t\u2190 \u2212 x t \u2212\u03b2tst(\u2190 \u2212 x t))dt + p \u03b2td\u2190 \u2212 wt. (17) Notably, with a general PD matrix Dt, the prior distribution follows that xT \u223cN(0, \u03a3T |0)\u2021. We also note that the prior is now limited to Gaussian distributions, which is not a general bridge anymore. \u2021See the Remark on the selection of \u03c1prior in section B.1. 4 \fVariational Schr\u00a8 odinger Diffusion Models 4.2.2. PROBABILITY FLOW ODE We can follow Song et al. (2021b) and obtain the deterministic process directly: d\u2190 \u2212 x t = \u0012 \u22121 2Dt\u03b2t\u2190 \u2212 x t \u22121 2\u03b2tst(\u2190 \u2212 x t) \u0013 dt, (18) where xT \u223cN(0, \u03a3T |0) and the sample trajectories follow the same marginal densities \u2212 \u2192 \u03c1 t(xt) as in the SDE. 4.3. 
Adaptive Diffusion via Stochastic Approximation Our major goal is to generate high-fidelity data with efficient transportation plans based on the optimal A\u22c6 t in the forward process (11). However, the optimal A\u22c6 t is not known a priori. To tackle this issue, we leverage stochastic approximation (SA) (Robbins & Monro, 1951; Benveniste et al., 1990) to adaptively optimize the variational score A(k) t through optimal transport and simulate the backward trajectories. (1) Simulate backward trajectoriest {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 via the Euler\u2013Maruyama (EM) scheme of the backward process (17) with a learning rate h. (2) Optimize variational scores \b A(k) nh }N\u22121 n=0 : A(k+1) nh = A(k) nh \u2212\u03b7k+1\u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ), where \u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ) is the loss function (10) at time nh and is known as the random field. We expect that the simulation of backward trajectories {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 given s(k+1) nh helps the optimization of A(k+1) nh and the optimized A(k+1) nh in turn contributes to a more efficient transportation plan for estimating s(k+2) nh and simulating the backward trajectories {\u2190 \u2212 x (k+2) nh }N\u22121 n=0 . Trajectory Averaging The stochastic approximation algorithm is a standard framework to study adaptive sampling algorithms (Liang et al., 2007). Moreover, the formulation suggests to stabilize the trajectories (Polyak & Juditsky, 1992) with averaged parameters A (k) nh as follows A (k) nh = k X i=1 A(i) nh = \u0012 1 \u22121 k \u0013 A (k\u22121) nh + 1 k A(k) nh , where A (k) nh is known to be an asymptotically efficient (optimal) estimator (Polyak & Juditsky, 1992) in the local state space A by assumption A1. Exponential Moving Average (EMA) Despite guarantees in convex scenarios, the parameter space differs tremendously in different surfaces in non-convex state space A. 
Empirically, if we want to exploit information from multiple modes, a standard extension is to employ the EMA technique (Trivedi & Kondor, 2017): A (k) nh = (1 \u2212\u03b7)A (k\u22121) nh + \u03b7A(k) nh , where \u03b7 \u2208(0, 1). The EMA techniques are widely used empirically in diffusion models and Schr\u00a8 odinger bridge (Song & Ermon, 2020; De Bortoli et al., 2021; Chen et al., 2022b) to avoid oscillating trajectories. Now we are ready to present our methodology in Algorithm 1. Computational Cost Regarding the wall-clock computational time: i) training (linear) variational scores, albeit in a simulation-based manner, becomes significantly faster than estimating nonlinear forward scores in Schr\u00a8 odinger bridge; ii) the variational parametrization greatly reduced the number of model parameters, which yields a muchreduced variance in the Hutchinson\u2019s estimator (Hutchinson, 1989); iii) since we don\u2019t need to update At as often as the backward score model, we can further amortize the training of At. In the simulation example in Figure.9(b), VSDM is only 10% slower than the SGM with the same training complexity of backward scores while still maintaining efficient convergence of variational scores. 5. Convergence of Stochastic Approximation In this section, we study the convergence of A(k) t to the optimal A\u22c6 t , where t \u2208[0, T] \u00a7. The primary objective is to show the iterates (19) follow the trajectories of the dynamical system asymptotically: dAt = \u2207\u2212 \u2192 L t(At)ds, (20) where dAt ds = lim\u03b7\u21920 A(k+1) t \u2212A(k) t \u03b7 and \u2207\u2212 \u2192 L t(\u00b7) is the mean field at time t: \u2207\u2212 \u2192 L t(At) = Z X \u2207\u2212 \u2192 L t(At; \u2190 \u2212 x (\u00b7) t )\u2190 \u2212 \u03c1 t(d\u2190 \u2212 x (\u00b7) t ), (21) where X denotes the state space of data x and \u2207\u2212 \u2192 L t denotes the gradient w.r.t. 
At; \u2190 \u2212 \u03c1 t is the distribution of the continuous-time interpolation of the discretized backward SDE (22) from t = T to 0. We denote by A\u22c6 t one of the solutions of \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0. The aim is to find the optimal solution A\u22c6 t to the mean field \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0. However, we acknowledge that the equilibrium is not unique in general nonlinear dynamical systems. To tackle this issue, we focus our analysis around a neighborhood \u0398 of the equilibrium by assumption A1. After running sufficient many iterations with a small enough \u00a7We slightly abuse the notation and generalize A(k) nh to A(k) t . 5 \fVariational Schr\u00a8 odinger Diffusion Models Algorithm 1 Variational Schr\u00a8 odinger Diffusion Models (VSDM). \u03c1prior is fixed to a Gaussian distribution. \u03b7k is the step size for SA and h is the learning rate for the backward sampling of Eq.(17). \u03ben denotes the standard Gaussian vector at the sampling iteration n. The exponential moving averaging (EMA) technique can be used to further stabilize the algorithm. repeat Simulation-free Optimization of Backward Score Draw x0 \u223c\u03c1data, n \u223c{0, 1, \u00b7 \u00b7 \u00b7 , N \u22121}, \u03f5 \u223cN(0, I). Sample xnh|x0 \u223cN(\u00b5nh|0, \u03a3nh|0) by Eq.(13) and (14) given A(k) nh . Cache {\u00b5nh|0}N\u22121 n=0 and {L\u2212\u22ba nh }N\u22121 n=0 via Cholesky decomposition of {\u03a3nh}N\u22121 n=0 to avoid repeated computations. Optimize the score functions s(k+1) nh sufficiently through the loss function \u2207\u03b8\u2225L\u2212\u22ba nh \u03f5 \u2212s(k+1) nh (xnh)\u22252 2. Optimization of Variational Score via Stochastic Approximation (SA) Simulate the backward trajectory \u2190 \u2212 x (k+1) nh given A(k) nh via Eq.(22), where \u2190 \u2212 x (k+1) (N\u22121) \u223cN(0, \u03a3(k) (N\u22121)h|0). 
Optimize variational score A(k+1) nh using the loss function (10), where n \u2208{0, 1, \u00b7 \u00b7 \u00b7 , N \u22121}: A(k+1) nh = A(k) nh \u2212\u03b7k+1\u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ). (19) until Stage k = kmax Sample \u2190 \u2212 x 0 with stochastic (resp. deterministic) trajectories via the discretized Eq.(17) (resp. Eq.(18)). step size \u03b7k, suppose A(k) t \u2208\u0398 is somewhere near one equilibrium A\u22c6 t (out of all equilibrium), then by the induction method, the iteration tends to get trapped in the same region as shown in Eq.(32) and yields the convergence to one equilibrium A\u22c6 t . We also present the variational gap of the (sub)-optimal transport and show our transport is more efficient than diffusion models with Gaussian marginals. Next, we outline informal assumptions and sketch our main results, reserving formal ones for readers interested in the details in the appendix. We also formulate the optimization of the variational score At using stochastic approximation in Algorithm 2 in the supplementary material. Assumption A1 (Regularity). (Positive definiteness) For any t \u22650 and At \u2208A, Dt = I \u22122At is positive definite. (Locally strong convexity) For any stable local minimum A\u22c6 t with \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0, there is always a neighborhood \u0398 s.t. A\u22c6 t \u2208\u0398 \u2282A and \u2212 \u2192 L t is strongly convex in \u0398. By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we only make a mild assumption on a small neighborhood of the solution and expect the convergence given proper regularities. Assumption A2 (Lipschitz Score). For any t \u2208[0, T], the score \u2207log \u2212 \u2192 \u03c1 t is L-Lipschitz. Assumption A3 (Second Moment Bound). The data distribution has a bounded second moment. Assumption A4 (Score Estimation Error). We have bounded score estimation errors in L2 quantified by \u03f5score. 
We first use the multivariate diffusion to train our score estimators {s(k) t }N\u22121 n=0 via the loss function (16) based on the pre-specified A(k) t at step k. Similar in spirit to Chen et al. (2023a; 2022a), we can show the generated samples based on {s(k) t }N\u22121 n=0 are close in distribution to the ideal samples in Theorem 1. The novelty lies in the extension of single-variate diffusions to multi-variate diffusions. Theorem 1 (Generation quality, informal). Assume assumptions A1-A4 hold with a fixed A(k) t , the generated data distribution is close to the data distributions \u03c1data such that TV(\u2190 \u2212 \u03c1 (k) 0 , \u03c1data) \u2272exp(\u2212T) + ( \u221a dh + \u03f5score) \u221a T. To show the convergence of A(k) t to A\u22c6 t , the proof hinges on a stability condition such that the solution asymptotically tracks the equilibrium A\u22c6 t of the mean field (20). Lemma 2 (Local stability, informal). Assume the assumptions A1 and A2 hold. For \u2200t \u2208[0, T] and \u2200A \u2208\u0398, the solution satisfies a local stability condition such that \u27e8A \u2212A\u22c6 t , \u2207\u2212 \u2192 L t(A)\u27e9\u2273\u2225A \u2212A\u22c6 t \u22252 2. The preceding result illustrates the convergence of the solution toward the equilibrium on average. The next assumption assumes a standard slow update of the SA process, which is standard for theoretical analysis but may not be always needed in empirical evaluations. Assumption A5 (Step size). The step size {\u03b7k}k\u2208N is a positive and decreasing sequence \u03b7k \u21920, \u221e X k=1 \u03b7k = +\u221e, \u221e X k=1 \u03b72 k < +\u221e. 6 \fVariational Schr\u00a8 odinger Diffusion Models Next, we use the stochastic approximation theory to prove the convergence of A(k) t to an equilibrium A\u22c6 t . Theorem 2 (Convergence in L2). Assume assumptions A1-A5 hold. 
The variational score A(k) t converges to an equilibrium A\u22c6 t in L2 such that E[\u2225A(k) t \u2212A\u22c6 t \u22252 2] \u22642\u03b7k, where the expectation is taken w.r.t samples from \u2190 \u2212 \u03c1 (k) t . In the end, we adapt Theorem 1 again to show the adaptively generated samples are asymptotically close to the samples based on the optimal A\u22c6 t in Theorem 3, which quantifies the quality of data based on more efficient transportation plans. Theorem 3 (Generation quality of adaptive samples). Given assumptions A1-A5, the generated sample distribution at stage k is close to the exact sample distribution based on the equilibrium A\u22c6 t such that TV(\u2190 \u2212 \u03c1 \u22c6 0, \u03c1data) \u2272exp(\u2212T) + ( \u221a dh + \u03f5score + \u221a\u03b7k) \u221a T. 6. Variational Gap Recall that the optimal and variational forward SDEs follow d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2t\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t) i dt + p \u03b2td\u2212 \u2192 wt, d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2tA(k) t \u2212 \u2192 x t i dt + p \u03b2td\u2212 \u2192 wt, d\u2212 \u2192 x t = \u0002 f t(\u2212 \u2192 x t) + \u03b2tA\u22c6 t \u2212 \u2192 x t \u0003 dt + p \u03b2td\u2212 \u2192 wt, where we abuse the notion of \u2212 \u2192 x t for the sake of clarity and they represent three different processes. Despite the improved efficiency based on the ideal A\u22c6 t compared to the vanilla At \u22610, the variational score inevitably yields a sub-optimal transport in general nonlinear transport. We denote the law of the above processes by L, L(k), and L\u22c6. To assess the disparity, we leverage the Girsanov theorem to study the variational gap. Theorem 3 (Variational gap). Assume the assumption A2 and Novikov\u2019s condition hold. Assume f t and \u2207log \u2212 \u2192 \u03c8 t are Lipschitz smooth and satisfy the linear growth. 
The variational gap follows that KL(L\u2225L\u22c6) = 1 2 Z T 0 E \u0014 \u03b2t\u2225A\u22c6 t \u2212 \u2192 x t \u2212\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t)\u22252 2 \u0015 dt KL(L\u2225L(k)) \u2272\u03b7k + KL(L\u2225L\u22c6). Connections to Gaussian Schr\u00a8 odinger bridge (GSB) When data follows a Gaussian distribution, VSDM approximates the closed-form OT solution of Schr\u00a8 odinger bridge (Janati et al., 2020; Bunne et al., 2023). We refer readers to Theorem 3 (Bunne et al., 2023) for the detailed transportation plans. Compared to the vanilla At \u22610, we can significantly reduce the variational gap with KL(L\u2225L\u22c6) using proper parametrization and sufficient training. 7. Empirical Studies 7.1. Comparison to Gaussian Schrodinger Bridge VSDM is approximating GSB (Bunne et al., 2023) when both marginals are Gaussian distributions. To evaluate the solutions, we run our VSDM with a fixed \u03b2t \u22614 in Eq.(25) in Song et al. (2021b) and use the same marginals to replicate the VPSDE of the Gaussian SB with \u03b1t \u22610 and ct \u2261\u22122 in Eq.(7) in Bunne et al. (2023). We train VSDM with 20 stages and randomly pick 256 samples for presentation. We compare the flow trajectories from both models and observe in Figure 1 that the ground truth solution forms an almost linear path, while our VSDM sample trajectories exhibit a consistent alignment with trajectories from Gaussian SB. We attribute the bias predominantly to score estimations and numerical discretization. (a) GSB (b) VSDM Figure 1. Gaussian v.s. VSDM on the flow trajectories. 7.2. Synthetic Data We test our variational Schr\u00a8 odinger diffusion models (VSDMs) on two synthetic datasets: spiral and checkerboard (detailed in section D.2.1). We include SGMs as the baseline models and aim to show the strength of VSDMs on general shapes with straighter trajectories. 
As such, we stretch the Y-axis of the spiral data by 8 times and the X-axis of the checkerboard data by 6 times and denote them by spiral-8Y and checkerboard-6X, respectively. We adopt a monotone increasing {\u03b2nh}N\u22121 n=0 similar to Song et al. (2021b) and denote by \u03b2min and \u03b2max the minimum and maximum of {\u03b2nh}N\u22121 n=0 . We fix \u03b6 = 0.75 and \u03b2min = 0.1 and we focus on the study with different \u03b2max. We find that SGMs work pretty well with \u03b2max = 10 (SGM-10) on standard isotropic shapes. However, when it comes to spiral-8Y, the SGM-10 struggles to recover the boundary regions on the spiral-8Y data as shown in Figure 2 (top). Generations of Anisotropic Shapes To illustrate the effectiveness of our approach, Figure 2 (bottom) shows that VSDM-10 accurately reconstructs the edges of the spiral 7 \fVariational Schr\u00a8 odinger Diffusion Models and generates high-quality samples. 2.5 0.0 2.5 20 0 20 t=0.00 2.5 0.0 2.5 20 0 20 t=0.33 0 5 10 0 10 20 t=0.67 2.5 0.0 2.5 4 2 0 2 4 t=1.00 2.5 0.0 2.5 20 0 20 t=0.00 2.5 0.0 2.5 20 0 20 t=0.33 2.5 0.0 2.5 10 0 10 t=0.67 2.5 0.0 2.5 4 2 0 2 4 t=1.00 Figure 2. Variational Schr\u00a8 odinger diffusion models (VSDMs, bottom) v.s. SGMs (top) with the same hyperparameters (\u03b2max = 10). Straighter Trajectories The SGM-10 fails to fully generate the anisotropic spiral-8Y and increasing \u03b2max to 20 or 30 (SGM-20 and SGM-30) significantly alleviates this issue. However, we observe that excessive \u03b2max values in SGMs compromises the straightness and leads to inefficient transport, especially in the X-axis of spiral-8Y. 3 1 1 3 25 10 5 20 (a) SGM-10 3 1 1 3 25 10 5 20 (b) SGM-20 3 1 1 3 25 10 5 20 (c) SGM-30 3 1 1 3 25 10 5 20 (d) VSDM-10 Figure 3. Probability flow ODE via VSDMs and SGMs. SGM with \u03b2max = 10 is denoted by SGM-10 for convenience. 
Instead of setting excessive \u03b2max on both axes, our VSDM-10, by contrast, proposes conservative diffusion scales on the X-axis of spiral-8Y and explores more on the Y-axis of spiral-8Y. As such, we obtain around 40% improvement on the straightness in Figure 3 and Table 4. Additional insights into a similar analysis of the checkerboard dataset, convergence analysis, computational time, assessments of straightness, and evaluations via a smaller number of function evaluations (NFEs) can be found in Appendix D.2. 7.3. Image Data Modeling Experiment Setup In this experiment, we evaluate the performance of VSDM on image modeling tasks. We choose the CIFAR10 dataset as representative image data to demonstrate the scalability of the proposed VSDM on generative modeling of high-dimensional distributions. We refer to the code base of FB-SDE (Chen et al., 2022b) and use the same forward diffusion process as the EDM model (Karras et al., 2022). Since the training of VSDM alternates between forward and backward training, we build our implementations based on the open-source diffusion distillation code base (Luo et al., 2024a) \u00b6, which provides a high-quality empirical implementation of alternating training with the EDM model on CIFAR10 data. Figure 4. Unconditional generated samples from VSDM on CIFAR10 (32\u00d732 resolution) trained from scratch. To make the VSDM algorithm stable, we simplify the matrix Dt to be diagonal with learnable diagonal elements, which is the case as we introduced in Corollary 1. We train the VSDM model from scratch on two NVIDIA A100-80G GPUs for two days and generate images from the trained VSDM with the Euler\u2013Maruyama numerical solver with 200 discretized steps for generation. Performances. We measure the generative performances in terms of the Fr\u00e9chet Inception Distance (FID (Heusel et al., 2017), the lower the better), which is a widely used metric for evaluating generative modeling performances. 
Table 2 summarizes the FID values of VSDM along with other optimal-transport-based and score-based generative models on the CIFAR10 dataset (unconditional, without labels). The VSDM outperforms other optimal transport-based models with an FID of 2.28. This demonstrates that the VSDM has applicable scalability to model high-dimensional distributions. Figure 7.3 shows some non-cherry-picked unconditional generated samples from VSDM trained on the CIFAR10 dataset. Convergence Speed. To demonstrate the convergence speed of VSDM during training, we record the FID values in Table 1 for a training trial with no warm-up on the CIFAR10 dataset (unconditional). We use a batch size of 256 and a learning rate of 1e\u22124. We use the 2nd-order Heun numerical solver to sample. The result shows that VSDM has a smooth convergence performance. \u00b6See code in https://github.com/pkulwj1994/diff_instruct 8 \fVariational Schr\u00f6dinger Diffusion Models Table 1. CONVERGENCE SPEED OF FID VALUES FOR VSDM. K IMAGES 0 10K 20K 30K 40K 50K 100K 150K 200K CONVERGE FID\u2193(NFE=35) 406.13 13.13 8.65 6.83 5.66 5.21 3.62 3.29 3.01 2.28 Table 2. CIFAR10 EVALUATION USING SAMPLE QUALITY (FID SCORE). OUR VSDM OUTPERFORMS OTHER OPTIMAL TRANSPORT BASELINES BY A LARGE MARGIN. CLASS METHOD FID \u2193 OT VSDM (OURS) 2.28 SB-FBSDE (CHEN ET AL., 2022B) 3.01 DOT (TANAKA, 2019) 15.78 DGFLOW (ANSARI ET AL., 2020) 9.63 SGMS SDE (SONG ET AL. (2021B)) 2.92 SCOREFLOW (SONG ET AL., 2021A) 5.7 VDM (KINGMA ET AL., 2021) 4.00 LSGM (VAHDAT ET AL., 2021) 2.10 EDM (KARRAS ET AL., 2022) 1.97 7.4. Time Series Forecasting We use multivariate probabilistic forecasting as a real-world conditional modeling task. Let {(t1, x1), . . . , (tn, xn)}, x \u2208Rd, denote a single multivariate time series. Given a dataset of such time series we want to predict the next P values xn+1, . . . , xn+P . In probabilistic modeling, we want to generate forecasts from learned p(xn+1:n+P |x1:n). 
The usual approach is to have an encoder that represents a sequence x1:i with a fixed-size vector hi \u2208 Rh, \u2200i, and then parameterize the output distribution p(xi+1|hi). At inference time we encode the history into hn and sample the next value from p(xn+1|hn), then use xn+1 to get the updated hn+1 and repeat until we obtain xn+P . In previous works, the output distribution has been specified with copulas (Salinas et al., 2019) and denoising diffusion (Rasul et al., 2021). We augment our approach to allow conditional generation, which requires only changing the model to include the conditioning vector hi. For that we adopt the U-Net architecture. We use an LSTM neural network as the sequence encoder. We use three real-world datasets, as described in Appendix D.3. We compare to the SGM and the denoising diffusion approach from Rasul et al. (2021), which we refer to as DDPM. Table 3 shows that our method matches or outperforms the competitors. Figure 5 demonstrates conditional time series generation, and Figure 12 presents more details on the quality of the forecasts. 8." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04834v1.json b/abs_9K/test_abstract_short_2405.04834v1.json new file mode 100644 index 0000000000000000000000000000000000000000..ac29f7e989dfa --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04834v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04834v1", + "title": "FlexEControl: Flexible and Efficient Multimodal Control for Text-to-Image Generation", + "abstract": "Controllable text-to-image (T2I) diffusion models generate images conditioned\non both text prompts and semantic inputs of other modalities like edge maps.\nNevertheless, current controllable T2I methods commonly face challenges related\nto efficiency and faithfulness, especially when conditioning on multiple inputs\nfrom either the same or diverse modalities. 
In this paper, we propose a novel\nFlexible and Efficient method, FlexEControl, for controllable T2I generation.\nAt the core of FlexEControl is a unique weight decomposition strategy, which\nallows for streamlined integration of various input types. This approach not\nonly enhances the faithfulness of the generated image to the control, but also\nsignificantly reduces the computational overhead typically associated with\nmultimodal conditioning. Our approach achieves a reduction of 41% in trainable\nparameters and 30% in memory usage compared with Uni-ControlNet. Moreover, it\ndoubles data efficiency and can flexibly generate images under the guidance of\nmultiple input conditions of various modalities.", + "authors": "Xuehai He, Jian Zheng, Jacob Zhiyuan Fang, Robinson Piramuthu, Mohit Bansal, Vicente Ordonez, Gunnar A Sigurdsson, Nanyun Peng, Xin Eric Wang", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Controllable text-to-image (T2I) diffusion models generate images conditioned\non both text prompts and semantic inputs of other modalities like edge maps.\nNevertheless, current controllable T2I methods commonly face challenges related\nto efficiency and faithfulness, especially when conditioning on multiple inputs\nfrom either the same or diverse modalities. In this paper, we propose a novel\nFlexible and Efficient method, FlexEControl, for controllable T2I generation.\nAt the core of FlexEControl is a unique weight decomposition strategy, which\nallows for streamlined integration of various input types. This approach not\nonly enhances the faithfulness of the generated image to the control, but also\nsignificantly reduces the computational overhead typically associated with\nmultimodal conditioning. Our approach achieves a reduction of 41% in trainable\nparameters and 30% in memory usage compared with Uni-ControlNet. 
Moreover, it\ndoubles data efficiency and can flexibly generate images under the guidance of\nmultiple input conditions of various modalities.", "main_content": "Introduction In the realm of text-to-image (T2I) generation, diffusion models exhibit exceptional performance in transforming textual descriptions into visually accurate images. Such models show extraordinary potential across a plethora of applications, spanning content creation [1, 9, 43, 47, 51, 55, 65], image editing [4, 5, 12, 23, 31, 41, 43, 59, 70], and fashion design [7]. We propose a new unified method that tackles two problems in text-to-image generation: improving the training efficiency of T2I models with respect to memory usage, computational requirements, and the need for extensive datasets [48, 51, 54]; and improving their controllability, especially when dealing with multimodal conditioning, e.g., conditioning on multiple edge maps while simultaneously following the guidance of text prompts, as shown in Figure 1 (c). Controllable text-to-image generation models [42] often come at a significant training computational cost, with linear growth in cost and size when training with different conditions. Our approach can improve the training efficiency of existing text-to-image diffusion models and can flexibly handle different structural input conditions in a unified manner. We take cues from the efficient parameterization strategies prevalent in the NLP domain [26, 27, 44, 66] and the computer vision literature [20]. The key idea is to learn shared decomposed weights for varied input conditions, ensuring their intrinsic characteristics are conserved. 
Our method has several benefits: it not only achieves greater compactness [51], but also retains the full representation capacity to handle input conditions of various modalities; sharing weights across different conditions contributes to data efficiency; and the streamlined parameter space helps mitigate overfitting to singular conditions, thereby reinforcing the flexible control aspect of our model. Meanwhile, generating images from multiple homogeneous conditional inputs, especially when they present conflicting conditions or need to align with specific text prompts, is challenging. To further augment our model\u2019s capability to handle multiple inputs from either the same or diverse modalities, as shown in Figure 1, we introduce a new training strategy with two new loss functions that strengthen the guidance of the corresponding conditions. This approach, combined with our compact parameter optimization space, empowers the model to learn and manage multiple controls efficiently, even within the same category (e.g., handling two distinct segmentation maps and two separate edge maps). Our primary contributions are summarized below: \u2022 We propose FlexEControl, a novel text-to-image generation model for efficient controllable image generation that substantially reduces training memory overhead and model parameters through decomposition of weights shared across different conditions. \u2022 We introduce a new training strategy to improve the flexible controllability of FlexEControl. Compared with previous works, FlexEControl can generate new images conditioned on multiple inputs from diverse compositions of multiple modalities. \u2022 FlexEControl shows on-par performance with Uni-ControlNet [71] on controllable text-to-image generation with 41% fewer trainable parameters and 30% less training memory. Furthermore, FlexEControl exhibits enhanced data efficiency, effectively doubling the performance achieved with only half the amount of training data. (arXiv:2405.04834v1 [cs.CV] 8 May 2024) (Figure 1. (a) FlexEControl excels in training efficiency, achieving superior performance with just half the training data compared to its counterparts on (b) Controllable Text-to-Image Generation w. Different Input Conditions (one edge map and one segmentation map). (c) FlexEControl effectively conditions on two canny edge maps. The text prompt is Stormtrooper\u2019s lecture at the football field in both Figure (b) and Figure (c).) 2. Method The overview of our method is shown in Figure 2. In general, we use a copied Stable Diffusion encoder that accepts the structural conditional input, and perform efficient training via parameter reduction: first a Kronecker decomposition [67], then a low-rank decomposition over the updated weights of the copied encoder. To enhance the control from language and different input conditions, we propose a new training strategy with two newly designed loss functions. The details are given in the sequel. (Figure 2. Overview of FlexEControl: a decomposed green matrix is shared across different input conditions, significantly enhancing the model\u2019s efficiency. During training, we integrate two specialized loss functions to enable flexible control and to adeptly manage conflicting conditions. In the example depicted here, the new parameter size is efficiently condensed to 4 + 6n, where n denotes the number of decomposed matrix pairs.) 2.1. Preliminary We use Stable Diffusion 1.5 [51] in our experiments. 
This model falls under the category of Latent Diffusion Models (LDMs), which encode input images x into a latent representation z via an encoder E, such that z = E(x), and subsequently carry out the denoising process within the latent space Z. An LDM is trained with a denoising objective as follows: $L_{ldm} = \\mathbb{E}_{z,c,\\epsilon,t}\\left[ \\| \\hat{\\epsilon}_\\theta(z_t \\mid c, t) - \\epsilon \\|^2 \\right]$ (1), where (z, c) constitute data-conditioning pairs (comprising image latents and text embeddings), \u03f5 \u223c N(0, I), t \u223c Uniform(1, T), and \u03b8 denotes the model parameters. 2.2. Efficient Training for Controllable Text-to-Image (T2I) Generation Our approach is motivated by empirical evidence that Kronecker decomposition [67] effectively preserves critical weight information. We employ this technique to encapsulate the shared relational structures among different input conditions. We hypothesize that by amalgamating diverse conditions with a common set of weights, data utilization can be optimized and training efficiency improved. We focus on decomposing and fine-tuning only the cross-attention weight matrices within the U-Net [52] of the diffusion model, since recent works [33] show their dominance when customizing the diffusion model. As depicted in Figure 2, the encoder copied from Stable Diffusion accepts conditional input from different modalities. We posit that these modalities, being transformations of the same underlying image, share common information. Consequently, we hypothesize that the updated weights of this copied encoder, \u2206W, can be efficiently adapted within a shared decomposed low-rank subspace. (Figure 3. The visualization of decomposed shared \u201cslow\u201d weights (right image) for the single-condition case, where the input condition (left image) is the depth map and the input text prompt is Car. We took the average over the decomposed shared weights of the last cross-attention block across all attention heads in Stable Diffusion.) This leads to: $\\Delta W = \\sum_{i=1}^{n} H_i \\otimes (u_i v_i^{\\top})$ (2), where n is the number of decomposed matrices, $u_i \\in \\mathbb{R}^{(k/n) \\times r}$ and $v_i \\in \\mathbb{R}^{r \\times (d/n)}$, r is the rank of the matrix (a small number), $H_i$ are the decomposed learnable matrices shared across different conditions, and \u2297 is the Kronecker product operation. The low-rank decomposition ensures a consistent low-rank representation strategy. This approach substantially reduces trainable parameters, allowing efficient fine-tuning on downstream text-to-image generation tasks. The intuition for why Kronecker decomposition works for fine-tuning is partly rooted in the findings of [20, 40, 67]. These studies highlight how model weights can be broken down into a series of matrix products and thereby save parameter space. As shown in Figure 2, the original weight matrix is 6\u00d76, and is decomposed into a series of matrix products. When adapting this decomposition-based training approach to controllable T2I, the key lies in the shared weights, which, while being common across various conditions, retain most of the semantic information. For instance, the shared \u201cslow\u201d weights [61] of an image, combined with another set of \u201cfast\u201d low-rank weights, can preserve the original image\u2019s distribution without a loss in semantic integrity, as illustrated in Figure 3. This observation implies that updating the slow weights is crucial for adapting to diverse conditions. Following this insight, it becomes logical to learn a set of condition-shared decomposed weights in each layer, ensuring that these weights remain consistent across different scenarios. Data utilization and parameter efficiency are also improved. 2.3. Enhanced Training for Conditional Inputs We then discuss how to improve control under multiple input conditions of varying modalities with the efficient training approach. 
Dataset Augmentation with Text Parsing and Segmentation To optimize the model for scenarios involving multiple homogeneous (same-type) conditional inputs, we initially augment our dataset. We utilize a large language model (gpt-3.5-turbo) to parse texts in prompts containing multiple object entities. The parsing query is structured as: \u201cGiven a sentence, analyze the objects in this sentence; give me the objects if there are multiple.\u201d Following this, we apply CLIPSeg [39] (clipseg-rd64-refined version) to segment corresponding regions in the images, allowing us to divide structural conditions into separate sub-feature maps tailored to the parsed objects. Cross-Attention Supervision For each identified segment, we calculate a unified attention map, $A_i$, averaging attention across layers and the relevant N text tokens: $A_i = \\frac{1}{L} \\sum_{l=1}^{L} \\sum_{i=1}^{N} [\\![ T_i \\in T_j ]\\!] \\, CA_i^l$ (3), where $[\\![ \\cdot ]\\!]$ is the Iverson bracket, $CA_i^l$ is the cross-attention map for token i in layer l, and $T_j$ denotes the set of tokens associated with the j-th segment. The model is trained to predict noise for image-text pairs concatenated based on the parsed and segmented results. An additional loss term, designed to ensure focused reconstruction in areas relevant to each text-derived concept, is introduced. Inspired by [2], this loss is calculated as the Mean Squared Error (MSE) deviation from predefined masks corresponding to the segmented regions: $L_{ca} = \\mathbb{E}_{z,t}\\left[ \\| A_i(v_i, z_t) - M_i \\|_2^2 \\right]$ (4), where $A_i(v_i, z_t)$ is the cross-attention map between token $v_i$ and noisy latent $z_t$, and $M_i$ represents the mask for the i-th segment, which is derived from the segmented regions in our augmented dataset and appropriately resized to match the dimensions of the cross-attention maps. Masked Noise Prediction To ensure fidelity to the specified conditions, we apply a condition-selective diffusion loss that concentrates the denoising effort on conceptually significant regions. 
This focused loss function is applied solely to pixels within the regions delineated by the concept masks, which are derived from the non-zero features of the input structural conditions. Specifically, we set the masks to be binary, where non-zero feature areas are assigned a value of one [21] and areas lacking features are set to zero. Because of the sparsity of pose features for that condition, we use the all-ones mask. These masks serve to underscore the regions referenced in the corresponding text prompts: $L_{mask} = \\mathbb{E}_{z,\\epsilon,t}\\left[ \\| (\\epsilon - \\epsilon_\\theta(z_t, t)) \\odot M \\|_2^2 \\right]$ (5), where M represents the union of the binary masks obtained from the input conditions, $z_t$ denotes the noisy latent at timestep t, \u03f5 the injected noise, and $\\epsilon_\\theta$ the estimated noise from the denoising network (U-Net). The total loss function employed is: $L_{total} = L_{ldm} + \\lambda_{ca} L_{ca} + \\lambda_{mask} L_{mask}$ (6), with $\\lambda_{ca}$ and $\\lambda_{mask}$ set to 0.01. The integration of $L_{ca}$ and $L_{mask}$ ensures that the model focuses on reconstructing the conditional regions and attends to the guided regions during generation. 3. Experiments 3.1. Datasets In pursuit of our objective of achieving controlled Text-to-Image (T2I) generation, we employed the LAION improved aesthetics 6plus [57] dataset for our model training. Specifically, we meticulously curated a subset comprising 5,082,236 instances, undertaking the elimination of duplicates and applying filters based on criteria such as resolution and NSFW score. Given the targeted nature of our controlled generation tasks, the assembly of training data involved considerations of additional input conditions, specifically edge maps, sketch maps, depth maps, segmentation maps, and pose maps. The extraction of features from these maps adhered to the methodology expounded in [68]. 3.2. Evaluation Metrics We employ a comprehensive benchmark suite of metrics including mIoU [50], SSIM [60], mAP, MSE, FID [25], and CLIP Score [24, 46] 1. 
The details are given in the Appendix. 3.3. Experimental Setup In accordance with the configuration employed in Uni-ControlNet, we utilized Stable Diffusion 1.5 2 as the foundational model. Our model underwent training for a single epoch, employing the AdamW optimizer [32] with a learning rate set at 10\u22125. Throughout all experimental iterations, we standardized the dimensions of input and conditional images to 512 \u00d7 512. The fine-tuning process was executed on P3 AWS EC2 instances equipped with 64 NVIDIA V100 GPUs. For quantitative assessment, a subset comprising 10,000 high-quality images from the LAION improved aesthetics 6.5plus dataset was utilized. The resizing of input conditions to 512 \u00d7 512 was conducted during the inference process. 1https://github.com/jmhessel/clipscore 2https://huggingface.co/runwayml/stable-diffusion-v1-5 Table 1. Text-to-image generation efficiency comparison: FlexEControl shows substantial reductions in memory cost, trainable parameters, and training time, highlighting its improved training efficiency with the same model architecture. Training times are averaged over three runs up to 400 iterations for consistency. Models Memory Cost \u2193 # Params. \u2193 Training Time \u2193 Uni-ControlNet [71] 20.47GB 1271M 5.69 \u00b1 1.33 s/it LoRA [27] 17.84GB 1074M 3.97 \u00b1 1.27 s/it PHM [67] 15.08GB 819M 3.90 \u00b1 2.01 s/it FlexEControl (ours) 14.33GB 750M 2.15 \u00b1 1.42 s/it 3.3.1 Structural Input Condition Extraction We start with the processing of the various local conditions used in our experiments. To facilitate a comprehensive evaluation, we have incorporated a diverse range of structural conditions, each processed using specialized techniques: \u2022 Edge Maps: For generating edge maps, we utilized three distinct techniques: \u2013 Canny Edge Detector [6]: A widely used method for edge detection in images. 
\u2013 HED Boundary Extractor [63]: Holistically-Nested Edge Detection, an advanced technique for identifying object boundaries. \u2013 MLSD [17]: A method particularly designed for detecting multi-scale line segments in images. \u2022 Sketch Maps: We adopted a sketch extraction technique detailed in [58] to convert images into their sketch representations. \u2022 Pose Information: OpenPose [8] was employed to extract human pose information from images, which provides detailed body joint and keypoint information. \u2022 Depth Maps: For depth estimation, we integrated Midas [49], a robust method for predicting depth information from single images. \u2022 Segmentation Maps: Segmentation of images was performed using the method outlined in [62], which focuses on accurately segmenting various objects within an image. 3.4. Baselines In our comparative evaluation, we assess T2I-Adapter [42], PHM [67], Uni-ControlNet [71], and LoRA [27]. 3.5. Quantitative Results Table 1 highlights FlexEControl\u2019s superior efficiency compared to Uni-ControlNet. It achieves a 30% reduction in memory cost, lowers trainable parameters by 41% (from 1271M to 750M), and significantly reduces training time per iteration from 5.69s to 2.15s. (Figure 4. Qualitative comparison of FlexEControl and existing controllable diffusion models with multiple heterogeneous conditions. First row: FlexEControl effectively integrates both the segmentation and edge maps to generate a coherent image, while Uni-ControlNet and LoRA miss the segmentation map and Uni-Control generates a messy image. Second row: The input condition types are one depth map and one sketch map. FlexEControl produces a more faithful generation, while all three others generate the candle in the coffee. Text prompts: A coffee and the candle; A car is parking.) (Figure 5. Qualitative comparison of FlexEControl and existing controllable diffusion models with a single condition. Text prompt: A bed. The image quality of FlexEControl is comparable to existing methods and Uni-ControlNet + LoRA, while FlexEControl has much greater efficiency.) Table 2 provides a comprehensive comparison of FlexEControl\u2019s performance against Uni-ControlNet and T2I-Adapter across diverse input conditions. After training on a dataset of 5M text-image pairs, FlexEControl demonstrates comparable, if not superior, performance metrics compared to Uni-ControlNet and T2I-Adapter. Note that Uni-ControlNet is trained on a much larger dataset (10M text-image pairs from the LAION dataset). Although there is a marginal decrease in SSIM scores for sketch maps and mAP scores for poses, FlexEControl excels in other metrics, notably surpassing Uni-ControlNet and T2I-Adapter. This underscores our method\u2019s proficiency in enhancing efficiency and elevating overall quality and accuracy in controllable text-to-image generation tasks. To substantiate the efficacy of FlexEControl in enhancing training efficiency while upholding commendable model performance, and to ensure a fair comparison, an ablation study was conducted by training models on an identical dataset. Table 2. Quantitative evaluation of controllability and image quality for single structural conditional inputs. FlexEControl performs overall better while maintaining much improved efficiency. Models Canny (SSIM)\u2191 MLSD (SSIM)\u2191 HED (SSIM)\u2191 Sketch (SSIM)\u2191 Depth (MSE)\u2193 Segmentation (mIoU)\u2191 Poses (mAP)\u2191 FID\u2193 CLIP Score\u2191 T2I-Adapter [42] 0.4480 0.5241 90.01 0.6983 0.3156 27.80 0.4957 Uni-Control [45] 0.4977 0.6374 0.4885 0.5509 90.04 0.7143 0.2083 27.80 0.4899 Uni-ControlNet [71] 0.4910 0.6083 0.4715 0.5901 90.17 0.7084 0.2125 27.74 0.4890 PHM [67] 0.4365 0.5712 0.4633 0.4878 91.38 0.5534 0.1664 27.91 0.4961 LoRA [27] 0.4497 0.6381 0.5043 0.5097 89.09 0.5480 0.1538 27.99 0.4832 FlexEControl (ours) 0.4990 0.6385 0.5041 0.5518 90.93 0.7496 0.2093 27.55 0.4963 Table 3. Quantitative evaluation of controllability and image quality on FlexEControl along with its variants and Uni-ControlNet. For Uni-ControlNet, we implement multiple conditioning by adding the two homogeneous conditional images after passing them through the feature extractors. Models Canny (SSIM)\u2191 MLSD (SSIM)\u2191 HED (SSIM)\u2191 Sketch (SSIM)\u2191 Depth (MSE)\u2193 Segmentation (mIoU)\u2191 Poses (mAP)\u2191 FID\u2193 CLIP Score\u2191 Single Conditioning Uni-ControlNet 0.3268 0.4097 0.3177 0.4096 98.80 0.4075 0.1433 29.43 0.4844 FlexEControl (w/o Lca) 0.3698 0.4905 0.3870 0.4855 94.90 0.4449 0.1432 28.03 0.4874 FlexEControl (w/o Lmask) 0.3701 0.4894 0.3805 0.4879 94.30 0.4418 0.1432 28.19 0.4570 FlexEControl 0.3711 0.4920 0.3871 0.4869 94.83 0.4479 0.1432 28.03 0.4877 Multiple Conditioning Uni-ControlNet 0.3078 0.3962 0.3054 0.3871 98.84 0.3981 0.1393 28.75 0.4828 FlexEControl (w/o Lca) 0.3642 0.4901 0.3704 0.4815 94.95 0.4368 0.1405 28.50 0.4870 FlexEControl (w/o Lmask) 0.3666 0.4834 0.3712 0.4831 94.89 0.4400 0.1406 28.68 0.4542 FlexEControl 0.3690 0.4915 0.3784 0.4849 92.90 0.4429 0.1411 28.24 0.4873 
We trained FlexEControl along with its variants and Uni-ControlNet on a subset of 100,000 training samples from LAION improved aesthetics 6plus. When trained on identical data, FlexEControl performs better than Uni-ControlNet. The outcomes are presented in Table 3. Evidently, FlexEControl exhibits substantial improvements over Uni-ControlNet when trained on the same dataset. This underscores the effectiveness of our approach in optimizing data utilization, concurrently diminishing computational costs, and enhancing efficiency in the text-to-image generation process. To validate FlexEControl\u2019s effectiveness in handling multiple structural conditions, we compared it with Uni-ControlNet through human evaluations. Two scenarios were considered: multiple homogeneous input conditions (300 images, each generated with 2 canny edge maps) and multiple heterogeneous input conditions (500 images, each generated with 2 randomly selected conditions). Results, summarized in Table 4, reveal that FlexEControl was preferred by 64.00% of annotators, significantly outperforming Uni-ControlNet (23.67%). This underscores FlexEControl\u2019s proficiency with complex, homogeneous inputs. Additionally, FlexEControl demonstrated superior alignment with input conditions (67.33%) compared to Uni-ControlNet (23.00%). In scenarios with random heterogeneous conditions, FlexEControl was preferred for overall quality and alignment over Uni-ControlNet. Table 4. Human evaluation of FlexEControl and Uni-ControlNet under homogeneous and heterogeneous structural conditions, assessing both human preference and condition alignment. \u201cWin\u201d indicates FlexEControl\u2019s preference, \u201cTie\u201d denotes equivalence, and \u201cLose\u201d indicates Uni-ControlNet\u2019s preference. Results indicate that under homogeneous conditions, FlexEControl outperforms Uni-ControlNet in both human preference and condition alignment. Condition Type Metric Win Tie Lose Homogeneous Human Preference (%) 64.00 12.33 23.67 Condition Alignment (%) 67.33 9.67 23.00 Heterogeneous Human Preference (%) 9.80 87.40 2.80 Condition Alignment (%) 6.60 89.49 4.00 In addition to our primary comparisons, we conducted an additional quantitative evaluation of FlexEControl and Uni-ControlNet. This evaluation focused on assessing image quality under scenarios involving multiple conditions from both homogeneous and heterogeneous modalities. The findings are summarized in Table 5. FlexEControl consistently outperforms Uni-ControlNet in both categories, demonstrating lower FID scores for better image quality and higher CLIP scores for improved alignment with text prompts. Table 5. Quantitative evaluation of controllability and image quality in scenarios with multiple conditions from heterogeneous and homogeneous modalities for FlexEControl and Uni-ControlNet. The \u2018heterogeneous\u2019 category averages the performance across one Canny condition combined with six other different modalities. The \u2018homogeneous\u2019 category represents the average performance across seven identical modalities (three inputs). Condition Type Baseline FID\u2193 CLIP Score\u2191 Heterogeneous Uni-ControlNet 27.81 0.4869 FlexEControl 27.47 0.4981 Homogeneous Uni-ControlNet 28.98 0.4858 FlexEControl 27.65 0.4932 3.6. Qualitative Results We present qualitative results of FlexEControl under three different settings: single input condition, multiple heterogeneous conditions, and multiple homogeneous conditions, illustrated in Figure 5, Figure 4, and Figure 6, respectively. The results indicate that FlexEControl is comparable to baseline models when a single condition is input. However, with multiple conditions, FlexEControl consistently and noticeably outperforms other models. 
Particularly, under multiple homogeneous conditions, FlexEControl excels in generating overall higher-quality images that align more closely with the input conditions, surpassing other models. 4. Related Work FlexEControl is an instance of efficient training and controllable text-to-image generation. Here, we review efforts on efficient training aimed at reducing parameters and memory cost, and on controllable T2I. Efficient Training Prior work has proposed efficient training methodologies both for pretraining and fine-tuning. These methods have established their efficacy across an array of language and vision tasks. One such strategy is Prompt Tuning [35], where trainable prompt tokens are appended to pretrained models [22, 29, 30, 56]. These tokens can be added exclusively to input embeddings or to all intermediate layers [37], allowing for nuanced model control and performance optimization. Low-Rank Adaptation (LoRA) [27] is another innovative approach that introduces trainable rank decomposition matrices for the parameters of each layer. LoRA has exhibited promising fine-tuning ability on large generative models including diffusion models [19], indicating its potential for broader application. Furthermore, Adapters insert lightweight adaptation modules into each layer of a pretrained transformer [26, 53]. This method has been successfully extended across various setups [16, 42, 69], demonstrating its adaptability and practicality. Other approaches, including post-training model compression [14], facilitate the transition from a fully optimized model to a compressed version \u2013 either sparse [15], quantized [18, 36], or both. This methodology was particularly helpful for parameter quantization [13]. Different from these methodologies, our work puts forth a new unified strategy that aims to enhance the efficient training of text-to-image diffusion models by leveraging low-rank structure. 
Our proposed method integrates principles from these established techniques to offer a fresh perspective on training efficiency, adding to the rich tapestry of existing solutions in this rapidly evolving field. Controllable Text-to-Image Generation Recent developments in the text-to-image generation domain strive for more control over image generation, enabling more targeted, stable, and accurate visual outputs. Several models, like T2I-Adapter [42] and Composer [28], have emerged to enhance image generation following the semantic guidance of text prompts and multiple different structural conditional controls. However, existing methods struggle to deal with multiple conditions from the same modalities, especially when they conflict, e.g., multiple segmentation maps that must simultaneously follow the guidance of text prompts. Recent studies also highlight challenges in controllable text-to-image generation (T2I), such as the omission of objects in text prompts and mismatched attributes [3, 34], showing that current models struggle to handle controls from different conditions. Toward these, the Attend-and-Excite method [10] refines attention regions to ensure distinct attention across separate image regions. ReCo [64], GLIGEN [38], and Layout-Guidance [11] allow for image generation informed by bounding boxes and regional descriptions. Our work improves the model\u2019s controllability by proposing a new training strategy. 5." 
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04925v1.json b/abs_9K/test_abstract_short_2405.04925v1.json new file mode 100644 index 0000000000000000000000000000000000000000..8f314b443773251cafcc2bd1f8868801bf89e5a5 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04925v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04925v1", + "title": "The many colors of the TNG100 simulation", + "abstract": "We apply the 3D dust radiative transfer code SKIRT to the low-redshift\n($z\\leq0.1$) galaxy population in the TNG100 cosmological simulation, the\nfiducial run of the IllustrisTNG project. We compute global fluxes and spectral\nenergy distributions (SEDs) from the far-ultraviolet to the sub-millimeter for\n$\\approx\\,$60 000 galaxies. Our post-processing methodology follows the study\nof Tr\\v{c}ka et al. (2022) of the higher-resolution TNG50 simulation. We verify\nthat TNG100 reproduces observational luminosity functions at low redshifts to\nexcellent precision, unlike TNG50. Additionally, we test the realism of our\nTNG100 plus SKIRT fluxes by comparing various flux and color relations to data\nfrom the GAMA survey. TNG100 broadly reproduces the observed distributions, but\nwe predict ultraviolet colors that are too blue by $\\approx\\,$0.4 mag, possibly\nrelated to the extinction in the star-forming regions subgrid model not being\nselective enough. Furthermore, we find that the simulated galaxies exhibit\nmid-infrared fluxes elevated by up to $\\approx\\,$0.5 mag that we attribute to\noverly effective stochastic heating of the diffuse dust. 
All synthetic\nbroadband fluxes and SEDs are made publicly available in three orientations and\nfour apertures, and can readily be used to study TNG100 galaxies in a mock\nobservational fashion.", + "authors": "Andrea Gebek, Ana Tr\u010dka, Maarten Baes, Marco Martorano, Annalisa Pillepich, Anand Utsav Kapoor, Angelos Nersesian, Arjen van der Wel", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We apply the 3D dust radiative transfer code SKIRT to the low-redshift\n($z\\leq0.1$) galaxy population in the TNG100 cosmological simulation, the\nfiducial run of the IllustrisTNG project. We compute global fluxes and spectral\nenergy distributions (SEDs) from the far-ultraviolet to the sub-millimeter for\n$\\approx\\,$60 000 galaxies. Our post-processing methodology follows the study\nof Tr\\v{c}ka et al. (2022) of the higher-resolution TNG50 simulation. We verify\nthat TNG100 reproduces observational luminosity functions at low redshifts to\nexcellent precision, unlike TNG50. Additionally, we test the realism of our\nTNG100 plus SKIRT fluxes by comparing various flux and color relations to data\nfrom the GAMA survey. TNG100 broadly reproduces the observed distributions, but\nwe predict ultraviolet colors that are too blue by $\\approx\\,$0.4 mag, possibly\nrelated to the extinction in the star-forming regions subgrid model not being\nselective enough. Furthermore, we find that the simulated galaxies exhibit\nmid-infrared fluxes elevated by up to $\\approx\\,$0.5 mag that we attribute to\noverly effective stochastic heating of the diffuse dust. 
All synthetic\nbroadband fluxes and SEDs are made publicly available in three orientations and\nfour apertures, and can readily be used to study TNG100 galaxies in a mock\nobservational fashion.", "main_content": "INTRODUCTION Cosmological hydrodynamical simulations that emulate the assembly and evolution of thousands of galaxies have proven an indispensable tool to understand many facets of the observed galaxy population (Somerville & Dav\u00e9 2015; Vogelsberger et al. 2020a). Assessing the realism and reliability of cosmological simulations by comparing their outcome to observations is critical: a solid baseline agreement is necessary in order to draw meaningful conclusions from the simulations. Furthermore, discrepancies can be used to unveil gaps in our understanding of galaxy formation and evolution. However, comparing the simulated and observed galaxy populations comes with a major caveat: observations of galaxies only trace the light emitted by stellar populations (and partially reprocessed by dust and gas), while in simulations only the \u2018physical\u2019 parameters of the stellar populations and interstellar medium (such as masses and metallicities) are known. Tracking the radiation field in cosmological simulations is computationally prohibitive, unless the simulation run only covers the high-redshift regime (\ud835\udc67\u22735) and only a few wavelength bins are considered, as is done in the SPHINX (Rosdahl et al. 2018) or THESAN (Kannan et al. 2022) simulations. Comparing the simulated and observed galaxy populations in the \u2018physical\u2019 realm (e.g. the stellar mass function or the main sequence of star-forming galaxies) bears the main caveat that all physical properties need to be inferred from observations. (\u2605E-mail: andrea.gebek@ugent.be) Such retrievals of physical parameters sensitively depend on the adopted model used in e.g.
the SED fitting process, relying on simplified star-formation histories and dust-to-star geometries (Pacifici et al. 2023). As a cautionary example, the long-standing disagreement in the star-forming main sequence at 0.5 < \ud835\udc67< 3, with the simulated galaxy population offset to lower star-formation rates (Mitchell et al. 2014; Leja et al. 2015; Furlong et al. 2015; Tomczak et al. 2016; Donnari et al. 2019; Katsianis et al. 2020), could only recently be remedied with more sophisticated SED fitting methods (Nelson et al. 2021; Leja et al. 2022). As a complementary approach, it is therefore critical to move the simulated galaxies into the observational realm by postprocessing them with radiative transfer. This method circumvents any uncertainties in the parameter inference from observations (e.g. choice of free parameters and prior ranges), but requires a postprocessing scheme based on the stars and gas of the simulated galaxies that comes with its own caveats (e.g. choice of dust allocation recipe if dust is not modelled in the cosmological simulation). As a substantial fraction of the light emitted by stellar populations is reprocessed by dust and gas in the interstellar medium (Popescu & Tuffs 2002; Viaene et al. 2016; Bianchi et al. 2018), methods to solve for the transport of radiation are required. Since dust efficiently scatters and absorbs starlight at ultraviolet (UV) and optical wavelengths, Monte Carlo radiative transfer (MCRT) methods are generally used to accurately simulate the radiation field in galaxies, taking the 3D dust and stellar distributions into account. Using such MCRT methods, synthetic broadband fluxes and images for a large variety of cosmological simulations such as EAGLE (Camps et al. 2016; Trayford et al. 2017), SIMBA (Narayanan et al. 2021), AURIGA (Kapoor et al. 2021; Kapoor et al. in prep.), ARTEMIS (Camps et al.
2022), IllustrisTNG (Rodriguez-Gomez et al. 2019; Schulz et al. 2020; Vogelsberger et al. 2020b; Tr\u010dka et al. 2022; Popping et al. 2022; Costantin et al. 2023; Guzm\u00e1n-Ortega et al. 2023; Baes et al. 2024a; Bottrell et al. 2024), and NewHorizon (Jang et al. 2023) have been calculated and compared to observational data. In Tr\u010dka et al. (2022), broadband fluxes from the UV to the farinfrared (FIR) have been computed for a sample of \u223c14 000 galaxies at low redshift (\ud835\udc67\u22640.1) from the TNG50 simulation. Comparing the simulated fluxes to observational low-redshift luminosity functions (LFs), Tr\u010dka et al. (2022) found that the TNG50 LFs at all wavelengths exceed the observational estimates. Tr\u010dka et al. (2022) attribute this tension mostly to the subgrid parameter calibration in the IllustrisTNG project. As some of the physical processes cannot be resolved by cosmological simulations, these simulations typically rely on a number of subgrid parameters (e.g. the strength of feedback from active galactic nuclei) to reproduce some important galaxy statistics (e.g. the stellar mass-halo mass relation, Schaye et al. 2015; Kugel et al. 2023). In the case of the IllustrisTNG project, these subgrid parameters were chosen at the resolution of the fiducial TNG100 run and then left constant for other simulation runs. This leads to small systematic resolution-dependent differences in the outcomes of the various IllustrisTNG runs (see Appendices of Pillepich et al. 2018a,b, 2019 for more details). In this study, we want to test if the IllustrisTNG subgrid choices truly caused the discrepancies between TNG50 and observations found by Tr\u010dka et al. (2022). To this end, we apply the postprocessing method of Tr\u010dka et al. (2022) to the TNG100 simulation, the fiducial run of the IllustrisTNG simulation suite. Following Tr\u010dka et al. (2022), we apply the MCRT code SKIRT (Baes et al. 
2011; Camps & Baes 2015, 2020) to a stellar mass-limited sample of \u224860 000 TNG100 galaxies at \ud835\udc67= 0 and \ud835\udc67= 0.1 . We generate broadband fluxes in 53 broadband filters ranging from the GALEX far-UV (FUV) to the ALMA band 6 and low-resolution SEDs ranging from 0.1\u22122000 \ud835\udf07m with \ud835\udc45= 39 for this sample of TNG100 galaxies. To reveal potential biases in our postprocessing method and to further assess the realism of the cosmological simulation in the observational realm, we also explore different galaxy flux-flux and color-color relations over a large wavelength range. Since these relations trace the underlying distributions and scaling relations of physical properties (e.g. specific star-formation rate, dust mass, age), they provide an important testbed for the cosmological simulation plus radiative transfer postprocessing approach. This provides a complementary approach of assessing the simulation\u2019s realism, as the simulations are typically evaluated for their ability to reproduce the physical properties of the galaxy population inferred from observations (e.g. Dav\u00e9 et al. 2017; De Rossi et al. 2017; Torrey et al. 2019; Rosito et al. 2019; Nelson et al. 2021). The outline of this paper is as follows: We describe the cosmological simulation as well as the SKIRT postprocessing method in Section 2, and compare TNG100 LFs to observations in Section 3. We proceed by comparing the simulated fluxes to observational data from the GAMA survey in terms of flux-flux and color-color relations (Section 4), and summarize our results in Section 5. We adopt a flat \u039bCDM cosmology, with parameters measured by the Planck satellite (Planck Collaboration et al. 2016), consistent with the IllustrisTNG cosmology. We use the AB magnitude system (Oke 1971) throughout this study. 2 SIMULATION METHODS 2.1 IllustrisTNG The IllustrisTNG suite (Pillepich et al. 2018b; Springel et al. 2018; Nelson et al. 2018; Naiman et al. 
2018; Marinacci et al. 2018) is a set of cosmological, magnetohydrodynamical simulations run using the moving-mesh code AREPO (Springel 2010). The simulation suite consists of three different volumes with box sizes of approximately 50, 100, and 300 comoving Mpc, each realized with three to four different resolutions. All of these simulations were run with the same physical model, with the subgrid parameters chosen for the fiducial TNG100-1 run (hereafter \u2018TNG100\u2019), which is the highest-resolution run for the 100-cMpc box. Unlike in the EAGLE suite (Schaye et al. 2015), the subgrid parameters were not recalibrated for other IllustrisTNG simulations (at different resolutions and box sizes). For the cosmological parameters, the simulations use the 2015 results measured by the Planck satellite (Planck Collaboration et al. 2016), i.e. \u03a9\ud835\udc5a= 0.3089, \u03a9\ud835\udc4f= 0.0486, \u03a9\u039b = 0.6911, and \ud835\udc3b0 = 100 \u210e km s^\u22121 Mpc^\u22121 with \u210e= 0.6774. In the following, we briefly describe the aspects of IllustrisTNG and its galaxy formation model (Weinberger et al. 2017; Pillepich et al. 2018a) that are most relevant to this study. TNG100 simulates a cube with a box size of 110.7 comoving Mpc from \ud835\udc67= 127 to \ud835\udc67= 0. This volume is resolved with 1820^3 baryonic and dark matter particles, corresponding to mean particle masses of 1.4 \u00d7 10^6 \ud835\udc40\u2299 and 7.5 \u00d7 10^6 \ud835\udc40\u2299, respectively. Galaxies are identified as gravitationally bound substructures using the SUBFIND algorithm (Springel et al. 2001). Since molecular clouds cannot be resolved in the simulation, star formation is modelled stochastically for gas with \ud835\udc5bH > 0.106 cm^\u22123 according to the two-phase model of Springel & Hernquist (2003). Stellar populations are modelled with a Chabrier initial mass function (Chabrier 2003).
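As a back-of-the-envelope consistency check (our own sketch, using the standard value rho_crit,0 = 2.775 x 10^11 h^2 Msun Mpc^-3), the quoted mean baryon particle mass follows directly from the box size, particle count, and baryon density:

```python
# Cross-check of the TNG100 mean baryon particle mass quoted above
# (~1.4e6 Msun) from the cosmology and resolution stated in the text.
OMEGA_B, H = 0.0486, 0.6774
RHO_CRIT0 = 2.775e11 * H**2   # critical density today [Msun / Mpc^3]
BOX = 110.7                   # box side length [comoving Mpc]
N_GAS = 1820**3               # number of baryonic resolution elements

m_baryon = OMEGA_B * RHO_CRIT0 * BOX**3 / N_GAS
print(f"mean baryon particle mass: {m_baryon:.2e} Msun")  # ~1.4e6 Msun
```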
These star particles subsequently affect the surrounding interstellar medium (ISM) via metal enrichment as well as feedback from supernova explosions. The IllustrisTNG model furthermore incorporates gas radiative processes (including metal-line cooling and heating in an evolving UV background), the formation and merging of supermassive black holes, as well as feedback from active galactic nuclei in a thermal and a kinetic mode. In Tr\u010dka et al. (2022), we calculated broadband fluxes (in 53 filters) as well as low-resolution SEDs (at 387 wavelengths between 0.1 and 2000 \ud835\udf07m) for the TNG50-1 and the lower-resolution TNG50-2 simulations (Nelson et al. 2019b; Pillepich et al. 2019, see Table 1 for an overview of the different simulation resolutions) and publicly released them on the IllustrisTNG website1. These data are available at two snapshots (099 and 091), corresponding to \ud835\udc67= 0 and \ud835\udc67= 0.1, for all galaxies above a stellar mass threshold. The stellar mass threshold of 10^8 M\u2299 ensures that the galaxies are resolved by enough (\u227310^2) star particles for the radiative transfer postprocessing. We remark that we always use the stellar mass within two stellar half-mass radii for the simulation stellar masses (as opposed to, for instance, the total gravitationally bound stellar mass), which is available from the IllustrisTNG galaxy catalogue. With the present study, we add the same data products (i.e. broadband fluxes and low-resolution SEDs) for TNG100 at redshifts \ud835\udc67= 0 and \ud835\udc67= 0.1 for 61 076 galaxies with \ud835\udc40\u2605> 10^8.5 M\u2299 (we choose a higher stellar mass threshold for TNG100 due to the lower particle mass resolution compared to the TNG50 runs) to the database.
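The sample definition above (a threshold on the stellar mass within two stellar half-mass radii, combined with the exclusion of subhalos flagged as non-cosmological in the galaxy catalogue) can be sketched as follows; the array names are illustrative and do not correspond to the actual IllustrisTNG catalogue keys:

```python
import numpy as np

def select_sample(mstar_2rhalf, subhalo_flag, log_mstar_min=8.5):
    """Stellar-mass-limited sample: keep galaxies whose stellar mass
    (within two stellar half-mass radii) exceeds 10^log_mstar_min Msun
    and whose catalogue flag marks them as of cosmological origin."""
    mstar_2rhalf = np.asarray(mstar_2rhalf)   # [Msun]
    subhalo_flag = np.asarray(subhalo_flag)   # 0 = not of cosmological origin
    return (mstar_2rhalf >= 10**log_mstar_min) & (subhalo_flag != 0)

# Three toy subhalos: too light, acceptable, and flagged non-cosmological.
mask = select_sample([1e8, 5e8, 1e10], [1, 1, 0])
print(mask)  # only the second subhalo survives
```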
For the galaxy samples of all three simulations, subhalos that are flagged as being not of cosmological origin are excluded2. An overview of the different sample definitions and galaxy sample sizes is shown in Table 1. We caution that the chosen stellar mass thresholds are relatively low, meaning that the postprocessing results for the lowest-mass galaxies could be unreliable for TNG100-1 and TNG50-2. 1 https://www.tng-project.org/ MNRAS 000, 1\u201315 (2024) Table 1. Runs of the IllustrisTNG suite that we consider in this study. For each simulation, we list the volume, the target baryon mass (the resolution) and the stellar mass threshold (more specifically, the threshold on the stellar mass in two stellar half-mass radii) which defines the galaxy samples. \ud835\udc41gal indicates the number of galaxies (in the snapshots at \ud835\udc67= 0 and \ud835\udc67= 0.1) that conform to our sample selection criteria. Simulation | \ud835\udc49 [cMpc^3] | \ud835\udc5ab [M\u2299] | \ud835\udc40\u2605^min [M\u2299] | \ud835\udc41gal (\ud835\udc67= 0) | \ud835\udc41gal (\ud835\udc67= 0.1): TNG100-1 | 106.5^3 | 1.4 \u00d7 10^6 | 10^8.5 | 30 712 | 30 364; TNG50-1 | 51.7^3 | 8.5 \u00d7 10^4 | 10^8 | 7 375 | 7 302; TNG50-2 | 51.7^3 | 6.8 \u00d7 10^5 | 10^8 | 5 669 | 5 665. 2.2 Radiative transfer postprocessing The methodology for the radiative transfer postprocessing adopted here for TNG100 galaxies is exactly the same as in Tr\u010dka et al. (2022), which, in turn, is based on Camps et al. (2016, 2018) and Kapoor et al. (2021). We briefly summarize the main steps here and refer the reader to Tr\u010dka et al. (2022) for more details. We use the 3D dust MCRT code SKIRT (Baes et al. 2011; Camps & Baes 2015, 2020) to generate broadband fluxes over a large (UV-FIR) wavelength range. We simulate the emission of photon packets from evolved stellar populations as well as star-forming regions. The photon packets are then propagated through the dusty ISM, where they get absorbed and scattered.
Furthermore, the dust grains are stochastically heated and subsequently emit IR radiation (Camps et al. 2015). Finally, the photon packets are recorded in synthetic instruments that emulate different apertures, orientations, and broadband filters. In the following, we briefly describe the different components of the SKIRT simulations and how they are imported from IllustrisTNG. \u2022 Evolved stellar populations: All star particles with ages above 10 Myr are treated as evolved stellar populations. We model their SED using the Bruzual & Charlot (2003) template library with a Chabrier IMF. All parameters needed to model the emission of evolved stellar populations (positions, current masses, metallicities, ages, and smoothing lengths) are directly available from the IllustrisTNG snapshot data. \u2022 Star-forming regions: Star particles with ages below 10 Myr are modelled as star-forming regions, i.e. young stars that are still partially enshrouded within their dusty birth clouds. We use the template library MAPPINGS-III (Groves et al. 2008) to model their SED, which contains the light contribution from the young stellar population as well as nebular and dust emission. In addition to the positions, metallicities, and smoothing lengths, this template library has a number of parameters that are not directly available from the snapshot data. These are the star-formation rates (calculated as the initial mass of the star particle divided by its age), the ISM pressure (set to a constant value of \ud835\udc43/\ud835\udc58\ud835\udc35= 10^5 K cm^\u22123), and the compactness parameter (randomly sampled from a Gaussian distribution). Lastly, the photodissociation region (PDR) covering factor is calculated as 2 The IllustrisTNG subhalo finder sometimes falsely identifies baryonic fragments or clumps as galaxies. The IllustrisTNG galaxy catalogue (Nelson et al.
2019a) contains a flag that indicates if a subhalo is probably not of cosmological origin, in which case the \u2018SubhaloFlag\u2019 field is set to zero. We omit these objects from the postprocessing analysis. \ud835\udc53PDR = \ud835\udc52^(\u2212\ud835\udc61/\ud835\udf0f), with \ud835\udc61 being the age of the star particle and \ud835\udf0f a free parameter in the radiative transfer postprocessing scheme. \u2022 Diffuse dust: As IllustrisTNG does not track the dust content of the ISM, we assign dust to gas cells based on their metallicity. Specifically, we use the criterion of Torrey et al. (2012, 2019) to select dust-containing gas cells based on their temperature and mass density. This criterion separates the hot circumgalactic medium (CGM) from the ISM. While we do not assign dust to the CGM gas cells, the dust mass in all other cells is scaled to their metal masses, with the dust-to-metal ratio \ud835\udc53dust being a free parameter of the postprocessing scheme. All other parameters that control the diffuse dust (positions, mass densities, temperatures, and metallicities) are directly available from the snapshot data. For the optical properties of the diffuse dust, we use the THEMIS dust model from Jones et al. (2017). The dusty medium is discretised on an octree grid (Saftly et al. 2013, 2014) with a maximum subdivision level of twelve. The SKIRT postprocessing simulations are performed for a defined spatial domain. In our case, we use a cube with a side length of ten stellar half-mass radii, centered on the subhalo positions. Additionally, we consider only star particles within a sphere of radius five stellar half-mass radii3 for the postprocessing. Lastly, we use 5 \u00d7 10^7 photon packets to perform the radiative transfer simulations. In Tr\u010dka et al.
(2022), the free parameters \ud835\udf0f and \ud835\udc53dust were calibrated using a test sample of TNG50 galaxies, which were compared to low-redshift multiwavelength observational data from the DustPedia archive4 (Davies et al. 2017; Clark et al. 2018). Using various luminosity and color scaling relations, the default parameters were determined to be \ud835\udf0f= 3 Myr and \ud835\udc53dust = 0.2. We kept these parameters unchanged for the postprocessing of TNG100 galaxies. We have verified that the TNG100 galaxies exhibit similar behaviour to TNG50 on the scaling relations that were used to calibrate the free parameters. 2.3 Simulation products The main outputs of the radiative transfer postprocessing are broadband fluxes in 53 filters, from the UV (GALEX FUV) to the ALMA band 6. These fluxes are available for all galaxies in TNG100-1 (as well as TNG50-1 and TNG50-2, already presented by Tr\u010dka et al. 2022) above the stellar mass threshold (see Table 1), at redshifts 0 and 0.1. The broadband flux is given both in the galaxy rest-frame (in absolute AB magnitudes) and in the observational frame5 (in Jy). Additionally, we provide low-resolution SEDs (\ud835\udc45= 39) in the observational frame (in Jy) for all TNG100-1 galaxies in the base sample. All data are available in three different galaxy orientations (random, edge-on, and face-on) as well as four different circular apertures (with aperture radii of 10 kpc, 30 kpc, two stellar half-mass radii, and five stellar half-mass radii). 3 This ensures that we capture most of the starlight emitted by the galaxy. To test this more quantitatively, we compared half-light sizes derived by Baes et al. (2024b) for massive (\ud835\udc40\u2605\u226510^9.8 M\u2299) TNG50-1 galaxies to their half-mass sizes. The bluest available band (LSST u) shows the highest half-light to half-mass size ratios, with 28.3 % (3.47 %) of all galaxies having a half-light size larger than two (five) half-mass radii.
Hence, there is a sizeable fraction of galaxies for which we miss some starlight in the bluest optical and the UV filters, but we remark that our maximum aperture of five stellar half-mass radii is comparable to or larger than the observational apertures used in this paper (see Figure 2). 4 http://dustpedia.astro.noa.gr/ 5 For the data in the observational frame, the SKIRT instrument is placed at 20 Mpc for redshift zero or at the corresponding redshift for \ud835\udc67= 0.1. 3 GALAXY LUMINOSITY FUNCTIONS We begin by investigating low-redshift luminosity functions in various broadband filters. As in Tr\u010dka et al. (2022), we use the rest-frame magnitudes (which we convert into solar luminosities) for our main galaxy sample, which combines the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots. We use a default orientation (random) throughout this work and adopt an aperture of five stellar half-mass radii in Section 3, the default choice for the simulated LFs in Tr\u010dka et al. (2022). Since the observational LFs are thought to be representative of the local galaxy population, we do not mimic any observational selection effect (as is instead done and described in Section 4.2). In Tr\u010dka et al. (2022) (their figure 9), luminosity functions of the TNG50-1 simulation were found to overestimate the observational estimates in all filters and at all luminosities from the UV to the far-IR. At the bright end, this discrepancy can be mitigated by choosing a significantly smaller aperture (10 kpc instead of the default five stellar half-mass radii), but this value is less representative of observational apertures and does not resolve the tension for galaxies fainter than the knee of the luminosity functions. Tr\u010dka et al. (2022) found that the discrepancy is largely mitigated when using the lower-resolution TNG50-2 simulation.
Indeed, within the IllustrisTNG model, the resolution improvement from the fiducial TNG100 resolution to TNG50 results in somewhat larger galaxy masses and SFRs (Pillepich et al. 2018a,b, 2019; Donnari et al. 2019). We test this statement here explicitly by investigating the LFs of the TNG100 simulation, which is the fiducial resolution at which the subgrid parameters were chosen. We show the low-redshift luminosity functions for TNG100 in Figure 1. The observational estimates from various low-redshift surveys6 are equivalent to the ones from Tr\u010dka et al. (2022) (see their section 3.2.1 for more details), which are corrected to \u210e= 0.6774 to be consistent with the cosmological parameters of IllustrisTNG. We also include the LFs from the TNG50-1 (hereafter \u2018TNG50\u2019) simulation to highlight the convergence behaviour of the cosmological simulations. To avoid overcrowding the figure, TNG50-2 is not shown, but we note that the TNG50-2 LFs closely align with the TNG100-1 results, meaning that the LFs are converged with simulation box size. In Figure 1, the Poisson error for the simulated LFs is shown as a shaded area, and luminosity bins with fewer than ten galaxies are marked. We cut the calculation of the simulated LFs at a minimum luminosity to ensure that the shown LFs are complete. This minimum luminosity is calculated as the 90 % luminosity percentile in the lowest 5 % stellar mass bin. The number of galaxies above this luminosity threshold is noted in each panel for TNG100 and TNG50 separately. Figure 1 shows how the agreement between TNG100 and observational LFs improves compared to TNG50. In fact, the TNG100 LFs provide an excellent match to the observational data in the near-UV (NUV) and FIR bands. In the FUV, optical, and near-infrared (NIR) bands (GALEX FUV, SDSS, and UKIDSS filters), the faint ends and knees of the observed LFs are also precisely reproduced in TNG100.
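The completeness-limited luminosity functions described above can be sketched as follows (our own minimal sketch; the variable names and binning are illustrative, not the actual pipeline):

```python
import numpy as np

def luminosity_function(log_L, log_mstar, volume, bins):
    """Number density per dex of luminosity, with Poisson errors,
    computed only above a completeness limit taken as the 90% luminosity
    percentile within the lowest 5% stellar-mass bin (as in the text)."""
    log_L = np.asarray(log_L)
    log_mstar = np.asarray(log_mstar)

    # completeness limit from the least massive 5% of the sample
    low_mass = log_mstar <= np.percentile(log_mstar, 5)
    log_L_min = np.percentile(log_L[low_mass], 90)

    sel = log_L >= log_L_min
    counts, edges = np.histogram(log_L[sel], bins=bins)
    dlogL = np.diff(edges)
    phi = counts / (volume * dlogL)                # [cMpc^-3 dex^-1]
    phi_err = np.sqrt(counts) / (volume * dlogL)   # Poisson error
    return edges, phi, phi_err, log_L_min
```

Bins with fewer than ten galaxies would then be flagged, as in Figure 1.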
At the bright ends, TNG100 overestimates the observational estimates, but we note that in this regime there are also large differences across the observational datasets. As an example, the LFs in the SDSS filters from Loveday et al. (2012) are given in Petrosian apertures, while Driver et al. (2012) use Kron apertures. Even though both studies use data from the GAMA survey, the differences in the LFs reach almost an order of magnitude for the brightest luminosity bins (see also Hill et al. 2011 and Bernardi et al. 2013 for a discussion on this issue). 6 Specifically, we use LF data at \ud835\udc67\u22720.1 from the GALEX MIS (Budav\u00e1ri et al. 2005), GALEX AIS (Wyder et al. 2005), GAMA (Driver et al. 2012), SDSS (Loveday et al. 2012), SDSS + UKIDSS LAS + MGC redshift survey (Hill et al. 2010), H-ATLAS (Dunne et al. 2011), Planck ERCSC (Negrello et al. 2013), and Spitzer Data Fusion database + HerMES (Marchetti et al. 2016) surveys. For a detailed discussion of the impact of the aperture on the simulated LFs, we refer the reader to section 4 in Tr\u010dka et al. (2022). We conclude that, as suggested by Tr\u010dka et al. (2022), the way the subgrid parameters are chosen in the IllustrisTNG model (at the fiducial TNG100 resolution) indeed caused the discrepancy in the LFs for TNG50. Acknowledging observational uncertainties at the bright end related to aperture choices, the agreement between TNG100 and low-redshift observational LFs is excellent. 4 UV-SUBMM BROADBAND FLUXES: COMPARISON WITH GAMA To assess galaxy scaling relations and distributions in the observational realm, we continue by analyzing different flux-flux and color-color relations over a large wavelength range. As opposed to analyzing scaling relations in the physical realm, this analysis provides a complementary approach to assessing the simulation\u2019s realism. We also use these relations to evaluate the accuracy of, and reveal potential systematics in, our radiative transfer postprocessing scheme.
We only analyze TNG100 in this section, and refer the reader to Appendix A for a comparison to TNG50. We first detail the observational dataset in Section 4.1 and describe how we homogenize the observational and simulated galaxy samples in Section 4.2, before discussing the results for the flux-flux and color-color relations in Sections 4.3 and 4.4, respectively. 4.1 Observational data from GAMA The Galaxy and Mass Assembly (GAMA) survey (Driver et al. 2009, 2011; Liske et al. 2015; Baldry et al. 2018; Driver et al. 2022) is a spectroscopic survey of galaxies carried out in the optical with the AAOmega spectrograph mounted on the Anglo-Australian Telescope (AAT). The survey consists of five different fields with varying input catalogues (used for target selection), observing a total area of 286 deg^2. The most recent data release (DR4) of GAMA (Driver et al. 2022) contains spectra, spectroscopic redshifts, X-ray to FIR photometry from various other surveys7, as well as derived data such as stellar masses and rest-frame fluxes for some \u223c300 000 galaxies. All data used in this study are part of GAMA data release 4, described in Driver et al. (2022). Due to its large sample of low-redshift galaxies (\ud835\udc67\u22720.6) and large wavelength coverage of photometric data, the GAMA database provides an excellent observational sample to compare to the simulated photometric data from TNG100. The GAMA project consists of three phases, which differ in their target selection, as the input catalogues were updated with more recent photometric data from other surveys over time.
As the three equatorial fields (labelled G09, G12, and G15) observed as part of GAMA II have the highest availability of derived data products (importantly, those are the only galaxies within GAMA with matched-aperture photometry), we only use this dataset throughout this study. 7 Specifically, the GAMA database includes photometry from the XMM-XXL, GALEX, SDSS, KiDS, VIKING, WISE, and Herschel-ATLAS surveys. [Figure 1: fourteen panels (GALEX FUV and NUV; SDSS u, g, r, i, z; UKIDSS Y, J, H, K; SPIRE 250, 350, 500) plus the total infrared (TIR), each showing \u03a6 [cMpc^\u22123 dex^\u22121] versus log10 (L/L\u2299) for TNG100 and TNG50 together with the observational data.] Figure 1. Luminosity functions in 14 bands and the total infrared (TIR). Continuous lines mark the simulation results for TNG50 (blue) and TNG100 (red), for \ud835\udc67\u22640.1. The shaded area corresponds to the Poisson error, crosses mark luminosity bins with fewer than ten galaxies. The simulated luminosity functions are computed only above a completeness limit, see text for details. The number of simulated galaxies above this completeness limit is shown in each panel. Observational data are shown as various markers. The TNG100 LFs are in excellent agreement with the observations. The target selection is defined as having an apparent Petrosian \ud835\udc5f-band magnitude below 19.8 mag in SDSS DR7. This limit is the same for all three fields. For the analysis in this paper, we use various catalogues from the GAMA database, which we describe in this section. To select only galaxies that are part of the main GAMA II survey, we use the TilingCat v46 catalogue from the EqInputCat data management unit (DMU). Objects that are part of the main survey have a survey class of four or higher. We enforce this criterion for our GAMA sample. We use broadband fluxes from the LambdarCat v01 catalogue in the LambdarPhotometry DMU. In this catalogue, the fluxes are extracted using matched-aperture photometry with the Lambdar code (Wright et al. 2016). Lambdar measures the photometry given an input aperture (in this case, the apertures come from a combination of SExtractor (Bertin & Arnouts 1996) runs on the SDSS \ud835\udc5f- and VIKING \ud835\udc4d-band imaging as well as visual inspection) and performs aperture convolution, deblending, correction, and sky subtraction. The fluxes are available for the GALEX, SDSS, VISTA, WISE, PACS, and SPIRE bands. The fluxes are corrected for Milky Way extinction but not K-corrected, hence these are fluxes in the observational frame (as opposed to rest-frame fluxes). As we want to limit the observational galaxy sample in redshift, we also download redshift estimates from the DistanceFrames v14 catalogue in the LocalFlowCorrection DMU (Baldry et al. 2012). We use the redshifts from the Tonry flow model (Tonry et al. 2000), which equals the cosmic microwave background redshift for \ud835\udc67\u22650.03 and takes into account local flows at lower redshifts.
Following the documentation of this DMU, we impose z ≥ 0.002, as lower-redshift objects are potentially not galaxies. We also impose z ≤ 0.1 so as not to extrapolate our simulation results to higher redshift ranges. Only galaxies with a high-quality redshift (redshift flag of three or larger) are kept in our sample. We also impose a stellar mass limit (M★ ≥ 10^8.5 M⊙) on the GAMA galaxies, the same stellar mass limit as for our TNG100 galaxy sample. Stellar masses⁸ are obtained from the StellarMassesLambdar v20 catalogue in the StellarMasses DMU. These are inferred from SED fits to the Lambdar aperture photometry (see Taylor et al. 2011 for details). The cuts in survey class (≥4), redshift flag (≥3), redshift (0.002 ≤ z ≤ 0.1), and stellar mass (M★ ≥ 10^8.5 M⊙) lead to a base sample of 17 932 galaxies contained in the GAMA dataset.

We note that not all galaxies in this GAMA catalogue have detected broadband fluxes in all filters. The GAMA base sample is then cut further depending on the broadbands that are involved in a specific flux-flux or color-color plot. We first impose SNR cuts on all involved filters to ensure that the GAMA galaxies have reliable fluxes. Specifically, we discard all galaxies with SNR < 3 in any of the involved filters. In a second step, we want to define a flux threshold that broadly corresponds to a volume-limited sample. The same threshold can then be applied to the simulated galaxies to ensure a fair comparison. We noted that the GAMA galaxies exhibit noise distributions with outliers multiple orders of magnitude below the median, even after this SNR cut. This leads to some galaxies having very low flux values, which are not representative of the typical sensitivity of the respective surveys. Hence, we compute the 10 %-percentiles of the fluxes of the GAMA galaxies with SNR > 3 in each band, and use these fluxes as thresholds for the GAMA and TNG100 datasets. This means that in every flux-flux or color-color plot, if a GAMA or TNG100 galaxy has a flux below the threshold in any band, it is omitted from the plot. The flux thresholds are given in Table 2 for all filters considered in Figures 3 and 4.

We caution that the choice of SNR (3) and flux (10 %-percentile) thresholds is arbitrary. We have tested different strategies (changing the SNR and flux percentile values, or using only an SNR or only a flux percentile criterion), and find that the peaks and correlations of the distributions are hardly affected. On the other hand, the widths of the distributions are altered (e.g. lowering or dropping the SNR criterion primarily makes the GAMA distribution wider). We adopted the specific thresholds as a compromise between galaxy sample size and mitigating noise and incompleteness effects in GAMA. For our chosen thresholds, we find the GAMA noise levels moderate in the sense that the widths of the flux and color distributions of TNG100 and GAMA are similar, i.e. the intrinsic scatter in the galaxy population dominates over instrumental effects. Due to this ambiguity in SNR and flux thresholds, we focus the discussion in Sections 4.3 and 4.4 on the peaks and correlations of the distributions. Making firm statements about the scatter of the shown flux-flux and color-color relations would require adding realistic GAMA-like noise to the TNG100 galaxies, which is beyond the scope of this study.

⁸ Different stellar mass estimates exist in this GAMA table. While the TNG100 stellar masses would correspond to the sum of the GAMA stellar and remnant masses, we just consider the more commonly used stellar masses. Adding the remnant masses would shift the stellar masses by less than 0.1 dex. Furthermore, we correct the stellar masses to h = 0.6774, but do not perform an aperture correction.
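The two-step selection described above (a per-band SNR > 3 cut, then the 10 %-percentile of the surviving fluxes as a joint flux threshold for both datasets) can be sketched in a few lines of NumPy. This is a minimal sketch under an assumed array layout; the helper names (`flux_thresholds`, `joint_selection`) are ours, not from the paper's pipeline.

```python
import numpy as np

def flux_thresholds(flux, flux_err, snr_min=3.0, percentile=10.0):
    """Per-band flux threshold: the `percentile`-percentile of the fluxes of
    galaxies with SNR > snr_min in that band.
    flux, flux_err: arrays of shape (n_galaxies, n_bands)."""
    snr = flux / flux_err
    thresholds = np.full(flux.shape[1], np.nan)
    for j in range(flux.shape[1]):
        good = np.isfinite(snr[:, j]) & (snr[:, j] > snr_min)
        if good.any():
            thresholds[j] = np.percentile(flux[good, j], percentile)
    return thresholds

def joint_selection(flux, thresholds, bands):
    """Keep only galaxies above threshold in *all* bands involved in a
    given flux-flux or color-color relation."""
    mask = np.ones(flux.shape[0], dtype=bool)
    for j in bands:
        mask &= flux[:, j] > thresholds[j]
    return mask
```

Applying `joint_selection` with the same thresholds to the GAMA and TNG100 flux tables implements the "omitted from the plot" rule symmetrically for both samples.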
Lastly, we remark that aperture mismatches between the observed and simulated datasets can substantially bias the comparison. The distribution of GAMA apertures (given as the circularized radii⁹ of the elliptical aperture used by Lambdar) as a function of stellar mass for all 17 932 galaxies in the base sample is shown in Figure 2. These apertures are compared to the four different available apertures for the TNG100 data (10 kpc, 30 kpc, 2 or 5 stellar half-mass radii). We find that two stellar half-mass radii provide the closest match to the GAMA apertures, even though the TNG100 apertures are significantly smaller for all stellar masses below 10^11 M⊙ in that case. Hence, we adopt two stellar half-mass radii as our default aperture in Section 4.

4.2 Observational sensitivity limits for simulated galaxies

A major caveat when comparing observational and simulated datasets is that the galaxy samples can be very different. This caveat is usually mitigated by matching the samples in some physical properties like stellar masses or star-formation rates (e.g. Diemer et al. 2019; Donnari et al. 2021; Trčka et al. 2022; Goddy et al. 2023). However, this approach bears the problem that the observational and simulated definitions of those properties can be different¹⁰, and physical parameters inferred from observations come with their own caveats. Hence, we implement a different method to homogenize the galaxy samples. We base our method on the observational sensitivity limits in various filters, which determine the flux limits of the galaxies. We use these limits to filter out 'fainter' TNG100 galaxies which would lie below the observational detection threshold.
This approach is similar to postprocessing studies of semi-analytical models (SAMs) over large redshift ranges, which have been used to study galaxy clustering (Blaizot et al. 2005; Kitzbichler & White 2007). In this approach, the (periodic) simulation box at different snapshots is stacked many times to construct a sufficiently large volume and to calculate a mock lightcone. Unfortunately, such a mock lightcone construction requires the postprocessing of many different snapshots, which is feasible for the SAM postprocessing but prohibitive for our 3D dust radiative transfer modelling.

⁹ We use R_aperture = √(ab), with a and b the semi-major and semi-minor axes of the aperture, respectively.
¹⁰ As an example, the galaxy star-formation rate in the simulation is typically defined as the instantaneous SFR of the star-forming gas. On the other hand, in observations the SFR is determined from some tracer of young stellar populations, yielding the average SFR over a certain timescale.

Figure 2. Apertures of TNG100 galaxies (red) and objects from the GAMA survey, for 0.002 ≤ z ≤ 0.1 and M★ ≥ 10^8.5 M⊙. The GAMA apertures correspond to the circularized radii of the elliptical Lambdar apertures. For TNG100 we show the constant apertures (10 or 30 kpc) as dotted lines. The other available TNG100 apertures (2 or 5 stellar half-mass radii) and the GAMA apertures are displayed as running medians as a function of stellar mass. Shaded areas indicate the interquartile range (not shown for 5 stellar half-mass radii). Since an aperture of two stellar half-mass radii provides the closest match to the GAMA apertures, we adopt this as our default aperture for the TNG100 fluxes in Section 4.
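The circularized aperture radius of footnote 9, R_aperture = √(ab), and the running medians shown in Figure 2 can be sketched as follows. `running_median` is a hypothetical helper of ours, not the authors' code:

```python
import numpy as np

def circularized_radius(a, b):
    # R = sqrt(a*b), with a, b the semi-major/semi-minor axes of the
    # elliptical Lambdar aperture (footnote 9)
    return np.sqrt(np.asarray(a) * np.asarray(b))

def running_median(x, y, bins):
    """Median of y in bins of x, e.g. aperture radius as a function of
    log stellar mass (as plotted in Figure 2)."""
    idx = np.digitize(x, bins)
    return np.array([np.median(y[idx == i]) if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])
```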
Hence, we do not stack the simulation box at different snapshots, but rather place the friends-of-friends halos (FoF groups) of the z = 0 and z = 0.1 snapshots at arbitrary distances (within the redshift bounds of the observational sample, i.e. 0.002 < z < 0.1) from the mock observer. We assume that the halos are uniformly distributed in space, such that the comoving number density of halos n is constant:

n(D_c) = N(D_c, D_c + dD_c) / V(D_c, D_c + dD_c) = const = N_tot / V_tot. (1)

Here, N_tot denotes the total number of halos from TNG100 that are now distributed, N(D_c, D_c + dD_c) indicates the number of halos within a small comoving distance interval dD_c, and V(D_c, D_c + dD_c) corresponds to the volume of this comoving distance slice. The total comoving volume of the (mock) survey, V_tot, is given by the redshift limits z_min and z_max:

V_tot = (4π/3) (D_c(z_max)³ − D_c(z_min)³). (2)

The normalized probability distribution function for 'placing' a halo at a specific distance, p(D_c), can then be written as follows:

p(D_c) dD_c = N(D_c, D_c + dD_c) / N_tot = 4π D_c² dD_c / V_tot. (3)
With this procedure, we draw random redshifts within 0.002 ≤ z ≤ 0.1 for each TNG100 halo and then assign these random halo redshifts to all subhalos (i.e. galaxies) that belong to a particular halo. This is done independently for the z = 0 and z = 0.1 snapshots. We then compute the broadband flux F_ν^j(z) in any filter j of the galaxy at that arbitrary redshift. Since we need this flux in the observational frame, we cannot simply use the fluxes that we stored for the TNG100 galaxies (they are stored in the rest- and in the observational frame, but only at the fixed snapshot redshifts of 0 and 0.1). Hence we convolve the low-resolution SED F_ν(z_snap, λ) (which is stored for each galaxy in the observational frame at its snapshot redshift z_snap) with filter transmission curves¹¹ T_j(λ), accounting for the redshifting of the photons:

F_ν^j(z) = [∫ T_j(λ·k) · F_ν(z_snap, λ) · k dλ / ∫ T_j(λ·k) dλ] × D_l(z_snap)² / D_l(z)², (4)

with k = (1 + z)/(1 + z_snap), and D_l indicating the luminosity distance (for the z = 0 snapshot we use D_l = 20 Mpc, as the SKIRT instrument is placed at this distance). Placing TNG100 galaxies at arbitrary redshifts introduces inconsistencies due to galaxy evolution between the snapshot redshift from which they were extracted and the new redshift at which they are placed.
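Drawing distances from the quadratic distribution of Eq. (3) reduces to inverse-transform sampling of its cumulative distribution. A minimal sketch (our own helper, assuming the comoving distances D_min = D_c(z_min) and D_max = D_c(z_max) are already known):

```python
import numpy as np

def sample_comoving_distances(d_min, d_max, n, rng=None):
    """Draw n comoving distances with p(D_c) dD_c = 4*pi*D_c^2 dD_c / V_tot
    (Eq. 3), i.e. a constant comoving number density, via inverse-transform
    sampling: the CDF is (D_c^3 - d_min^3) / (d_max^3 - d_min^3)."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)  # uniform deviates in [0, 1)
    return np.cbrt(d_min**3 + u * (d_max**3 - d_min**3))
```

The sampled distances can then be converted back to redshifts with the assumed cosmology and assigned halo-wise, as described above.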
The unknown result without this systematic effect, which would be obtained if we had access to each galaxy at its random continuous redshift between 0.002 and 0.1, is bounded by the results using only one of the z = 0 and z = 0.1 snapshots. To estimate whether this inconsistency affects our results, we repeated our analysis using only the z = 0 and only the z = 0.1 snapshot, respectively. We find that none of our results are affected significantly¹².

The end product of this procedure is a set of observer-frame fluxes in Jansky (Eq. 4) for the entire z = 0 and z = 0.1 TNG100 galaxy sample, in all available 53 filters. These fluxes can be computed for continuous redshifts within arbitrary redshift intervals, and readily used to mimic observational sensitivity limits in various filters. We emulate the observational galaxy selection by distributing the TNG100 galaxies over the same redshift range (0.002 < z < 0.1) as the GAMA dataset. Consistent with the GAMA data, only TNG100 galaxies with fluxes above the thresholds from Table 2 are shown in Figures 3 and 4. Under the assumption that the GAMA data is complete (i.e. volume-limited) above these flux limits, this procedure mitigates sample selection effects and ensures a fair comparison of the TNG100 and GAMA galaxy samples.

4.3 Galaxy flux-flux relations

We compare the simulated and observed fluxes in six different flux-flux relations in Figure 3. We always consider the VISTA K band in combination with various other bands. The K band is a good tracer for stellar mass (Kauffmann & Charlot 1998; Bell & de Jong 2001),

¹¹ We obtained the filter transmission curves from the Spanish Virtual Observatory (SVO) filter profile service (http://svo2.cab.inta-csic.es/theory/fps/). For photon counter instruments, the transmission curves are multiplied by the wavelengths.
¹² The similarity of the TNG100 and GAMA distributions quantified by the 2D Kolmogorov-Smirnov test statistic D_KS never changes by more than 0.04 in Figures 3 and 4.
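Eq. (4) amounts to shifting the filter curve by k = (1+z)/(1+z_snap), convolving with the stored SED, and rescaling by the squared luminosity-distance ratio. A sketch of that single step, with the filter curve passed as a callable and a hand-rolled trapezoidal integral; the function name and signature are our own, not the authors' pipeline:

```python
import numpy as np

def band_flux_at_redshift(wave, f_nu, transmission, z, z_snap, dl_snap, dl_new):
    """Observer-frame band flux of Eq. (4): sample the filter curve T at
    wave*k, weight the stored SED F_nu(z_snap, wave) by k, normalize by the
    shifted filter area, and rescale by the luminosity-distance ratio squared."""
    def trapz(y, x):  # trapezoidal integration (avoids NumPy-version issues)
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    k = (1.0 + z) / (1.0 + z_snap)
    t_shift = transmission(wave * k)
    num = trapz(t_shift * f_nu * k, wave)
    den = trapz(t_shift, wave)
    return num / den * (dl_snap / dl_new) ** 2
```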
hence this analysis is analogous to various galaxy scaling relations as a function of stellar mass in the 'observational realm'.

Filter | Pivot wavelength [μm] | Flux limit [Jy]
GALEX FUV | 0.154 | 4.48 × 10⁻⁶
GALEX NUV | 0.230 | 6.36 × 10⁻⁶
SDSS u | 0.356 | 1.41 × 10⁻⁵
SDSS r | 0.618 | 5.76 × 10⁻⁵
VISTA J | 1.25 | 9.42 × 10⁻⁵
VISTA K | 2.21 | 1.00 × 10⁻⁴
WISE W1 | 3.39 | 1.26 × 10⁻⁴
WISE W3 | 12.6 | 4.02 × 10⁻⁴
WISE W4 | 22.3 | 4.25 × 10⁻³
PACS 100 | 101 | 5.39 × 10⁻²
SPIRE 250 | 253 | 2.10 × 10⁻²
SPIRE 500 | 515 | 2.15 × 10⁻²

Table 2. Flux limits for the various broadband filters used to construct flux-flux and color-color relations (Figures 3 and 4). These flux limits correspond to the 10 %-percentile of the GAMA fluxes with SNR > 3 in each filter. Only galaxies (for both the GAMA and TNG100 samples) which have fluxes above these thresholds in all involved filters for a specific flux-flux/color-color relation are plotted in this relation in Figures 3 and 4.

In Figures 3 and 4, we show the GAMA and TNG100 2D distributions as kernel density estimates (KDE), with the contours indicating various percentiles of the enclosed fraction of the galaxy population density. 1D histograms are shown on the sides, and observational errors are indicated by the grey ellipses in the upper left corner, where the darker (lighter) ellipse indicates the median (upper quartile) 1-σ observational error bar. The error bars are computed by propagating the flux uncertainties in quadrature, assuming that the flux uncertainties are uncorrelated with each other. The number of galaxies above the flux limits is given both for TNG100 and GAMA in the top right of each panel.
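The contour levels "enclosing a given percentile of the KDE density" can be computed by sorting the gridded density values and thresholding their normalized cumulative sum. A sketch (our own helper; the KDE evaluation itself is omitted):

```python
import numpy as np

def enclosed_density_levels(density, fractions=(0.05, 0.25, 0.60, 0.90)):
    """Return the density levels whose superlevel sets enclose the given
    fractions of the total mass of a gridded 2D density (e.g. a KDE map),
    as used for the contours in Figures 3 and 4."""
    flat = np.sort(np.asarray(density, dtype=float).ravel())[::-1]  # densest cells first
    cum = np.cumsum(flat) / flat.sum()
    # smallest density whose superlevel set holds each requested fraction
    return [flat[np.searchsorted(cum, f)] for f in fractions]
```

Passing the returned list to a contouring routine (e.g. as its `levels` argument, in increasing order) draws contours enclosing 5, 25, 60, and 90 % of the population density.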
To quantify the degree of agreement between the TNG100 and GAMA distributions, we also compute the two-dimensional Kolmogorov-Smirnov test statistic D_KS (Kolmogorov 1933; Smirnov 1948). This number is given in the top right of each panel, with lower numbers indicating a better agreement between the two distributions. We also discuss two alternative realizations of Figure 3 in the appendix. Figure A1 displays the exact same flux-flux relations, but using TNG50 instead of TNG100, to explore the impact of the simulation resolution. In Figure B1, we test the same flux-flux relations using a conditional KDE, i.e. exactly matching the TNG100 and GAMA VISTA K distributions.

All results in Figures 3 and 4 use a TNG100 aperture of two stellar half-mass radii, which is systematically smaller than the GAMA apertures (see Figure 2). To verify whether this aperture choice significantly affects our results, we have reproduced (but do not show) all flux-flux and color-color relations using a TNG100 aperture of five stellar half-mass radii. We find that the differences are minor (D_KS never changes by more than 0.05) and do not affect any of our conclusions.

4.3.1 VISTA K vs. GALEX FUV

The relation between galaxy stellar mass and star-formation rate is a fundamental galaxy evolution diagnostic (e.g. Popesso et al. 2023). We begin the TNG100-GAMA comparison by showing an analogue of this fundamental relation in the observational realm: VISTA K versus GALEX FUV luminosity (top left panel of Figure 3). The FUV luminosity is dominated by young stellar populations and hence traces SFR (modulo dust attenuation effects, e.g. Salim et al. 2007). The TNG100 and GAMA distributions in this flux-flux relation match to excellent precision (D_KS = 0.08), with both datasets showing the expected relation between stellar mass and SFR (the main sequence of star-forming galaxies, Noeske et al. 2007).
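The two-dimensional KS statistic has no unique definition; a common variant (Peacock 1983; Fasano & Franceschini 1987) maximizes the difference of the two samples' quadrant fractions over origins placed at the data points. A brute-force sketch of that variant, written by us — the authors' exact estimator may differ:

```python
import numpy as np

def ks2d(x1, y1, x2, y2):
    """Two-sample 2D KS distance: maximum |f1 - f2| over the four quadrants
    around each data point, averaged over the two choices of origin sample."""
    def max_diff(ox, oy):
        d = 0.0
        for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            f1 = np.mean((sx * (x1 - ox) > 0) & (sy * (y1 - oy) > 0))
            f2 = np.mean((sx * (x2 - ox) > 0) & (sy * (y2 - oy) > 0))
            d = max(d, abs(f1 - f2))
        return d
    d1 = max(max_diff(ox, oy) for ox, oy in zip(x1, y1))
    d2 = max(max_diff(ox, oy) for ox, oy in zip(x2, y2))
    return 0.5 * (d1 + d2)
```

Identical samples give 0, fully disjoint ones approach 1, matching the convention that lower D_KS means better agreement.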
We highlight that, while the IllustrisTNG model has been calibrated to reproduce several galaxy scaling relations (e.g. the stellar mass-halo mass relation, Pillepich et al. 2018a), the stellar mass-SFR relation was not invoked. On the other hand, the two free parameters of the radiative transfer postprocessing (the dust-to-metal ratio f_dust and the clearing timescale τ for the birth clouds of star-forming regions) have been calibrated to reproduce various flux and color relations from the DustPedia sample in Trčka et al. (2022), including a WISE W1-FUV relation which is very similar to the one presented here (see Section 4.3.3 for our reasoning why we replace WISE W1 with VISTA K as stellar mass tracer).

4.3.2 VISTA K vs. SDSS r

The SDSS r-band luminosity also traces stellar mass (e.g. Mahajan et al. 2018), but due to increased dust attenuation and variability with stellar age it is less often used as a direct stellar mass proxy compared to the K band (Bell et al. 2003). On the other hand, the stellar evolution templates in the NIR carry systematic uncertainties related to TP-AGB stars (Maraston et al. 2006; Taylor et al. 2011). We find that the r- and K-band fluxes correlate very tightly, in a similar fashion for both the GAMA and TNG100 data. The TNG100 galaxies are redder by ≈0.25 mag, which could be due to an overly effective dust attenuation in the r band. Comparatively older or more metal-rich stellar populations in TNG100 could also contribute to this discrepancy, but Nelson et al. (2018) find that the TNG100 galaxy ages and stellar metallicities broadly agree with observational SDSS data within the systematic uncertainties (see their figure 2). Lastly, we find that systematic uncertainties of the SED templates for the evolved stellar populations are of the order of ≈0.2 mag when testing different template libraries.

4.3.3 VISTA K vs. WISE W1
Since the WISE W1 flux traces the Rayleigh-Jeans tail of evolved stars, this band can also be used as a stellar mass estimate (e.g. Jarrett et al. 2013; Meidt et al. 2014; Jarrett et al. 2023; Sureshkumar et al. 2023). The comparison with GAMA fluxes reveals that there is a sizeable population of TNG100 galaxies above the GAMA distribution. While the 1D histograms indicate that this offset seems to be mostly due to the K-band flux being too low in TNG100, we caution that a strong selection effect is at play: only galaxies that have both K and WISE W1 fluxes above the thresholds shown in Table 2 are included in the plot. If the selection is dominated by the WISE W1 band, and if the TNG100 galaxies are systematically brighter in this band than GAMA galaxies (at a fixed K-band luminosity), then this 'WISE W1 excess' could manifest itself as a 'VISTA K deficiency' even if the TNG100 and GAMA K-band distributions would match exactly. This is because TNG100 galaxies which are comparatively faint in the K band can reach the required WISE W1 flux threshold, while GAMA galaxies at similar K-band luminosities would be discarded, leading to the shown offset in the 1D K-band distributions. To visualize the flux-flux relations under this assumption of perfectly matching K-band luminosity distributions between TNG100 and GAMA, we show the results of a conditional KDE in Figure B1.

We find that the offset from the GAMA distribution strongly correlates with the number of star-forming regions (stellar populations with ages below 10 Myr, which we model with the MAPPINGS-III templates) relative to the number of evolved stellar populations. At first sight, this suggests that the MAPPINGS-III templates are the cause of the excess WISE W1 emission for the TNG100 galaxies.
However, we found that the contribution of star-forming regions to the WISE W1 flux is small, typically below 5 %. Instead, we suggest that emission from the diffuse dust causes the elevated WISE W1 fluxes, as the diffuse dust contribution also strongly correlates with the offset from the GAMA distribution and reaches values up to 70 %. Upon inspection of the simulated TNG100 spectra, we find emission features at the WISE W1 band which correspond to the 3.3-micron polycyclic aromatic hydrocarbon (PAH) feature (Tokunaga et al. 1991; Kim et al. 2012). It seems plausible that it is this PAH emission which causes the excess WISE W1 fluxes for the TNG100 galaxies, but whether this originates in overly emissive PAH dust in the THEMIS dust mix or in MAPPINGS-III templates that are overly effective in stochastically heating the surrounding diffuse dust remains unclear.

4.3.4 VISTA K vs. WISE W3

Since the WISE W3 band predominantly traces the PAH emission from PDRs (Kapoor et al. 2023), this flux is used as an alternative tracer for star-formation rate which is unaffected by dust attenuation (e.g. Cluver et al. 2017; Elson et al. 2019; Naluminsa et al. 2021; Sureshkumar et al. 2023). Similarly as in Section 4.3.1, we see the star-forming main sequence with similar slopes in the TNG100 and GAMA data, but with a clearer separation between the star-forming and quiescent galaxy populations compared to the K-FUV relation. The TNG100 galaxies populating the sequence in the bottom right corner, with WISE W3 luminosities 1.5 dex below the main sequence, are all devoid of star-forming gas (i.e. have zero star-formation rate). This population of quiescent galaxies is also seen in the GAMA data. On the other hand, the star-forming TNG100 galaxies are slightly offset towards the top left corner. The 1D WISE W3 distributions match to great precision, but the TNG100 VISTA K luminosities seem to be offset to lower values compared to the GAMA data.
This is exactly the same effect as discussed in Section 4.3.3 (a TNG100 excess in WISE W3 flux disguised as a deficiency in K-band flux due to selection effects), and we speculate that it also has the same origin (an excess of PAH emission from the diffuse dust component), as the diffuse dust emission contributes at least ≈60 % of the WISE W3 flux for all star-forming galaxies.

4.3.5 VISTA K vs. PACS 100

Since the FIR dust emission peak is usually encompassed by the 100 and 160 μm bands (Cortese et al. 2014), the PACS 100 flux traces relatively warm dust. The correlation of this flux with the K band exhibits a similar slope and scatter in the TNG100 and GAMA distributions. The TNG100 PACS 100 fluxes are systematically smaller than the GAMA fluxes, but the offset is very small (≈0.1 dex). We note that for this and the next panel involving FIR fluxes, the galaxy samples shrink substantially (c.f. the GAMA and TNG100 base samples of 17 932 and 61 076 galaxies, respectively).

4.3.6 VISTA K vs. SPIRE 500

The SPIRE 500 band traces relatively cold dust, and can be used as a dust mass proxy since the dust budget in the ISM is dominated by cold dust (T ≲ 25 K, Dunne & Eales 2001) and the SPIRE 500 flux is less affected by dust temperature variations than, for instance, the SPIRE 250 flux (Galametz et al. 2012). Hence, the correlation between K and SPIRE 500 flux is a purely observational counterpart of the physical non-linear relation between stellar and cold dust mass (e.g. Cortese et al. 2012). While we find that the TNG100 and GAMA flux distributions broadly agree in this flux-flux relation, there is a sizable population of GAMA galaxies at low K-band luminosities (L_K ∼ 10^9 L⊙) with substantially elevated SPIRE 500 fluxes (by approximately one order of magnitude).
When replacing the SPIRE 500 with the SPIRE 250 band, we find a better agreement (D_KS = 0.13); moreover, the 2D KDE contours do not show a population of elevated SPIRE 250 fluxes for the GAMA galaxies. The SPIRE 500 band is (unlike the SPIRE 250 band) susceptible to submillimeter (submm) excess. This excess flux could be due to very cold dust shielded from starlight or changes in the emission properties of the dust grains at submm wavelengths, but its exact origin remains unknown (Kirkpatrick et al. 2013; Hermelo et al. 2016). As the cold (T ≲ 8000 K) ISM is not modelled explicitly in the IllustrisTNG model but treated according to the two-phase model of Springel & Hernquist (2003), the lack of a cold ISM component could explain the absence of this galaxy population with elevated SPIRE 500 fluxes in TNG100. However, the SPIRE 500 fluxes are also known to suffer more from source confusion (Rigby et al. 2011) and are less reliable than the SPIRE 250 fluxes. We tested a more stringent SNR criterion of five (instead of three), which mostly affects the SPIRE 500 band. We find that the population of GAMA galaxies with elevated SPIRE 500 fluxes vanishes almost completely in this case¹³, with D_KS never changing by more than 0.07 for any of the relations in Figures 3 and 4. Hence, this particular tension is not robust and could be due to observational uncertainties.

4.4 Galaxy color-color relations

We show four different color-color relations in Figure 4. The galaxy samples are determined in the same way as in Figure 3, i.e. they are derived from the GAMA and TNG100 base samples by imposing SNR and flux thresholds on each band involved in a specific color-color relation. An alternative realization of Figure 4 using TNG50 instead of TNG100 is shown in Figure A2.

4.4.1 (SDSS r − VISTA J) vs. (SDSS u − SDSS r)
This color-color relation emulates the commonly used UVJ diagram (using the V−J and U−V Johnson filters), which is relevant due to its capability of separating the star-forming and quiescent galaxy populations observationally (Williams et al. 2009; Whitaker et al. 2010; Patel et al. 2012; see Leja et al. 2019 for some limitations of the UVJ diagram). While dust attenuation shifts galaxies towards the top right of the UVJ diagram, quiescent galaxies appear as a distinct population that is offset towards the top left. The UVJ diagram has also been studied in postprocessed cosmological simulations for TNG100 (Donnari et al. 2019; Nagaraj et al. 2022), TNG50 (Baes et al. 2024a), and SIMBA (Akins et al. 2022). Using a raytracing postprocessing method developed by Nelson et al. (2018), Donnari et al. (2019) derive the rest-frame UVJ diagram for TNG100 at z = 0 and find that it is broadly consistent with observational data,

¹³ The impact on any of the other results is minor when using the more stringent criterion of SNR > 5.
Figure 3. Six different flux-flux relations, for TNG100 (red) and observational data from the GAMA survey (black), for 0.002 ≤ z ≤ 0.1 and M★ ≥ 10^8.5 M⊙. The panels always have the VISTA K-band flux on the x-axis, and feature various bands (increasing in wavelength from the top left to the bottom right panel) on the y-axis. For both datasets, we filter out galaxies which lie below specific flux thresholds in any of the bands (see text for details). The number of remaining galaxies is given in the top right corner of each panel. The 2D distribution is estimated using a kernel density estimate (KDE). The different levels correspond to 5, 25, 60, and 90 % of the total KDE density. 1D color histograms for both datasets are also shown. Note that we use observer-frame fluxes here. An estimate of the average noise in the observations is indicated by the grey ellipses, with the darker (lighter) ellipse indicating the median (upper quartile) 1-σ observational error bar.
D_KS indicates the distance between the two distributions according to a two-dimensional Kolmogorov-Smirnov test. The flux-flux relations seen in the GAMA data are well reproduced by the TNG100 galaxies.

but do not compare the simulated and observed color distributions in detail. In our case, we find two galaxy populations that are clearly separated, as seen in observations. As we know the star-formation rates of the TNG100 galaxies, we can verify whether these two populations indeed correspond to star-forming and quiescent galaxies. When splitting the galaxy population by specific star-formation rate (sSFR), we find that star-forming galaxies (with sSFR > 10^-10.5 yr^-1) indeed occupy the peak at blue colors and broadly extend towards the top right corner, while quiescent galaxies (with sSFR < 10^-11.5 yr^-1) are located along a very narrow sequence offset from the star-forming sequence towards redder u−r colors. However, the star-forming sequence appears to be slightly too red in TNG100, by ≈0.25 mag along both axes.

Multiple effects could contribute to rendering the star-forming galaxies too red in TNG100 (as discussed in Section 4.3.2): at these wavelengths, the amount of dust as well as the dust model affects the colors. Furthermore, the SED template libraries for the evolved stars can affect the UVJ colors by up to 0.2 mag (G. Worthey, private communication). And lastly, the stellar populations of the star-forming TNG100 galaxies could also be intrinsically too old or too metal-rich. However, we remark that the u−r colors of z = 0 TNG100 galaxies postprocessed with the simpler method of Nelson et al. (2018) reproduce observational data from SDSS, i.e. the star-forming galaxies are bluer by ≈0.25 mag compared to our TNG100 colors calculated with SKIRT. At the same time, the quiescent galaxies, which are less affected by dust attenuation, are slightly redder (by ≈0.1 mag) in Nelson et al.
(2018) compared to our results. This points towards too much dust reddening in our SKIRT pipeline, which is puzzling given the excellent agreement of our SKIRT fluxes with other flux-flux/color-color relations and luminosity functions. We defer a more detailed assessment of the intrinsic and dust-reddened optical colors of TNG100 galaxies to future work.

4.4.2 (GALEX FUV − VISTA K) vs. (VISTA K − SPIRE 250)

As discussed in Sections 4.3.1 and 4.3.6, the GALEX FUV and SPIRE 250 fluxes trace galaxy SFR and dust mass, respectively. Hence, this color-color relation is an analogue of the physical galaxy scaling relation between specific star-formation rate and specific dust mass (e.g. Cortese et al. 2012; Rémy-Ruyer et al. 2015; Nanni et al. 2020; Shivaei et al. 2022). Both GAMA and TNG100 feature a mild correlation between these colors, with a tail extending towards the bottom left corner which contains quiescent galaxies with very small specific dust masses. The peaks, widths, and correlation of the GAMA and TNG100 color distributions match to great precision for this relation.

We briefly compare this result to a similar color-color relation which was used to calibrate the SKIRT postprocessing parameters in Trčka et al. (2022) (their figure 6, panel g). Their color-color relation slightly differs from the one shown here, as Trčka et al. (2022) adopted the WISE W1 band to trace stellar mass, while we use the VISTA K band. As discussed in Section 4.3.3, the WISE W1 flux can contain a significant PAH contribution from the diffuse dust component. We also examined the exact same color-color relation replacing the VISTA K with the WISE W1 band, to reproduce the color-color relation that was used by Trčka et al. (2022) in the calibration process.
We find for our datasets that the excellent match between the GAMA and TNG100 distributions vanishes (\ud835\udc37KS = 0.6), with the WISE W1-SPIRE 250 (GALEX FUV-WISE W1) color of GAMA being significantly redder (bluer) by \u22480.5 mag compared to TNG100. This means that the radiative transfer calibration of Tr\u010dka et al. (2022) (using DustPedia data and WISE W1 fluxes) produces sensible results when using GAMA data and VISTA \ud835\udc3e fluxes as shown here, but is in tension when using the same GAMA data with WISE W1 fluxes. We found two different effects with coincidentally similar magnitudes, which can explain these discrepant results: first, the TNG100 WISE W1 fluxes contain PAH emission that seems to be too strong in the SKIRT setup used here (see Section 4.3.3). Second, the DustPedia WISE W1-SPIRE 250 colors are bluer by \u22480.5 mag compared to the GAMA colors, probably related to selection effects (the DustPedia archive is a much smaller sample of 814 local galaxies with Herschel and WISE W1 detections). These two effects conspire to give consistent results for this color-color relation using the WISE W1 band and DustPedia or using the VISTA \ud835\udc3e band and GAMA data. 4.4.3 (GALEX FUV - VISTA K) vs. (GALEX FUV - GALEX NUV) This color-color relation has the UV slope GALEX FUV-NUV on the \ud835\udc66-axis. Since the UV is dominated by star-forming regions and dust attenuation is very strong at these wavelengths, the UV slope of the TNG100 galaxies sensitively depends on the treatment of star-forming regions and the subsequent attenuation in the diffuse ISM. Hence, the UV slope \ud835\udefd is correlated with the infrared excess IRX (ratio of IR and UV luminosity) and commonly used as a measure for attenuation in the ISM using the IRX-\ud835\udefd relation (Calzetti 1997; Meurer et al. 1999). We examine the FUV-NUV color as a function of FUV-VISTA \ud835\udc3e, which we use as a proxy for sSFR^\u22121 as in Section 4.4.2.
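The \ud835\udc37KS values quoted throughout come from a two-sample two-dimensional Kolmogorov-Smirnov test (the acknowledgements credit the ndtest package for this). A minimal, hedged sketch of the statistic in the spirit of Peacock (1983) and Fasano & Franceschini (1987), with our own function names and without the bootstrap p-value:

```python
import numpy as np

def ks2d(x1, y1, x2, y2):
    """Max difference between the two empirical 2D CDFs, probed over the
    four quadrants around every data point of both samples."""
    x1, y1, x2, y2 = map(np.asarray, (x1, y1, x2, y2))

    def quadrant_fracs(px, py, x, y):
        # fractions of sample (x, y) in the four quadrants around (px, py)
        gx, gy = x > px, y > py
        counts = np.array([np.sum(gx & gy), np.sum(gx & ~gy),
                           np.sum(~gx & gy), np.sum(~gx & ~gy)])
        return counts / len(x)

    d = 0.0
    for px, py in zip(np.r_[x1, x2], np.r_[y1, y2]):
        f1 = quadrant_fracs(px, py, x1, y1)
        f2 = quadrant_fracs(px, py, x2, y2)
        d = max(d, float(np.max(np.abs(f1 - f2))))
    return d
```

Identical samples yield D = 0, while well-separated samples approach D = 1, matching the intuition that small \ud835\udc37KS indicates closely matching distributions.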
We find that the sSFR and UV slopes are anticorrelated in both datasets, but the anticorrelation is substantially stronger in the GAMA data. Furthermore, the TNG100 UV slopes are also offset to lower values, with the peaks of the distributions differing by \u22480.4 mag. We also note that the FUV-NUV distribution of the GAMA galaxies is wider, which is (at least partially) caused by the relatively high noise levels of this particular color. When calculating FUV-NUV colors without the diffuse dust component for the TNG100 galaxies, we find that the FUV-NUV colors hardly change, meaning that the diffuse dust has a negligible impact on the UV slope. Instead, the FUV-NUV color is driven by the SED templates of both the evolved stellar populations and the star-forming regions, which contribute roughly similar fractions to the total UV fluxes. A redder FUV-NUV color (i.e. a steeper UV slope) could for instance be obtained with a more selective extinction in the FUV band from the dusty birth clouds in the MAPPINGS-III templates. Kapoor et al. (in prep.) find that for the 30 MW-like galaxies from the AURIGA simulation (Grand et al. 2017), the recent TODDLERS library for star-forming regions (Kapoor et al. 2023) yields redder FUV-NUV colors of \u22480.15 mag compared to MAPPINGS-III. Whether this change fully resolves the tension in this color-color relation or additional adjustments need to be made (e.g. in the templates for the evolved stellar populations, which can also contribute substantially to the UV fluxes) would require postprocessing the TNG100 galaxies again varying the SED templates of the star-forming regions and evolved stellar populations, which is beyond the scope of this study. MNRAS 000, 1\u201315 (2024), A. Gebek et al.
[Figure 4. Four different color-color relations, for TNG100 (red) and observational data from the GAMA survey (black), for 0.002 \u2264 \ud835\udc67 \u2264 0.1 and \ud835\udc40\u2605 \u2265 10^8.5 M\u2299. For both datasets, we filter out galaxies which lie below specific flux thresholds in any of the bands involved in the color-color relation (see text for details). The number of remaining galaxies is given in the top right corner of each panel. The 2D distribution is estimated using a kernel density estimate (KDE). The different levels correspond to 5, 25, 60, and 90 % of the total kernel density. 1D color histograms for both datasets are also shown. Note that we use observer-frame fluxes here. An estimate of the average noise in the observations is indicated by the grey ellipses, with the darker (lighter) ellipse indicating the median (upper quartile) 1-\ud835\udf0e observational error bar. \ud835\udc37KS indicates the distance between the two distributions according to a two-dimensional Kolmogorov-Smirnov test. Per-panel distances: \ud835\udc37KS = 0.22, 0.14, 0.75, 0.64. TNG100 reproduces the observed color distributions in the two upper panels, but the TNG100 galaxies have flatter UV slopes and bluer WISE W3-SPIRE 250 colors compared to the GAMA data.] 4.4.4 (WISE W4 - SPIRE 250) vs. (WISE W3 - SPIRE 250) Lastly, we show a color-color relation involving the SPIRE 250, WISE W3 and WISE W4 fluxes.
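The contour levels described in the Figure 4 caption enclose fixed fractions of the kernel density. One common way to obtain such levels, sketched here with scipy's Gaussian KDE (an assumption; the paper does not specify its implementation, and our level definition is one of several in use):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mass_levels(x, y, fractions=(0.05, 0.25, 0.60, 0.90)):
    """Density levels whose contours enclose roughly the given fractions
    of the probability mass, estimated at the sample points themselves."""
    kde = gaussian_kde(np.vstack([x, y]))
    dens = kde(np.vstack([x, y]))   # density evaluated at each sample
    # a contour at the (1 - f) quantile of the point densities encloses
    # approximately a fraction f of the total mass
    return np.quantile(dens, 1.0 - np.asarray(fractions))
```

Larger enclosed fractions correspond to lower density levels, i.e. the 90 % contour is the outermost one in each panel.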
As discussed in Section 4.3.4, the WISE W3 band traces PAH emission from the diffuse dust component. The WISE W4 flux originates from hot dust around star-forming regions (Kapoor et al. 2023), and we find that it comes roughly in equal parts from the MAPPINGS-III star-forming regions and the diffuse dust. Hence, this color-color relation measures the amount of hot dust and PAH emission relative to cold dust traced by SPIRE 250. This relation is observationally particularly challenging to measure, resulting in the large observational errors and wider GAMA distributions. While the WISE W4-SPIRE 250 color distributions broadly match, the TNG100 WISE W3-SPIRE 250 colors are bluer by \u22480.5 mag, which we attribute to elevated WISE W3 fluxes due to PAH emission from the diffuse dust (as discussed in Section 4.3.4). The 2D distributions show that the slope of the relation is steeper in TNG100. This is expected since galaxies with high WISE W4-SPIRE 250 colors have a comparatively large fraction of their dust heated to high temperatures due to emission from star-forming regions. This in turn leads to a stronger WISE W3 excess for those galaxies and thus a steepening of this color-color relation for TNG100 galaxies. 5 SUMMARY We applied the radiative transfer postprocessing method developed by Tr\u010dka et al. (2022), where the TNG50 simulation was analyzed, to the fiducial TNG100 run of the IllustrisTNG suite. The postprocessing method uses the dust Monte Carlo radiative transfer (MCRT) code SKIRT to propagate the emission from evolved stars and star-forming regions through the dusty ISM. We generated broadband fluxes and low-resolution SEDs from the UV to the far-IR for all TNG100 galaxies in the \ud835\udc67 = 0 and \ud835\udc67 = 0.1 snapshots resolved by more than \u2248200 star particles (\ud835\udc40\u2605 > 10^8.5 M\u2299), leading to a sample of \u224860 000 postprocessed galaxies. This dataset (as well as the TNG50 and TNG50-2 fluxes and SEDs generated by Tr\u010dka et al.
2022) is publicly available on the IllustrisTNG website (www.tng-project.org/gebek24). To test the fidelity of the cosmological simulation and our postprocessing method, we compared the simulated fluxes to low-redshift observational data. The following points summarize our main findings: \u2022 TNG100 luminosity functions from the UV to the far-IR fall within the range of low-redshift observational results (Figure 1). Residual discrepancies at the bright end in the UV/optical/NIR are on the level of systematic effects in the observations related to aperture choices. As noted by Tr\u010dka et al. (2022), the improvement over the TNG50 simulation stems from the fact that the IllustrisTNG model was designed at the resolution of TNG100, i.e. the subgrid parameters were chosen such that TNG100 reproduces some key statistics of the low-redshift galaxy population (e.g. the stellar mass-halo mass relation). \u2022 We compare six different flux-flux relations between TNG100 and observational data from GAMA in Figure 3. To mimic the strong observational selection effects, we redistribute the TNG100 galaxies to arbitrary redshifts to compute a realistic apparent brightness distribution (Section 4.2). Exploring the fluxes in various bands as a function of \ud835\udc3e-band luminosity (which traces stellar mass), we find a broad baseline agreement between TNG100 and GAMA. Tension in the WISE bands is correlated with the abundance of star-forming regions in TNG100 galaxies and with emission from the diffuse dust component. Hence, we attribute this tension to excess PAH emission, potentially related to overly effective stochastic dust heating from the star-forming regions. \u2022 Lastly, we use the same method applied for the flux-flux relations to compare four different color-color relations between TNG100 and GAMA. Tension exists mostly in the UV slope (TNG100 galaxies exhibit flatter UV slopes, i.e.
lower FUV-NUV colors than the GAMA data) and in IR colors involving WISE bands. The former could be related to the extinction in the dusty birth clouds of the MAPPINGS-III templates not being selective enough, while the latter is again caused by excess PAH emission from the diffuse dust. However, we remark that uncertainties in the dust model, dust distribution, and templates for evolved stellar populations could also play a role. We conclude that this low-redshift dataset provides a useful resource to test the fidelity of TNG100, explore observational systematics (e.g. aperture, inclination, or sample selection effects), and interpret the complexity faced in the observed galaxy population. Fundamentally, this is made possible by shifting the simulated data into the \u2018observational realm\u2019. This approach is complementary to studies in the \u2018physical realm\u2019, and we highlight the importance of considering both approaches as they carry different systematics and biases. The dataset presented in this study represents an important step towards analyzing the vast IllustrisTNG simulation landscape in the \u2018observational realm\u2019. ACKNOWLEDGEMENTS We thank Eric Rohr and Peter Camps for enlightening discussions. We also wish to express our gratitude towards the anonymous referee, whose feedback substantially improved the quality of this paper. AG gratefully acknowledges financial support from the Fund for Scientific Research Flanders (FWO-Vlaanderen, project FWO.3F0.2021.0030.01). This study made extensive use of the Python programming language, especially the numpy (van der Walt et al. 2011), matplotlib (Hunter 2007), and scipy (Virtanen et al. 2020) packages. We also acknowledge the use of the Topcat visualization tool (Taylor 2005) and the ndtest Python package (https://github.com/syrte/ndtest).
The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. We also use VISTA VIKING data from the GAMA database, based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004. This research has made use of the Spanish Virtual Observatory (https://svo.cab.inta-csic.es, Rodrigo et al. 2012; Rodrigo & Solano 2020) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020112949GB-I00. DATA AVAILABILITY The IllustrisTNG data used in this work as well as the generated broadband fluxes are publicly available at https://www.tng-project.org/ as described by Nelson et al. (2019a). The GAMA data is publicly available as part of data release 4 (DR4, Driver et al. 2022) of the GAMA survey. DR4 can be accessed at http://www.gama-survey.org/dr4/. All other data (observational luminosity functions, derived data for GAMA) and the analysis scripts are publicly available at https://github.com/andreagebek/TNG100colors."
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.04940v1.json b/abs_9K/test_abstract_short_2405.04940v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3740474e5731fde2a07a3200ac6a27968db6b4e7 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.04940v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.04940v1", + "title": "Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID", + "abstract": "Text-to-image person re-identification (ReID) retrieves pedestrian images\naccording to textual descriptions. Manually annotating textual descriptions is\ntime-consuming, restricting the scale of existing datasets and therefore the\ngeneralization ability of ReID models. As a result, we study the transferable\ntext-to-image ReID problem, where we train a model on our proposed large-scale\ndatabase and directly deploy it to various datasets for evaluation. We obtain\nsubstantial training data via Multi-modal Large Language Models (MLLMs).\nMoreover, we identify and address two key challenges in utilizing the obtained\ntextual descriptions. First, an MLLM tends to generate descriptions with\nsimilar structures, causing the model to overfit specific sentence patterns.\nThus, we propose a novel method that uses MLLMs to caption images according to\nvarious templates. These templates are obtained using a multi-turn dialogue\nwith a Large Language Model (LLM). Therefore, we can build a large-scale\ndataset with diverse textual descriptions. Second, an MLLM may produce\nincorrect descriptions. Hence, we introduce a novel method that automatically\nidentifies words in a description that do not correspond with the image. This\nmethod is based on the similarity between one text and all patch token\nembeddings in the image. Then, we mask these words with a larger probability in\nthe subsequent training epoch, alleviating the impact of noisy textual\ndescriptions. 
The experimental results demonstrate that our methods\nsignificantly boost the direct transfer text-to-image ReID performance.\nBenefiting from the pre-trained model weights, we also achieve state-of-the-art\nperformance in the traditional evaluation settings.", + "authors": "Wentao Tan, Changxing Ding, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Text-to-image person re-identification (ReID) retrieves pedestrian images\naccording to textual descriptions. Manually annotating textual descriptions is\ntime-consuming, restricting the scale of existing datasets and therefore the\ngeneralization ability of ReID models. As a result, we study the transferable\ntext-to-image ReID problem, where we train a model on our proposed large-scale\ndatabase and directly deploy it to various datasets for evaluation. We obtain\nsubstantial training data via Multi-modal Large Language Models (MLLMs).\nMoreover, we identify and address two key challenges in utilizing the obtained\ntextual descriptions. First, an MLLM tends to generate descriptions with\nsimilar structures, causing the model to overfit specific sentence patterns.\nThus, we propose a novel method that uses MLLMs to caption images according to\nvarious templates. These templates are obtained using a multi-turn dialogue\nwith a Large Language Model (LLM). Therefore, we can build a large-scale\ndataset with diverse textual descriptions. Second, an MLLM may produce\nincorrect descriptions. Hence, we introduce a novel method that automatically\nidentifies words in a description that do not correspond with the image. This\nmethod is based on the similarity between one text and all patch token\nembeddings in the image. 
Then, we mask these words with a larger probability in\nthe subsequent training epoch, alleviating the impact of noisy textual\ndescriptions. The experimental results demonstrate that our methods\nsignificantly boost the direct transfer text-to-image ReID performance.\nBenefiting from the pre-trained model weights, we also achieve state-of-the-art\nperformance in the traditional evaluation settings.", "main_content": "Introduction Text-to-image person re-identification (ReID) [13\u201316, 24, 41, 42, 44, 50, 52, 61] is a task that retrieves pedestrian images according to textual descriptions. [Figure 1. Illustration of textual descriptions generated by an MLLM (i.e., Qwen [3]). (Top) The description patterns are similar for different images. (Bottom) Our proposed Template-based Diversity Enhancement (TDE) method significantly enhances the description pattern diversity. It is worth noting that some errors are present in the generated descriptions shown in this figure. Example templates: Template1: With [hair], the [gender] is wearing [footwear], [clothing], [accessory], and carrying [belongings]. Template2: The [gender] is wearing [footwear], [accessory], [clothing], and [belongings]. The [gender] has [hair]. Template3: In [clothing] and [footwear], the [gender] also has [hair].]
It is a powerful tool when probe images of the target person are unavailable and only textual descriptions exist. It has various potential applications, including video surveillance [6], social media analysis [31], and crowd management [18]. However, it remains challenging mainly because annotating textual descriptions for pedestrian images is time-consuming [64]. Consequently, existing datasets [15, 31, 70] for text-to-image person ReID are usually small, resulting in insufficient deep model training. Previous studies on text-to-image ReID usually assumed that training and testing data are drawn from the same domain. They proposed novel model architectures [4, 15, 38, 41, 57, 58], loss functions [60, 66, 69], and pre-training strategies [42, 64] to improve model performance for each database. However, researchers have recently discovered that the cross-dataset generalization ability of their approaches is significantly low [41], limiting real-world applications. Since annotating textual descriptions is time-consuming, collecting training data for each target domain is infeasible. Therefore, training a model that can be directly deployed to various target domains is necessary. Accordingly, we study the transferable text-to-image ReID problem. The term \u201ctransferable\u201d is derived from the seminal work CLIP [38], which refers to a large-scale pre-trained model\u2019s capacity that directly applies its knowledge to other domains or tasks without fine-tuning on labeled data.
Due to the rapid advancements in multi-modal large language models (MLLMs) [3, 8, 29, 65], we utilize them to generate textual descriptions automatically and employ them to replace traditional manual annotations. Specifically, we utilize the large-scale LUPerson dataset [17] as the image source and generate textual descriptions using MLLMs. The obtained image-text pairs are utilized to train a model directly evaluated in existing text-to-image ReID databases. However, to improve the model\u2019s transfer ability, two essential challenges must be addressed: (1) guiding MLLMs to generate diverse textual descriptions for a single image and (2) reducing the impact of the noise in the synthesized textual descriptions. First, MLLMs tend to generate descriptions with similar sentence structures, as shown in Fig. 1. This causes the text-to-image ReID model to overfit specific sentence patterns, reducing the model\u2019s ability to generalize to various human description styles encountered in real-world applications. To address this issue, we propose a Template-based Diversity Enhancement (TDE) method that instructs MLLMs to conduct image captioning according to given description templates. Obtaining these templates with minimal effort involves performing multi-turn dialogues with ChatGPT [37] and prompting it to generate diverse templates. Then, we randomly integrate one of these templates into the MLLM\u2019s captioning instruction, resulting in vivid descriptions with varied sentence structures. This approach significantly enhances textual description diversity. Second, although MLLMs are highly effective, the generated descriptions still contain errors. This implies that certain words in a textual description may not match the paired image. Thus, we propose a novel Noise-aware Masking (NAM) method to address this problem. Specifically, we compute the similarities between each text token and all image tokens in the paired image for a specific textual description.
The similarity scores between the unmatched word and image tokens are usually low. Hence, we identify potentially incorrect words and mask them with a large probability in the next training epoch before they are fed into a text encoder. Furthermore, NAM and Masked Language Modeling (MLM) are similar but have two key differences: (1) MLM masks all tokens with equal probability, while NAM masks them based on their noise level. (2) MLM applies cross-entropy loss to predict the masked tokens, whereas NAM focuses on masking words without predicting potentially noisy words. In the experimentation section, we demonstrate NAM\u2019s ability to effectively alleviate the impact of noisy textual descriptions. To the best of our knowledge, this is the first study focusing on the transferable text-to-image ReID problem by harnessing the power of MLLMs. We innovatively generate diverse textual descriptions and minimize the impact of the noise contained in these descriptions. The experimental results show that our method performs excellently on three popular benchmarks in both direct transfer and traditional evaluation settings. 2. Related Works Text-to-Image Re-Identification. Existing approaches for this task improve model performance from three perspectives: model backbone [4, 24], feature alignment strategies [24, 41, 66], and pre-training [42, 64]. The first method category improves the model backbone. Early approaches adopted the VGG model [9, 31] and LSTM [35, 62, 66] as image and text encoders, respectively. These encoders gradually evolve into ResNet50 [15, 16, 22, 52] and BERT [12, 32, 40, 43, 70] models. Moreover, the CLIP [21, 38] and ALBEF-based encoders [4, 28, 64] have recently become popular. Notably, the CLIP model contains jointly pre-trained image and text encoders. Thus, its cross-modal alignment capabilities are advantageous and have proven more effective than the individually pre-trained encoders [42]. 
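The noise-aware masking idea described above (a text token whose similarity to every image patch is low is deemed likely noisy and masked with higher probability in the next epoch) can be illustrated with a toy implementation. All names and the linear mapping from similarity to masking probability are our assumptions, not the paper's exact formulation:

```python
import numpy as np

MASK_ID = 0  # hypothetical id of the [MASK] token

def nam_mask(token_ids, text_emb, patch_emb, rng, scale=1.0):
    """Mask tokens whose best cosine similarity to any image patch is low.

    text_emb: (N, d) text token embeddings; patch_emb: (M, d) patch embeddings.
    Returns the (possibly) masked token ids and per-token mask probabilities.
    """
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    sim = t @ v.T                     # (N, M) cosine similarities
    support = sim.max(axis=1)         # best-matching patch per text token
    p_mask = np.clip(scale * (1.0 - support), 0.0, 1.0)  # low support -> high prob
    out = np.asarray(token_ids).copy()
    out[rng.random(out.shape[0]) < p_mask] = MASK_ID
    return out, p_mask
```

A token that aligns perfectly with some patch is never masked, while an unmatched token is masked almost surely, mirroring the behavior the method targets.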
Moreover, the ALBEF model [28] performs interaction between visual and textual features, which improves the feature representation capacity but incurs significant computational cost. The second category of methods enhances feature alignment strategies. Previous methods aligned an image\u2019s holistic features with its textual description [1, 43, 49, 51, 56, 59, 66]. Subsequent approaches [10, 19, 25, 26, 33, 36, 45, 53, 54] focused on aligning the image-text pair\u2019s local features to suit the fine-grained retrieval nature of text-to-image ReID. These approaches can be divided into explicit and implicit alignment methods. Explicit methods [15, 52] extract the visual- and textual-part features and then compute the alignment loss between them. Implicit methods can also align local features [16, 41, 64]. For example, Jiang et al. [24] applied MLM to text tokens and then predicted the masked tokens using image token features. This indirectly realizes local feature alignment between the image patch and noun phrase representations. Since existing databases are small, two recent studies explored pre-training for text-to-image ReID. Shao et al. [42] utilized the CLIP model to predict the attributes of a pedestrian image. Then, they inserted these attributes into manually defined description templates. Similarly, Yang et al. [64] utilized the text descriptions from the CUHK-PEDES [31] and ICFG-PEDES [15] datasets to synthesize images using a diffusion model [39]. Then, they used the BLIP model [29] to caption these images and obtain a large-scale pre-training dataset. However, these two studies targeted pre-training and did not investigate the direct transfer setting where no target domain data is available for fine-tuning. Moreover, they overlooked the noise and diversity issues in the obtained textual descriptions.
The above methods achieve excellent in-domain performance; however, their cross-dataset performance is usually significantly low [41]. This paper explores the transferable text-to-image ReID task with minimal manual operations. Also, we address the challenges in textual descriptions generated by MLLMs. Multi-modal Large Language Models. Multi-modal Large Language Models (MLLMs) [34, 46, 47, 65, 71] are built on Large Language Models (LLMs) [5, 11, 63, 67, 68] and incorporate textual and non-textual information as input [3, 8, 20]. This paper only considers MLLMs that use both texts and images as input signals. The input text (i.e., the \u201cinstruction\u201d or \u201cprompt\u201d) describes the tasks assigned to MLLMs to understand the image\u2019s content. Regarding MLLM architecture, most studies [30, 34, 48] first map the image patch and text token embeddings into a shared feature space and then perform decoding using an LLM. Some methods [2] improve the interaction and alignment strategies between the image and text tokens during decoding, facilitating more stable training [27]. In this paper, we utilize MLLMs to eliminate the need to manually annotate textual descriptions. We also explore strategies to address the diversity and noise issues in the obtained textual descriptions, facilitating the development of a transferable text-to-image ReID model. 3. Methods The overview of our solution to the transferable text-to-image ReID problem is illustrated in Fig. 2. Section 3.1 addresses diversity issues associated with textual descriptions generated by MLLMs. Section 3.2 discusses the reduction of noise impact in the descriptions. Section 3.3 outlines the loss function utilized for model optimization. 3.1. Generating Diverse Descriptions Manually annotating textual descriptions for pedestrian images is time-consuming and hardly scalable. Fortunately, MLLMs have advanced rapidly and provide effective image captioning.
Therefore, we decide to utilize MLLMs to create large-scale text annotations for training a model with excellent transfer capacity. Instruction Design. We adopt the LUPerson database [17] as the image source because it contains a large number of images captured in diverse environments. A technical aspect of using MLLMs lies in designing an effective instruction, which usually depends on user experience. We solve this problem using a multi-turn dialogue with ChatGPT [37], and this process is detailed in the supplementary material. The resulting instruction is as follows: \u201cWrite a description about the overall appearance of the person in the image, including the attributes: clothing, shoes, hairstyle, gender and belongings. If any attribute is not visible, you can ignore it. Do not imagine any contents that are not in the image.\u201d This is considered a static instruction as it is fixed for all images. In this paper, the textual descriptions generated using the static instruction are denoted as static texts or T s. Diversity Enhancement. An MLLM generates textual descriptions with similar sentence patterns for different images using the static instruction, as illustrated in Fig. 1. This causes the text-to-image ReID model to overfit these sentence patterns, limiting its generalization to real-world descriptions. We attempted to improve the static instruction, but the obtained sentence patterns remained limited. Although using more MLLMs can bring in multiple sentence patterns, these patterns are still far from diverse. Again, we resort to ChatGPT to solve this problem. Specifically, we propose a Template-based Diversity Enhancement (TDE) method. First, we generate two descriptions for each of a set of images using two MLLMs [3, 8] according to the static instruction. Then, we feed these descriptions to ChatGPT to capture their sentence patterns (i.e., description templates).
With the guidance of these templates, we instruct ChatGPT to create more templates. Finally, it produces 46 templates after multi-turn dialogues, which are detailed in the supplementary material. We randomly select one of the templates and insert it into the static instruction, obtaining a dynamic instruction as follows: \u201cGenerate a description about the overall appearance of the person, including clothing, shoes, hairstyle, gender, and belongings, in a style similar to the template: \u2018{template}\u2019. If some requirements in the template are not visible, you can ignore them. Do not imagine any contents that are not in the image.\u201d The \u2018{template}\u2019 is replaceable. Furthermore, the textual descriptions generated according to the dynamic instruction are referred to as dynamic texts (T d). As illustrated in Fig. 1, MLLMs can follow the sentence patterns specified in the templates, significantly enhancing the diversity of the obtained textual descriptions. Dataset Description. We utilize the publicly available Qwen [3] and Shikra [8] models in this paper. By harnessing the power of the two MLLMs, we obtain the large-scale LUPerson-MLLM dataset.
[Figure 2. Overview of our framework. We adopt the CLIP-ViT-B/16 model as the backbone. Our framework uses one pedestrian image, the original textual description T full, and a masked textual description T nam as input during training. T nam is obtained by applying NAM to T full. To perform NAM, we first compute the similarity matrix S between the text tokens Ft of T full and the image tokens Fv according to their embeddings at the l-th layer of the encoders. Then, we estimate the probability of each text token\u2019s noisiness according to the similarity between its embedding and the image token embeddings. The similarity distribution matching (SDM) loss is computed between the global visual feature vcls of the pedestrian image and the global textual feature t\u2032eos of T nam. The model\u2019s optimization quality is enhanced by masking noisy words in T full. (Best viewed in color.)]
This dataset comprises 1.0 million images, and each image has four captions, T s qwen, T s shikra, T d qwen, and T d shikra.
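To make the instruction design of Section 3.1 concrete, here is a minimal Python sketch of how the static and dynamic instructions can be assembled; the two instruction strings are quoted from the paper, while the example templates and the function name are illustrative stand-ins (the actual 46 templates are in the paper's supplementary material).

```python
import random

# Static instruction, quoted from the paper (fixed for all images).
STATIC_INSTRUCTION = (
    "Write a description about the overall appearance of the person in the image, "
    "including the attributes: clothing, shoes, hairstyle, gender and belongings. "
    "If any attribute is not visible, you can ignore it. "
    "Do not imagine any contents that are not in the image."
)

# Dynamic instruction, quoted from the paper; '{template}' is replaceable.
DYNAMIC_INSTRUCTION = (
    "Generate a description about the overall appearance of the person, including "
    "clothing, shoes, hairstyle, gender, and belongings, in a style similar to the "
    "template: '{template}'. If some requirements in the template are not visible, "
    "you can ignore them. Do not imagine any contents that are not in the image."
)

# Illustrative stand-ins for the 46 ChatGPT-produced templates.
TEMPLATES = [
    "The person is wearing {clothing} and {shoes}, with {hairstyle} hair.",
    "A {gender} dressed in {clothing}, carrying {belongings}.",
]

def dynamic_instruction(rng=random):
    """TDE: randomly pick a template and insert it into the instruction."""
    return DYNAMIC_INSTRUCTION.format(template=rng.choice(TEMPLATES))
```

A different template is sampled per image, so the MLLM is steered toward a different sentence pattern each time.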
The first two and the last two captions are generated according to the static and dynamic instructions, respectively. We retain T s for each image because we observe that its description is usually complementary to that of T d. In the following section, we will train the model with LUPerson-MLLM. For simplicity, we refer to all the above MLLM-generated descriptions as T full. 3.2. Noise-Aware Masking Although MLLMs are powerful, they cannot describe images very precisely. As depicted in Fig. 1 and Fig. 2, a few words in the obtained textual descriptions do not match the described image. Existing methods [23, 29] usually discard the noisy descriptions, losing the other valuable information contained in the matched words. Accordingly, we propose a novel noise-aware masking (NAM) method that identifies noisy text tokens and fully uses the matched text tokens for model training. Image Encoder. An image is divided into M non-overlapping patches. These image tokens are concatenated with the [CLS] token and are fed into the image encoder. Then, the [CLS] token embedding at the last image encoder layer is used as the global image feature, denoted as vcls \u2208 Rd. The feature dimension is represented by d. Text Encoder. We tokenize each textual description T full into a sequence of N tokens. N varies with the length of each sentence. The token sequence is bracketed with [SOS] and [EOS] to represent the start and the end of the sequence. Meanwhile, we examine each text token\u2019s noise level in T full, which is computed and stored in the previous training epoch. These values are used to perform NAM on T full to obtain T nam. After that, T full and T nam are fed into the text encoder independently. At the final text encoder layer, the global feature t\u2032 eos of T nam is utilized to calculate the loss. T full is only used for NAM, which means it is not used for loss computation. Noise-Aware Masking.
We utilize the image and text encoders\u2019 token embeddings at the l-th layer for the noise-level estimation of T full. These embeddings are denoted as $F_v = [v^l_1, \ldots, v^l_M]$ and $F_t = [t^l_1, \ldots, t^l_N]$, respectively, where $v^l_j \in \mathbb{R}^d$ and $t^l_j \in \mathbb{R}^d$. Furthermore, we calculate the token-wise similarity between a single text-image pair as follows: $S = F_t^{\top} F_v$, (1) where $S \in \mathbb{R}^{N \times M}$ is a similarity matrix and $s_{ij}$ represents the cosine similarity between the i-th text token embedding and the j-th image token embedding. If one text token does not match the image, the similarity scores between this token\u2019s embedding and those of all the image tokens will be consistently low. Therefore, the noise level of the i-th text token in T full can be estimated via: $r_i = 1 - \max_{1 \le j \le M} s_{ij}$. (2) By applying Eq. (2) to each row of S, we obtain a vector $r = [r_1, \ldots, r_N]$ that records the noise level of all text tokens. Moreover, NAM applies the masking operation to all the text tokens in T full with different probabilities, which can be determined based on the noise-level values recorded in r. However, in the initial training stage, the values of the elements in r may be high. This results in excessive masking of important tokens and hinders learning. To resolve this issue, we shift the expectation value of the r elements to a constant number as described below: $E_r = \frac{1}{N} \sum_{i=1}^{N} r_i$, (3) $r' = [r_1 - E_r + p, \ldots, r_N - E_r + p]$, (4) where p is the average masking ratio. We utilize the r' values as the final probability that a text token is masked. We include the pseudo code and visualization of NAM in the supplementary materials. Discussion. Computing r' and then applying NAM to obtain T nam in each iteration would require two forward passes. This additional time cost cannot be overlooked in large-scale training.
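The noise-level estimation and masking of Eqs. (1)-(4) can be sketched in NumPy as follows; the clipping of probabilities to [0, 1], the `[MASK]` token, and the function names are our illustrative assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nam_mask_probs(Ft, Fv, p=0.15):
    """Noise-aware masking probabilities (Eqs. (1)-(4)).
    Ft: (N, d) text-token embeddings, Fv: (M, d) image-token embeddings."""
    # Cosine similarity matrix S (Eq. (1)); embeddings are L2-normalized first.
    Ft_n = Ft / np.linalg.norm(Ft, axis=1, keepdims=True)
    Fv_n = Fv / np.linalg.norm(Fv, axis=1, keepdims=True)
    S = Ft_n @ Fv_n.T                     # (N, M)
    # Noise level of each text token (Eq. (2)).
    r = 1.0 - S.max(axis=1)               # (N,)
    # Re-center so the mean masking probability equals p (Eqs. (3)-(4)).
    r_prime = r - r.mean() + p
    return np.clip(r_prime, 0.0, 1.0)     # clipping is our assumption

def apply_nam(tokens, probs, mask_token="[MASK]"):
    """Mask each token independently with its own probability."""
    return [mask_token if rng.random() < q else t for t, q in zip(tokens, probs)]
```

Tokens whose best-matching image patch is dissimilar get a higher masking probability, while the average masking ratio stays near p.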
In contrast, our strategy computes r' for the next training epoch, which requires only one forward pass per iteration. Furthermore, we initialize the r' values with the constant p in the first training epoch. 3.3. Optimization Following [24], we adopt the similarity distribution matching (SDM) loss to optimize our model. Given a minibatch of B matched image-text pairs $\{(v^i_{cls}, t'^i_{eos})\}_{i=1}^{B}$, we first establish the matching relationship between each image and text, i.e., $\{(v^i_{cls}, t'^j_{eos}), y_{i,j}\}$ $(1 \le i, j \le B)$, where $y_{i,j} = 1$ and $y_{i,j} = 0$ denote a positive and a negative image-text pair, respectively. Then, we calculate the ground-truth matching distribution $q_i$ for the i-th image, where its j-th element is $q_{i,j} = y_{i,j} / \sum_{b=1}^{B} y_{i,b}$. Finally, we align the predicted probability distribution $p_i$ with $q_i$ as follows: $\mathcal{L}_{i2t} = \frac{1}{B} \sum_{i=1}^{B} \mathrm{KL}(p_i \parallel q_i) = \frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{B} p_{i,j} \log\left(\frac{p_{i,j}}{q_{i,j} + \epsilon}\right)$, (5) where $\epsilon$ is a small number to avoid numerical problems and $p_{i,j} = \frac{\exp(\mathrm{sim}(v^i_{cls}, t'^j_{eos})/\tau)}{\sum_{b=1}^{B} \exp(\mathrm{sim}(v^i_{cls}, t'^b_{eos})/\tau)}$. (6) Here $\mathrm{sim}(u, v) = u^{\top} v / (\|u\| \|v\|)$ denotes the cosine similarity between u and v, and $\tau$ is a temperature coefficient. The SDM loss from text to image, $\mathcal{L}_{t2i}$, can be computed by exchanging the positions of $v_{cls}$ and $t'_{eos}$ in Eq. (5) and Eq. (6). Finally, the complete SDM loss is computed as follows: $\mathcal{L}_{sdm} = \mathcal{L}_{i2t} + \mathcal{L}_{t2i}$. (7) It is worth noting that since we randomly sample images from the large-scale LUPerson database, we assume that each image in a sampled batch has a unique identity. 4. Experiments 4.1. Datasets and Settings CUHK-PEDES. CUHK-PEDES [31] is a pioneering dataset in the text-to-image ReID field. Each image in this dataset has two textual descriptions. The training set comprises data on 11,003 identities, including 34,054 images and 68,108 textual descriptions.
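Before moving to the datasets, the SDM loss of Section 3.3 (Eqs. (5)-(7)) can be sketched in NumPy as follows, assuming each image in the batch is the unique positive for its paired text (the unique-identity assumption noted above); the function name and the numerically stable softmax are ours.

```python
import numpy as np

def sdm_loss(V, T, tau=0.02, eps=1e-8):
    """Similarity distribution matching loss (Eqs. (5)-(7)).
    V: (B, d) global image features, T: (B, d) global text features;
    pair i is assumed to be the only positive for image i."""
    B = V.shape[0]
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    sims = Vn @ Tn.T / tau                      # (B, B) cosine similarities / temperature

    def kl_dir(logits):
        # Eq. (6): softmax over candidates, computed stably.
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = p / p.sum(axis=1, keepdims=True)
        q = np.eye(B)                           # ground-truth matching distribution
        # Eq. (5): KL(p || q) with the eps guard from the paper.
        return (p * np.log(p / (q + eps))).sum(axis=1).mean()

    # Eq. (7): image-to-text plus text-to-image directions.
    return kl_dir(sims) + kl_dir(sims.T)
```

When the image and text features are perfectly aligned the loss is near zero; misaligning the pairs makes it large.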
In contrast, the testing set contains 3,074 images and 6,156 textual descriptions from 1,000 identities. ICFG-PEDES. ICFG-PEDES [15] contains 54,522 images from 4,102 identities. Each image has one textual description. The training set consists of 34,674 image-text pairs corresponding to 3,102 identities, while the testing set comprises 19,848 image-text pairs from the remaining 1,000 identities. RSTPReid. RSTPReid [70] includes 20,505 images captured by 15 cameras from 4,101 identities. Each identity has five images captured with different cameras, and each image has two textual descriptions. According to the official data division, the training set incorporates data from 3,701 identities, while the validation and testing sets each include data from 200 identities. LUPerson. LUPerson [17] contains 4,180,243 pedestrian images sampled from 46,260 online videos, covering a variety of scenes and viewpoints. The images are from over 200K pedestrians. Evaluation Metrics. Like existing works [4, 24, 42, 64], we adopt the popular Rank-k accuracy (k=1,5,10) and mean Average Precision (mAP) as the evaluation metrics for the three databases. Moreover, we consider the following two evaluation settings. Direct Transfer Setting. In this setting, the model is trained only on the LUPerson-MLLM dataset and is then evaluated directly on the three benchmarks above. This setting directly evaluates the quality of our dataset and the effectiveness of the proposed methods (i.e., TDE and NAM). Table 1. Ablation study on each key component in the direct transfer setting. \u2018CLIP\u2019 refers to directly using the original CLIP encoders provided in [38].
| Method | CUHK-PEDES R1/R5/mAP | ICFG-PEDES R1/R5/mAP | RSTPReid R1/R5/mAP |
| CLIP | 12.65/27.16/11.15 | 6.67/17.91/2.51 | 13.45/33.85/10.31 |
| Static Text: T s qwen | 37.65/57.86/33.40 | 23.78/42.77/11.18 | 36.30/60.60/26.25 |
| Static Text: T s shikra | 39.70/62.60/36.09 | 19.02/35.63/9.67 | 36.90/62.65/28.33 |
| Static Text: both | 46.00/66.82/41.27 | 26.74/44.22/13.23 | 41.10/66.95/30.21 |
| Dynamic Text: T d qwen | 40.72/62.36/37.21 | 24.16/41.24/11.32 | 38.65/64.70/28.81 |
| Dynamic Text: T d shikra | 43.63/65.46/39.08 | 22.07/39.57/11.35 | 38.80/63.45/28.60 |
| Dynamic Text: both | 48.86/69.41/44.09 | 28.43/46.37/14.23 | 44.25/66.15/32.99 |
| TDE (all four texts) | 50.32/71.36/45.74 | 29.12/47.96/15.13 | 45.70/70.75/33.23 |
| NAM (all four texts + NAM) | 52.64/71.62/46.48 | 32.61/50.79/16.48 | 47.75/70.75/34.73 |
Fine-tuning Setting. In this setting, we first pre-train our model on the LUPerson-MLLM dataset and then fine-tune it on each of the three benchmarks, respectively. 4.2. Implementation Details Similar to previous studies [7, 24], we adopt CLIP-ViT-B/16 [38] as the image encoder and a 12-layer transformer as our text encoder. The input image resolution is resized to 384 \u00d7 128 pixels. Additionally, we apply random horizontal flipping, random cropping, and random erasing as data augmentation for the input images. Each textual description is first tokenized, with a maximum length of 77 tokens (including the [SOS] and [EOS] tokens). The hyper-parameter p is set to 0.15 and the temperature coefficient \u03c4 in Eq. (6) is set to 0.02. The model is trained using the Adam optimizer with a learning rate of 1e-5 and a cosine learning rate decay strategy. We train each model on 8 TITAN-V GPUs, with 64 images per GPU. The training process lasts for 30 epochs. The versions of the mentioned LLM/MLLMs are ChatGPT-3.5-Turbo, Qwen-VL-Chat-7B, and Shikra-7B.
Then, we increase the number of training images to 1.0 million to enhance the transfer ability of our text-to-image ReID models. Effectiveness of TDE. The experiments in Table 1 show that the dynamic instruction is better than the static instruction. For example, the model using only T d qwen outperforms the one using T s qwen by about 3% in Rank-1 performance on the CUHK-PEDES database. On the same database and evaluation metric, the model that uses only T d shikra outperforms the one using T s shikra by about 4%. These experimental results indicate that enhancing sentence pattern diversity improves the transfer ability of ReID models. Therefore, we use the four descriptions for each image in the subsequent experiments. It is worth noting that none of the above experiments employ NAM. Instead, they mask every text token with an equal probability of p. Effectiveness of NAM. MLLM-generated textual descriptions often contain noise, which is harmful to model training. Replacing the equal masking strategy with our NAM method improves our model\u2019s Rank-1 performance by 2.32%, 3.49%, and 2.05% on the three databases, respectively. These improvements are even higher than the benefits of combining dynamic and static texts (i.e., 1.46%, 0.69%, and 1.45%). These experimental results demonstrate that NAM identifies the noisy words in the text and effectively reduces their impact. NAM allows the model to accurately align visual and textual features, thereby enhancing the direct transfer text-to-image ReID performance. The Layer where NAM Computes S. S contains pairwise similarity scores between the features in Fv and Ft. This experiment investigates the optimal layer for obtaining Fv and Ft. The results are plotted in Fig. 3. We observe that NAM consistently improves the model\u2019s performance regardless of the layer used to provide Fv and Ft. We also notice that the adopted encoders\u2019 10-th layer yields the best overall performance.
Compared to the last encoder layer, the 10-th layer may offer more fine-grained information, facilitating more accurate similarity computation between token pairs. The Overall Masking Ratio for NAM. Our NAM method masks different text tokens with unequal probabilities, but it maintains an overall probability of p. In this experiment, we explore the optimal p value. To demonstrate NAM\u2019s advantages, we also include the results of masking tokens with equal probabilities (referred to as \u201cEM\u201d). As shown in Fig. 4, NAM consistently outperforms EM across various p values. The optimal value of p is about 0.15. Combination of NAM and MLM. MLM requires the model to predict the masked text tokens. It has proven effective and is widely applied in NLP models. Recent text-to-image ReID studies [24] confirm that the MLM loss is beneficial when the textual descriptions are manually annotated. However, our NAM does not predict the masked tokens, as the textual descriptions generated by MLLMs may be noisy. Table 2 shows that applying the MLM loss to NAM is harmful, indicating that the MLLM description noise is a crucial issue. The Data Size Impact. The dataset size is essential to training.
[Figure 3. Results of different layers for NAM to compute S. The encoders contain 12 layers in total. Best viewed with zoom-in.]
[Figure 4. Results of different overall masking ratios p for NAM. \u2018EM\u2019 represents masking all text tokens with the same probability p. Best viewed with zoom-in.]
Table 2. Results of the combination of NAM and the MLM loss.
| Method | CUHK-PEDES R1/mAP | ICFG-PEDES R1/mAP | RSTPReid R1/mAP |
| EM | 50.32/45.74 | 29.12/15.13 | 45.70/33.23 |
| NAM | 52.64/46.48 | 32.61/16.48 | 47.75/34.73 |
| NAM w/ MLM loss | 48.79/43.86 | 27.36/14.16 | 44.45/33.07 |
Table 3. Comparisons with existing pre-training datasets in the direct transfer setting.
| Pretrain Dataset | CUHK-PEDES R1/mAP | ICFG-PEDES R1/mAP | RSTPReid R1/mAP |
| None | 12.65/11.15 | 6.67/2.51 | 13.45/10.31 |
| MALS [64] (1.5 M) | 19.36/18.62 | 7.93/3.52 | 22.85/17.11 |
| LUPerson-T [42] (0.95 M) | 21.88/19.96 | 11.46/4.56 | 22.40/17.08 |
| Ours (0.1 M) | 52.64/46.48 | 32.61/16.53 | 47.75/34.73 |
| Ours (1.0 M) | 57.61/51.44 | 38.36/20.43 | 51.50/37.34 |
More pre-trained data improves the performance. We investigate the effect of training data size on the direct transfer ReID performance and summarize the results in Fig. 5. It is evident that the model\u2019s direct transfer performance steadily improves as the data amount increases. Finally, compared with the model using only 0.1 million training images, the Rank-1 performance of the model using 1.0 million training images improves by 5.75% on the challenging ICFG-PEDES database, indicating that our approach scales to large databases. 4.4. Comparisons with State-of-the-Art Methods Comparisons with Other Pre-training Datasets. MALS [64] and LUPerson-T [42] are two pre-training datasets in the field of text-to-image ReID.
MALS [64] contains 1.5 M images, with textual descriptions obtained using the BLIP model [29]. However, it does not address the diversity and noise issues in the obtained descriptions. LUPerson-T [42] contains 0.95 M images that were also sampled from the LUPerson database [17]. It utilizes the CLIP model to predict pedestrian attributes and inserts them into manually defined templates as textual descriptions. We utilize the three pre-training datasets to train the CLIP-ViT-B/16 model, incorporating the SDM loss. Finally, we evaluate the model\u2019s performance in both the direct transfer and fine-tuning settings. Comparisons on the direct transfer setting are summarized in Table 3.
[Figure 5. Training data size\u2019s impact on our methods\u2019 direct transfer ReID performance. \u20180 M\u2019 refers to directly using the original CLIP encoders.]
Table 4. Comparisons with existing pre-training datasets in the fine-tuning setting. Rows give the initialization and the source dataset used for fine-tuning; columns give the target benchmarks.
| Init Parameters | Source | CUHK-PEDES R1/mAP | ICFG-PEDES R1/mAP | RSTPReid R1/mAP |
| CLIP [38] | CUHK-PEDES | 73.48/66.21 | 43.04/22.45 | 52.55/39.97 |
| CLIP [38] | ICFG-PEDES | 33.90/31.65 | 63.83/38.37 | 47.45/36.83 |
| CLIP [38] | RSTPReid | 35.25/32.35 | 33.58/19.58 | 60.40/47.70 |
| MALS [64] (1.5 M) | CUHK-PEDES | 74.05/66.57 | 44.53/22.66 | 53.55/39.17 |
| MALS [64] (1.5 M) | ICFG-PEDES | 40.38/36.83 | 64.37/38.85 | 49.00/38.20 |
| MALS [64] (1.5 M) | RSTPReid | 38.40/34.47 | 34.11/20.82 | 61.90/48.08 |
| LUPerson-T [42] (0.95 M) | CUHK-PEDES | 74.37/66.60 | 44.30/22.67 | 53.75/38.98 |
| LUPerson-T [42] (0.95 M) | ICFG-PEDES | 35.07/32.47 | 64.50/38.22 | 48.05/38.21 |
| LUPerson-T [42] (0.95 M) | RSTPReid | 38.29/34.43 | 35.81/21.62 | 62.20/48.33 |
| Ours (0.1 M) | CUHK-PEDES | 74.64/67.44 | 46.19/24.08 | 56.15/40.84 |
| Ours (0.1 M) | ICFG-PEDES | 56.70/51.23 | 65.30/39.90 | 52.60/39.76 |
| Ours (0.1 M) | RSTPReid | 56.69/51.40 | 42.70/25.69 | 64.05/49.27 |
| Ours (1.0 M) | CUHK-PEDES | 76.82/69.55 | 49.38/26.92 | 59.60/44.70 |
| Ours (1.0 M) | ICFG-PEDES | 61.20/55.60 | 67.05/41.51 | 54.80/42.56 |
| Ours (1.0 M) | RSTPReid | 62.99/57.20 | 48.44/30.03 | 68.50/53.02 |
It is shown that the model trained on the LUPerson-MLLM dataset achieves significantly better performance, even when we only sample 0.1 M images. This is because TDE enables diverse description generation. Moreover, NAM efficiently alleviates the impact of noise in textual descriptions. Combining both techniques results in a model that exhibits exceptional transfer abilities. In comparison, neither [64] nor [42] considers the noise problem in their obtained textual descriptions. Table 4 displays the model comparisons in the fine-tuning setting. In this experiment, we adopt the IRRA method [24] in the fine-tuning stage and initialize its parameters with each of the above three pre-trained models, respectively. The fine-tuned models are evaluated in both in-domain and cross-domain text-to-image ReID scenarios. According to the results in Table 4, two conclusions can be derived. First, compared with the CLIP model [38], pre-training on any of the three pre-training datasets yields performance improvements for both in-domain and cross-domain tasks. Second, pre-training on LUPerson-MLLM yields the most remarkable improvement. For example, in the ICFG-PEDES \u2192 CUHK-PEDES setting, LUPerson-MLLM outperforms the other two models by 20.82% and 26.13% in Rank-1 accuracy, respectively. These experimental results further validate the effectiveness of our methods. Comparisons in the Traditional Evaluation Settings. Comparisons with state-of-the-art approaches are summarized in Table 5. We observe that our method achieves the best performance. With our pre-trained model parameters, the Rank-1 accuracy and mAP of IRRA are improved by 8.30% and 5.85% on the RSTPReid database, respectively. Besides, pre-training with our LUPerson-MLLM dataset is more effective than pre-training with the MALS and LUPerson-T datasets.
Table 5. Comparisons with state-of-the-art methods in the traditional evaluation settings. For each benchmark the reported metrics are R1, R5, R10, and mAP (some rows report only a subset of metrics).
| Method | Image Enc. | Text Enc. | Reported metrics |
| CMPM/C [66] | RN50 | LSTM | 49.37 79.27 43.51 65.44 74.26 |
| ViTAA [52] | RN50 | LSTM | 55.97 75.84 83.52 50.98 68.79 75.78 |
| DSSL [69] | RN50 | BERT | 59.98 80.41 87.56 32.43 55.08 63.19 |
| SSAN [15] | RN50 | LSTM | 61.37 80.15 86.73 54.23 72.63 79.53 43.50 67.80 77.15 |
| LapsCore [55] | RN50 | BERT | 63.40 87.80 |
| LBUL [54] | RN50 | BERT | 64.04 82.66 87.22 45.55 68.2 77.85 |
| SAF [32] | ViT-Base | BERT | 64.13 82.62 88.4 |
| TIPCB [10] | RN50 | BERT | 64.26 83.19 89.1 54.96 74.72 81.89 |
| CAIBC [53] | RN50 | BERT | 64.43 82.87 88.37 47.35 69.55 79.00 |
| AXM-Net [16] | RN50 | BERT | 64.44 80.52 86.77 58.70 |
| LGUR [41] | DeiT-Small | BERT | 65.25 83.12 89.00 59.02 75.32 81.56 47.95 71.85 80.25 |
| IVT [43] | ViT-Base | BERT | 65.69 85.93 91.15 56.04 73.60 80.22 46.70 70.00 78.80 |
| LCR\u00b2S [61] | RN50 | TextCNN+BERT | 67.36 84.19 89.62 59.20 57.93 76.08 82.40 38.21 54.95 76.65 84.70 40.92 |
| UniPT [42] | ViT-Base | BERT | 68.50 84.67 90.38 60.09 76.19 82.46 51.85 74.85 82.85 |
with CLIP [38] backbone:
| Han et al. [21] | CLIP-RN101 | CLIP-Xformer | 64.08 81.73 88.19 60.08 |
| IRRA [24] | CLIP-ViT | CLIP-Xformer | 73.38 89.93 93.71 66.10 63.46 80.25 85.82 38.06 60.20 81.30 88.20 47.17 |
| MALS [64] + IRRA | CLIP-ViT | CLIP-Xformer | 74.05 89.48 93.64 66.57 64.37 80.75 86.12 38.85 61.90 80.60 89.30 48.08 |
| LUPerson-T [42] + IRRA | CLIP-ViT | CLIP-Xformer | 74.37 89.51 93.97 66.60 64.50 80.24 85.74 38.22 62.20 83.30 89.75 48.33 |
| Ours (1.0 M) + IRRA | CLIP-ViT | CLIP-Xformer | 76.82 91.16 94.46 69.55 67.05 82.16 87.33 41.51 68.50 87.15 92.10 53.02 |
with ALBEF [28] backbone:
| RaSa [4] | CLIP-ViT | BERT-base | 76.51 90.29 94.25 69.38 65.28 80.40 85.12 41.29 66.90 86.50 91.35 52.31 |
| APTM [64] | Swin-B | BERT-base | 76.53 90.04 94.15 66.91 68.51 82.99 87.56 41.22 67.50 85.70 91.45 52.56 |
| Ours (1.0 M) + APTM | Swin-B | BERT-base | 78.13 91.19 94.50 68.75 69.37 83.55 88.18 42.42 69.95 87.35 92.30 54.17 |
This is because we effectively resolve the diversity and noise issues in the MLLM descriptions, facilitating more robust and discriminative feature learning. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05007v1.json b/abs_9K/test_abstract_short_2405.05007v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f9b910fe4fa012f731a1175a04b408861cba21fa --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05007v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05007v1", + "title": "HC-Mamba: Vision MAMBA with Hybrid Convolutional Techniques for Medical Image Segmentation", + "abstract": "Automatic medical image segmentation technology has the potential to expedite\npathological diagnoses, thereby enhancing the efficiency of patient care.\nHowever, medical images often have complex textures and structures, and the\nmodels often face the problem of reduced image resolution and information loss\ndue to downsampling. To address this issue, we propose HC-Mamba, a new medical\nimage segmentation model based on the modern state space model Mamba.\nSpecifically, we introduce the technique of dilated convolution in the HC-Mamba\nmodel to capture a more extensive range of contextual information without\nincreasing the computational cost by extending the perceptual field of the\nconvolution kernel. In addition, the HC-Mamba model employs depthwise separable\nconvolutions, significantly reducing the number of parameters and the\ncomputational power of the model. By combining dilated convolution and\ndepthwise separable convolutions, HC-Mamba is able to process large-scale\nmedical image data at a much lower computational cost while maintaining a high\nlevel of performance. We conduct comprehensive experiments on segmentation\ntasks including skin lesion, and conduct extensive experiments on ISIC17 and\nISIC18 to demonstrate the potential of the HC-Mamba model in medical image\nsegmentation. 
The experimental results show that HC-Mamba exhibits competitive\nperformance on all these datasets, thereby proving its effectiveness and\nusefulness in medical image segmentation.", + "authors": "Jiashu Xu", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "Automatic medical image segmentation technology has the potential to expedite\npathological diagnoses, thereby enhancing the efficiency of patient care.\nHowever, medical images often have complex textures and structures, and the\nmodels often face the problem of reduced image resolution and information loss\ndue to downsampling. To address this issue, we propose HC-Mamba, a new medical\nimage segmentation model based on the modern state space model Mamba.\nSpecifically, we introduce the technique of dilated convolution in the HC-Mamba\nmodel to capture a more extensive range of contextual information without\nincreasing the computational cost by extending the perceptual field of the\nconvolution kernel. In addition, the HC-Mamba model employs depthwise separable\nconvolutions, significantly reducing the number of parameters and the\ncomputational power of the model. By combining dilated convolution and\ndepthwise separable convolutions, HC-Mamba is able to process large-scale\nmedical image data at a much lower computational cost while maintaining a high\nlevel of performance. We conduct comprehensive experiments on segmentation\ntasks including skin lesion, and conduct extensive experiments on ISIC17 and\nISIC18 to demonstrate the potential of the HC-Mamba model in medical image\nsegmentation. 
The experimental results show that HC-Mamba exhibits competitive\nperformance on all these datasets, thereby proving its effectiveness and\nusefulness in medical image segmentation.", "main_content": "Introduction Modern medical research is inextricably linked to the utilization of various medical images[1]. Medical images are designed to provide an accurate visual representation of the structure and function of various tissues and organs within the human body. They assist medical professionals and scientific researchers in exploring the normal and abnormal conditions of patients in great detail, thereby serving clinical and research purposes. In both laboratory-based cutting-edge medical research and in the clinical setting, medical image analysis plays a pivotal role in facilitating scientific inference and diagnosis [2]. Automatic medical image segmentation technology has the potential to expedite pathological diagnoses, thereby enhancing the efficiency of patient care. In recent years, a considerable amount of research on computer-aided systems for healthcare applications has been conducted [3, 4, 5]. CNN-based and Transformer-based models have demonstrated excellent performance in a variety of vision tasks, especially in medical image segmentation. UNet[6], as a representative of CNN-based models, is known for its simple structure and scalability, and many subsequent improvements are based on this U-shaped architecture. TransUnet[7] is a pioneer in the field of Transformer-based models; it initially employs the Vision Transformer (ViT)[8] for feature extraction during the encoding phase and a Convolutional Neural Network (CNN) during the decoding phase. It demonstrates a robust capacity to capture global information. TransFuse[9] integrates the parallel architectures of ViT and CNN to simultaneously capture both local and global features.
Furthermore, Swin-UNet[10] integrates Swin Transformer[11] with a U-shaped architecture, representing the inaugural instance of a U-shaped model that is exclusively based on Transformer. arXiv:2405.05007v1 [eess.IV] 8 May 2024 A PREPRINT MAY 9, 2024 However, although existing models have achieved some success in feature extraction, they still face the problem of reduced image resolution and information loss due to downsampling when dealing with medical images with complex textures and structures. To address this issue, Yu F. and Koltun V.[12] proposed the technique of dilated convolution. Dilated convolution allows the model to capture a wider range of contextual information without increasing the computational cost by extending the receptive field of the convolution kernel. Because it has the ability to enhance the perception of structures at different scales without losing image details, it is especially suitable for medical images. However, since dilated convolution enlarges the receptive field by inserting \"0\" entries between the elements of the convolution kernel, the captured features may not be coherent or accurate in some cases. In recent times, studies based on state space models (SSMs) have attracted considerable interest from researchers [13, 15, 14]. Building on the findings of classical SSM research[?], modern SSMs (e.g., Mamba[15]) not only establish long-range dependencies but also exhibit linear complexity with respect to input size. In particular, U-Mamba[16] demonstrates its potential by combining SSM with CNN for the first time in the context of medical image segmentation tasks. Inspired by this, we propose HC-Mamba, a model based on SSM, which integrates a variety of convolution methods optimized for medical images, in order to further demonstrate its potential in the task of medical image segmentation. We introduce the technique of dilated convolution in the HC-Mamba model.
By feeding the features generated by the dilated convolution into the SSM, the state transition capability of the SSM can be utilized to enhance the spatial correlation between the features, thus compensating for the discontinuities introduced by the dilation gaps. In addition, the HC-Mamba model employs depthwise separable convolutions[17], a convolution method that decomposes the traditional convolution operation into two parts, depthwise convolution and pointwise convolution, which significantly reduces the number of parameters and the computational cost of the model. By combining dilated convolutions and depthwise separable convolutions, HC-Mamba is able to process large-scale medical image data at a much lower computational cost while maintaining a high level of performance, which is particularly important for real-time medical image processing and large-scale medical data analysis. We conducted comprehensive experiments on segmentation tasks including organ segmentation, skin lesion and brain tumor images, and conducted extensive experiments on ISIC17 and ISIC18[18] to demonstrate the potential of the HC-Mamba model in medical image segmentation. The experimental results show that HC-Mamba exhibits competitive performance on all these datasets, thereby proving its effectiveness and usefulness in medical image segmentation. In conclusion, our contribution to the field can be summarized as follows:
\u2022 We propose a hybrid convolution Mamba model (HC-Mamba) for medical image segmentation, which combines a variety of convolution methods optimized for medical images to enlarge the receptive field of the model and reduce its number of parameters.
\u2022 We propose the HC-SSM module for enhancing the model\u2019s ability to extract features.
\u2022 We conducted extensive performance evaluations of the proposed model. The results show that our model achieves high accuracy (94.84%), mIoU (80.60%), and DSC (89.25%).
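As a rough illustration of why these two convolution choices are cheap, the following sketch computes the effective receptive field of a dilated kernel and compares the parameter counts of a standard versus a depthwise separable convolution; the helper names and the example channel sizes are ours, not from the paper.

```python
def dilated_receptive(k, d):
    """Effective kernel extent of a k x k convolution with dilation rate d."""
    return d * (k - 1) + 1

def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise (one k x k filter per input channel) plus pointwise (1 x 1)."""
    return k * k * c_in + c_in * c_out
```

For a 3x3 kernel with dilation 2 the effective extent is 5, and with 64 input/output channels the depthwise separable variant uses 4,672 weights instead of 36,864, roughly an 8x reduction.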
2 Methods

2.1 Preliminaries

Modern models based on State Space Models (SSMs), particularly the Structured State Space Sequence model (S4) and the Mamba model, build on a classical continuous system. The system maps a one-dimensional input function or sequence $x(t) \in \mathbb{R}$ to an output $y(t) \in \mathbb{R}$ via an implicit latent state $h(t) \in \mathbb{R}^N$, as shown in Equation 1:

$h'(t) = A h(t) + B x(t), \quad y(t) = C h(t)$  (1)

where $A \in \mathbb{R}^{N \times N}$ is the state matrix, while $B \in \mathbb{R}^{N \times 1}$ and $C \in \mathbb{R}^{N \times 1}$ represent the projection parameters. The process is shown in Figure 1. In the figure, the symbol D represents a skip connection, which can be understood as a transformed residual connection. Consequently, the portion of the graph that excludes D is typically designated as the SSM.

Figure 1: SSM (state space model) process diagram

To adapt these continuous systems for deep learning applications, S4 and Mamba discretize the system. Specifically, a time scale parameter, or step size $\Delta$, is introduced, and fixed discretization rules such as the Zero-Order Hold (ZOH) are used to transform $A$ and $B$ into discrete parameters $\hat{A}$ and $\bar{B}$:

$\hat{A} = \exp(\Delta A), \quad \bar{B} = (\Delta A)^{-1}(\exp(\Delta A) - I)\,\Delta B$  (2)

After discretization, the state space model can be computed either through the linear recursion

$h_k = \hat{A} h_{k-1} + \bar{B} x_k, \quad y_k = C h_k$  (3)

or through the global convolution

$\bar{K} = (C\bar{B},\, C\hat{A}\bar{B},\, \ldots,\, C\hat{A}^{L-1}\bar{B}), \quad y = x * \bar{K}$  (4)

where $\bar{K} \in \mathbb{R}^L$ represents a structured convolution kernel, and $L$ denotes the length of the input sequence $x$.

2.2 Model structure

The structure of HC-Mamba comprises a patch embedding layer, HC-SSM blocks and patch merging layers. The model architecture is shown in Figure 2(a).

Figure 2: (a) Overall structure of HC-Mamba.
(b) Overall structure of the HC-SSM Block

In HC-Mamba, the patch embedding layer first partitions the input image $x \in \mathbb{R}^{H \times W \times 3}$ into non-overlapping patches of size 4\u00d74. This operation maps the dimensions of the image to C dimensions (typically C = 96), resulting in an embedded image $x' \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C}$. Subsequently, $x'$ undergoes layer normalization to standardize the embedded image before entering the main backbone of HC-Mamba. The backbone consists of four stages. In particular, after the output of each of the first three stages, a merging layer is used to reduce the height and width of the input features while increasing the number of channels. We employ [2, 4, 2, 2] HC-SSM blocks in the four stages, with the stages having [C, 2C, 4C, 8C] channels respectively.

2.2.1 SS2D module

The SS2D module is the core of the HC-SSM block and includes three key components: scan expansion, the S6 block, and scan merging. Scan expansion decomposes the input image into independent sequences along four directions (up, down, left and right), a step that ensures wide spatial coverage of information and achieves multidirectional feature capture. Next, the S6 block uses a selection mechanism to impose choices on the parameters of the state space model in order to accurately identify and extract the useful information while filtering out the irrelevant parts. Specifically, the block takes features of shape [B, L, D] as input, where B is the batch size, L is the sequence length and D is the feature dimension. The features are first transformed through a linear layer, after which the update and output equations of the state space model are applied to produce the final output features. Finally, a scan-merging operation reconfigures these transformed sequences to produce an output image that matches the dimensions of the original input image.
Through this series of operations, the SS2D module provides powerful feature extraction and processing capabilities for the HC-SSM block.

2.2.2 HC-SSM Block

The HC-SSM block is the core module of HC-Mamba, as shown in Figure 2(b). We propose a two-branch feature extraction module based on SS2D. First, the module input is split into two sub-inputs of equal size using a channel split operation. Then, the two sub-inputs are fed into two branch modules, the SSM branch and the HC-Conv branch, respectively. In the SSM branch, the input undergoes layer normalization and then enters the SS2D module, where the input features are first passed through a linear mapping for dimensionality enhancement, followed by a depthwise separable convolution layer, which preserves the dimensionality and at the same time improves the localized processing of the features by grouping them. Then, the SiLU activation function is applied, introducing a nonlinear transformation to enrich the model\u2019s expressiveness; finally, the processed features are remapped to the original feature space to obtain the output of the SSM branch. In the HC-Conv branch, we introduce dilated convolution to expand the receptive field of the convolution kernel and capture a wider range of contextual information. This technique is particularly suitable for medical images, as it improves the model\u2019s ability to perceive structures at different scales of the image without losing image detail. We use a dilation strategy with rates of 1, 2, 3, 1 to avoid the gridding effect that arises from discontinuous sampling. Compared with a dilation rate of 2, 2, 2, the 1, 2, 3 strategy ensures the continuity of the receptive field; an example is shown in Figure 3.
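The gridding effect can be illustrated with a small 1-D sketch (our own illustration, not the paper's code): stacking three 3-tap dilated convolutions with the sawtooth rates 1, 2, 3 reaches every input offset inside the receptive field, whereas rates 2, 2, 2 only ever touch even offsets:

```python
# Input offsets reachable through stacked 1-D dilated convolutions
# (kernel size 3). A gap-free offset set means no gridding effect.
def reachable_offsets(dilations, k=3):
    offsets = {0}
    for d in dilations:
        offsets = {o + d * t for o in offsets for t in range(-(k // 2), k // 2 + 1)}
    return offsets

print(sorted(reachable_offsets([1, 2, 3])))  # -6..6, contiguous
print(sorted(reachable_offsets([2, 2, 2])))  # only even offsets: gridding
```

Both stacks span the same receptive field, but only the sawtooth schedule covers it without holes.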
Figure 3: Comparison between dilation rates of 1, 2, 3 (left) and dilation rates of 2, 2, 2 (right)

In comparison with three layers of ordinary convolution, a larger receptive field can be obtained; an example is shown in Figure 4.

Figure 4: Receptive field obtained using three layers of ordinary convolution

Meanwhile, the sawtooth-like dilation rate strategy (i.e., dilation rates of 1, 2, 3, 1) allows the refocusing of local features after multi-scale feature extraction and helps to maintain the spatial continuity of features, while the smaller dilation rate at the end of the sequence allows the model to refocus on smaller regions that may contain important information.

Finally, we merge the outputs of the two branches along the channel dimension of the feature map and use a parameter-free lightweight operation, the channel shuffle operation, to facilitate information interaction between the channels of the two sub-inputs.

3 Experiments

3.1 Datasets

We conduct comprehensive experiments on HC-Mamba for medical image segmentation tasks. Specifically, we evaluate the performance of HC-Mamba on the ISIC17 and ISIC18 datasets.

\u2022 ISIC2017: The ISIC2017 dataset contains three categories of diseases, melanoma, seborrheic keratosis and benign nevus, with 2,750 images, ground truth and category labels. There are 2,000 images in the training set, 150 images in the validation set and 600 images in the test set. The color depth of the skin disease images is 24 bits, and the image sizes range from 767\u00d7576 to 6,621\u00d74,441. The validation and test sets also include unlabeled superpixel images. The category labels are stored in tables, and the dataset needs to be preprocessed before training the model.
\u2022 ISIC2018: The ISIC2018 dataset contains different numbers of disease images for classification and segmentation. For the segmentation task, a total of 2,594 images were used as the training set, and 100 and 1,000 images were used as the validation and test sets, respectively. For the classification task, a total of 12,500 images were included, of which the training set contained 10,015 images of 7 categories of diseases, namely actinic keratoses (327), basal cell carcinoma (514), benign keratoses (1,099), dermatofibromas (115), melanomas (1,113), melanocytic naevi (6,705) and vascular skin lesions (142). The seven classes of images in the classification task dataset are mixed in the same folder, and the labels are stored in tables that require preprocessing.

3.2 Results

We compare HC-Mamba with several state-of-the-art models and some recent Mamba-based models, presenting the experimental results in Table 1. In order to demonstrate that HC-Mamba\u2019s potential in medical image segmentation tasks benefits directly from the SSM, we did not use any pre-training strategies. For the ISIC2017 and ISIC2018 datasets, HC-Mamba performs well on mIoU and Dice compared with other models. Specifically, HC-Mamba has a 1.46% and 1% advantage over MedMamba on mIoU and Dice, respectively, while it has a 2.74% and 1.7% advantage over UNet on mIoU and Dice, respectively.

Table 1: Comparative experimental results on the ISIC17 and ISIC18 datasets. (Bold indicates the best.)
Dataset | Model | mIoU(%)\u2191 | DSC(%)\u2191 | Acc(%)\u2191 | Spe(%)\u2191 | Sen(%)\u2191
ISIC17 | UNet[6] | 76.98 | 85.99 | 94.65 | 97.43 | 86.82
ISIC17 | UTNetV2[22] | 76.35 | 86.23 | 94.84 | 98.05 | 84.85
ISIC17 | TransFuse[9] | 77.21 | 86.40 | 95.17 | 97.98 | 86.14
ISIC17 | MALUNet[23] | 76.78 | 87.13 | 95.18 | 98.47 | 84.78
ISIC17 | VM-UNet[20] | 77.59 | 87.03 | 95.40 | 97.47 | 86.13
ISIC17 | MedMamba[21] | 78.82 | 88.15 | 95.01 | 97.50 | 86.62
ISIC17 | HC-Mamba | 79.27 | 88.18 | 95.17 | 97.47 | 86.99
ISIC18 | UNet[6] | 77.86 | 87.55 | 94.05 | 96.69 | 85.86
ISIC18 | UNet++[24] | 76.31 | 85.83 | 94.02 | 95.75 | 88.65
ISIC18 | Att-UNet[25] | 76.43 | 86.91 | 94.13 | 96.23 | 87.60
ISIC18 | SANet[26] | 77.52 | 86.59 | 94.39 | 95.97 | 89.46
ISIC18 | VM-UNet[20] | 77.95 | 87.61 | 94.13 | 96.99 | 85.23
ISIC18 | MedMamba[21] | 79.13 | 88.35 | 94.23 | 95.68 | 89.74
ISIC18 | HC-Mamba | 80.60 | 89.25 | 94.84 | 97.08 | 87.90

3.3 Ablation experiments

We compare HC-Mamba with and without dilated convolution and depthwise separable convolution (DW convolution), presenting the experimental results in Table 2. Compared with the model without dilated convolution and depthwise separable convolution, HC-Mamba has only 13.88M parameters, a reduction of nearly 50%, while maintaining the same high level of performance.

Table 2: Ablation studies on dilated convolution and depthwise separable convolutions.

Convolution Method | mIoU(%)\u2191 | DSC(%)\u2191 | Param. Count(M)\u2193
Neither | 77.85 | 87.51 | 27.43
Dilated convolution | 78.85 | 88.17 | 24.68
DW convolution | 77.95 | 87.61 | 13.06
Both | 80.60 | 89.25 | 13.88

4 Discussion

We propose HC-Mamba, an SSM model that integrates multiple convolution methods optimized for medical images. Its performance on medical image segmentation tasks is superior to some of the current state-of-the-art models and some of the recent Mamba-based models. We introduce the technique of dilated convolution in the HC-Mamba model. Dilated convolution enables the model to capture a more extensive range of contextual information, without increasing the computational cost, by extending the receptive field of the convolution kernel.
This technique is particularly well suited to medical images because it enhances the model\u2019s ability to perceive structures at different scales of the image without losing image detail. Concurrently, by feeding the features generated by the dilated convolution into the SSM, the state transition capability of the SSM can be utilized to enhance the spatial correlation between the features, thus compensating for the discontinuities introduced by the dilation; this is one of the reasons for the excellent performance of HC-Mamba on medical images. In addition, the HC-Mamba model employs depthwise separable convolutions, a convolution method that decomposes the traditional convolution operation into two parts, depthwise convolution and pointwise convolution, significantly reducing the number of parameters and the computational cost of the model. By combining dilated convolutions and depthwise separable convolutions, HC-Mamba is able to process large-scale medical image data at a much lower computational cost while maintaining a high level of performance. Compared with existing Mamba-based segmentation models, such as VM-UNet, which has nearly 30M parameters, and MedMamba, which has nearly 25M parameters, HC-Mamba has only 13M parameters, a reduction of nearly 50%, while maintaining the same high level of performance, which provides a better basis for deploying it on lower-end devices.
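As a sanity check on the preliminaries in Section 2.1, the sketch below verifies numerically that the ZOH-discretized linear recursion (Eq. 3) and the global convolution (Eq. 4) compute the same output. It assumes a diagonal state matrix (as in S4/Mamba, so the matrix exponential is elementwise) and arbitrary toy dimensions; it is illustrative only, not the paper's implementation:

```python
import numpy as np

# Toy SSM with a diagonal state matrix; all shapes and values are
# illustrative assumptions, not taken from the paper.
N, L, dt = 4, 16, 0.1
rng = np.random.default_rng(0)
A = -np.abs(rng.standard_normal(N)) - 0.1   # stable diagonal state matrix
B = rng.standard_normal(N)
C = rng.standard_normal(N)
x = rng.standard_normal(L)

# Zero-Order Hold discretization (Eq. 2), elementwise for diagonal A:
A_bar = np.exp(dt * A)
B_bar = (A_bar - 1.0) / A * B               # (dt A)^-1 (exp(dt A) - I) dt B

# Linear recursion (Eq. 3): h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k
h = np.zeros(N)
y_rec = np.empty(L)
for k in range(L):
    h = A_bar * h + B_bar * x[k]
    y_rec[k] = C @ h

# Global convolution (Eq. 4) with kernel K = (C B_bar, C A_bar B_bar, ...)
K = np.array([C @ (A_bar**k * B_bar) for k in range(L)])
y_conv = np.array([sum(K[j] * x[k - j] for j in range(k + 1)) for k in range(L)])

print(np.allclose(y_rec, y_conv))  # True: both views agree
```

The recursive view supports O(L) streaming inference, while the convolutional view allows parallel computation over the whole sequence; the check confirms the two formulations are equivalent.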
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05039v1.json b/abs_9K/test_abstract_short_2405.05039v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e3ce5c17729f68f1f405fbcc9d55f389dc6eea53 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05039v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05039v1", + "title": "Reviewing Intelligent Cinematography: AI research for camera-based video production", + "abstract": "This paper offers a comprehensive review of artificial intelligence (AI)\nresearch in the context of real camera content acquisition for entertainment\npurposes and is aimed at both researchers and cinematographers. Considering the\nbreadth of computer vision research and the lack of review papers tied to\nintelligent cinematography (IC), this review introduces a holistic view of the\nIC landscape while providing the technical insight for experts across\ndisciplines. We preface the main discussion with technical background on\ngenerative AI, object detection, automated camera calibration and 3-D content\nacquisition, and link explanatory articles to assist non-technical readers. The\nmain discussion categorizes work by four production types: General Production,\nVirtual Production, Live Production and Aerial Production. Note that for\nVirtual Production we do not discuss research relating to virtual content\nacquisition, including work on automated video generation, like Stable\nDiffusion. Within each section, we (1) sub-classify work by the technical field\nof research - reflected by the subsections, and (2) evaluate the trends and\nchallenges w.r.t. each type of production. In the final chapter, we present\nour concluding remarks on the greater scope of IC research and outline work\nthat we believe has significant potential to influence the whole industry. 
We\nfind that work relating to virtual production has the greatest potential to\nimpact other mediums of production, driven by the growing interest in LED\nvolumes/stages for in-camera virtual effects (ICVFX) and automated 3-D capture\nfor a virtual modelling of real world scenes and actors. This is the first\npiece of literature to offer a structured and comprehensive examination of IC\nresearch. Consequently, we address ethical and legal concerns regarding the use\nof creative AI involving artists, actors and the general public, in the...", + "authors": "Adrian Azzarelli, Nantheera Anantrasirichai, David R Bull", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "This paper offers a comprehensive review of artificial intelligence (AI)\nresearch in the context of real camera content acquisition for entertainment\npurposes and is aimed at both researchers and cinematographers. Considering the\nbreadth of computer vision research and the lack of review papers tied to\nintelligent cinematography (IC), this review introduces a holistic view of the\nIC landscape while providing the technical insight for experts across\ndisciplines. We preface the main discussion with technical background on\ngenerative AI, object detection, automated camera calibration and 3-D content\nacquisition, and link explanatory articles to assist non-technical readers. The\nmain discussion categorizes work by four production types: General Production,\nVirtual Production, Live Production and Aerial Production. Note that for\nVirtual Production we do not discuss research relating to virtual content\nacquisition, including work on automated video generation, like Stable\nDiffusion. 
Within each section, we (1) sub-classify work by the technical field\nof research - reflected by the subsections, and (2) evaluate the trends and\nchallenges w.r.t. each type of production. In the final chapter, we present\nour concluding remarks on the greater scope of IC research and outline work\nthat we believe has significant potential to influence the whole industry. We\nfind that work relating to virtual production has the greatest potential to\nimpact other mediums of production, driven by the growing interest in LED\nvolumes/stages for in-camera virtual effects (ICVFX) and automated 3-D capture\nfor a virtual modelling of real world scenes and actors. This is the first\npiece of literature to offer a structured and comprehensive examination of IC\nresearch. Consequently, we address ethical and legal concerns regarding the use\nof creative AI involving artists, actors and the general public, in the...", + "main_content": "1 Introduction 5 1.1 Scope and related work . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 Technical Background 7 2.1 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . 7 2.2 Generative AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.3 Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.4 Camera Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.5 Automated 3-D Capture . . . . . . . . . . . . . . . . . . . . . . . . . . 13 3 Intelligent Cinematography in Production 17 3.1 General Production Applications . . . . . . . . . . . . . . . . . . . . . 17 3.1.1 Computational Language Structures for Scene Analysis, Automated Labelling Schemes and Camera Management . . . . . . 17 3.1.2 Directive Assistants . . . . . . . . . . . . . . . . . . . . . . . . 19 3.1.3 Workflow Optimization and Automated Shot Composition . . . 21 3.1.4 Challenges and Future Work . . . . . . . . . . . . . . . . . . . 23 3.2 Virtual Production . . . . . . . . . . 
. . . . . . . . . . . . . . . 25 3.2.1 ICVFX and LED Volumes . . . . . . . . . . . . . . . . . . . . . 25 3.2.2 Camera Calibration and Localization . . . . . . . . . . . . . . . 29 3.2.3 Neural 3-D Representations of Dynamic Scenes . . . . . . . . . 30 3.2.4 Neural 3-D Representations of Humans . . . . . . . . . . . . . 32 3.2.5 Challenges and Future Work . . . . . . . . . . . . . . . . . . . 33 3.3 Live Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.3.1 Human Pose Estimation . . . . . . . . . . . . . . . . . . . . . . 34 3.3.2 Object Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.3.3 Challenges and Future Work . . . . . . . . . . . . . . . . . . . 38 3.4 Aerial Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.4.1 Automated Single UAV control . . . . . . . . . . . . . . . . . . 38 3.4.2 Automated Multiple UAV control . . . . . . . . . . . . . . . . 40 3.4.3 Challenges and Future Work . . . . . . . . . . . . . . . . . . . 41 4 General Remarks 42 4.1 Social Responsibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3 \fList of Figures 1 Example diagram of a CNN for image classification . . . . . . . . . . . 7 2 Text, image and video frame input sequences are encoded into a feature/latent representation and translated/transformed into alternative representations that are decoded to output other tangible representations such as text, images and video frames. si \u2208Sj illustrates how a sequence may be constructed from a set of input data. \u03ben encodes the input sequence into a representation, n, and a function, F, translates the representation into m. \u03be\u2212 m1 decodes m back into a sequence. \u03d5k is a pre-trained network that can be used during the encoding, decoding and/or transformation stage. . . . . . . . . . . . . . . . . . . . . . . . 9 3 A visualisation of the stages of the YOLO process with an example of how the IoU is determined. . . . . . . . . . . . . . 
. . . . . . . . . . . 12 4 Simplified representations of current approaches to automated 3-D capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 5 Method for neural rendering of 3-D images presented in Yu et al (2021) 16 6 A three stage pipeline which uses idioms defined by camera and actor states as inputs to a DCCL compiler. Selected frames are evaluated w.r.t to the \u201cfilm tree\u201d . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 7 A diagram of the CamDroid architecture, Drucker and Zeltzer (1995) . 21 8 Illustration of the pipeline for text2animation in Yu et al (2022), where cj is the camera selected for capturing action ai. . . . . . . . . . . . . 23 9 IBL maps an environment image onto a 3-D primitive (e.g. a sphere). The IBL environment is treated as a light source and using ray tracing a model is composited. . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 10 Illustrating ICVFX compositing approaches . . . . . . . . . . . . . . . 27 11 Results from images taken in natural lighting (left) and configuring the LED panel for lighting (right) provided by LeGendre et al (2022) . . . 28 12 Extrinsic parameters are shown in grey. Intrinsic parameters are shown in white. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 13 Per-pixel binary joint verification used for identifying pixels for joint classification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 List of Tables 1 Supplementary Online Resources containing articles, documentation and video-based discussions. . . . . . . . . . . . . . . . . . . . . . . . . 29 2 Overview of Dynamic NeRFs: Speed measures the training time where D is days, h is hours and m is minutes. Commercial GPU indicates whether the authors tested with commercially available GPUs. Explicit indicates where an explicit representation was used to boost training and inference speed. Ancestors indicates the prior work influencing each method. . . . . . . . . 
. . . . . . . . . . . . . . . . . . 31

1 Introduction

Intelligent cinematography (IC) research leverages artificial intelligence (AI) to assist camera-based tasks in media production. For pre-production, this concerns research involved with camera planning, for example using a script Riedl et al (2010); Dhahir et al (2021) or user input Christianson et al (1996); Wu and Christie (2016). For production, this includes work on automated camera control Lim and Sinha (2015); Coaguila et al (2016); Mademlis et al (2019); Bonatti et al (2020); Yan et al (2022), methods for content acquisition Mildenhall et al (2021); Pumarola et al (2021); Zhao et al (2022); LeGendre et al (2022) and assisting directors and camera operators Drucker and Zeltzer (1995); de Lima et al (2009); Hu (2021); Hossain and Little (2018); Cao et al (2017); Bridgeman et al (2019). For post-production, this involves work that avoids the need for post-production processing, such as through in-camera virtual effects (ICVFX) Sharma (2021); Kavakli and Cremona (2022); Finance and Zwerman (2015); Okun et al (2020), or for enhancing or diminishing camera artefacts such as light-water reflections Wu et al (2022a). For clarity, our definition of IC encompasses research focused on real content acquisition. This excludes research aimed at assisting post-production or automated content generation using diffusion models. IC consequently influences the worlds of cinema, gaming and televised broadcasting. The symbiotic relationship between research and video production has been historically important to the progression of video entertainment. Yet, as this relationship evolves, the types of content, styles of production, and thus fields of research, have branched into separate domains. For example, we find work that automates manual capture processes Dhahir et al (2021); LeGendre et al (2022) as well as work that elevates abstract goals like enhancing audience immersion Hu (2021); Wang and Li (2013).
This variety of objectives makes it challenging to establish a unified definition of the IC research landscape, and thus limits researchers and cinematographers who wish to understand its general state and trends. Moreover, there is an absence of both general review papers and more specialised articles. Hence, in this paper, we attempt to overcome these challenges by defining and presenting a structured review of the state and future of IC research. This review paper categorizes work by production medium. Similar to the definition of an artistic medium, this groups research by the relevant tools and practices involved with the various production types, rather than grouping work by objective. For instance, live sport and event broadcasting productions share many cinematographic practices, so they are grouped under the same heading, Live Production. Doing so allows us to: (1) further sub-categorize according to related AI research fields, which provides a holistic view of the IC landscape while maintaining relevance to each production medium; (2) include research topics which, while not targeted at creative video production, have direct relevance to it; and (3) offer a view of the research landscape that cinematographers can refer and relate to. The topics covered for each medium are outlined below. 1. General Production: Work on human-computer interaction for camera control, automated directive assistance and workflow optimization. 2. Virtual Production: Work on ICVFX and virtual stage production, and automated 3-D capture of dynamic scenes and human actors. 3. Live Production: Work on real-time video correction, object tracking and human pose estimation for live broadcasting. 4. Aerial Production: Work on single-UAV and multi-UAV/swarm cinematography, covering topics such as real-time camera planning based on visual aesthetics and safety-first solutions in cluttered filming environments.
1.1 Scope and related work

The focus of this paper is camera-oriented research; hence we do not cover popular creative AI topics such as large language models (LLMs) for script generation or generative AI for video generation. We acknowledge, however, that these may become directly involved in cinematography in the future. For instance, LLMs may prove useful for text-driven camera control or workflow optimisation. Moreover, the rapid progression of generative AI models could deliver high-quality video-based methods that offer an alternative solution to automated 3-D capture. As the relevant literature does not yet consider camera-based objectives tied to these technologies, we do not investigate these paradigms in the main body. Instead, we provide background on generative AI and discuss future use cases of these technologies within the relevant subsections of the main body. There are few articles that review AI in the context of cinematography, and those that do exist focus on special cases. For example, Christie et al (2008) reviews camera control for computer graphics. This paradigm involves viewpoint computation and motion planning systems, and considers technical constraints such as direct and assisted control, and cinematic constraints such as view composition and pose. Similarly, Chen and Carr (2014) review autonomous methods for camera planning, control and selection for multi-camera systems. Furthermore, Qi et al (2010) review camera calibration techniques. Other works look at use cases for IC, such as Galvane et al (2015) for video editing, Young et al (2013) and Sharma et al (2023) for gaming and animation, and Mademlis et al (2018a) for unmanned aerial vehicle (UAV) control. Our survey aims to bridge related works to provide a holistic view of the IC landscape so that the computing and cinematographic communities mutually benefit from the discussion. The structure of this paper is as follows.
In Section 2 we introduce the technical background on AI research commonly found in IC. The main body is contained in Section 3, where the order of the subsections reflects the list above. For each subsection, we present an overview of the current paradigms and approaches and conclude with a discussion of the current challenges and future work. Lastly, in Section 4, we offer our concluding remarks and outline the research that we believe has the most potential moving forward. As cinematography typically involves capturing real people, we append a section on social responsibility to our conclusion to highlight the importance of ethically driven research and content capture. We continue to emphasize the importance of awareness, responsibility and accountability regarding code sharing throughout the paper.

2 Technical Background

2.1 Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are a class of neural network that extracts the features of an image by filtering over a neighbourhood of pixels with learnable filter coefficients, learning common patterns within a set of training images. Regarding IC, CNNs provide backbone architectures for visual generative AI (Subsection 2.2) and object detection (Subsection 2.3). A common application of CNNs is image classification, where the basic structure is shown in Figure 1. First, an image is input to the first layer through N channels, where each channel represents a different representation of the image. For example, an RGB image has three values (red, green and blue) for each pixel, thus N = 3. The outputs of this layer then feed into the hidden layers. The outputs of each layer are called feature maps. The last hidden layer transforms the feature maps into feature vectors, so that the output layer can interpret a class probability from the feature vector. An image is then classified if its probability falls within a predefined confidence interval.
In the case of Figure 1, if our confidence interval is 0.5, we classify the image as containing dogs.

Fig. 1: Example diagram of a CNN for image classification

The hidden layers of a CNN can be flexibly configured. The four components generally employed are the convolution layer, activation function, pooling layer and fully-connected layer: 1. A convolution layer applies an n \u00d7 m kernel (filter) over each pixel neighbourhood from an input channel. To deal with pixels on the perimeter of an image, we generally apply padding (the type of padding can vary). Dilated Yu and Koltun (2015) or deformable convolutions Dai et al (2017) are also used to modify the structure of the kernel for special cases. In addition, a stride can be applied to skip some pixels, i.e. the filter applies to pixels located s positions from each other (s = 1 means all pixels are filtered and s = 2 means every second pixel in a row and column is taken). 2. An activation function determines how the features of a layer are transformed into an output. To be used in a hidden layer, the activation function needs to be differentiable and nonlinear; otherwise we would not be able to calculate the gradients of parameters during a backward pass of the network. Rectified linear unit (ReLU) and LeakyReLU layers are frequently used as they are differentiable nonlinear functions, with the added benefit that they are not susceptible to vanishing gradients (where gradients of parameters become too small to make meaningful changes to the model\u2019s parameters). Several types of activation function exist, as discussed in Baheti (2022). 3. A pooling layer effectively down-samples features, retaining the more important features from a set of feature maps. 4. A fully-connected layer is one where each node depends on all the outputs from the previous layer. This allows the network to introduce a wide range of dependencies between parameters.
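The kernel, padding and stride mechanics described in items 1 and 2 can be sketched with a minimal single-channel 2-D convolution (an illustration only; real CNN layers also sum over input channels, add a bias and learn the filter coefficients):

```python
import numpy as np

# Minimal single-channel 2-D convolution illustrating kernel, stride and
# zero padding; the input image and mean filter are illustrative.
def conv2d(img, kernel, stride=1, pad=0):
    img = np.pad(img, pad)                      # zero padding on all sides
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # dot product with the filter
    return out

img = np.arange(25.0).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                       # 3x3 mean filter
print(conv2d(img, k, stride=1, pad=1).shape)    # (5, 5): "same" output size
print(conv2d(img, k, stride=2, pad=0).shape)    # (2, 2): stride 2 downsamples
```

With pad=1 the output keeps the input's spatial size, while stride=2 skips every second position, halving the output resolution, which is the behaviour the stride discussion above describes.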
2.2 Generative AI

Generative AI concerns various domains, for instance text-to-video synthesis Balaji et al (2019); Abohwo (2023); Li et al (2018) and novel view synthesis for dynamic scenes Li et al (2022); Pumarola et al (2021); Fridovich-Keil et al (2023). As the novel view synthesis problem landscape is broad and especially relevant, we reserve that discussion for a later subsection. In this subsection, we focus on generative AI involving 2-D images and video. The purpose of generative AI varies drastically depending on its application. For example, it can involve learning representations for a specific medium, e.g. text or images, and translating/transforming the representation into alternative representations to produce related content, e.g. images generated by prompts and vice-versa. Terms such as text-to-image or image-to-text generation fall under the broader class referred to as sequence-to-sequence translation Venugopalan et al (2015); Xu et al (2018); Pasunuru and Bansal (2017); Fajtl et al (2019). Figure 2 demonstrates how abstractions such as text, images and video frames can be sequenced, and simplifies an encoder-decoder pipeline that uses pre-trained networks to inform the encoding, decoding and/or transformation process. Overall, there are four prominent architectures that contribute separately towards image and video synthesis: (1) encoder-decoder architectures, (2) generative adversarial networks (GANs) Creswell et al (2018); Goodfellow et al (2020); Karras et al (2020), (3) transformers Vaswani et al (2017) and (4) diffusion models Ho et al (2020); Zhang et al (2023); Dhariwal and Nichol (2021); these are discussed below. In a GAN (see Brownlee (2019)), a random vector is fed into a generator to produce a synthetic representation. Subsequently, a comparison is made between the synthetic and real image, using a discriminator.
The output of the discriminator indicates the level of similarity between the real and generated content, which is subsequently used in a loss function to train both the generator and discriminator. With knowledge of the vectors that successfully generate realistic content, similar vectors can be constructed to produce similar content.

Fig. 2: Text, image and video frame input sequences are encoded into a feature/latent representation and translated/transformed into alternative representations that are decoded to output other tangible representations such as text, images and video frames. si ∈ Sj illustrates how a sequence may be constructed from a set of input data. ξn encodes the input sequence into a representation, n; a function, F, translates the representation into m; and ξm⁻¹ decodes m back into a sequence. ϕk is a pre-trained network that can be used during the encoding, decoding and/or transformation stage.

This introduces the idea of a latent space. Consider a 4-D input vector, v = [0.2, 0.1, 0.99, 0.27], with values between 0 and 1, that produces an image of a shoe. The values, also called latent variables, are used to infer various features. For instance, the first dimension, v0 = 0.2, may be responsible for modelling features relating to shoe straps and laces, where v0 = 0 returns a laced shoe, v0 = 1 returns a strapped shoe and v0 = 0.5 returns a shoe with a strap and a lace. Pushing the analogy further, consider the relationship between different latent variables in our input vector. For example, if v1 represents low-top or high-top shoes, then the combination of v0 and v1 will produce high-tops with laces that are tied at the tongue of the shoe and high-tops with laces that are tied at the throat of the shoe. This works so long as the training data set is sufficiently diverse for the model to learn these various features.
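The latent-space intuition above can be sketched with simple linear interpolation between latent vectors. The `generator` referenced in the comment is hypothetical, and the vectors merely mirror the shoe example in the text:

```python
# Sketch: linear interpolation in a GAN's latent space. The vectors
# mirror the shoe example above: v[0] might control laces vs. straps,
# v[1] low-top vs. high-top. The generator itself is hypothetical.

def lerp(v_a, v_b, alpha):
    """Blend two latent vectors; alpha=0 returns v_a, alpha=1 returns v_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(v_a, v_b)]

laced_low_top = [0.0, 0.0, 0.99, 0.27]
strapped_high_top = [1.0, 1.0, 0.99, 0.27]

# Halfway between the two: a shoe with both a strap and a lace,
# between low- and high-top (if the training data supports it).
midpoint = lerp(laced_low_top, strapped_high_top, 0.5)
# image = generator(midpoint)   # hypothetical generator call
```

Interpolating smoothly between two known latent vectors is one common way to probe whether a trained generator has learned a meaningful latent space.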
An acknowledged limitation of this type of GAN is the correlation between the dimensionality of the input vectors (which increases with the number of features/objects to be modelled) and computation cost (including training time, hardware and energy consumed). To overcome this problem, conditional GANs Wang et al (2018); Odena et al (2017); Sola and Gera (2023); Kang et al (2023) condition the random input vectors on context-specific cues, such as object classes or scene descriptors. This is accomplished by inputting the additional information into the generator and discriminator, thus facilitating the generation of highly diverse content. For example, to generate images of wild animals, the additional information could be the class of animal (e.g. frog, bear and lion). For sequence-to-sequence translation, GANs form a part of the translation pipeline, replacing ξm⁻¹ in Figure 2. GANs adapt well to new sequences, although they are less robust at a local scale; this is where transformers excel. Transformers Lin et al (2022); Khan et al (2022) are predominantly used to model the relationships between different sequences, replacing F in Figure 2, and place greater attention on local context. The attention mechanism essentially weighs the significance of prior values in a sequence against their distance from a specific value; this process is called self-attention. Thus, the attention and contextual value of each point in a sequence is used to generate a representation that is decoded to provide a tangible output. A familiar example is the generative pre-trained transformer (GPT) model Radford et al (2019); Brown et al (2020), popularly used for text-to-text and text-to-image translation. For example, if we have a database containing the input question "Are you ok?" and various outputs, such as "I am fine" or "I am sad", the transformer will learn the likeliest response sequence.
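The self-attention operation described above can be sketched in a few lines. This is a minimal scaled dot-product attention over toy vectors; in a real transformer the queries, keys and values are learned linear projections of the sequence:

```python
import math

# Sketch: scaled dot-product self-attention over a short sequence.
# Q, K, V are small lists of vectors; real transformers learn them.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much attention each position gets
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that strongly matches one key pulls the output almost entirely towards that key's value vector, which is the "placing greater attention on local context" behaviour described in the text.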
For example, on the local scale, it may be that the majority of answers begin with "I am", so this is first selected as the start of the output. If a subsequent majority of answers follow this with "fine" and end the response here, this will make up the second part of the output. There are also cases where context drastically changes the output. For example, a similar question, "Are you actually ok?", may elicit a meaningful reply if meaningful replies appear more frequently within the training data set. Therefore, more attention might be placed on the word "actually", triggering a different output. Ultimately, transformers provide a novel paradigm for text-oriented generative AI, notably machine translation Zhu and Luo (2022); Huang et al (2021). This is especially important to cinematographers interested in using text-controlled methods either for automated video/image analysis/editing Huang et al (2021); Brooks et al (2023) or for image and video generation Maniparambil et al (2023); Zhan et al (2021). The Stable Diffusion paradigm is closely related to this and is currently the dominant method for generating images from detailed prompts. Stable Diffusion embodies a different approach to image and video content generation. Until recently, generative methods such as GANs, encoder-decoders and autoencoders (described in Figure 2) have struggled to provide solutions that offer both low cost (energy, computation, etc.) and fast inference. This is the accomplishment of stable diffusion and a predominant reason for its current success. Stable Diffusion Andrew (2023) works by first generating a random N-dimensional tensor, similar to the random vector generated in a GAN. The tensor, together with an encoded prompt, is input into a noise predictor (usually a U-Net Ronneberger et al (2015); Siddique et al (2021)) which outputs a new tensor predicting the noise in the input tensor.
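The iterative denoising loop built around this predictor can be sketched as follows. The `noise_predictor` here is a toy stand-in for the U-Net (it simply predicts a fixed fraction of the current tensor as "noise"); a real model would condition on the encoded prompt:

```python
import random

# Sketch of the iterative denoising loop. `noise_predictor` is a
# hypothetical stand-in for the U-Net: it predicts 20% of each value
# as noise, whereas a real model conditions on the encoded prompt.

def noise_predictor(tensor, prompt=None, fraction=0.2):
    return [v * fraction for v in tensor]

def denoise(tensor, prompt, steps=25):
    # First step is conditioned on the prompt; later steps are not,
    # matching the description in the text.
    tensor = [v - n for v, n in zip(tensor, noise_predictor(tensor, prompt))]
    for _ in range(steps - 1):
        predicted = noise_predictor(tensor)
        tensor = [v - n for v, n in zip(tensor, predicted)]
    return tensor

random.seed(0)
latent = [random.gauss(0.0, 1.0) for _ in range(8)]  # random N-D tensor
final = denoise(latent, prompt="a red shoe")
# `final` would then be decoded into an image by a decoder network.
```

Each step is only a tensor subtraction, which is why, as noted below, the generation process is cheap for a GPU to perform.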
This noise tensor is subtracted from the input tensor, essentially denoising the input without reducing the resolution of the input tensor. The new tensor is re-used as an input (now without a prompt) and a new, less noisy, tensor is determined. This process repeats for a fixed number of steps, after which the final and least noisy tensor is decoded into an image. As the step from a latent space to a less noisy space is a simple operation for a GPU to perform (i.e. a subtraction of two tensors), the generation process is fast. Additionally, with sufficient iterations, the generated image contains negligible noise and is thus of high quality. Video-based methods using GANs Chen et al (2019); Liang et al (2017); Liu et al (2021), transformers Wang et al (2021a); Selva et al (2023) and stable diffusion Chai et al (2023); Karras et al (2023); Ceylan et al (2023) present a number of underlying challenges, including issues with temporal consistency and a lack of suitable quality assessment metrics. Furthermore, current methods rarely consider camera-based tasks, such as the optimal camera pose or trajectory for generating a video. Considering the current pace of research, we believe it won't be long before this is possible. In conclusion, generative AI is a powerful tool and may offer numerous solutions to production challenges moving forward. However, with such power comes responsibility: researchers must actively seek ethical approaches to releasing code and/or pre-trained models. While many publicly available models (or online application interfaces) incorporate safety measures into their code (like detecting malicious text-based inputs), there are still cases where this is not sufficient for avoiding pornographic content generation RobinsEarly (2024). For researchers, steps should be taken to raise and act on concerns prior to releasing publicly available code, datasets or pre-trained models.
2.3 Object Detection

In many instances in IC, there's a need not just for automatic object recognition, but also for locating these objects within an image or a video sequence. For example, in live football broadcasting, there is a need to classify player poses for the purpose of automating camera control. This can assist upcoming phases of a game or event, leading to better organisation of shots. Considering the variation of background clutter (e.g., from a crowd, grass, dynamic advertisement banners, etc.), object detection poses significant challenges. Location can be identified by either pixel-level contouring around the object's edge or by a bounding box. The former involves pixel-level classification and segmentation, together called semantic segmentation. In contrast, the latter incorporates a regression branch to estimate the four corners of the bounding box alongside the classification branch. Object detection using bounding boxes is faster than semantic segmentation and has been extensively utilized in real-time applications, such as sports broadcasting. For example, You Only Look Once (YOLO) models for object detection and image segmentation Redmon et al (2016) only require one pass of a network to infer location. Furthermore, image masking can help YOLO reduce stationary noise Yun and Park (2022) by excluding known pixel regions that do not contain relevant information during inference. YOLO works by dividing a target image/frame into N cells having size s × s resolution; these are called residual blocks. Then, using bounding box regression, the model estimates which residual blocks are associated with which bounding boxes. The probability of an object being present within the residual block is also predicted. Using non-maximal suppression, the model is able to reduce the influence of low probability scores and determine the bounding boxes with the highest confidence, as illustrated in Figure 3.
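The suppression step above, together with the box-overlap measure it relies on, can be sketched as follows. Boxes are represented as illustrative (x1, y1, x2, y2) tuples:

```python
# Sketch: intersection over union (IoU) and non-maximal suppression,
# the two scoring steps described above. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Overlap area divided by union area of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box in each cluster of overlapping boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```

Two heavily overlapping detections of the same player collapse to the single higher-confidence box, while a detection elsewhere in the frame survives.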
Following this, to measure the accuracy we take the intersection over union (IoU) of a ground truth bounding box and the estimated box to determine how close to the correct bounding box the prediction is. This provides a loss function to use during model optimisation Bandyopadhyay (2022).

Fig. 3: A visualisation of the stages of the YOLO process with an example of how the IoU is determined.

Several versions of YOLO have been used in IC applications, the current variant being YOLOv8 Jocher et al (2023). YOLOv2 supports the prediction of a fixed number of objects in a single block, resulting in improved detection of smaller and grouped objects. YOLOv3 builds on this by introducing a model (with higher complexity) that can detect smaller objects and preserve fine detail. YOLOv4 Bochkovskiy et al (2020) (proposed by different authors) adds to YOLOv3 with cross mini-batch normalisation for higher accuracy and weighted residual connections for better convergence during learning.

2.4 Camera Pose Estimation

Camera pose estimation approximates the orientation, path and motion (the extrinsics) of a camera in 3-D space, which, for example, is useful as a basis for mitigating motion blur caused by the motion of a camera. Accurate camera tracking also enables automated object detection, human pose detection and 3-D capture (photogrammetry). There exist numerous (commercial) solutions for this involving additional hardware (e.g. LiDAR or GPS) for local and global positioning. However, there is also research on video-based solutions for cases where physical positioning systems are unavailable. There are three main approaches to pose estimation: (1) Visual Odometry (VO), (2) visual Simultaneous Localisation and Mapping (vSLAM) and (3) Structure from Motion (SfM), all of which share common components Taketomi et al (2017).
The objective of VO is to recover a camera path incrementally, optimizing the current pose given the prior set of poses (window bundle adjustment) Cetinkaya (2021). We consider VO a short-sighted solution, as it has trouble linking poses that intersect previously visited locations, i.e. the new set of frames would be treated as a separate location. Thus, vSLAM learns an additional global localisation map as well as inheriting VO's objective of optimising the consistency of the local trajectory. The constraint with using vSLAM is that there is a limited number of cameras and window bundle adjustments that can be processed. SfM approaches Agarwal et al (2011); Civera et al (2010) overcome this by utilizing measurements taken for every viewpoint/frame to boost reliability and avoid degenerative cases Pollefeys et al (1998), including those in Molton and Brady (2000); Mallet et al (2000) and Hirschmuller et al (2002). This is a popular means of pre-processing 2-D images for 3-D modelling. Further discussions on this topic include Saputra et al (2018), who compare vSLAM and SfM methods for highly dynamic scenarios; Taketomi et al (2017), who assess vSLAM and SfM methods between 2010 and 2016; and Yousif et al (2015), who compare state-of-the-art general VO and vSLAM methods for robotic controllers (applicable to automated camera control in IC). There are numerous use cases in IC: for example, in Section 3.4 we discuss several instances where this research is involved with aerial capture. Additionally, research on camera calibration heavily supports work on automated 3-D/volumetric capture, as the ability to position images in 3-D space is heavily tied to the quality of inverse rendering pipelines Lin et al (2021); Chng et al (2022); Barron et al (2022). This is discussed further in the following subsection and is revisited in the main body.
2.5 Automated 3-D Capture

Automated 3-D capture is a popular topic in IC research, whether for analysis or for solving the inverse graphics problem. Here, the objective is to discover a 3-D representation using 2-D imagery and/or other sensor data. In Figure 4, we illustrate simplified representations of the current approaches to the problem. A popular capture method is laser-informed photogrammetry (e.g. LiDAR), which creates 3-D point clouds from measurements of point depth relative to a camera's position. This can now be done on a mobile phone thanks to the introduction of mobile LiDAR camera sensors. These methods are promising for capturing enclosed spaces; however, there are challenges with processing information over greater distances, as well as limitations associated with noisy data. Additionally, these cases require photogrammetric sensors, which limits applicability when only image/video data is available. Recently, neural radiance field (NeRF) methods have been used to solve this problem, whereby the aim is to model the rendering processes involved with automated capture. Unlike photogrammetry, which may represent scenes as 3-D structures (mesh, voxel grid, point cloud), neural methods represent scene properties as neural representations, which are sampled to interpolate views defined by the camera extrinsics in a virtual space. Given a set of images describing a real environment, NeRF networks are expected to reliably learn the visual features of a scene. NeRFs represent an important breakthrough for cinematographers, as they can be adapted to a range of cinematographic tasks. For example, rendering 3-D from 2-D images can avoid costly re-shoots caused by poor lighting, scenery, weather and acting deficiencies Mildenhall et al (2022); Verbin et al (2022); Yuan et al (2022). Shots can also be re-worked in post-production, meaning that shot-type, camera jitter, pacing and focus can also be modified.
While the field of NeRF research is relatively new, it has developed rapidly and can now deliver high-quality, compact solutions, leading us to expect production-ready NeRFs in the near future. The NeRF model samples volume density, σ, and colour radiance, c, provided a 5-D input comprised of 3-D co-ordinates, o ∈ R³, plus a 2-D viewing direction, d ∈ R², which represents the position and viewing direction of a sample in space. A sample can be thought of as either a volumetric line-segment along a ray or a voxel intersected by a ray cast into the scene.

Fig. 4: Simplified representations of current approaches to automated 3-D capture. (a) Photogrammetry relies on sensor data (which may include cameras) to estimate a point cloud representation of a scene. (b) Neural radiance fields sample points in space to estimate a colour and density value for each sample projected along a pixel-ray. (c) Gaussian splatting uses a point cloud to estimate the position of samples in space and uses this to estimate the colour, density, scale and rotation of each Gaussian point.

Simply put, for each pixel (x, y) in an image a ray vector r_{x,y} = o + t d_{x,y} exists, where t = 0 represents the focal point of an image, t = n represents the position of the image plane along the ray and t > n represents point samples along the ray, where n sets the scalar distance from the focal point to the image plane and can vary w.r.t. lens distortion. We refer the reader to the Nerfstudio documentation Tancik et al (2023), which overviews the different types of camera models, sampling schemes and sample representations that are found in the NeRF literature. The original NeRF paper Mildenhall et al (2021) defines the network as a multilayered perceptron (MLP) with inputs (r_i, d_i) for each sample in a bounded space t_near < t_j < t_far and outputs (c_i, σ_i).
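The per-pixel ray construction and bounded sampling described above can be sketched as follows. The pinhole camera model and all function names here are illustrative simplifications (a full implementation would use the camera intrinsics and stratified or hierarchical sampling):

```python
# Sketch: constructing the per-pixel ray r = o + t*d and taking sample
# positions between t_near and t_far. Simplified pinhole camera model;
# names are illustrative.

def pixel_ray(x, y, width, height, focal):
    """Unit direction of the ray through pixel (x, y) in camera space."""
    dx = (x - width / 2) / focal
    dy = (y - height / 2) / focal
    dz = 1.0
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    return (dx / norm, dy / norm, dz / norm)

def sample_points(origin, direction, t_near, t_far, n_samples):
    """Evenly spaced sample positions along the ray (a full NeRF would
    use stratified and then hierarchical sampling instead)."""
    pts = []
    for i in range(n_samples):
        t = t_near + (t_far - t_near) * i / (n_samples - 1)
        pts.append(tuple(o + t * d for o, d in zip(origin, direction)))
    return pts
```

Each sampled position (with the viewing direction) is what gets fed to the MLP to produce the per-sample colour and density used in the rendering equations below.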
To render samples aggregated along a given ray, Mildenhall et al (2021) proposes Equation 1, where the exponent represents the accumulated transmittance w.r.t. the volumetric density of preceding samples. In practice, Equation 1 is numerically approximated using quadrature, in Equation 2, where δ_i is the thickness of a volume sample along a line segment.

C(r) = \int_{t=0}^{\infty} \sigma(r(t)) \cdot c(r(t), d) \cdot e^{-\int_{s=0}^{t} \sigma(r(s)) \, ds} \, dt    (1)

\hat{C}(r) = \sum_{i=t_{near}}^{t_{far}} (1 - \exp(-\sigma_i \delta_i)) \, c_i \, \exp\Big(-\sum_{j=t_{near}}^{i-1} \sigma_j \delta_j\Big)    (2)

Subsequently, a loss function L(C*, Ĉ) is used to optimise the predicted ray colour w.r.t. the colour of a ground truth pixel, C*. To reduce the influence of spectral bias Rahaman et al (2019), NeRF maps positions and viewing directions to a view-point's Fourier features, using the encoding γ in Equation 3, where k is a hyper-parameter defining the dimensionality of the feature vector (the bandwidth of our discretized frequency encoding). It is well known in neural representation research that coordinate-MLPs struggle to learn high-frequency signal details, hence the need for encoding frequencies using γ. However, there are several studies that discuss alternative changes to the MLP activation functions which forgo the need for discretized frequency encoding Sitzmann et al (2020); Saragadam et al (2023). For example, Saragadam et al (2023) proposes an MLP using a wavelet activation (called the wavelet implicit neural representation (WIRE)) while Sitzmann et al (2020) proposes a sinusoidal activation (called the sinusoidal implicit representation (SIREN)). These are shown to not only reduce the size of the MLP but also capture a higher bandwidth of frequencies.

1 Accessible online: https://docs.nerf.studio/en/latest/nerfology/model_components/visualize_cameras.html
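The quadrature of Equation 2 amounts to alpha-compositing per-sample densities and colours along the ray, which can be sketched directly:

```python
import math

# Sketch: the quadrature of Equation 2 -- alpha-compositing per-sample
# densities and colours along a ray into a single pixel colour.

def render_ray(sigmas, colours, deltas):
    """sigmas: density per sample; colours: RGB per sample;
    deltas: segment thickness per sample (the delta_i in Equation 2)."""
    colour = [0.0, 0.0, 0.0]
    transmittance = 1.0  # exp(-sum of preceding sigma*delta)
    for sigma, c, delta in zip(sigmas, colours, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weight = transmittance * alpha  # (1 - exp(-s*d)) * accumulated T
        colour = [acc + weight * ci for acc, ci in zip(colour, c)]
        transmittance *= math.exp(-sigma * delta)
    return colour
```

A single near-opaque sample returns its own colour, while zero-density (empty) samples contribute nothing, matching the transmittance term in the equation.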
However, despite Sitzmann et al (2020) and Saragadam et al (2023) showcasing higher quality and faster convergence, they discuss how little these results have been taken up in NeRF research.

\gamma_k : p \rightarrow (\sin(2^0 p), \cos(2^0 p), \cdots, \sin(2^k p), \cos(2^k p))    (3)

Several approaches have been proposed to speed up NeRF computation: multiresolution hash encodings, voxel grids and voxel-trees Müller et al (2022); Wang et al (2022c); Yu et al (2021); Laine and Karras (2010). This is particularly relevant for researchers interested in the dynamic NeRF paradigm, as model complexity increases significantly when modelling 4-D space and time. The PlenOctrees representation Yu et al (2021), illustrated in Figure 5, builds off NeRF's ability to functionally represent a view-dependent scene by (1) representing the structure of a scene as a sparse-voxel octree Wilhelms and Van Gelder (1992); Samet (1988); Laine and Karras (2010), and (2) sampling colour by involving spherical harmonics. (1) facilitates the rendering step through fast-to-access representations (octrees), whereas (2) allows us to model view-dependency at octree leaves by mapping the view-dependent colour values to a sphere's surface and predicting the coefficients of the spherical harmonic equation (with fixed degree) to return a colour value where the pixel-ray intersects the sphere. Subsequently, the rendering speed and learning are optimised significantly, and one could argue novel-view quality has improved too. One caveat is the amount of data a representation will consume (on the order of gigabytes), which perhaps is acceptable for those with the required hardware; however, there exist more affordable explicit options Chen et al (2022a); Rho et al (2023). Additionally, there is some difficulty with visualising unbounded scenes, as voxel grids occupy finite space. Yu et al (2021) overcomes this by using NeRF++ Zhang et al (2020a) for rendering out-of-bound scenery. This is accomplished by modelling the foreground and background as separate components.

Fig. 5: Method for neural rendering of 3-D images presented in Yu et al (2021).

With similar motivations, 3-D Gaussian Splatting (GS) has been proposed by Kerbl et al (2023) as a means of significantly reducing rendering time by using: (1) a point cloud representation of Gaussian "blobs" with position, covariance, colour and opacity properties, reducing the unnecessary computation of empty space present in NeRF models, and (2) tile splatting for rendering. The covariance property is represented as a 3 × 3 matrix Σ in Equation 4 and determines the scale S and rotation R of the blob in space. In practice, Σ is approximated by a 2 × 2 matrix, avoiding the last row and column of the original matrix, leading to faster computation.

\Sigma = R S S^T R^T    (4)

Additionally, tile splatting speeds up rendering with the following steps. We highlight the importance of the sorting approach, (3), as this limits NeRFs from achieving similar computational goals.
1. Divide a rendering view into 16 × 16 tiles.
2. Cull blobs with < 1% confidence of intersecting each tile frustum, or blobs that fall outside the near and far bounds of the camera.
3. Sort blob depth w.r.t. each tile using a fast GPU Radix sort Merrill and Grimshaw (2010); not per pixel as done in NeRF.
4. Render pixels w.r.t. the sorted blobs for each tile by α-blending until the accumulated α for each ray becomes 1.
Finally, to train a GS model a scene is initialised with a sparse set of point clouds that are "densified" during training by cloning, re-scaling and re-positioning blobs to fit the geometry corresponding to the set of training views/images. Blobs that are essentially transparent (low α contribution) are pruned.
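The covariance construction of Equation 4 can be sketched with plain matrix arithmetic. For illustration only, the rotation here is a simple rotation about the z-axis (a real implementation parameterises R with a quaternion):

```python
import math

# Sketch: building a blob's covariance as in Equation 4,
# Sigma = R S S^T R^T, from per-axis scales and a rotation
# (a z-axis rotation here for simplicity; GS uses quaternions).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def covariance(scales, theta_z):
    s = [[scales[0], 0, 0], [0, scales[1], 0], [0, 0, scales[2]]]
    c, sn = math.cos(theta_z), math.sin(theta_z)
    r = [[c, -sn, 0], [sn, c, 0], [0, 0, 1]]
    ss_t = matmul(s, transpose(s))                  # S S^T: diag of scale^2
    return matmul(matmul(r, ss_t), transpose(r))    # R S S^T R^T
```

This factorisation keeps the covariance symmetric positive semi-definite by construction, which is the reason GS optimises scales and a rotation rather than the matrix entries directly.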
In conclusion, GS may be selected over NeRFs due to its enhanced computational abilities, and in many cases improved performance. However, for the dynamic paradigm, the current set of benchmark scenes only lasts up to a minute. As existing dynamic GS models are much less compact than NeRF alternatives, this presents a challenge for storing and sharing 4-D models. Additionally, as GS is a more recent development than NeRF, we are yet to see the same attention towards cinematographic tools, such as scene and camera editing or representational transformations such as GS to mesh. However, the research space is adapting at a fast pace, so we believe it is only a matter of time before we see these limitations addressed. Finally, we underline the importance of the social awareness and responsibility involved with this field. Unlike generative AI, we notice almost no attention to the ethical problems that may arise from creating digital doubles of real humans. We urge researchers involved with this field to take a more active role in raising awareness.

3 Intelligent Cinematography in Production

3.1 General Production Applications

In this section we present research relating to camera management and control, visual analysis, cinematographic assistance and workflow optimization.

3.1.1 Computational Language Structures for Scene Analysis, Automated Labelling Schemes and Camera Management

Cinematographic language is used to communicate the state of production and relevant processes. It enables production staff to communicate effectively and react coherently to unforeseen dilemmas. IC applications require a similar level of attention. Cinematic theory is often formalized for tasks such as automated camera control and scene analysis. For example, in Jhala and Young (2005) state machines are used to model cinematographic shots for camera planning in a virtual environment.
In this section we discuss current approaches to human-AI interaction, with particular reference to automated camera control; we refer to this as idiomatic capture. A heuristic approach to camera control was introduced by Jhala and Young (2006), based on the decompositional partial order causal link planning algorithm Young and Moore (1994). This formalizes idiomatic capture by linking an approximated scene representation to a predetermined set of camera control responses. A conflict resolution algorithm ranks known actions with conditional operators to determine the optimal action and respective duration. To express idioms, Jhala and Young (2006) distinguishes a set of four requirements to mediate contextual differences between different productions: (1) Story Representation, (2) Real World Representation, (3) Rhetorical Coherence and (4) Temporal Consistency. The first two points describe the physical and contextual landscapes of a set; hence it is necessary to have both geometric (or visual) and semantic representations of a production. The third point expresses the need for rhetorical structure to ensure actions are executed decisively, e.g. a hierarchical model for selecting shot-types. The final point specifies consistency with regards to the temporal aspect of filming. Similarly, the Declarative Camera Control Language (DCCL) uses a heuristic-based decision tree for idiomatic capture Christianson et al (1996) (Figure 6). This method sets forth a hierarchical structure for automated shot-composition and camera control by breaking down scenes into idiom-specific frame sequences. Here, the relationship of consecutive frame sequences is dependent on temporal links between idioms. A heuristic evaluator then selects a candidate action by scoring possible responses and evaluating decisions via a decision tree.

Fig. 6: A three-stage pipeline which uses idioms defined by camera and actor states as inputs to a DCCL compiler.
Selected frames are evaluated w.r.t. the "film tree". Idiomatic language structures have also been used for shot labelling Ronfard et al (2022) and evaluation Yu et al (2022). More recently, the Prose Storyboard Language (PSL) Ronfard et al (2022) has been proposed as a high-level language for describing shots relative to visual composition and idiomatic camera control. This is demonstrated with Alfred Hitchcock's North by Northwest, where a prompt not only describes the shot-type and the fore, mid and background compositions (separately) but also the type of transition. As PSL uses AND/OR decision trees it can be easily configured to add/modify labels. While the authors have not tested this on an automated labelling scheme, there is cause for such investigation, as idiom-based languages might overlook newly adopted idioms and foreign capture practices. Embedded Constrained Patterns (ECP) is a dedicated query language, conscious of physical cinematographic limitations Wu and Christie (2016). Unlike PSL, ECP is more comprehensive in its descriptions, labelling montages rather than individual shots. Labels are assigned using the following descriptors:
1. Framing Constraints: size of actors and objects, angle of shot, region of actor and object locations and camera movement
2. Shot Relation: size, angle and region relative to a sequence of shots
3. Sub-sequences: a local grouping of related shot sequences
Wu and Christie (2016) propose ECP alongside an automated search algorithm for optimal shot-sequence description. The search method is partitioned into two stages: (1) build a cache of valid solutions, and (2) apply a double recursive depth search to choose the optimal description from the set of valid solutions.
During stage (1), valid solutions are separated into three sets: (i) FC, a set of frames which satisfies the framing constraints defined by ECP; (ii) RC, a set of frame couples, [f_i, f_i+1], which satisfies ECP's relational constraint; and (iii) SC, a set of frame sequences, s_m = [f_l, f_F], where f_l and f_F represent the start and end frames of the frame sequence. In the second stage, the search process iterates over all frames in each sequence, f_i ∈ s, where subsequent frames are validated as part of the local sequence, s, or global sequence, S. Alongside the technical implementation of ECP, the authors have plans for integration with Unity 3-D to apply ECP to their montages, which we see as a positive yet infrequent consideration in IC. It would be interesting to reverse the order of use, whereby, rather than providing linguistic descriptors of a 3-D environment, we could trigger camera actions through prompts written in PSL or ECP. Additionally, we find that languages built for human-AI interaction are not easily comparable. The choice of language structure can vary drastically depending on the task being automated. While there is no current standard for general cinematography, we observe interest from Movie Labs, who look to define a set of computational linguistic structures as a standard for cloud-based workflow optimization Labs (2022). The shift to cloud-based computing is indicative of another paradigm of production. Notably, as assets and tools are usually stored or executed offline, changes to shared resources can be difficult to monitor on one platform. This could be facilitated by file management approaches which mimic, for example, code-sharing practices. However, this may be difficult to scale for productions requiring a large number of assets, or productions which involve external international collaborators who may be unfamiliar with standard practices.
3.1.2 Directive Assistants

Converting most cinematographic concepts into controllers presents a challenging task Christie and Olivier (2009). Notions of shot compositions, shot types and shot transitions are bound by real-world problems such as the cost of production, physics and the topology of a set. Thus, AI directive assistants (DAs) have been introduced to alleviate these production challenges. Current IC research addresses tasks such as deriving shot lists, shot plans, optimising camera placement and controlling robotic camera rigs. There is a tendency to use semantic representations of a work environment to drive DAs. For example, through DCCL, Christianson et al (1996) demonstrates a heuristic approach that relies on idiomatic-based practice for decision making on shot composition. Similarly, He et al (1996) discuss a different heuristic approach, utilizing a set of finite state machines² to handle idiomatic camera actions. Both of these methods use virtual simulations with user-led events to demonstrate the ability of their tools to derive idiomatic responses in a short amount of time. There are more novel approaches to the problem of deriving a shot-list, such as de Lima et al (2009), who employ an architecture that simulates four critical filming roles: (1) a Script Writer who observes the context of a current scene and sends information to (2), (2) a Screen-ographer who configures the staging of actors and objects for dramatic effect and passes this information to (3) and (4), (3) a Director who extracts important information and uses multiple support vector machines (SVMs) to make decisions on selected shots, and (4) a Cameraman who follows idioms for shooting. This method compounds three 2-D feature matrices into a high-dimensional feature space, selecting the most relevant SVM from a pre-trained set to classify the optimal shot position and viewing angle. SVMs work by using kernel maps to remap all features into a high-dimensional matrix. This is done under the assumption that classes of features are not linearly separable until they are compounded into a high dimension. In practice, normalized positional values represent the environment features, while the perceived emotional state of each actor is represented by the actors' features, and the actor who is the principal focus is represented within the scene's features. These are used for selecting the right SVM to estimate the optimal camera location and angle given the context of the scene. As SVMs may be considered dated w.r.t. the current literature, there are likely more suitable approaches for modelling the environment, actor and scene features to classify the optimal camera angle and location from a pre-defined set. For example, with a large enough data set a deep neural network will generalize better. There are also faster and simpler approaches, such as using K-nearest neighbours on the environment, scene and actor features. Alternatively, CamDroid is a well-known state-based camera control architecture, introduced by Drucker and Zeltzer (1995) and illustrated in Figure 7. This method uses Tool Command Language (Tcl) with Tk embeddable language widgets Ousterhout (1990) to interface between user input and controllers, and pre-specified scripts.

² A state-based control architecture for decision making. If approached intelligently (e.g. state labels represented by a bit-string), logical optimization can be applied through state transition tables and state maps/implication charts. More on this here: https://inst.eecs.berkeley.edu/∼cs150/sp00/classnotes/katz-ch9-mod.pdf.
The pre-specified scripts access an object database (containing information about the local real environment) through application-specific processes/object interfaces, and camera modules through camera interfaces; these ultimately determine the camera state and its subsequent action, dependent on a set of filming constraints such as the type of camera motion. Conditional functional frameworks like CamDroid are a classical way of handling automated camera control and are powerful for capturing real-time, time-dependent action. Despite the sparsity of DA research, there are recent works such as Stoll et al (2023), who investigate shot selection with a multi-view camera setting for filming theatrical performances. The authors record performances from multiple views in 4K and, by cropping the high-resolution frames into lower-resolution frames, derive a set of camera actions. Subsequently, skeletal and facial poses are estimated for each actor in the set of cropped videos. These are used in an automated editing script where, for example, moving lips may be used to detect an actor as the principal focus of the scene, so the relevant clip (i.e. a selected view and camera action) is selected for a given time frame. Preceding Stoll et al (2023), prior works have tackled zoom and crop methods in different settings Gandhi and Ronfard (2013); Gandhi et al (2014); Rachavarapu et al (2018), such as Kumar et al (2017), who display the set of cropped shots in a split screen for shot selection. The principal areas for investigating DAs (idiomatic language, classification and state-based approaches) are not novel by today's standards of AI research, but are nonetheless capable of achieving reliable idiomatic camera actions in response to user input.
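As a concrete illustration of the classification-based shot selection discussed above, here is a minimal k-nearest-neighbours sketch over compounded environment/actor/scene features (the simpler alternative to SVMs mentioned earlier). The feature vectors and shot classes are invented for the example.

```python
import numpy as np

# Hedged sketch: k-NN shot classification over compounded feature vectors,
# a stand-in for the SVM selection in de Lima et al (2009); data is toy data.
def knn_shot(train_X, train_y, query, k=3):
    d = np.linalg.norm(train_X - query, axis=1)   # distance to every stored snapshot
    votes = train_y[np.argsort(d)[:k]]            # labels of the k nearest snapshots
    return int(np.bincount(votes).argmax())       # majority vote -> shot class

X = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0],   # toy compounded feature vectors
              [5.0, 5.0, 5.0], [5.0, 5.0, 4.0]])
y = np.array([0, 0, 1, 1])                        # two candidate camera placements
print(knn_shot(X, y, np.array([0.0, 0.0, 0.5])))  # → 0
```

A real system would replace the toy vectors with the normalized positional, emotional and focus features described above.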
However, because the discussion of concurrent methods is limited by the sparsity of recent work, we have yet to see promising work that moves away from offline automated DAs and towards a live setting, such as sports broadcasting. Despite this, numerous related fields approach similar paradigms. For instance, Besançon et al (2021) discuss state-of-the-art approaches for interacting with 3-D visualizations.

Fig. 7: A diagram of the CamDroid architecture, Drucker and Zeltzer (1995)

The authors outline numerous areas for improvement, particularly surrounding the tools and error metrics used to understand and evaluate human-computer interaction. Skeletal estimation and facial recognition, as in Stoll et al (2023), are not sufficient to gain the context of a scene. Likewise, Ali (2008) discusses applications relating to camera control in virtual environments, which also relates to the discussions in Sections 3.2.2 and 3.2.3. In the following subsection we present recent approaches to workflow optimization that encompass aspects of the DA paradigm, such as language-driven shot selection.

3.1.3 Workflow Optimization and Automated Shot Composition
Pre-production and production workflows vary drastically as a result of budget constraints, delivery deadlines and creative objectives. Despite this, there are aspects of film-making which remain constant, including set design, the object and background staging processes, and film capture and editing. In industry, notable efforts from Movie Labs (discussed in Section 3.1.1) illustrate the potential of a cloud-based platform for streamlining production, entailing new computational language structures, methods for cloud security and collaboration workflows. Movie Labs have produced a set of white papers Labs (2022) detailing their ambitions over the coming decade.
Machinima production (MP) for IC leverages intelligent script writing to generate camera poses for shots, tested on virtual scenes (for further discussion see Riedl et al (2010) and Elson and Riedl (2007)). Rather than automating the entire production process, we see potential in adapting MP tools to attend to the general scope of workflows (not necessarily for computer-generated animation). For example, the GLAMOUR system Dhahir et al (2021) utilizes natural language generation, informed by cinematographic idioms, to produce a movie-like composition of shots with comprehensive descriptors of the scene from still images, the outcome being several short documentary-style productions. GLAMOUR is a multi-objective heuristic approach for attention optimization Anantrasirichai and Bull (2021): as animated actors are directed (by an AI) during a scene, the optimal choice of camera and transition is determined. As discussed by Tan (2018), the level of potential stimuli one can experience from a film far surpasses the complexity of audience attention. Thus, GLAMOUR could be extended to include other processes involved in automated shot composition, such as the kernelized correlation filter (KCF), presented by Henriques et al (2014) and used for actor detection and framing in automated aerial content acquisition (discussed in Section 3.4.1). Also useful, cinematic motion style transfer in Wang et al (2023) uses 3-D representations generated from real images and matched to a shot sequence from a scene in a given movie. This could be paired with attention optimization to improve upon the idiomatic shot selected for transfer, from which we could derive more meaningful camera poses and motions. A semi-automated method introduced by Yu et al (2022) was designed to handle object staging, automated camera positioning and shot composition from an annotated script.
The framework shown in Figure 8 optimizes camera parameters dependent on a generated sequence of actions for each present character. An action list is denoted by {a_i | i = 1, 2, ..., N}, where a_i is the i-th action and N is the total number of actions available. Following this, the action list is transformed into a stage performance, where each action corresponds to a movement (time period) in a scene; this automates the scene-scheduling process. The consequent performance of an action within a scene is denoted by {p_{a_i}^{(t)}, p_{a_i}^{(t+1)}, ..., p_{a_i}^{(t+l)}}, where t ∈ T denotes the moment the movement begins and (t + l)_{a_i} denotes the sub-components of a movement for an action a_i within a performance {p_t | p_1, p_2, ..., p_T}. During the camera optimization step, "aesthetic" and "fidelity" models are jointly applied to a performance p_i to determine the optimal camera c_t to use at each time step t. The aesthetic model analyses six factors for camera planning: character visibility, character action, camera configuration, screen continuity (relative to character position), moving continuity (accounting for on-going changes between movements) and shot duration. The fidelity model first assumes that a mathematical model can approximate the relationship between a script and a generated video. To accomplish this, the model uses the global vectors for word representation (GloVe) embedding model Pennington et al (2014) to generate text from the generated video, then analyses the similarity between the generated script and the target script. A GloVe embedding is a vector representation of words trained on global word-word co-occurrence statistics; substructures of the vector space can thus define synonyms (proximal parallel vectors) and canonical structures (vector paths). This is similar to the latent variables for text-based generative AI, discussed in Section 2.2.
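A minimal sketch of the fidelity idea: score a generated script against the target script by the cosine similarity of averaged word vectors. The toy 2-D embedding table below is invented for the sketch; real GloVe vectors have 50-300 dimensions and are loaded from pre-trained files.

```python
import numpy as np

# Toy embedding table (invented); stands in for a pre-trained GloVe vocabulary.
emb = {"camera": np.array([1.0, 0.0]), "pans": np.array([0.8, 0.6]),
       "actor":  np.array([0.0, 1.0]), "walks": np.array([0.1, 0.9])}

def sent_vec(words):
    return np.mean([emb[w] for w in words], axis=0)   # average the word embeddings

def fidelity(target, generated):
    a, b = sent_vec(target), sent_vec(generated)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(fidelity(["camera", "pans"], ["camera", "pans"]))  # close to 1.0 for identical scripts
print(fidelity(["camera", "pans"], ["actor", "walks"]))  # lower for dissimilar scripts
```

In the actual fidelity model the "generated" script would come from re-translating the rendered video into text before comparison.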
Overall, both of these methods present naive approaches to workflow optimization, constrained by cinematographic understanding and current technological capability.

Fig. 8: Illustration of the pipeline for text2animation in Yu et al (2022), where c_j is the camera selected for capturing action a_i.

The final result of Yu et al (2022) is visualized at https://www.youtube.com/watch?v=0PUdV6OeMac. For example, the crux of the evaluation mechanism in Yu et al (2022) depends not only on the assumption that a fidelity model can be constructed, but also on the ability to re-translate a video into text in a manner appropriate for evaluation. The reliance on these chained processes reduces the effectiveness of this model. Though, as Yu et al (2022) conclude, there is still much to accomplish in optimising production workflows. For example, a further extension of a method like GLAMOUR could involve automating the generation of annotated shots or extending the storyboarding process. This could prove beneficial for productions with tighter budgets and/or deadlines. Additionally, an improved method of evaluating such models is necessary, as we have doubts as to the validity of text-based testing for cinematographic models. Considering that workflows are often fragmented due to reliance on external collaborators, research focused on supporting collaboration seems more plausible. For example, a model could be tasked with acting as an administrator for a shared resource, deciding if committed changes should be accepted given some camera-based constraints. Leaning into ICVFX, cameras such as the Axibo PT4 motion-controlled slider (https://www.axibo.com/product/pt4) host a cloud-based Unreal Engine workflow and directly attend to the shared-resource paradigm. Use of such tools could be extended to investigations projecting dynamic assets Dang et al (2019); Lan et al (2022); Ji et al (2020) and validating the in-camera (artistic) composition given a feature-based target Lee et al (2018); Wang et al (2023).
3.1.4 Challenges and Future Work
One of the challenges of introducing novel AI to the production process is testing and validation. Unfortunately, real production environments are not only challenging to access but difficult to control. Therefore, most proposed research favours testing in simulation through virtual environments. Clearly, there is some disparity between a semi-informed simulation and a real use case. Going forward, we need to acknowledge the considerable benefit of experimental productions Hardy (2013). Experimental productions could become a platform for testing IC tools, whether to accomplish menial tasks so artists can focus on creative experimentation or to explicitly support one's creative expression. Alternatively, we could envision using 3-D reconstructions of real scenes, discussed in Section 3.2, to achieve more reliable tests on real scenarios. This would provide more relatable results for cinematographers who intend to use IC to direct cameras in real environments, as well as those who wish to capture cinematic footage using 3-D reconstructions. With respect to supporting the cinematographic language, there are many considerations to make. For example, the application of language can be beneficial for semantic script analysis/generation Hladun et al (2021); Dharaniya et al (2023); Martinez et al (2019), but it may also act as a form of communication between production staff and AI interfaces Christianson et al (1996); Yu et al (2022). We found PSL to be particularly interesting as it is represented as a mid-level language. This is beneficial as it reduces the reliance on solving natural language processing (NLP) problems (e.g. a sequence2sequence encoding/decoding problem), which can be troublesome when confronted with complex shot descriptions or when tasked with describing subjective cinematographic observations.
We should, though, acknowledge the progress made in NLP and semantic analysis research over the last decade Cambria and White (2014); Chandrasekaran and Mago (2021). Consequently, we are interested to see more proposals for learning linguistic structures for cinematographic production. For example, building on recent breakthroughs in NLP Radford et al (2018); Mhlanga (2023), researchers could explore LLMs for camera control and workflow optimization tasks. We believe this progression will rely on establishing reliable sources of data. For example, for semantic shot analysis one could propose a comprehensive data set and a method of labelling shots using PSL, to reduce the complexity of natural language structures for word embeddings. Otherwise, DA research provides a platform to host a range of AI controllers. They can be adapted for specific purposes, and architectures are often implemented as modular pipelines. However, evaluating performance in the wild is limited by access to real sets and production staff, and by the limited number of benchmark datasets and models. This indicates that the state of DA research still has a considerable journey before solutions can be applied at a commercial scale. We hypothesize that as production is further supported in other areas, such as improved camera control Amerson et al (2005); Christie et al (2008); Jhala and Young (2006), DAs could control a set of automated tasks on a higher plane of abstraction, perhaps through the use of semantic query languages for the Resource Description Framework (RDF) Creighton et al (2006); Haase et al (2004), or DCCL and PSL. With respect to workflow optimization strategies, we primarily question the social benefit of MP research.
As discussed in Section 1, IC relies on easing the production of artistic film-making, so automating the entire process ('text-to-animation') leaves little room for creative input Williams (2022); Ploin et al (2022); Anonymous (2022). However, this could be used to support the storyboarding process and other pre-production tasks. Taking this further, one could formulate a storyboard as a set of animated clips to make decisions on shot composition. Thus, future work could look at optimising shot composition, such as for style transfer Wang et al (2023) or attention optimization Valuch et al (2014); Piao et al (2019).

3.2 Virtual Production
In this section we focus on virtual production using real cameras. This is important as it differs from classical virtual production used in animation or gaming, where two notable differences exist. Firstly, compositing virtual scenes for animation is less challenging than compositing virtual assets into a real scene for an IC production, as nuances in lighting, colouring and perspective between real and virtual assets require attention. Secondly, real cameras are limited by physical and fiscal constraints, like set topology, additional hardware and the technical skill required to achieve specific camera motions. These matters are trivial to accomplish with a virtual camera that has 6DoF. These differences underpin the general concerns of research tied to virtual production for IC. Hence, in this section we look at research involved with ICVFX and LED volumes, touching upon works that investigate re-colourization and image-based lighting (IBL). We also discuss research focused on synthesizing virtual replicas of real actors and scenes through NeRFs, prefaced in Section 2.5. This looks at removing the physical and fiscal limitations of using real cameras for content acquisition, allowing users to re-capture real scenes in the context of a virtual environment/engine, as is achieved with classical virtual production.
3.2.1 ICVFX and LED Volumes
Numerous scholarly works have undertaken the task of dissecting visual effects (VFX) and in-camera applications within the realm of cinematography. However, the relevant use case(s) for IC are vaguely defined. From a purely cinematographic perspective, ICVFX usually involves CGI and/or compositing techniques that are executed in real time, providing cinematographers with a live feed of how virtual effects will appear relative to the real scene set-up. This can support production in numerous ways, such as indicating poor lighting or pre-visualizing compositing and VFX to ease post-production challenges. Here, the technical practice mainly concerns the ability to distinguish foreground and background elements Sharma (2021) as well as to modify lighting and colour Bengtsson and Kang (2022). Sharma (2021) briefly presents chroma keying and rotoscoping as paradigms, with a focus on the former (Figure 10(a)), while Bengtsson and Kang (2022) present how IBL, a well-understood method in VFX (Figure 9), has evolved into a lighting workflow surrounding LED video screens for driving in-camera relighting and colourization (Figure 10(b)). This hints at two distinct use cases for IC research:
1. Automatic foreground segmentation (here, the terms segmentation and rotoscoping can be used interchangeably): (i) with and (ii) without chroma screens (i.e. blue/green screens)
2. Automatic IBL and re-colourization
It is generally known that use case (1) is readily accomplished by using chroma screens (1.i) and keying out the colour corresponding to the screen. However, this introduces issues for colour-based image segmentation. Cheng et al (2001) touch upon relevant colour-based image segmentation techniques angled towards the wider AI audience.

Fig. 9: IBL maps an environment image onto a 3-D primitive (e.g. a sphere). The IBL environment is treated as a light source and, using ray tracing, a model is composited.

The authors discuss solutions such as histogram thresholding Littmann and Ritter (1997), using binary trees to store data-intensive 3-D colour spaces Schacter et al (1976); Sarabi and Aggarwal (1981), region-based methods Tremeau and Borel (1997); Cheng and Sun (2000), fuzzy techniques Huntsberger et al (1985); Tominaga (1986) and neural networks Huang (1999); Campadelli et al (1997). Consequently, the authors highlight problems with shading (e.g. shadows and highlights) and texturing. The most apparent case of this is interfering light-bounce from the chroma screen. This leads to subject-specific approaches; for example, Sigal et al (2004) approach real-time skin segmentation for video under time-varying illumination, optimizing a second-order Markov model to predict a skin-colour histogram over time. Additionally, chroma keying restricts the ability to present colours similar to that of the chroma screen. More recently, we see approaches attending to case (1.ii). This is a popular paradigm outside of cinematography, where we see emerging research such as the Segment Anything Model (SAM) Kirillov et al (2023) (still in pre-print at the time of writing), a one-click solution to general image segmentation. Angled towards cinematography, we find work such as Roto++ Li et al (2016), a rotoscoping tool with the ambition of respecting the artists' requirements. Roto++ improves upon traditional interpolation techniques by combining real-time appearance tracking and a novel shape manifold regularization process (built on the Gaussian process latent variable model (GP-LVM) Lawrence and Hyvärinen (2005)). Subsequently, within a sequence of frames the method (1) predicts the change of a shape manifold and (2) identifies which keyframe next needs to be manually rotoscoped. Concerning case (2): while IBL is classically a 3-D rendering technique, it can be applied to real scenes where the lighting set-up can be readily changed Debevec (2006).
Ren et al (2015a) and Wang et al (2009) discuss approaches that reconstruct the light transport matrix (LTM), which defines light interactions on object surfaces. Wang et al (2009) classify the approaches into three categories: (i) brute force O'Toole et al (2012); Hawkins et al (2005), directly modelling the LTM; (ii) sparsity-based Garg et al (2006); Masselus et al (2003); Reddy et al (2012), modelling a set of basis functions under the assumption that each row of the LTM can be linearly approximated; and (iii) coherence-based O'Toole and Kutulakos (2010); Wang et al (2009); Fuchs et al (2007), analysing the coherence of the reflectance field to acquire the LTM.

Fig. 10: Illustrating ICVFX compositing approaches. (a) Foreground separation with automated segmentation or chroma screens. (b) In-camera compositing with an LED stage.

The limitation of these methods is that they require multiple images under varying lighting conditions, meaning that for video this problem is more challenging to address. Interestingly, focus on cases (1) and (2) has shifted towards using LED panels to project virtual backgrounds as a practical solution. For case (1), foreground-background separation is made easy by replacing chroma screens with virtual backgrounds displayed live on interconnected LED panels. While more costly and energy-intensive, this reduces the workload for post-production and avoids the need to rotoscope. Unfortunately, this means the need for research in this area is minimal and shifts toward supporting computation (e.g. reducing energy consumption). For case (2) the outcome is not so severe. Instead, the introduction of LED panels offers new possibilities for automated lighting calibration, now including the LED panels as a light source Nila et al (2022); Payne and Giardiello (2022); James et al (2021); Helzle (2023).
For example, LeGendre et al (2022) treat the panel lighting as ambient light, producing the example result in Figure 11.

Fig. 11: Results from images taken in natural lighting (left) and configuring the LED panel for lighting (right), provided by LeGendre et al (2022)

LeGendre et al (2022) accomplish this by first applying two matrices, M and N = MQ^{-1}, to the out-of-camera-frustum and in-camera-frustum content, respectively, where M and N are 3×3 pre-correction matrices; the post-correction matrix Q, a 3×3 matrix that remaps viewed pixels to the desired/expected colour schema, is then applied to the final image. M is solved through matrix calculation from known LED emission spectral sensitivity functions, i.e. M = [SL]^{-1}, where [SL] represents the observed average pixel values from capturing light emitted by the LED panels. Q is found by minimising the squared error between predicted and target pixel values, using the 3×3 matrix [SRL]_j. This encodes the spectral modulation and integrates the camera spectral sensitivity functions and the LED emission and material reflectance spectra LeGendre et al (2022) for a given square, j, of a colour chart (a chart mapping shades and tints of red, green and blue, where each square is a different shade). Testing on colour charts showed near-optimal results, though the resulting errors are limited by using a 3×3 linear transformation kernel. LeGendre et al (2022) highlight issues resulting in de-saturation of skin colour and fabrics (proposing further testing on a larger spectrum of skin colours), which is attributed to the restrictive colour schema of LED panel lighting. Similarly, Smith and Zink (2021) look at correcting dynamic in-camera-frustum hue changes, though using simpler colour transforms. Additionally, Debevec and LeGendre (2022) investigate a method for HDR-image (HDRI) lighting reproduction, going from a virtual HDRI setting to an LED volume setting.
This essentially inverts the classical IBL problem and is approached by dilating pixels above a given threshold to meet constraints on local average pixel values displayed on a virtual LED wall. Overall, the use of LED panelling in production is still new. Aside from discussions of lighting and colourization, the IC landscape outside of this paradigm is undefined Kavakli and Cremona (2022). We have therefore aggregated a set of non-scientific resources, in Table 1, which contribute to cinematographic discussions concerning the use of LED panels, categorized to provide further clarity on subject materials.

Table 1: Supplementary online resources containing articles, documentation and video-based discussions.
Topic | Papers
General Discussions | Nila et al (2022); Kavakli and Cremona (2022); Kadner (2021); Pires et al (2022); Kadner (2019); Hendricks (2022); Society (2020)
Workflows/Pipelines | Chambers et al (2017); Consulting (2021)
Motion Tracking | Viehmann (2020); Televisual (2021)
Actor Immersion | Bennett (2020); Leane (2020)

3.2.2 Camera Calibration and Localization
The objective of camera calibration and localization (or pose estimation) is to map a 3-D world onto a 2-D image plane. This involves modelling the camera's intrinsic/inertial parameters and extrinsic/external parameters Zhang (2014) using real image data. An illustration of the relevant parameters to be modelled is shown in Figure 12.

Fig. 12: Extrinsic parameters are shown in grey. Intrinsic parameters are shown in white.

The intrinsic parameters define the camera model, focal length and lens distortions. The extrinsic parameters model the camera transform matrix for each pose as well as the path for moving shots.
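The mapping just described can be sketched with a simple pinhole model: the intrinsics K carry the focal length and principal point, and the extrinsics [R|t] carry the pose. The numeric values below are illustrative, not from any cited calibration.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # intrinsics: fx, skew, cx
              [  0.0, 800.0, 240.0],   #             fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # extrinsic rotation (camera aligned with world)
t = np.array([0.0, 0.0, 2.0])          # extrinsic translation (origin 2 units ahead)

def project(X_world):
    X_cam = R @ X_world + t            # world frame -> camera frame (extrinsics)
    x = K @ X_cam                      # camera frame -> homogeneous pixels (intrinsics)
    return x[:2] / x[2]                # perspective divide

print(project(np.array([0.0, 0.0, 0.0])))   # world origin lands on the principal point
```

Calibration is the inverse problem: recovering K, R and t (plus lens distortion terms, omitted here) from observed image points.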
This is a popular problem that concerns the general use case of a single camera Qi et al (2010); Long and Dongri (2019); Remondino and Fraser (2006), as well as specific use cases such as surgical monitoring Qi et al (2010); Obayashi et al (2023); Koeda et al (2015). Regarding IC, we find work supporting a number of applications, including aerial photography Bonatti et al (2019); Pueyo et al (2022), photogrammetry Luhmann et al (2016); Clarke and Fryer (1998); Fraser and Al-Ajlouni (2006), image-based 3-D reconstruction Chen et al (2022b); Truong et al (2023); Xian et al (2023); Moreau et al (2022) and underwater filming Shortis et al (2007); Capra et al (2015); Ma et al (2023); Massot-Campos and Oliver-Codina (2015); Zhou et al (2023). The two practical uses for this in cinematography are automated camera control and real 3-D reconstruction. For automated camera control, the research landscape leans towards controlling the extrinsic parameters Salvi et al (2002); Remondino and Fraser (2006). However, there exists work bridging extrinsic and intrinsic parametric control to satisfy the cinematographer. For example, Pueyo et al (2022) present CineMPC, which searches for an optimal trajectory (for a drone) and camera angle using a model predictive control (MPC) framework. MPC Camacho and Alba (2013) achieves a process output, such as a camera action, by considering future time instances/horizons and minimising the cost of selecting different actions. CineMPC models a finite horizon which is continually displaced until all actions cease, and constrains the objective function to be flexible to different camera configurations and visual aesthetics. The authors curate mathematical expressions to account for composition, depth of field and canonical shots. Thus, the subsequent cost functions can be considered naive approximations of canonical cinematographic style.
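The receding-horizon idea behind MPC can be sketched with a toy one-dimensional pan controller: enumerate short action sequences, score them, apply only the first action of the cheapest sequence, then re-plan. The action set, cost terms and target below are invented for the sketch and bear no relation to CineMPC's actual objective.

```python
from itertools import product

ACTIONS = [-1, 0, 1]                   # e.g. pan left / hold / pan right

def step_cost(state, action, target):
    # invented cost: tracking error plus a small effort penalty
    return abs(state + action - target) + 0.1 * abs(action)

def mpc_action(state, target, horizon=3):
    best, best_cost = None, float("inf")
    for seq in product(ACTIONS, repeat=horizon):    # enumerate the finite horizon
        s, cost = state, 0.0
        for a in seq:
            cost += step_cost(s, a, target)
            s += a
        if cost < best_cost:
            best, best_cost = seq[0], cost          # keep only the first action
    return best

print(mpc_action(state=0, target=3))   # → 1 (pan towards the target)
```

Real MPC replaces the brute-force enumeration with a numerical optimizer, but the displaced-horizon structure is the same.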
While there exists a lot of work on camera calibration for automated control, the landscape with regard to 3-D reconstruction is more fertile. As discussed by Remondino and Fraser (2006), the general approaches to calibration are Tsai (1987), Heikkila and Silvén (1997) and Zhang (2000), which model distortion intrinsic parameters for a pinhole camera. Nevertheless, there is no universal or flexible automation scheme that achieves this Remondino and Fraser (2006). This is why we still see splintered research on subset paradigms, such as Pitombeira and Mitishita (2023), which focuses on calibrating the zoom and focus features for scenarios where modelling fixed focus and zoom lenses is infeasible, or Jasińska et al (2023), which looks at improving the geometric stability of videos using SfM on multi-view stereo (MVS) images for photogrammetric reconstruction. Interestingly, there is work relating to INR and NeRF modelling which addresses the whole problem Zhu et al (2023). The classical model in Mildenhall et al (2021) naively uses COLMAP Schonberger and Frahm (2016), whereas recent work has focused on joint optimization Yen-Chen et al (2021); Xia et al (2022); Wang et al (2021b); Truong et al (2023). While many of these papers improve upon pose estimation for a fixed pinhole camera model, there is work on evolving the lens model as well. For example, Xian et al (2023) present a formulation of a ResNet-based lens distortion model and robust pattern-based calibration to provide a thin-lens model suitable for NeRF reconstruction as well as other vision-based tasks.

3.2.3 Neural 3-D Representations of Dynamic Scenes
Reconstructing real scenes as 3-D representations presents a host of new solutions to existing cinematographic problems, such as novel view synthesis for content acquisition, and elevates prior 2-D based paradigms to 3-D.
For instance, in Section 3.1 we discuss how 3-D reconstruction could be used for evaluating IC research on DAs by providing realistic virtual environments for testing through simulation. There are two general formulations of this problem:
1. Monocular scenes, captured with a single camera moving around a moving object Pumarola et al (2021); Yan et al (2023); Park et al (2023)
2. MVS scenes, containing a single action captured using multiple cameras which are often static Sabater et al (2017); Broxton et al (2020); Li et al (2022)
A third formulation has been considered Fridovich-Keil et al (2023): forward-facing scenes, where a single camera is bound to a single plane of motion. However, this has been weakly adopted as a universal paradigm. For example, K-Planes Fridovich-Keil et al (2023) mimics the general consensus on using normalized device coordinates (NDC) and the scene contraction used in Barron et al (2022) to "hack" at this problem. However, this offers no ability to render 6-degrees-of-freedom (6DoF) dynamic video. Doing so would require hallucinating obstructed geometry (the "behind" of a scene), which introduces a whole new paradigm. Methods that attend to the general formulations are shown in Table 2. These are not the only concurrent solutions, but they represent the wide variety of available approaches.

Table 2: Overview of dynamic NeRFs. Speed measures the training time, where D is days, h is hours and m is minutes. Commercial GPU indicates whether the authors tested with commercially available GPUs.
Method | Speed | Commercial GPU | Explicit | Ancestors
D-NeRF | D | ✗ | | NeRF
Dy-NeRF | D | ✗ | | NeRF
V4D | h | ✗ | |
NeRFPlayer | h | ✗ | ✗ | Instant-NGP, TensorRF
K-Planes | m | | | NeRF-W, Instant-NGP, DVGO
HyperReel | h | ✗ | | TensorRF
NeRF-DS | h | ✗ | ✗ | HyperNeRF
HexPlane | h | ✗ | ✗ | TensorRF
Tensor4D | h | ✗ | ✗ | NeRF-T, D-NeRF
DynIBaR | D | ✗ | ✗ | NSFF, IBRNet
Explicit indicates whether an explicit representation was used to boost training and inference speed. Ancestors indicates the prior work influencing each method. Two traits currently differentiate the originality of research. The first is the proposal of new space-time representations. The second is the manipulation of representations to enhance the learning of temporal elements. One of the earliest and easiest-to-understand proposals for a new space-time representation is D-NeRF Pumarola et al (2021). This method models a deformation field Ψ(x, t) → Δx, where Δx is the predicted positional change of a ray sample relative to a canonical static field Φ(x + Δx) → (c, σ). Relative to global space, this learns an SE(3) transformation. Learning a canonical static space is a robust way of ensuring volumetric consistency over time. Though intuitive, this lends itself to issues when a scene is not continuously in-frame. To counterbalance this, methods such as K-Planes decompose static and dynamic volume representations. More specifically, K-Planes does this by projecting ray samples (containing (x, t)) onto six feature planes, three representing static space and three representing dynamic space. The inputs are normalized to [0, N], projected onto the feature planes and bi-linearly interpolated among varying scales (i.e. coarse and fine features). To decode the features, attaining (c, σ), element-wise multiplication is used to recover a final feature vector, which is passed into a feature decoder (for an explicit representation) or a small MLP (for an implicit representation). Overcoming similar problems with large and slow-to-train dynamic representations, we see Gaussian splatting (GS) alternatives such as 4DGS Wu et al (2023) (pre-print available on arXiv), which uses the same K-Planes decomposition, though uses it to derive covariance and visual properties, rather than visual-only properties as with K-Planes.
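The K-Planes lookup described above (project an (x, y, z, t) sample onto six planes, bilinearly interpolate, then fuse by element-wise product) can be sketched in plain Python; the grid size and feature dimension are toy values, and the downstream decoder is omitted:

```python
def bilerp(plane, u, v):
    """Bilinearly interpolate a feature vector from a [H][W][F] grid."""
    u0, v0 = int(u), int(v)
    u1 = min(u0 + 1, len(plane[0]) - 1)
    v1 = min(v0 + 1, len(plane) - 1)
    du, dv = u - u0, v - v0
    out = []
    for c in range(len(plane[0][0])):
        top = plane[v0][u0][c] * (1 - du) + plane[v0][u1][c] * du
        bot = plane[v1][u0][c] * (1 - du) + plane[v1][u1][c] * du
        out.append(top * (1 - dv) + bot * dv)
    return out

def kplanes_features(planes, x, y, z, t):
    """Fuse interpolated features from the six space/space-time planes
    by element-wise (Hadamard) product, as in K-Planes."""
    pairs = [("xy", x, y), ("xz", x, z), ("yz", y, z),   # static planes
             ("xt", x, t), ("yt", y, t), ("zt", z, t)]   # dynamic planes
    fused = None
    for name, u, v in pairs:
        f = bilerp(planes[name], u, v)
        fused = f if fused is None else [a * b for a, b in zip(fused, f)]
    return fused  # decoded by a feature decoder or small MLP downstream
```

The Hadamard fusion means a sample must receive a high response on every plane to produce a strong feature, which is what lets the static and dynamic planes specialise.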
Another interesting solution to the issue of volumetric consistency is key-frame interpolation of static fields. HyperReel exemplifies this by learning the displacement vectors and geometric primitives of a jointly learned key-framed static field. To learn many key-frame fields for a single video, HyperReel builds upon TensoRF due to its compact nature and fast learning ability. Furthermore, as this only provides a discrete set of time-dependent snapshots, the authors propose modelling the velocity of volumes at each key-frame. Similarly to the principles of D-NeRF, this enhances the temporal quality of the radiance field representation. Overall, methods are only modestly capable of modelling dynamic scenes with 6DoF and usually require additional modification to handle non-generic scenes, such as forward-facing scenes Attal et al (2023); Fridovich-Keil et al (2023). Despite this, we still find work that tends to production-specific needs, such as editable NeRF and GS representations Lazova et al (2023); Zheng et al (2023); Huang et al (2023). For cinematographers this means patience, as we continue to witness increasing interest in this field. 3.2.4 Neural 3-D Representations of Humans Modelling non-rigid or deformable geometry, for example a human, is a classical problem for graphics research. This topic is well aligned with research on dynamic NeRFs and broadens human-centered computer vision research to the 3-D case. Similarly to the prior subsection, the two general problem formulations are multi-view and monocular scenes, which we use to distinguish current research objectives. We can additionally differentiate work by its reliance on generalising humans and their poses. HumanNeRF Zhao et al (2022) exemplifies both of these points as a method that focuses on the multi-view paradigm while generating generalizable human poses. The authors propose using a NeRF to learn the skinned multi-person linear model (SMPL) Loper et al (2015), i.e.
the geometry and appearance of an actor. These features are learnt and used to train a novel neural appearance blending field. Similarly to D-NeRF, the generalizable NeRF learns the canonical and deformation fields of an actor by taking as inputs an SMPL skeleton and pixel-aligned features, and outputting the colour and density of a sample. The appearance blending field refines texture details by accounting for the colour of aligned features from neighbouring views. This model performs well; however, it requires carefully placed cameras and struggles with new poses. MonoHuman Yu et al (2023) overcomes this by using a shared bidirectional deformation module that disentangles forward and backward deformation into rigid skeletal motion and non-rigid motion. Forward deformation regards the transformation from canonical space to a unique observation space, while backward deformation accomplishes the opposite. To guide training for new poses, forward correspondence features at known key frames are selected from an observation bank, and visual features are evaluated relative to the features in the new observation space. This improves volumetric consistency between different observation spaces, meaning sequences of actions can be recovered with more confidence. Additionally, the method is less vulnerable to the issues of using monocular video, as the observation bank can be used to improve reconstruction of new non-rigid actions. In cinematographic practice, neural human modelling methods relieve us of the accessibility constraints of motion capture (MoCap) suits Wang et al (2022a,b), which are currently viewed as the gold-standard MoCap systems. The authors of the neural motion (NeMo) model Wang et al (2022b) exemplify this by testing their proposed framework on athletic actions, taken from the Penn Action Dataset Zhang et al (2013). Like MonoHuman and HumanNeRF, NeMo generalizes motions using multi-view video.
Notably, though, this is achieved by inputting videos of the same action under varying scene conditions, such as different actors and lighting. To handle unsynchronized actions, a phase network is introduced as a time-based warping function to align poses (i.e. joint angles and translation of motion) in an action sequence. Furthermore, scene-specific feature embeddings handle the visual differences between scenes. Consequently, the phase and instance features are used as inputs to the NeMo network, whose outputs define the joint angles and translation of motion used to render an SMPL model. Conclusively, this field of research is popular and highly relevant to IC. While it is currently limited by training and inference time Wang et al (2022b); Yu et al (2023), there is potential for its use in applications like MoCap, which may be additionally beneficial for actor-based ICVFX. 3.2.5 Challenges and Future Work With ICVFX, research is limited by access to expensive technologies such as cameras and LED volumes. Considering the novelty of this production format, we are certain to see more research emerge as more stages are built. Aside from current research on improving the quality of stage lighting, there are other areas that we believe are worth pursuing. LED volume production (or virtual stage production (VSP)) is heavy on pre-production planning, thus there is potential for supporting tasks like pre-production shot visualization or 3-D stage design. This area of research pairs well with 3-D capture and modelling research. Real-time 3-D view synthesis (rendering) could also play a role in in-camera background projection for VSP. Though prior to this there are steps yet to be taken in providing high-resolution renders or real-time scene changes. With the recent introduction of open-source pipelines for NeRF Tancik et al (2023); Li et al (2023), modifying and testing different NeRFs is trivial.
However, there is a lack of understanding when it comes to the optimal strategy for capturing media for training NeRFs. Additionally, current methods for evaluating NeRFs using 2-D image-based metrics such as PSNR, SSIM and LPIPS are perhaps not completely reflective of the accuracy of the 3-D fields which we expect a NeRF network to learn Tancik et al (2023); Gao et al (2022). This is specifically true for the dynamic use case. Therefore, we believe this field still has a considerable journey ahead before we can confidently select models for large-scale commercial productions. It is also worth noting that automated 3-D capture technologies play a great role in the general worry over the ownership of an actor's visual essence. In Section 4.1 we elaborate on the social responsibility of researchers, as we find this ethical dilemma akin to issues previously faced with deepfake research. 3.3 Live Production IC research for live production is conditional on the type of environment, the types of actions expected to be displayed (e.g. routines or phases in a performance or sport), the location of audience members and the focal point for the digital audience. Analysing the current play or performance can inform live selection of cameras and shots. Additionally, challenging environments and scenarios may lead to degraded visual quality, thus real-time solutions are required for image correction. For example, image analysis from a game of football is challenged by noise arising from spectators or advertisement banners Penumala et al (2019); Spagnolo et al (2013); Wu et al (2022a), while watersports face problems of distortion from partially underwater participants, light artifacts from surface reflection and noise from turbulent water Wu et al (2022a); Host and Ivašić-Kos (2022).
In this section we review work on object tracking and human pose estimation, specifically highlighting cases that could inform live shot selection or real-time image correction methods. 3.3.1 Human Pose Estimation Human pose estimation (HPE) is a popular paradigm in computer vision research Badiola-Bengoa and Mendez-Zorrilla (2021); Sarafianos et al (2016); Zheng et al (2020). We see cases of HPE used in cinematic scene analysis Wang et al (2023); Wu et al (2022b) and also in live event broadcasting. For example, for sport events, analysing a player's pose may allow us to forecast a series of entertaining events which could provoke particular cinematographic shots. For sport there is ultimately one major factor that divides research: whether or not general application is possible. This is an essential delineation to make prior to reviewing a method Badiola-Bengoa and Mendez-Zorrilla (2021), as solutions are heterogeneous (e.g. anatomical poses will vary between sports). Aside from this there are special cases where models take on specific challenges, such as exploring HPE in 3D Wang et al (2023); Song and Fan (2021), in highly dynamic environments Henning et al (2022) or for team sports Bridgeman et al (2019). Hu (2021) looks at real-time football player (single-target) posture estimation for live presentational analysis. To accomplish this, confidence-weighted fusion and visual attention are used to handle problems with colour camouflage and static foreground features, first identifying the target foreground features. Figure 13 illustrates pixel-based joint verification for identifying key-point target features, using local binary similarity patterns (LBSP). Then a heat map is generated using a ResNet, a stacked hourglass network and deep-layer aggregation (DLA). The DLA collects features from CNN layers to determine the shared features, i.e.
it aggregates features at each layer relating to a verified joint and tries to classify what it is (knee, elbow, etc.). For optimization, the model is updated with adaptive strategies relating to the confidence weight of each pixel and their corresponding weighted fusion sum based on the joint classification. This method shows significant improvement in joint and posture detection compared to other state-of-the-art methods Wang and Li (2013); Pishchulin et al (2013); Hossain and Little (2018); Pavllo et al (2019). In contrast, Bridgeman et al (2019) looks at tracking and analysing multiple targets simultaneously. This method builds upon existing research by Cao et al (2017). The goal is to construct a 3D (skeletal) pose model using multiple 2D views, while tackling prior issues of long processing time and reliance on appearance-based cues for initiating feature detection. This is achieved by presenting a greedy approach to multi-view detection. The approach has three steps: (1) 2D pose error correction is done by flipping incorrectly oriented body parts and dividing by body part, (2) per-frame 2D pose association determines a consistent label for each body part across multiple views, found greedily by selecting the best pose from a weighted rank, and (3) 3D skeleton tracking uses the 2D labelled poses to generate a 3D skeleton for each individual. The novelty in this paper is its use of multi-view HPE to verify joint placements in 3D. With similar ambitions, Song and Fan (2021) explicitly improves pose detection methods by using different modules from the VGG11 network for different feature fusion methods. To accomplish this, estimates of features are sampled from the VGG11 network and local features of points are first aggregated. After the image passes through a semantic segmentation network, a segmented target (i.e. a body part) is passed into the feature fusion network.
This splices and fuses the RGB features of the segmented image, local point-cloud features and global view features to form a final feature vector, which is then reduced to a scalar value for classifying the body part. Wang et al (2017) looks at HPE using fixed aerial cameras. As with previous models, a CNN is used for target (player) detection and a YOLOv2 model is used for pose detection, trained on public aerial image data sets. Processed images undergo further classification into normal and abnormal poses. Conclusively, the model provides an insightful way to detect posture from a common viewing angle for events, though it is limited in its ability to detect abnormal poses as a consequence of a biased data set; it is challenging to find publicly available data sets pertaining to abnormal poses. It would be interesting to see this developed for real-time application, particularly as wireless video transmission protocols are shifting to improve on wireless latency and error handling Ahmed et al (2015).
Fig. 13: Per-pixel binary joint verification used for identifying pixels for joint classification. (a) Joint verification using the colour features, v_t, and the local binary similarity pattern, lbsp_t, of pixel x at frame t. R_color and R_lbsp are predefined distance thresholds, and {v_1, v_2, ..., v_n} and {lbsp_1, lbsp_2, ..., lbsp_m} are sets of known colour and binary similarity features. The distance between the pixel's colour feature (and LBSP feature) and the features in their respective sets is evaluated w.r.t. the thresholds, and the binary results are fused using a logical AND operation to determine whether a pixel belongs to a joint. (b) Local binary pattern example, where a binary similarity function compares a given pixel with the pixels that fall on its local binary pattern. The similarity is taken w.r.t. a threshold, T.
3.3.2 Object Tracking As with HPE, object detection and tracking (ODT) has a significant presence in sports broadcasting, for similar reasons.
Unlike HPE, solutions are not prescribed on a case-by-case basis. Rather, they tackle implicit issues given a set of idiomatic environment characteristics Kaushal et al (2018). For example, for football we could envision a method for detecting a ball in motion within a noisy environment. This translates well to other sports with similar tropes, such as handball. Kaushal et al (2018) presents a lengthy review of ODT approaches including evolutionary algorithms, convolutional neural networks and feed-forward neural networks (FNNs). It is shown that there are suitable approaches for almost any combination of known visual constraints. For example, FNNs are most successful for video streams with irregular illumination change and noise, whilst darker hues and low frame rates are best handled by DL-based CNN classifiers. Within IC we find two distinct paradigms: (1) singular-target ODT and (2) multiple-target ODT. (1) is much easier to solve and solutions tend to use YOLO detection. For example, Wu et al (2022a) looks at two sports in particular, swimming and table tennis, for over-the-top and in-stadium large-screen broadcasting. With the table tennis case study, the problems relate to resolving high-speed motion blur for small moving objects in order to capture the game live in 3-D. With swimming, the problems relate to light reflection obscuring the camera's view of swimmers, and water occlusion. For table tennis, a YOLOv4 model is used to estimate ball bounce from a (single-view) video stream, and this is used to define a 3-D trajectory. For swimming, the solution is a reduced ResNet-50 neural network model acting as the base network, alongside a modified SiamRPN++ model for swimmer tracking. To support recognition, the camera's view is masked to suppress background noise. Wu et al (2022a) concludes by noting difficulties with the communication of heterogeneous interfaces between collaborators, as well as assessing the feasibility of this model in actual practice.
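To make the bounce-to-trajectory step concrete, a hedged sketch (not Wu et al's actual pipeline) is to fit a drag-free ballistic arc between two detected bounce positions and the flight time between them:

```python
G = (0.0, 0.0, -9.81)  # gravity in m/s^2, z pointing up

def launch_velocity(p0, p1, dt):
    """Velocity leaving the first bounce, assuming drag-free flight:
    p1 = p0 + v0*dt + 0.5*g*dt^2  =>  v0 = (p1 - p0)/dt - 0.5*g*dt."""
    return tuple((b - a) / dt - 0.5 * g * dt for a, b, g in zip(p0, p1, G))

def arc_point(p0, v0, t):
    """Position along the arc t seconds after the first bounce."""
    return tuple(a + v * t + 0.5 * g * t * t for a, v, g in zip(p0, v0, G))
```

Sampling `arc_point` between the two bounce times yields a 3-D trajectory that can be rendered to a large screen or used to drive a virtual camera.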
As concluded by Burić et al (2018), YOLO methods are the fastest to test and fine-tune (though we note only YOLOv2 was tested in this review), and are thus an attractive case for practical development. They are proven to be sufficient, particularly for occlusions. The less successful alternative, Mask R-CNN He et al (2017), uses two networks: one for detecting regions of interest (i.e. a Region Proposal Network (RPN)) and a deep CNN for determining how likely a group of pixels within a region of interest is to be associated with an object. On the other hand, point (2) concerns multiple-target ODT, which we split into two further sub-objectives: (i) ODT for objects of the same family, and (ii) ODT for objects from different families (such as player and ball detection in Moon et al (2017)). Şah and Direkoğlu (2021) reviews several popular methods for in-field-sports broadcasting. Conventional approaches detect visual changes using classical image processing methods Deori and Thounaojam (2014); Balaji and Karthikeyan (2017). Though, these solutions are naive to several constraints, such as (ii). Alternatively, DL-based methods are usually constructed as Fast R-CNNs Girshick (2015) (ancestor of Mask R-CNN) and YOLO Pobar and Ivasic-Kos (2020); Rahmad et al (2019); Ren et al (2015b); Hurault et al (2020). Yet again, there is also a case to be made for Mask R-CNN Zhang et al (2020b). Surprisingly, Şah and Direkoğlu (2021) finds that conventional methods outrank DL approaches, though not by much. It is reasoned that with a larger number of targets to track, deep CNNs are unable to distinguish lower-resolution features, whilst conventional methods are totally reliant on high-resolution imagery. In the case of (ii), DL approaches are currently unfeasible, as small-object detection presents further issues tied to low-resolution feature detection Şah and Direkoğlu (2021).
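The detector families above (YOLO variants, Fast R-CNN, Mask R-CNN) share a post-processing step worth illustrating: scoring candidate boxes and suppressing duplicates by intersection-over-union. The following is a generic sketch of that step, not any one paper's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box and
    drop any remaining box overlapping a kept one by more than thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

The `thresh` trade-off mirrors the occlusion discussion above: a high threshold keeps overlapping players as separate detections but also keeps duplicate boxes on a single target.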
3.3.3 Challenges and Future Work Relative to other forms of production, live broadcasting has a long-standing relationship with AI research. We see this through considerable efforts to elevate the efficacy of pose and object detectors for human targets. We have doubts about the current practicality of these models considering their lack of accuracy relative to human ability, though this is well understood by researchers and is often the main objective. Therefore, we do not expect future work will deviate from investigating this paradigm. Considering the number of processes involved, there are several other feasible avenues for research. For example, we could extend the supporting infrastructure by investigating different masking strategies for suppressing the general set of noisy surroundings. We could also investigate image detection and correction methods for visual artifacts, such as the reflection of light from a swimming pool Wu et al (2022a). Furthermore, a significant amount of footage is readily accessible online. Granted, there are biases in the popularity, gender and racial diversity of sporting events, which should be avoided where possible. Nonetheless, this indicates a need to source more diverse data sets for general use. Otherwise, we are interested to see how live IC could play a larger part in driving content capture. For example, we are interested to see more research in optimising shot composition for anticipating events. It would also be exciting to see stylized shot compositions that look at capturing in-game events with a variation of techniques. For instance, using cinematic style transfer Wang et al (2023) to replicate praiseworthy shots. Perhaps one day we may be able to watch a sports game in a Tarantino-esque style. 3.4 Aerial Production Similar to many other applications of IC, the problem landscape for UAV-based cinematography is generally undefined Mademlis et al (2018a).
This is further pronounced when we consider the limitations that come with commercial drones, which are not easy to outfit for cinematographic purposes. Mademlis et al (2018a) produces an interesting review of concurrent challenges, outlining that due to a badly defined problem space, a lack of technological accessibility, and roughly drawn legal, ethical and safety constraints, it can be challenging to find a solution which fits the general needs of cinematographers. Despite this, work on autonomous UAVs continues to grow and, more recently, we find a large variety of seemingly successful approaches to UAV control for IC. In this section we make the key delineation between single-drone and multi-drone systems. This is important to consider not only for cinematographers with tight budgets but also because the technology between the two can differ greatly. 3.4.1 Automated Single UAV Control Aside from budget constraints, the choice to use a single UAV may come down to the size and accessibility of the camera's viewing search space. For example, a small viewing region around an actor, for capturing close- to mid-range shots, may only require a single versatile drone. With limited airspace around a subject, more drones demand additional constraints to avoid crashes, further diminishing the possibility of real-time solutions. Thus there is an area of research on single UAV control referred to as autonomous "follow me" quadrotors Joubert et al (2016). There are a number of vision- and sensor-oriented solutions within this domain Naseer et al (2013); Teuliere et al (2011); Lim and Sinha (2015); Coaguila et al (2016), as well as drone manufacturers that supply this feature Joubert et al (2016), like the 3DR Solo and DJI Phantom. To accomplish this work there are a number of physical and technical limitations to consider. The first is the trade-off between online and offline solutions Bonatti et al (2020); Puttige and Anavatti (2007).
Online solutions Huang et al (2018); Bonatti et al (2020) make decisions quickly and responsively, and are pragmatic when active elements in a scene move in unpredictable ways, such as an actor's movement. Additionally, work exists on physical camera and drone modification for improved online and onboard capability Huang et al (2018). Offline solutions can enable solving more complex challenges, such as swarm robotics Mademlis et al (2019). The second challenge to consider is the method of detecting actors. This is useful for trajectory planning but also for gimbal control. Bonatti et al (2020) considers using a single-shot detector Liu et al (2016), such as YOLO9000 Redmon and Farhadi (2017) and Fast R-CNN Ren et al (2015b), for a problem demanding smooth gimbal control, and determines that Fast R-CNN is optimal given the speed-accuracy trade-off for defining a trajectory. They additionally use MobileNet to perform low-memory feature extraction with fast inference speed. Finally, for indefinite actor tracking they use a KCF Henriques et al (2014), taking advantage of the fact that some learning algorithms perform better in the Fourier domain. The KCF tracker relies on kernel ridge regression, which is a kernelized version of the linear correlation filter, forming the basis for the fastest trackers available Bolme et al (2010, 2009). The third and most evident challenge is how trajectory planning is handled with regard to certain cinematographic objectives. A tempting option is to adapt cinematographic concepts to mathematical expressions that are subsequently optimized for control Pueyo et al (2022); Joubert et al (2016). Alternatively, Ashtari et al (2020) proposes a method that models the dynamic motion of a human camera operator in real time.
The authors follow works on modelling the vertical and lateral displacement of walking patterns Carpentier et al (2017); Zijlstra and Hof (1997), combining these approaches into a single routine that additionally considers the rotation of the drone and damping effects to simulate different types of camera equipment. Another approach is to let a reinforcement learning agent control the camera motion. Gschwindt et al (2019) builds on CHOMP Ratliff et al (2009) to parameterize smooth trajectory planning while using a deep Q-network (DQN) to lead target shot selection. Two methods are proposed for training the DQN. The first is a human-crafted reward function; like adapting cinematographic shots to optimization functions, this reward function accounts for the actor's presence in a shot, the shot duration and the shot angle. The second is human-led observation, which rewards the DQN relative to the cinematographer's subjective opinion. Ultimately, the design of autonomous single-camera UAV systems is ambiguous. For cinematographers, this means there are a number of trade-offs to consider, such as the complexity and feasibility of using more flexible systems (e.g. drones with gimbals or "follow me" modes). We agree with the premise set by Mademlis et al (2018a) that work still needs to be done unifying the objectives of autonomous drone systems. However, we also believe that a universal paradigm could inhibit varying uses; for example, mimicking human walk patterns and optimising shot composition present different objectives and outcomes.
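The walking-camera idea above can be caricatured as phase-locked sinusoids. This is a toy sketch, with hypothetical amplitudes and step frequency; the model of Ashtari et al (2020) additionally handles drone rotation and damping:

```python
import math

def handheld_offset(t, step_hz=1.8, v_amp=0.02, l_amp=0.01):
    """Toy camera offset (lateral, vertical) in metres at time t for a
    walking operator: the camera bobs vertically once per step, while
    lateral sway alternates side-to-side at half the step frequency."""
    vertical = v_amp * math.sin(2.0 * math.pi * step_hz * t)
    lateral = l_amp * math.sin(math.pi * step_hz * t)
    return lateral, vertical
```

Adding offsets of this kind to an otherwise smooth drone trajectory reintroduces the "human" feel that perfectly stabilised footage lacks.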
Scheduling drones can involve two types of tasks: a swarm of cooperating drones with a unified purpose, and non-cooperative drones Mademlis et al (2019). For cinematographers, this is comparable to a drone fleet being used to capture multiple views achieving the same cinematographic objective (for example, a thematically driven system Nägeli et al (2017)), or a fleet used to capture a varying selection of shots for sport- or event-like filming, where a human or autonomous director makes the final decision over which shot to broadcast Capitan et al (2019). Further delineating from single-UAV capture, it becomes less plausible to use GPS-based localization for trajectory planning Mademlis et al (2018b). This is because GPS systems introduce noise during drone localization, which perturbs drone formations that demand accurate localization. Furthermore, a similar problem is faced with SLAM-only methods for localization. Hence, an inertial measurement unit (IMU)-GPS-SLAM fused system Mademlis et al (2018b) has been employed. This problem is also shared with works that lie outside of the IC scope Yan et al (2022); Han et al (2022); Debeunne and Vivet (2020). For example, Yan et al (2022) looks at fusing a VO system (discussed in Section 2) with an IMU, integrated on an extended Kalman filter, inspired by Bloesch et al (2015). With the additional curation of a dense 3-D map of the target environment, built using ORB-SLAM Wang et al (2016), automatic robotic localization and control in an agricultural context (i.e. inside a greenhouse and outside in a field) is made possible. Interestingly, the authors experiment on cluttered and non-cluttered environments, which reflects a shared characteristic of production environments, namely the variation of clutter between different sets. Nägeli et al (2017) directly address this problem and further highlight challenges with dynamic entities (drones, humans, moving set, etc.), using model predictive contour control (MPCC), which is related to the MPC previously mentioned.
Confronted with a highly constrained scenario, the authors simplify the problem using manually defined "virtual rails" for each drone. This acts as a coarse trajectory guide that drones loosely follow to avoid collisions while achieving one-shot aesthetic objectives. To track and update the state of moving targets, a Kalman filter is used. Furthermore, the authors employ an actor-driven view-framing method which can be adjusted in real time via a graphical user interface (GUI) by a cinematographer for varied framing. While not explored in the paper, this could lead to extensions that look at varying the transition between framing inputs at different times to produce aesthetically pleasing transitions. Finally, to overcome the collision constraints, the authors model regions to be avoided as ellipses around a subject as a hard constraint (i.e. high penalization), using slack variables to indicate when a horizon is foreseeable. Hence, when slack variables are high, the problem is deemed infeasible, either due to violating the collision constraints or the computational budget being exhausted. We note that slack variables are commonly used in MPC research to model Pareto-optimal solutions. Overall, the limitations raised by researchers are wide-spread. Since each production scenario is unique and solutions must be robust due to safety concerns, there is no work that currently solves the general paradigm. We share our formalization of the general paradigm with Mademlis et al (2018a), whereby the ideal is a system that can act on a set of high-level inputs from an operator who is not required to be technically knowledgeable. 3.4.3 Challenges and Future Work Respecting the goal of a general automated UAV solution for IC, the prior subsections lead us to encourage prioritising the discovery of a holistic set of solutions.
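The target-tracking role that the Kalman filter plays here can be illustrated with a 1-D constant-velocity filter. This is a generic textbook sketch with hypothetical noise settings, not the exact formulation of Nägeli et al (2017):

```python
def kalman_step(x, v, P, z, dt=0.1, q=0.01, r=0.5):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter
    tracking position x and velocity v with 2x2 covariance P (nested
    lists), given a noisy position measurement z."""
    # Predict with F = [[1, dt], [0, 1]] and process noise Q = diag(q, q).
    x, P = x + v * dt, [
        [P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
         P[0][1] + dt * P[1][1]],
        [P[1][0] + dt * P[1][1],
         P[1][1] + q],
    ]
    # Update with measurement model H = [1, 0]: innovation y, gain K.
    y = z - x
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[1][0] / s
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v, P
```

Fed noisy detections of an actor's position, the filter smooths the state estimate that the MPCC trajectory optimizer would consume; real systems track 3-D position per target with tuned noise covariances.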
Use cases are broad and often unique; however, much like the DA problem (discussed in Section 3.1.2), solutions heavily rely on the development of related works. For example, solutions to the UAV localization paradigm canonically involve extensions of SLAM and VO research. With recent developments in camera calibration, consequent to the explosion of NeRF research, we are thus likely to see extended use of visual methods for localization and path generation in the future. Considering this avoids the need for GPS-related hardware, it may result in longer flying times and cheaper budgets, which are particularly useful for the multi-UAV paradigm. Moreover, with new developments in dynamic and human NeRFs, discussed in Sections 3.2.3 and 3.2.4, we see additional potential in training and/or testing automated drones on unforeseen circumstances using real-world digital twins. This could be used to accomplish safety tests or to train UAVs on scene-specific scenarios. For example, if we wanted a UAV to learn to react to actors disobeying drone safety protocol, we could train a NeRF environment using MonoHuman and interpolate sequences of human poses which defy protocol. On the other hand, the steady development of NeRF could render UAV-based solutions redundant for directly acquiring the desired content. Rather, it is likely that UAVs become an intermediary step for NeRF-like content acquisition, as it separates the problem into two steps: (1) UAV image acquisition with the objective of optimising a NeRF with 6DoF, and (2) content acquisition within the NeRF with 6DoF flexibility. This simplifies the current paradigm, which accomplishes localization and path generation constrained by subjective visual objectives. Alternatively, while this may apply to the general UAV use case, there are situations that still require attention from IC researchers.
For example, the follow-me paradigm implies the use of UAVs for direct content acquisition and relies more on the ability of a drone to move with a visual anchor, like a human target, under cinematographic objectives, like a desired walk pattern or jitter. Considering this is a simpler problem with arguably easier-to-define objectives, the intervention of NeRF may be unnecessary. 4 General Remarks This literature review begins with the characterisation of IC as the field of research dedicated to exploring AI solutions for cinematographic video production. We subsequently reviewed four production mediums that we believe are of most relevance to current researchers and interested cinematographers, including a subsection on general production tools. This leads us to present our final remarks. There are certainly several favourable fields of research that have potential to shape the industry going forward: 1. Computational Language Structures, discussed in Section 3.1.1 2. LED Volumes, discussed in Section 3.2.1 3. NeRFs, discussed in Sections 3.2.3 and 3.2.4 With regards to computational language structures, we do not believe that the current state of research is telling of its potential impact on IC. Instead, we believe, given the current state of research on LLMs and their use in the public domain (e.g. generating trailers from text/scripts; Reshi (2023); Choi (2023)), that natural language decision-making for IC will be facilitated. Though we have yet to see this investigated in research, we believe there is cause to pursue its application in fields like automated directive assistance, MP for planning and automated UAV control. With regards to LED volumes and NeRFs, we believe the impact on industry has already been felt. For the former, the adoption of this new technology has offered radical change to canonical chroma-keying practices and offers a new image-based paradigm for computer vision researchers.
Regarding the latter, research has demonstrated a flexible pool of new solutions to problems which were previously solved with sculpting/3-D modelling tools and/or hardware-dependent photogrammetric methods. With the current push for compact, fast and accurate representations, we would not be surprised if production-ready NeRFs became available soon. In conclusion, we hope that our work inspires more reviews on future developments of IC research. The fields of research and the relevant video production industries we have discussed are highly susceptible to change, as exemplified by the influence of the LED volume. Thus we should acknowledge the need for concurrent reviews. Finally, to further bridge the gap between IC researchers and industry professionals, we encourage future reviews that target industry professionals directly. 4.1 Social Responsibility The impact of evolving technologies on the creative industries is multifaceted. Regarding IC, there are two matters that underpin widespread concerns with freely advancing AI: (1) replacing actors with human-like AI models Bedingfield (2023); Chiu (2023); Chmielewski (2023) and (2) using generative AI and/or 3-D models to produce pornographic content Tenbarge (2023); Olson (2021). These matters mainly concern fields such as automated 3-D capture and generative modelling, where the ability to capture a person\u2019s physical essence is currently possible to a high degree for images and likely soon for videos. As we discuss these topics in Sections 3.2.4 and 2.2, respectively, we take the opportunity now to provide awareness of the issues and discuss steps which can be taken to avoid certain ethical dilemmas. Replacing real actors can be achieved by either modelling a real human or generating a virtual human Bedingfield (2023).
For actors this is problematic, as current stars fear abuse of existing content containing detailed depictions of themselves. This may also lead to marginalizing real people in favour of virtual celebrity look-alikes. The adult entertainment industry is afflicted by similar problems, though these are compounded by a comprehensive set of concerns including the widened accessibility of mature content to minors and the treatment of women in the workplace Gilkerson (2021); Stokes (2012). Notably, (mis-)use of deepfakes leads to serious outcomes, such as the finding that 96% of deepfakes are sexually explicit depictions of women who did not consent Tenbarge (2023), and also has profound implications in inciting cases of child pornography Olson (2021). Complicating the issue further, Kirchengast (2020) reviews current regulatory mechanisms and legislative powers in the US, concluding that present solutions require thorough critique and serious investigation. This is discussed in consideration of the current level of criminalization, which weakly implicates the parties at fault. Consequently, we believe researchers should take steps to avoid contributing to these problems. The general choices include highlighting awareness, as we have in this section; limiting the use of publicly available code through licensing; and applying due diligence when collaborating with external partners. We also believe that less pressure should be placed on researchers who choose to publish with closed-source code. Hence, we should learn from prior instances involving generative AI and avoid publishing state-of-the-art models (and optimised parameter sets) that do not have robust measures to counter misuse. Furthermore, this could be accompanied by statements of transparency, outlining the efforts that have been made to avoid code misuse. Researchers may also consider the chain of fault and potential legal ramifications that come with irresponsibly publishing code.
As legislative discussions have not concluded on a system for assigning fault, this is worth considering." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05216v1.json b/abs_9K/test_abstract_short_2405.05216v1.json new file mode 100644 index 0000000000000000000000000000000000000000..197d878790a6abd7a4764a36965751e3980d5a1f --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05216v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05216v1", + "title": "FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models", + "abstract": "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. 
Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", + "authors": "Jinglin Xu, Yijie Guo, Yuxin Peng", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. 
Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", "main_content": "Introduction Given monocular 2D images or videos, 3D Human Pose Estimation (3D HPE) aims to predict the positions of human body joints in 3D space. It is vital in various applications, including self-driving [50, 56], sports analysis [13, 31, 46], abnormal detection [9, 45], and human-computer interaction [11, 25, 42]. Considering the expensive computational costs of directly obtaining 3D human poses from 2D content, 3D HPE is usually decomposed into two stages: 1) detecting 2D keypoints in images or videos [5, 7, 24, 39], and 2) mapping 2D keypoints to 3D human poses [6, 10, 35, 48, 52]. In this work, we mainly focus on the second stage, estimating 3D human poses given 2D keypoints. Existing monocular 3D HPE methods [4, 6, 10, 17\u201319, 27, 28, 35, 36, 43, 44, 47, 48, 52, 54, 59, 61] usually have three challenges as follows: 1) Uncertainty: the depth ambiguity inherently exists in the mapping from 2D skeletons to 3D ones (one-to-many); 2) Complexity: flexible human body structure, complex inter-joint relationships, and a high degree of limb freedom lead to self-occlusion or rare and complicated poses; 3) Generalizability: current publicly available 3D HPE datasets have limited action classes, and thus, the models trained on such data are prone to overfitting and difficult to generalize to more diverse action classes. To address these issues, we consider improving the 3D HPE model performance by enhancing the input information.
We found that existing methods ignore accessible texts and naturally feasible knowledge of humans, even though such knowledge promises to provide the model with more guidance. We explicitly utilize (1) the action class of human poses, (2) kinematic information \u201cspeed\u201d, and (3) the way that different human body parts (e.g., person, head, body, arms, and legs) move in human activities to build fine-grained part-aware prompts for the reconstruction task. Specifically, we incorporate a fine-grained part-aware prompt learning mechanism into our framework to drive 3D human pose estimation via vision-language pre-trained models. It is well known that text prompts play a crucial role in various downstream tasks for vision-language pre-training models (e.g., CLIP [30]). However, manually designing prompt templates is expensive and cannot ensure that the final prompt is optimal for the 3D HPE task. Thus, we create a new fine-grained part-aware prompt learning mechanism that adaptively learns modifiers for different human body parts to precisely describe their movements from multiple granularities, including action class, speed, the whole person, and fine-grained human body parts. This new mechanism, coupled with diffusion models, possesses controllable high-quality generation capability, which is beneficial in addressing the challenges of the 3D human pose estimation task. In this work, we propose a Fine-grained Prompt-driven Denoiser (FinePOSE) based on diffusion models for 3D human pose estimation, as shown in Fig. 1, which is composed of a fine-grained part-aware prompt learning (FPP) block, fine-grained prompt-pose communication (FPC) block, and prompt-driven timestamp stylization (PTS) block.
Concretely, the FPP block encodes three kinds of information about the human pose, including action class, coarse- and fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and kinematic information \u201cspeed\u201d, and integrates them with pose features for serving subsequent processes. Then, the FPC block injects fine-grained part-aware prompt embedding into noisy 3D poses to establish fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. To handle 3D poses with different noise levels, the PTS block introduces the timestamp coupled with fine-grained part-aware prompt embedding into the denoising process to enhance its adaptability and refine the prediction at each noise level. Our contributions can be summarized as follows: \u2022 We propose a new fine-grained part-aware prompt learning mechanism coupled with diffusion models that possesses human-body-part controllable high-quality generation capability, beneficial to the 3D human pose estimation task. \u2022 Our FinePOSE encodes multi-granularity information about action class, coarse- and fine-grained human parts, and kinematic information, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. \u2022 Extensive experiments illustrate that our FinePOSE obtains substantial improvements on the Human3.6M and MPI-INF-3DHP datasets and achieves state-of-the-art performance. More experiments on EgoHumans demonstrate the potential of FinePOSE to deal with complex multi-human scenarios. 2. Related Work Diffusion Models. Diffusion models [12, 26, 37, 38] are a class of generative models that sequentially add noise at different levels to the raw data, gradually transforming it from an original data distribution to a noisy distribution, and subsequently reconstructing the original data by denoising.
Diffusion models have strong capabilities in many applications, from 2D image or video generation/editing [1\u20133, 16, 49] to 3D human pose estimation/generation [10, 17, 19, 27, 35, 47, 48, 52, 54, 59]. The 3D HPE task, for example, encounters various difficulties, including occlusions, limited training data, and inherent ambiguity in pose representations. Therefore, diffusion models\u2019 ability to generate high-fidelity 3D human poses makes them more suitable for 3D HPE. 3D Human Pose Estimation. Considering that extracting 2D human skeletons from videos or images requires expensive costs, the 3D human pose estimation task is usually divided into two phases: (1) estimating 2D positions of human joints from images or videos [5, 7, 22, 41], and (2) mapping 2D positions to the 3D space to estimate the 3D positions of human joints [4, 6, 10, 17\u201319, 27, 28, 35, 36, 43, 47, 48, 52, 54, 59, 61]. In this work, we focus on the second phase. [Figure 2. The architecture of the proposed FinePOSE. In the diffusion process, Gaussian noise is gradually added to the ground-truth 3D poses Y_0, generating the noisy 3D poses Y_t for the timestamp t. In the denoising process, Y_t, X and t are fed to the fine-grained prompt-driven denoiser D to reconstruct pure 3D poses \hat{Y}_0. D is composed of a Fine-grained Part-aware Prompt learning (FPP) block, a Fine-grained Prompt-pose Communication (FPC) block, and a Prompt-driven Timestamp Stylization (PTS) block, where FPP provides more precise guidance for all human part movements, FPC establishes fine-grained communications between learnable prompts and poses for enhancing the denoising capability, and PTS integrates learned prompt embedding and the current timestamp for refining the prediction at each noise level.] Early on, TCN [29] used a fully convolutional network based on dilated temporal convolutions over 2D keypoints to estimate 3D poses in video. SRNet [51] proposed a split-and-recombine approach, leading to appreciable improvements in predicting rare and unseen poses. Anatomy [6] decomposed the task into bone direction prediction and bone length prediction, from which the 3D joint locations can be derived entirely. Recently, MixSTE [52] used temporal and spatial transformers alternately to obtain better spatio-temporal features. MotionBERT [59] proposed a pretraining stage to recover the underlying 3D motion from noisy partial 2D observations. GLA-GCN [48] globally modeled the spatio-temporal structure for 3D human pose estimation. D3DP [35] proposed the joint-level aggregation strategy to benefit from all generated poses. Unlike previous methods, our approach proposes a new fine-grained part-aware prompt learning mechanism coupled with diffusion models that possesses controllable, high-quality generation capability of human body parts, which benefits the 3D human pose estimation task. Prompt Learning. Prompt learning has been widely used in the computer vision community [8, 21, 57, 58]. Typically, CoOp [58] utilized continuous prompt optimization from downstream data instead of hand-crafted design, the pioneering work that brought prompt learning to adapting pre-trained vision-language models. CoCoOp [57] extended CoOp by learning image-conditional prompts to improve generalization.
ProDA [21] learned a prompt distribution over the output embedding space. VPT [8] introduced variational prompt tuning by combining a base learned prompt with a residual vector sampled from an instance-specific underlying distribution. PointCLIPV2 [60] combined CLIP [30] with GPT [20] to be a unified 3D open-world learner. Unlike the above methods, we propose a new fine-grained part-aware prompt learning mechanism, which encodes multi-granularity information about action class, coarse- and fine-grained human parts, and kinematic data, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. 3. The Proposed Approach: FinePOSE Given a 2D keypoints sequence X \u2208 R^{N\u00d7J\u00d72}, constructed by N frames with J joints in each, the proposed approach is formulated to predict the 3D pose sequence Y \u2208 R^{N\u00d7J\u00d73}. Considering the high-quality generation capability of the text-controllable denoising process of diffusion models, we develop a Fine-grained Prompt-driven Denoiser (FinePOSE) D for 3D human pose estimation. FinePOSE generates accurate 3D human poses enhanced by three core blocks: Fine-grained Part-aware Prompt learning (FPP), Fine-grained Prompt-pose Communication (FPC), and Prompt-driven Timestamp Stylization (PTS) blocks. 3.1. Diffusion-Based 3D Human Pose Estimation Diffusion models are generative models that model the data distribution in the form of p_\theta(Y_0) := \int p_\theta(Y_{0:T}) dY_{1:T} through chained diffusion and reverse (denoising) processes.
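The chained diffusion and denoising processes just described can be sketched as follows; the linear beta schedule, the number of steps, and the stand-in for the denoiser's prediction are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative forward-noising and DDIM reverse step; the linear beta
# schedule below is an assumption, not the paper's setting.
T = 100
betas = np.linspace(1e-4, 2e-2, T + 1)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)               # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(y0, t, eps):
    """Forward process: Y_t = sqrt(abar_t) * Y_0 + eps * sqrt(1 - abar_t)."""
    return np.sqrt(alpha_bar[t]) * y0 + eps * np.sqrt(1.0 - alpha_bar[t])

def ddim_step(y_t, y0_hat, t, t_prev, eta=0.0):
    """One DDIM update from timestamp t to t_prev given a prediction y0_hat."""
    eps_t = (y_t - np.sqrt(alpha_bar[t]) * y0_hat) / np.sqrt(1.0 - alpha_bar[t])
    sigma_t = eta * np.sqrt((1 - alpha_bar[t_prev]) / (1 - alpha_bar[t])) \
                  * np.sqrt(1 - alpha_bar[t] / alpha_bar[t_prev])
    noise = sigma_t * rng.standard_normal(np.shape(y_t))
    return (np.sqrt(alpha_bar[t_prev]) * y0_hat
            + eps_t * np.sqrt(1 - alpha_bar[t_prev] - sigma_t**2)
            + noise)
```

With eta = 0 the update is the deterministic DDIM step: given a perfect prediction of Y_0, it maps Y_t exactly onto the forward-process sample at the earlier timestamp.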
The diffusion process gradually adds Gaussian noise into the ground-truth 3D pose sequence Y_0 to corrupt it into approximately Gaussian noise Y_t (t \to T) using a variance schedule \{\beta_t\}_{t=1}^{T}, which can be formulated as q(Y_t \mid Y_0) := \sqrt{\bar{\alpha}_t}\, Y_0 + \epsilon \sqrt{1 - \bar{\alpha}_t}, (1) where \bar{\alpha}_t := \prod_{s=0}^{t} \alpha_s and \alpha_t := 1 - \beta_t. Afterward, the denoising process reconstructs the uncontaminated 3D poses by a denoiser D. Since the degraded data is well approximated by a Gaussian distribution after the diffusion process, we can obtain initial 3D poses Y_T by sampling noise from a unit Gaussian. Passing Y_T (t = T) to the denoiser D, we obtain \hat{Y}_0 that is thereafter used to generate the noisy 3D poses \hat{Y}_{t-1} as inputs to the denoiser D at timestamp t-1 via DDIM [37], which can be formulated as Y_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, \hat{Y}_0 + \epsilon_t \sqrt{1 - \bar{\alpha}_{t-1} - \sigma_t^2} + \sigma_t \epsilon, (2) where t runs from T to 1, \epsilon \sim \mathcal{N}(0, I) is standard Gaussian noise independent of Y_t, and \epsilon_t = (Y_t - \sqrt{\bar{\alpha}_t}\, \hat{Y}_0) / \sqrt{1 - \bar{\alpha}_t}, (3a) \sigma_t = \sqrt{(1 - \bar{\alpha}_{t-1}) / (1 - \bar{\alpha}_t)} \cdot \sqrt{1 - \bar{\alpha}_t / \bar{\alpha}_{t-1}}, (3b) where \epsilon_t is the noise at timestamp t, and \sigma_t controls how stochastic the diffusion process is. 3.2. Fine-grained Prompt-driven Denoiser Fine-grained Part-aware Prompt Learning (FPP).
To assist the reconstruction of pure 3D poses \hat{Y}_0 from contaminated 3D poses Y_t with additional information, FinePOSE guides the denoising process with regular 2D keypoints X, timestamp t, and fine-grained part-aware prompt embedding P. We design the FPP block to learn P. It encodes three kinds of pose-related information in the prompt embedding space: the action class, coarse- and fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and kinematic information \u201cspeed\u201d. Afterward, P is integrated with pose features for subsequent processes. A learnable prompt embedding P = \{p_k\}_{k=1}^{K} has the shape K \u00d7 L \u00d7 D, where K denotes the number of text prompts, L indicates the number of tokens in each text prompt, and D is the dimension of the token embedding. Since the number of valid tokens is found to be three to four through the text encoder E_tx, the first four tokens are taken as representations \tilde{p}_k for each text. Moreover, since modifiers help precisely describe the movements of human body parts, we design a learnable vector r_k \in R^{(L_k-4)\u00d7D} to wrap the representations as p_k. The above can be formulated as \tilde{p}_k = \mathcal{E}_{tx}(\text{text}_k)[:4], k \in [1, K], (4a) p_k = \text{Concat}(r_k, \tilde{p}_k), (4b) where K = 7 and \{\text{text}_k\}_{k=1}^{7} indicate {person, [Action Class], speed, head, body, arms, legs}. r_k is initialized with a Gaussian distribution of \mu = 0 and \sigma = 0.02, and \{L_k\}_{k=1}^{7} = {7, 12, 10, 10, 10, 14, 14}, which sums to 77 regarding the text embedding dimension of CLIP [30]. In short, the FPP block builds multi-granularity text prompts and learnable modifiers, providing precise guidance for each human body part, as shown in Fig. 2. Fine-grained Prompt-pose Communication (FPC).
After obtaining the fine-grained part-aware prompt embedding P, we establish fine-grained communications between learned part-aware prompts and poses using the FPC block to improve the denoising quality. Specifically, when processing the noised 3D poses Y_t, it injects the prompt embedding P, 2D keypoints X, and timestamp t within. First, FPC integrates Y_t and the guidance information (i.e., X, t, and P) by a series of concatenation and addition operations, as Z_t = \text{Concat}(Y_t, X) + P[L] + F(t). F is the timestamp embedding network containing a sinusoidal function followed by two Linear layers connected by a GELU non-linearity. The timestamp embedding adaptively adjusts the quantity of Gaussian noise additions. Since the denoiser D works iteratively, providing detailed information about the current timestamp t is crucial for D to handle 3D poses containing different noise levels effectively. Then, Z_t is encoded by a spatial transformer, where the multi-head self-attention (MHSA) mechanism helps to focus on the fine-grained relationships between joints within each frame, obtaining Z_t^s. To completely inject the prompt embedding P into Z_t^s, we implement a multi-head cross-attention model, where the query, key, and value are Q = W_Q Z_t^s, K = W_K P, V = W_V P. The value is aggregated with the cross-attention A to generate fine-grained prompt-driven pose features Z_t^{sp}, achieving fine-grained prompt-pose communication. The mechanism can be formulated as A = \text{softmax}(Q K^\top / \sqrt{d}), (5a) Z_t^{sp} = A \otimes V, \tilde{Z}_t^{sp} = \mathcal{P}(Z_t^{sp}), (5b) where d = D/H and H is the number of attention heads. \mathcal{P} indicates the PTS block that brings the timestamp t into the generation process to obtain the timestamp-stylized output \tilde{Z}_t^{sp}.
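A single-head sketch of this prompt-pose cross-attention (Eq. 5); the multi-head case splits the channel dimension, and the random projection matrices below stand in for the learned W_Q, W_K, W_V, purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Single-head sketch of the FPC cross-attention: pose tokens query the
# learned part-aware prompt embedding. W matrices are random
# placeholders, not trained weights.
def prompt_pose_cross_attention(z_pose, prompt, d=16, seed=0):
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((z_pose.shape[-1], d))
    W_k = rng.standard_normal((prompt.shape[-1], d))
    W_v = rng.standard_normal((prompt.shape[-1], d))
    Q, K, V = z_pose @ W_q, prompt @ W_k, prompt @ W_v
    A = softmax(Q @ K.T / np.sqrt(d))   # each pose token attends over prompt tokens
    return A @ V                        # prompt-driven pose features
```

Each row of the attention matrix is a probability distribution over the prompt tokens, so every output pose token is a convex combination of prompt values.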
On the other hand, to model inter-frame relationships between poses, \tilde{Z}_t^{sp} is encoded using a temporal transformer via MHSA to obtain \tilde{Z}_t^{spf}. Finally, we utilize a spatial-temporal transformer accompanied by permutation operations between spatial and temporal dimensions to extract more compact fine-grained prompt-driven pose features from \tilde{Z}_t^{spf}, which are decoded as the predicted 3D poses \hat{Y}_0. Prompt-driven Timestamp Stylization (PTS). As mentioned, providing timestamp embedding to the denoising process is critical for handling 3D poses with different noise levels. Therefore, inspired by MotionDiffuse [53], we introduce the PTS block that explicitly embeds the timestamp t by positional embedding [40] and sums it with the learnable prompt embedding P obtained by the FPP block, as v = P[L] + F(t). Given the intermediate output Z_t^{sp} of the FPC block, the PTS block calculates \tilde{Z}_t^{sp} = Z_t^{sp} \cdot \psi_w(\phi(v)) + \psi_b(\phi(v)), where \psi_b, \psi_w, \phi are three different linear projections, and \cdot is the Hadamard product.
Method | N | Detector (DET) | MPJPE \u2193 | P-MPJPE \u2193 | Detector (GT) | MPJPE \u2193 | P-MPJPE \u2193 | Year
TCN [29] | 243 | CPN | 46.8 | 36.5 | GT | 37.8 | / | CVPR\u201919
Anatomy [6] | 243 | CPN | 44.1 | 35.0 | GT | 32.3 | / | CSVT\u201921
P-STMO [33] | 243 | CPN | 42.8 | 34.4 | GT | 29.3 | / | ECCV\u201922
MixSTE [52] | 243 | HRNet | 39.8 | 30.6 | GT | 21.6 | / | CVPR\u201922
PoseFormerV2 [54] | 243 | CPN | 45.2 | 35.6 | GT | 35.5 | / | CVPR\u201923
MHFormer [19] | 351 | CPN | 43.0 | 34.4 | GT | 30.5 | / | CVPR\u201922
Diffpose [10] | 243 | CPN | 36.9 | 28.7 | GT | 18.9 | / | CVPR\u201923
GLA-GCN [48] | 243 | CPN | 44.4 | 34.8 | GT | 21.0 | 17.6 | ICCV\u201923
ActionPrompt [55] | 243 | CPN | 41.8 | 29.5 | GT | 22.7 | / | ICME\u201923
MotionBERT [59] | 243 | SH | 37.5 | / | GT | 16.9 | / | ICCV\u201923
D3DP [34] | 243 | CPN | 35.4 | 28.7 | GT | 18.4 | / | ICCV\u201923
FinePOSE (Ours) | 243 | CPN | 31.9 (-3.5) | 25.0 (-3.7) | GT | 16.7 (-0.2) | 12.7 (-4.9) | /
Table 1. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset.
N: the number of input frames. CPN, HRNet, SH: using CPN [7], HRNet [39], and SH [24] as the 2D keypoint detectors to generate the inputs. GT: using the ground-truth 2D keypoints as inputs. The best and second-best results are highlighted in bold and underlined formats. 3.3. Training & Inference Training. The contaminated 3D poses Y_t are sent to the fine-grained prompt-driven denoiser D to reconstruct the 3D poses \hat{Y}_0 = D(Y_t, X, t, P) without noise. The entire framework is optimized by minimizing the MSE loss \|Y_0 - \hat{Y}_0\|^2. Inference. Since the distribution of Y_T is nearly an isotropic Gaussian distribution, we sample H initial 3D poses \{Y_T^h\}_{h=1}^{H} from a unit Gaussian. After passing them to the denoiser D, we obtain H feasible 3D pose hypotheses \{\hat{Y}_0^h\}_{h=1}^{H}. Each hypothesis \hat{Y}_0^h is used to generate the noisy 3D poses \hat{Y}_{t-1}^h as inputs to the denoiser D for the next timestamp t-1. Then, we regenerate \{\hat{Y}_0^h\}_{h=1}^{H} using \{\hat{Y}_{t-1}^h\}_{h=1}^{H} as inputs to the denoiser D for the next timestamp t-2. Analogously, this process iterates M times starting from the timestamp T, so each iteration m \in [1, M] is with the timestamp t = T(1 - m/M). Following Joint-Wise Reprojection-Based Multi-Hypothesis Aggregation (JPMA) in [35], we reproject \{\hat{Y}_0^h\}_{h=1}^{H} to the 2D camera plane using known or estimated intrinsic camera parameters and then choose joints with minimum projection errors with respect to the input X, as h' = \arg\min_{h \in [1, H]} \|\mathcal{P}_R(\hat{Y}_0^h)[j] - X[j]\|_2, (6a) \hat{Y}_0[j] = \hat{Y}_0^{h'}[j], j \in [1, J], (6b) where \mathcal{P}_R is the reprojection function, j is the index of joints, and h' indicates the index of the selected hypothesis. JPMA enables us to select joints from distinct hypotheses automatically to form the final prediction \hat{Y}_0. 3.4.
Extension to 3D Multi-Human Pose Estimation We append a post-integration step to FinePOSE to apply it to the multi-human scenario, avoiding extra computational cost. Specifically, given a multi-human 2D keypoints sequence X_mul \in R^{C\u00d7N\u00d7J\u00d72}, which involves C human characters, FinePOSE first predicts \hat{Y}_0^c for each character c \in [1, C]. Considering that some characters may temporarily leave the camera field of view, their positions in those frames are set to zeros to ensure synchronization of all characters\u2019 states in X_mul. Next, we integrate \{\hat{Y}_0^c\}_{c=1}^{C} by stacking over the character dimension, obtaining the final prediction \hat{Y}_0^C \in R^{C\u00d7N\u00d7J\u00d73}. 4. Experiments 4.1. Datasets and Metrics Human3.6M [14] is a widely used benchmark dataset in human pose estimation tasks, which provides a large-scale collection of accurate 3D joint annotations on diverse human activities. Human3.6M consists of 3.6 million RGB images, captured from multiple camera views, of 11 professional actors performing 15 activities, e.g., walking, running, and jumping. Following previous efforts [19, 29, 34], our FinePOSE is trained on five subjects (S1, S5, S6, S7, S8) and evaluated on two subjects (S9, S11). We calculate the mean per-joint position error (i.e., MPJPE) to measure the average Euclidean distance in millimeters between the ground-truth and estimated 3D joint positions for evaluation. We also report Procrustes MPJPE (i.e., P-MPJPE), which calculates MPJPE after aligning the estimated poses to the ground truth using a rigid transformation. MPI-INF-3DHP [23] provides synchronized RGB video sequences with accurate 3D joint annotations for 3D human pose estimation. It comprises 8 activities conducted by 8 actors in the training set, while the test set encompasses 7 activities. We calculate MPJPE, the percentage of correctly estimated keypoints (i.e., PCK) within a 150mm range, and the area under the curve (i.e., AUC).
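The two evaluation metrics can be sketched as follows; the similarity-transform alignment uses the standard orthogonal Procrustes solution and is an illustrative re-implementation, not the benchmarks' official evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the same units as the input."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def p_mpjpe(pred, gt):
    """MPJPE after aligning pred to gt with a similarity transform."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    Xp, Xg = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(Xp.T @ Xg)     # orthogonal Procrustes solution
    if np.linalg.det(U @ Vt) < 0:           # fix an improper rotation (reflection)
        Vt[-1] *= -1
        s = s.copy()
        s[-1] *= -1
    R = U @ Vt
    scale = s.sum() / (Xp ** 2).sum()       # optimal isotropic scale
    aligned = scale * Xp @ R + mu_g
    return mpjpe(aligned, gt)
```

A rotated, scaled, and translated copy of a pose has a large MPJPE but (up to floating point) zero P-MPJPE, which is exactly why the aligned metric isolates pose-structure errors from global placement.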
EgoHumans [15] collects multi-human ego-exo videos covering 7 sports activities. Recently, a subset of 2D to 3D 5 \fMethod / MPJPE \u2193 Human3.6M (DET) Dir. Disc. Eat Greet Phone Photo Pose Pur. Sit SitD. Smoke Wait WalkD. Walk WalkT. Avg TCN [29] 45.2 46.7 43.3 45.6 48.1 55.1 44.6 44.3 57.3 65.8 47.1 44.0 49.0 32.8 33.9 46.8 SRNet [51] 46.6 47.1 43.9 41.6 45.8 49.6 46.5 40.0 53.4 61.1 46.1 42.6 43.1 31.5 32.6 44.8 RIE [32] 40.8 44.5 41.4 42.7 46.3 55.6 41.8 41.9 53.7 60.8 45.0 41.5 44.8 30.8 31.9 44.3 Anatomy [6] 41.4 43.5 40.1 42.9 46.6 51.9 41.7 42.3 53.9 60.2 45.4 41.7 46.0 31.5 32.7 44.1 P-STMO [33] 38.9 42.7 40.4 41.1 45.6 49.7 40.9 39.9 55.5 59.4 44.9 42.2 42.7 29.4 29.4 42.8 MixSTE [52] 36.7 39.0 36.5 39.4 40.2 44.9 39.8 36.9 47.9 54.8 39.6 37.8 39.3 29.7 30.6 39.8 PoseFormerV2 [54] 45.2 MHFormer [19] 39.2 43.1 40.1 40.9 44.9 51.2 40.6 41.3 53.5 60.3 43.7 41.1 43.8 29.8 30.6 43.0 Diffpose [10] 33.2 36.6 33.0 35.6 37.6 45.1 35.7 35.5 46.4 49.9 37.3 35.6 36.5 24.4 24.1 36.9 GLA-GCN [48] 41.3 44.3 40.8 41.8 45.9 54.1 42.1 41.5 57.8 62.9 45.0 42.8 45.9 29.4 29.9 44.4 ActionPrompt [55] 37.7 40.2 39.8 40.6 43.1 48.0 38.8 38.9 50.8 63.2 42.0 40.0 42.0 30.5 31.6 41.8 MotionBERT [59] 36.1 37.5 35.8 32.1 40.3 46.3 36.1 35.3 46.9 53.9 39.5 36.3 35.8 25.1 25.3 37.5 D3DP [34] 33.0 34.8 31.7 33.1 37.5 43.7 34.8 33.6 45.7 47.8 37.0 35.0 35.0 24.3 24.1 35.4 FinePOSE (Ours) 31.4 31.5 28.8 29.7 34.3 36.5 29.2 30.0 42.0 42.5 33.3 31.9 31.4 22.6 22.7 31.9 (-1.6) (-3.3) (-2.9) (-2.4) (-3.2) (-7.2) (-5.6) (-3.6) (-3.7) (-5.3) (-3.7) (-3.1) (-3.6) (-1.7) (-1.4) (-3.5) Table 2. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset using 2D keypoint detectors to generate the inputs. Dir., Disc.,\u00b7 \u00b7 \u00b7 , and WalkT. correspond to 15 action classes. Avg indicates the average MPJPE among 15 action classes. The best and second-best results are highlighted in bold and underlined formats. 
Method N MPI-INF-3DHP Year PCK\u2191 AUC\u2191 MPJPE \u2193 TCN [29] 81 86.0 51.9 84.0 CVPR\u201919 Anatomy [6] 81 87.9 54.0 78.8 CSVT\u201921 P-STMO [33] 81 97.9 75.8 32.2 ECCV\u201922 MixSTE [52] 27 94.4 66.5 54.9 CVPR\u201922 PoseFormerV2 [54] 81 97.9 78.8 27.8 CVPR\u201923 MHFormer [19] 9 93.8 63.3 58.0 CVPR\u201922 Diffpose [10] 81 98.0 75.9 29.1 CVPR\u201923 GLA-GCN [48] 81 98.5 79.1 27.8 ICCV\u201923 D3DP [34] 243 98.0 79.1 28.1 ICCV\u201923 FinePOSE (Ours) 243 98.9 80.0 26.2 (+0.4) (+0.9) (-1.6) Table 3. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the MPI-INF-3DHP dataset using ground truth 2D keypoints as inputs. N: the number of input frames. The best and second-best results are highlighted in bold and underlined formats. keypoints annotations has been released covering tagging, lego-assembling, and fencing. It contains 105 RGB videos taken by ego cameras. Between 1 and 3 human characters appear in each video, resulting in a total of 238 subsequences. We report the average MPJPE per video. 4.2. Implementation Details We take MixSTE [52] as the backbone of the denoiser D and CLIP as the frozen text encoder Etx. The numbers of MHSA-MLP-LN building blocks of the spatial, temporal, and spatio-temporal transformer in the FPC block are 1, 1, and 3. The training epoch in all the experiments below is 100, and the batch size is 4. We adopt AdamW optimizer with the momentum parameters of \u03b21 = 0.9, \u03b22 = 0.999, and the weight decay of 0.1. The learning rate starts from 6e\u22125 and shrinks after each epoch with a factor of 0.993. For fair Method Human3.6M (DET) MPJPE \u2193 P-MPJPE \u2193 w/o Prompt 37.2 29.1 M-Prompt 35.8 28.1 S-Prompt 36.2 28.9 C-Prompt 34.7 27.4 AL-Prompt 34.6 27.4 FinePOSE (Ours) 31.9 25.0 Table 4. Ablation study on different designs of prompt learning in the FPP block. w/o Prompt: without any textual information and learnable prompts. 
M-Prompt: using the action class to design the prompt manually. S-Prompt: using a learnable prompt combined with the action class. C-Prompt: employing the action class and coarse-grained information to create the prompt. AL-Prompt: only learnable prompts without any manual design. comparisons, we set the number of hypotheses H = 1 and iterations M = 1 during training, and H = 20 and M = 10 during inference, as in D3DP [34]. 4.3. Comparison with the State-of-the-Arts Human3.6M. Tab. 1 reports comparisons between our FinePOSE and state-of-the-art (SOTA) 3D HPE methods on the Human3.6M dataset. FinePOSE achieves new SOTA performance, especially when using detected 2D keypoints as inputs. Compared with existing 3D HPE methods, FinePOSE surpasses the SOTA method D3DP [34] by 3.5mm in MPJPE and 3.7mm in P-MPJPE. When using ground truth 2D keypoints as inputs, FinePOSE also outperforms the SOTA method MotionBERT [59], improving MPJPE by 0.2mm. Tab. 2 provides detailed comparisons on each action class using 2D keypoint detectors as inputs. For example, our FinePOSE achieves noticeable improvements (43.7mm\u219236.5mm) for the action class \u201cPhoto\u201d and decreases average MPJPE by 3.5mm (35.4mm\u219231.9mm). Method Configuration MPJPE \u2193 P-MPJPE \u2193 FPP FPC PTS Baseline 37.2 29.1 w FPP \u2713 35.3 28.0 w/o FPP \u2713 37.1 29.2 w/o FPC \u2713 \u2713 35.7 27.8 w/o PTS \u2713 \u2713 36.6 29.0 FinePOSE (Ours) \u2713 \u2713 \u2713 31.9 25.0 Table 5. Ablation study on different configurations of FinePOSE on Human3.6M using 2D keypoint detectors as inputs. Baseline: the method without any textual information via prompt learning. w FPP: the method that only contains the FPP block and adds P[L] to the input. w/o FPP: the method without the FPP block, which renders the FPC block infeasible. w/o FPC: the method without the FPC block. w/o PTS: the method without the PTS block. MPI-INF-3DHP. Tab.
3 reports comparisons between our FinePOSE and SOTA 3D HPE methods on the MPI-INF-3DHP dataset, using ground truth 2D keypoints as inputs. Compared with the existing SOTA method GLA-GCN [48], FinePOSE decreases MPJPE by 1.6mm and increases PCK by 0.4% and AUC by 0.9%. Overall, these experimental results demonstrate that our FinePOSE benefits from fine-grained part-aware prompt learning and pose-prompt communications, resulting in higher denoising quality and estimation accuracy. 4.4. Ablation Study We conduct a series of analysis experiments with our FinePOSE on the Human3.6M dataset to investigate the effects of different prompt learning designs in the FPP block and of different blocks in FinePOSE on performance. Effects of Different Designs in FPP. We design various versions of the FPP block for our FinePOSE, including a) w/o Prompt, b) M-Prompt, c) S-Prompt, d) C-Prompt, and e) AL-Prompt. Specifically, w/o Prompt denotes FinePOSE without introducing textual information and learnable prompts. M-Prompt indicates using the action class to design the prompt manually instead of the FPP block. Taking the action class \u201cDirections\u201d as an example, the manually designed prompt is \u201ca person is pointing directions with hands\u201d. There are 15 action classes available in the Human3.6M dataset corresponding to 15 kinds of manually designed prompts. S-Prompt indicates utilizing learnable prompts combined with the action class. C-Prompt indicates employing the action class and coarse-grained information like \u201cperson\u201d and \u201cspeed\u201d to create the prompt. Finally, AL-Prompt means only using learnable prompts without any manual design. We first evaluate the effect of manually designed prompts (i.e., M-Prompt) on Human3.6M. As shown in Tab. 4, compared to w/o Prompt, M-Prompt achieves a decrease of 1.4mm on MPJPE and 1.0mm on P-MPJPE, indicating that Method / MPJPE \u2193 EgoHumans Tag. Lego Fenc.
Avg D3DP [35] 30.7 29.0 46.6 35.4 FinePOSE (Ours) 30.0 26.7 46.2 34.3 (-0.7) (-2.3) (-0.4) (-1.1) Table 6. Quantitative comparison with D3DP on the EgoHumans dataset using 2D keypoints as inputs. Tag., Lego, and Fenc. correspond to 3 action classes. Avg indicates the average MPJPE among 3 action classes. manually designing prompts is a practical strategy, even though it cannot guarantee that the prompt is optimal during the denoising process for the 3D HPE task. To evaluate the effectiveness of S-Prompt, we compare it with w/o Prompt. As shown in Tab. 4, MPJPE and P-MPJPE are reduced by 1.0mm and 0.2mm, respectively, for S-Prompt, which demonstrates that, with the help of learnable prompts, integrating textual information can improve performance on the 3D HPE task. However, compared to M-Prompt, S-Prompt results in performance degradation, indicating that learnable prompts must be meticulously designed. In addition, we also investigate the impact of the degree of manual intervention on 3D HPE performance using two groups of comparative experiments. In the first group, we used only learnable prompts without any textual information or manual intervention, named AL-Prompt, which differs from S-Prompt only in omitting the action class. In the second group, we designed a coarse-grained prompt involving the action class, \u201cperson\u201d, \u201cspeed\u201d, and corresponding learnable prompts, denoted as C-Prompt. We see that both AL-Prompt and C-Prompt outperform S-Prompt, since AL-Prompt is free from interference by incomplete textual information and C-Prompt contains some important textual information like the action class, \u201cperson\u201d, and \u201cspeed\u201d, which provide the action subject and kinematic data. Finally, it is observed that our FinePOSE outperforms the various versions of prompt learning on both MPJPE and P-MPJPE, indicating the effectiveness of the fine-grained part-aware prompt learning mechanism in FinePOSE. Effects of Different Blocks in FinePOSE. In Tab.
5, we provide different settings of our FinePOSE to evaluate the effects of different blocks on 3D HPE performance, including Baseline, w FPP, w/o FPP, w/o FPC, and w/o PTS. Specifically, Baseline denotes FinePOSE without introducing textual information and learnable prompts, the same as the configuration of w/o Prompt. w FPP indicates that FinePOSE contains only the FPP block, without the FPC and PTS blocks, and only adds the textual information P[L] to the input. w/o FPP denotes FinePOSE without the FPP block, which renders the FPC block infeasible, so only the PTS block is utilized. w/o FPC means FinePOSE without the FPC block but using the FPP and PTS blocks. w/o PTS refers to FinePOSE without the PTS block but using the FPP and FPC blocks to integrate textual information for fine-grained part-aware prompt learning. Figure 3. Qualitative comparisons of our FinePOSE with MotionBERT [59] and D3DP [34] on Human3.6M (actions shown: SittingDown, WalkDog, Sitting, Purchases, Discussion, Photo, and Posing). The gray skeleton is the ground-truth 3D pose. The blue skeleton represents the prediction of the human left part, and the orange indicates the human right part. The red dashed line represents the incorrect regions of the compared methods, and the blue dashed line indicates the counterparts of FinePOSE. Comparing w FPP with Baseline, we observe that the former achieves 1.9mm and 1.1mm improvements on MPJPE and P-MPJPE, respectively. This is because our FinePOSE contains the FPP block, which adds the prompt embedding P[L] into the input $Z_t$ of the denoiser $D$, significantly improving the denoising capability. We observe that the results of w/o FPP and Baseline are almost equivalent. The baseline has already brought the timestamp $t$ into the denoising process, while the PTS block refines the prediction at each noise level by reintroducing the timestamp into the denoising process after the FPP and FPC blocks.
Thus, there is nearly no effect in adding only the PTS block, without the FPP and FPC blocks, to the denoiser. Comparing w/o FPC with w/o FPP, the former achieves a decrease of 1.4mm on both MPJPE and P-MPJPE, indicating that the FPP block in the denoiser plays a critical role in the fine-grained part-aware prompt learning mechanism. Finally, we observe that FinePOSE achieves a decrease of 4.7mm on MPJPE and 4.0mm on P-MPJPE compared to w/o PTS, indicating the necessity of integrating learned prompt embeddings and timestamps in the PTS block. 4.5. Results on 3D Multi-Human Pose Estimation In real-world applications, the multi-human scenario is more common than the single-human one. However, its complexity hinders existing work from handling it. In Sec. 3.4, we present a post-integration step to extend FinePOSE to the multi-human pose estimation task. We also implemented the extension on top of the SOTA method D3DP for a convincing comparison. The experimental results on EgoHumans are reported in Tab. 6, demonstrating that (1) the integration strategy is indeed feasible and (2) FinePOSE maintains dominant performance even in the complex multi-human scenario. 4.6. Visualization Fig. 3 shows the visualization results of D3DP [35], MotionBERT [59], and our FinePOSE on Human3.6M. All methods perform well for actions in which the body, legs, and other parts of the person in the scene are relatively clear. For the actions with simple shapes, e.g., \u201cDiscussion\u201d and \u201cPhoto\u201d, the 3D poses predicted by FinePOSE match the ground-truth 3D poses better than those of D3DP and MotionBERT, especially in the left knee, right arm, and right hip of \u201cDiscussion\u201d and in the left knee of \u201cPhoto\u201d.
For the actions with complex shapes, e.g., \u201cSitting\u201d and \u201cSittingDown\u201d, FinePOSE is more accurate at various joints, especially for arms and legs, while the 3D poses predicted by D3DP and MotionBERT differ significantly from groundtruth 3D poses. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05252v1.json b/abs_9K/test_abstract_short_2405.05252v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b73cc0b137b8cb02ded392eb3404a8e34a370847 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05252v1.json @@ -0,0 +1,20 @@ +{ + "url": "http://arxiv.org/abs/2405.05252v1", + "title": "Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models", + "abstract": "Diffusion Models (DMs) have exhibited superior performance in generating\nhigh-quality and diverse images. However, this exceptional performance comes at\nthe cost of expensive architectural design, particularly due to the attention\nmodule heavily used in leading models. Existing works mainly adopt a retraining\nprocess to enhance DM efficiency. This is computationally expensive and not\nvery scalable. To this end, we introduce the Attention-driven Training-free\nEfficient Diffusion Model (AT-EDM) framework that leverages attention maps to\nperform run-time pruning of redundant tokens, without the need for any\nretraining. Specifically, for single-denoising-step pruning, we develop a novel\nranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify\nredundant tokens, and a similarity-based recovery method to restore tokens for\nthe convolution operation. In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality. 
Extensive evaluations show that AT-EDM\nperforms favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs\nsaving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining\nnearly the same FID and CLIP scores as the full model. Project webpage:\nhttps://atedm.github.io.", + "authors": "Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "eess.IV", + "eess.SP" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion Models (DMs) have exhibited superior performance in generating\nhigh-quality and diverse images. However, this exceptional performance comes at\nthe cost of expensive architectural design, particularly due to the attention\nmodule heavily used in leading models. Existing works mainly adopt a retraining\nprocess to enhance DM efficiency. This is computationally expensive and not\nvery scalable. To this end, we introduce the Attention-driven Training-free\nEfficient Diffusion Model (AT-EDM) framework that leverages attention maps to\nperform run-time pruning of redundant tokens, without the need for any\nretraining. Specifically, for single-denoising-step pruning, we develop a novel\nranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify\nredundant tokens, and a similarity-based recovery method to restore tokens for\nthe convolution operation. In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality. Extensive evaluations show that AT-EDM\nperforms favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs\nsaving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining\nnearly the same FID and CLIP scores as the full model. 
Project webpage:\nhttps://atedm.github.io.", "main_content": "Introduction Diffusion Models (DMs) [9, 29] have revolutionized computer vision research by achieving state-of-the-art performance in various text-guided content generation tasks, including image generation [28], image editing [12], super resolution [17], 3D objects generation [27], and video generation [10]. Nonetheless, the superior performance of DMs comes at the cost of an enormous computation budget. Although Latent Diffusion Models (LDMs) [28, 34] make text-to-image generation much more practical and affordable for normal users, their inference process is still too slow. For example, on the current flagship mobile phone, generating a single 512px image requires 90 seconds [19]. (*Work was partly done during an internship at Adobe. \u2020Corresponding Author.) To address this issue, numerous approaches geared at efficient DMs have been introduced, which can be roughly categorized into two regimes: (1) efficient sampling strategy [24, 30] and (2) efficient model architecture [19, 38]. While efficient sampling methods can reduce the number of denoising steps, they do not reduce the memory footprint and compute cost for each step, making it still challenging to use on devices with limited computational resources. On the contrary, an efficient architecture reduces the cost of each step and can be further combined with sampling strategies to achieve even better efficiency. However, most prior efficient architecture works require retraining of the DM backbone, which can take thousands of A100 GPU hours. Moreover, due to different deployment settings on various platforms, different compression ratios of the backbone model are required, which necessitate multiple retraining runs later. Such retraining costs are a big concern even for large companies in the industry.
To this end, we propose the Attention-driven Training-free Efficient Diffusion Model (AT-EDM) framework, which accelerates DM inference at run-time without any retraining. To the best of our knowledge, training-free architectural compression of DMs is a highly uncharted area. Only one prior work, Token Merging (ToMe) [1], addresses this problem. While ToMe demonstrates good performance on Vision Transformer (ViT) acceleration [2], its performance on DMs still has room to improve. To further enrich research on training-free DM acceleration, we start our study by profiling the floating-point operations (FLOPs) of the state-of-the-art model, Stable Diffusion XL (SD-XL) [26], through which we find that attention blocks are the dominant workload. In a single denoising step, we thus propose to dynamically prune redundant tokens to accelerate attention blocks. We pioneer a fast graph-based algorithm, Generalized Weighted Page Rank (G-WPR), inspired by Zero-TPrune [35], and deploy it on attention maps in DMs to identify superfluous tokens. Since SD-XL contains ResNet blocks, which require a full number of tokens for the convolution operations, we propose a novel similarity-based token copy approach to recover pruned tokens, again leveraging the rich information provided by the attention maps. (arXiv:2405.05252v1 [cs.CV] 8 May 2024.) Figure 1. Examples of applying AT-EDM to SD-XL [26] (top row: SD-XL @ 6.7 TFLOPs; bottom row: AT-EDM @ 4.1 TFLOPs). Compared to the full-size model (top row), our accelerated model (bottom row) has around 40% FLOPs reduction while enjoying competitive generation quality at various aspect ratios. This token recovery method is critical to maintaining image quality. We find that naive interpolation or padding of pruned tokens adversely impacts generation quality severely. In addition to single-step token pruning, we also investigate cross-step redundancy in the denoising process by analyzing the variance of attention maps.
This leads us to a novel pruning schedule, dubbed Denoising-Steps-Aware Pruning (DSAP), in which we adjust the pruning ratios across different denoising timesteps. We find that DSAP not only significantly improves our method but also helps improve other run-time pruning methods like ToMe [1]. Compared to ToMe, our approach shows a clear improvement by generating clearer objects with sharper details and better text-image alignment under the same acceleration ratio. In summary, our contributions are four-fold: \u2022 We propose the AT-EDM framework, which leverages rich information from attention maps to accelerate pretrained DMs without retraining. \u2022 We design a token pruning algorithm for a single denoising step. We pioneer a fast graph-based algorithm, G-WPR, to identify redundant tokens, and a novel similarity-based copy method to recover missing tokens for convolution. \u2022 Inspired by the variance trend of attention maps across denoising steps, we develop the DSAP schedule, which improves generation quality by a clear margin. The schedule also provides improvements over other run-time acceleration approaches, demonstrating its wide applicability. \u2022 We use AT-EDM to accelerate a top-tier DM, SD-XL, and conduct both qualitative and quantitative evaluations. Notably, our method shows comparable performance with an FID score of 28.0 at 40% FLOPs reduction relative to the full-size SD-XL (FID 27.3), achieving state-of-the-art results. Visual examples are shown in Fig. 1. 2. Related Work Text-to-Image Diffusion Models. DMs learn to reverse the diffusion process by denoising samples from a normal distribution step by step. In this manner, diffusion-based generative models enable high-fidelity image synthesis with various text prompts [4, 9]. However, DMs in the pixel space suffer from large generation latency, which severely limits their applications [36].
The LDM [28] was the first to train a Variational Auto-Encoder (VAE) to encode the pixel space into a latent space and apply the DM in that latent space. This reduces computational cost significantly while maintaining generation quality, thus greatly enhancing the applicability of DMs. Subsequently, several improved versions of the LDM, called Stable Diffusion Models (SDMs), have been released. The most recent and powerful open-source version is SD-XL [26], which outperforms previous versions by a large margin. SD-XL is our default backbone in this work. Efficient Diffusion Models. Researchers have made enormous efforts to make DMs more efficient. Existing efficient DMs can be divided into two types: (1) Efficient sampling to reduce the required number of denoising steps [22, 30\u201332]. A recent efficient sampling work [24] managed to reduce the number of denoising steps to as low as one. It achieves this by iterative distillation, halving the number of denoising steps each time. (2) Architectural compression to make each sampling step more efficient [11, 19, 36, 38]. A recent work [13] removes multiple ResNet and attention blocks in the U-Net through distillation. Although these methods can save computational costs while maintaining decent image quality, they require retraining of the DM backbone to enhance efficiency, needing thousands of A100 GPU hours. Thus, a training-free method to enhance the efficiency of DMs is needed. Note that our proposed training-free framework, AT-EDM, is orthogonal to these efficiency enhancement methods and can be stacked with them to further improve their efficiency. We provide corresponding experimental evidence in Supplementary Material. Training-Free Efficiency Enhancement. Training-free (i.e., post-training) efficiency enhancement schemes have been widely explored for CNNs [14, 33, 39] and ViTs [2, 7, 15, 35]. However, training-free schemes for DMs are still poorly explored.
To the best of our knowledge, the only prior work in this field is ToMe [1]. It uses token embedding vectors to obtain pair-wise similarity and merges similar tokens to reduce computational overheads. While ToMe achieves a decent speed-up when applied to SD-v1.x and SD-v2.x, we find that it does not help much when applied to the state-of-the-art DM backbone, SD-XL, whilst our method achieves a clear improvement over it (see experimental results in Section 4). This is mainly due to (1) the significant architectural change of SD-XL (see Supplementary Material); (2) our better algorithm design to identify redundant tokens. Exploiting Attention Maps. We aim to design a method that exploits information present in pre-trained models. ToMe only uses embedding vectors of tokens and ignores the correlation between tokens. We take inspiration from recent image editing works [3, 5, 8, 25], in which attention maps clearly demonstrate which parts of a generated image are more important. This inspires us to use the correlations and couplings between tokens indicated by attention maps to identify unimportant tokens and prune them. Specifically, we can convert attention maps to directed graphs, where nodes represent tokens, without information loss. Based on this idea, we develop the G-WPR algorithm for token pruning in a single denoising step. Non-Uniform Denoising Steps. Various existing works [6, 18, 21, 37] demonstrate that denoising steps contribute differently to the quality of generated images; thus, it is not optimum to use uniform denoising steps. OMS-DPM [21] builds a model zoo and uses different models in different denoising steps. It trains a performance predictor to assist in searching for the optimal model schedule. DDSM [37] employs a spectrum of neural networks and adapts their sizes to the importance of each denoising step. AutoDiffusion [18] employs evolutionary search to skip some denoising steps and some blocks in the U-Net. 
Diff-Pruning [6] uses a Taylor expansion over pruned timesteps to disregard noncontributory diffusion steps. All existing methods either require an intensive training/fine-tuning/searching process to obtain and deploy the desired denoising schedule or are not compatible with our proposed G-WPR token pruning algorithm due to the U-Net architecture change. On the contrary, based on our investigation of the variance of attention maps across denoising steps, we propose DSAP. Its schedule can be determined via simple ablation experiments and it is compatible with any token pruning scheme. DSAP can potentially be migrated to existing efficient DMs to help improve their image quality. Figure 2. U-Net FLOPs breakdown of SD-XL [26] measured with 1024px image generation (U-Net total: 6731 GFLOPs; attention blocks: 5108 GFLOPs; convolution + ResNet blocks: 1623 GFLOPs). Among the components of the U-Net (convolution blocks, ResNet blocks, and attention blocks), attention blocks cost the most. 3. Methodology We start our investigation by profiling the FLOPs of the state-of-the-art DM, SD-XL, as shown in Fig. 2. Noticeably, among the components of the sampling module (U-Net), attention blocks, which consist of several consecutive attention layers, dominate the workload for image generation. Therefore, we propose AT-EDM to accelerate attention blocks in the model through token pruning. AT-EDM contains two important parts: a single-denoising-step token pruning scheme and the DSAP schedule. We provide an overview of these two parts and then discuss them in detail. 3.1. Overview Fig. 3 illustrates the two main parts of AT-EDM: Part I: Token pruning scheme in a single denoising step. Step 1: We obtain the attention maps from an attention layer in the U-Net. We can potentially obtain the attention maps from self-attention or cross-attention. We compare the two choices and analyze them in detail through ablation experiments.
Step 2: We use a scoring module to assign an importance score to each token based on the obtained attention map. We use an algorithm called G-WPR to assign importance scores to each token; it is described in Section 3.2. Step 3: We generate pruning masks based on the calculated importance score distribution. Currently, we simply use the top-k approach to determine the retained tokens, i.e., we prune the tokens with lower importance scores. Step 4: We use the generated mask to perform token pruning. We do this after the feed-forward layer of attention layers. We may also perform pruning early, before the feed-forward layers. We provide ablative experimental results for it in Supplementary Material. Step 5: We repeat Steps 1-4 for each consecutive attention layer. Note that we do not apply pruning to the last attention layer before the ResNet layer. Figure 3. Overview of our proposed efficiency enhancement framework AT-EDM. Single-Denoising-Step Token Pruning: (1) We get the attention map from self-attention. (2) We calculate the importance score for each token using G-WPR. (3) We generate pruning masks. (4) We apply the masks to tokens after the feed-forward network to realize token pruning. (5) We repeat Steps (1)-(4) for each consecutive attention layer.
(6) Before passing feature maps to the ResNet block, we recover pruned tokens through similarity-based copy. Denoising-Steps-Aware Pruning Schedule: In early steps, we propose to prune fewer tokens, accepting a smaller FLOPs reduction. In later steps, we prune more aggressively for a higher speed-up. Step 6: Finally, before passing the pruned feature map to the ResNet block, we need to fill (i.e., try to recover) the pruned tokens. A simple approach is to pad zeros, which means we do not fill anything. The method that we currently use is to copy tokens to corresponding locations based on similarity. This is described in detail in Section 3.2. Part II: DSAP schedule. Attention maps in early denoising steps are more chaotic and less informative than those in later steps, as indicated by their low variance. Thus, they have a weaker ability to differentiate unimportant tokens [8]. Based on this intuition, we design the DSAP schedule, which prunes fewer tokens in early denoising steps. Specifically, we select some attention blocks in the up-sampling and down-sampling stages and leave them unpruned, since they contribute more to the generated image quality than other attention blocks [19]. We demonstrate the schedule in detail in Section 3.3. 3.2. Part I: Token Pruning in a Single Step Notation. Suppose $A^{(h,l)} \\in \\mathbb{R}^{M \\times N}$ is the attention map of the $h$-th head in the $l$-th layer. It reflects the correlations between $M$ Query tokens and $N$ Key tokens. We refer to $A^{(h,l)}$ as $A$ for simplicity in the following discussion. Let $A_{i,j}$ denote its element in the $i$-th row and $j$-th column. $A$ can be thought of as the adjacency matrix of a directed graph in the G-WPR algorithm. In this graph, the set of nodes with input (output) edges is referred to as $\\Phi_{in}$ ($\\Phi_{out}$). Nodes in $\\Phi_{in}$ ($\\Phi_{out}$) represent Key (Query) tokens, i.e., $\\Phi_{in} = \\{k_j\\}_{j=1}^N$ ($\\Phi_{out} = \\{q_i\\}_{i=1}^M$).
Let s_K^t (s_Q^t) denote the vector that represents the importance scores of Key (Query) tokens in the t-th iteration of the G-WPR algorithm. In the case of self-attention, Query tokens are the same as Key tokens. Specifically, we let {x_i}_{i=1}^{N} denote the N tokens and s denote their importance scores in the description of our token recovery method.

The G-WPR Algorithm. WPR [35] uses the attention map as the adjacency matrix of a directed complete graph. It uses a graph signal to represent the importance score distribution among the nodes of this graph. This signal is initialized uniformly. WPR uses the adjacency matrix as a graph operator, applying it to the graph signal iteratively until convergence. In each iteration, each node votes for which node is more important; the weight of its vote is determined by its own importance in the previous iteration. However, WPR, as proposed in [35], constrains the attention map to be a self-attention map. Based on this, we propose the G-WPR algorithm, which is compatible with both self-attention and cross-attention, as shown in Algorithm 1. The attention from Query q_i to Key k_j weights the edge from q_i to k_j in the graph generated by A. In each iteration of the vanilla WPR, by multiplying with the attention map, we map the importance of Query tokens s_Q^t to the importance of Key tokens s_K^{t+1}, i.e., each node in Φ_out votes for which Φ_in node is more important. For self-attention, s_Q^{t+1} = s_K^{t+1} since Query and Key tokens are the same. For cross-attention, Query tokens are image tokens and Key tokens are text prompt tokens. Based on the intuition that important image tokens should devote a large portion of their attention to important text prompt tokens, we define a function f(A, s_K) that maps s_K^{t+1} to s_Q^{t+1}. One entropy-based implementation is

s_Q^{t+1}(q_i) = f(A, s_K^{t+1}) = ( Σ_{j=1}^{N} A_{i,j} · s_K^{t+1}(k_j) ) / ( −Σ_{j=1}^{N} A_{i,j} · ln A_{i,j} )   (1)

where A_{i,j} is the attention from Query q_i to Key k_j.
This is the default setting for cross-attention-based WPR in the following sections. We discuss and compare other implementations in the Supplementary Material. Note that for self-attention, f(A, s_K^{t+1}) = s_K^{t+1}. The G-WPR algorithm has O(M × N) complexity, where M (N) is the number of Query (Key) tokens. We run this algorithm in each head and then take the root mean square of the scores from the different heads (to reward tokens that obtain very high importance scores in a few heads).

Algorithm 1: The G-WPR algorithm for both self-attention and cross-attention
Require: M, N > 0, the numbers of nodes in Φ_out and Φ_in, respectively; A ∈ ℝ^{M×N}; s_Q ∈ ℝ^{M}, s_K ∈ ℝ^{N}; f(A, s_K), which maps the importance of Key tokens to that of Query tokens
Ensure: s ∈ ℝ^{M}, the importance scores of image tokens
  s_Q^0 ← (1/M) × e_M
  t ← 0
  while (|s_Q^t − s_Q^{t−1}| > ε) or (t = 0) do
    s_K^{t+1} ← A^T × s_Q^t
    s_Q^{t+1} ← f(A, s_K^{t+1})
    s_Q^{t+1} ← s_Q^{t+1} / |s_Q^{t+1}|
    t ← t + 1
  end while
  s ← s_Q^t

Recovering Pruned Tokens. We have fewer tokens after token pruning, which yields the efficiency enhancement. However, the retained tokens form irregular maps and thus cannot be used for convolution, as shown in Fig. 4. We need to recover the pruned tokens to make them compatible with the subsequent convolutional operations in the ResNet layer. (I) Padding Zeros. One straightforward way to do this is to pad zeros. However, to maintain the high quality of generated images, we hope to recover the pruned tokens as precisely as possible, as if they had never been pruned.
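The iteration in Algorithm 1, together with the top-k selection of Step 3 and the root-mean-square aggregation across heads, can be sketched in plain NumPy. This is our own illustrative code, not the released implementation, and `entropy_f` reflects one reading of the entropy-based f in Eq. (1) (weighted sum normalized by row entropy):

```python
import numpy as np

def g_wpr(A, f=None, eps=1e-6, max_iter=100):
    """Sketch of G-WPR (Algorithm 1). A: (M, N) attention map, rows index
    Query tokens and columns Key tokens. Returns (M,) importance scores."""
    M, _ = A.shape
    if f is None:
        f = lambda A, s_k: s_k           # self-attention: s_Q^{t+1} = s_K^{t+1}
    s_q = np.full(M, 1.0 / M)            # uniform initial graph signal
    for _ in range(max_iter):
        s_k = A.T @ s_q                  # each Query node votes for Key nodes
        s_q_new = f(A, s_k)
        s_q_new = s_q_new / np.linalg.norm(s_q_new)
        if np.linalg.norm(s_q_new - s_q) <= eps:
            return s_q_new
        s_q = s_q_new
    return s_q

def entropy_f(A, s_k):
    """One possible reading of the entropy-based f in Eq. (1): weighted sum
    of Key scores, normalized by the entropy of each Query's attention row."""
    return (A @ s_k) / (-(A * np.log(A + 1e-12)).sum(axis=1))

def rms_over_heads(per_head_scores):
    """Aggregate per-head scores of shape (H, M) by root mean square,
    rewarding tokens that score very high in a few heads."""
    return np.sqrt((per_head_scores ** 2).mean(axis=0))

def topk_keep_mask(scores, keep_ratio):
    """Step 3: boolean mask retaining the top-k highest-scoring tokens."""
    n_keep = max(1, int(round(keep_ratio * scores.size)))
    mask = np.zeros(scores.size, dtype=bool)
    mask[np.argsort(scores)[-n_keep:]] = True
    return mask
```

With the identity f, the loop is a power iteration on A^T, so it converges to the dominant eigenvector of the voting operator; the `t = 0` guard of Algorithm 1 corresponds to running the loop body at least once.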
Figure 4. Our similarity-based copy method for token recovery resolves the incompatibility between token pruning and ResNet. Token pruning produces feature maps with a non-square shape and is thus not compatible with ResNet. To address this issue, we propose similarity-based copy to recover the pruned tokens. It first averages the attention map across heads and deletes the rows of pruned tokens to avoid selecting them as the most similar one. Then, it finds the source of the highest attention received for each pruned token and copies the corresponding retained tokens for recovery. After recovery, the tokens can be translated into a spatially complete feature map to serve as input to the ResNet blocks.

(II) Interpolation. Interpolation methods, such as bicubic interpolation, are not suitable in this context. To use the interpolation algorithm, we first pad zeros to fill the pruned tokens and form a feature map of size N × N. We then downsample it to N/2 × N/2 and upsample it back to N × N with the interpolation algorithm. We keep the values of retained tokens fixed and only use the interpolated values for the pruned tokens.
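As a concrete illustration, the interpolation baseline can be sketched as follows. This is our own simplification: 2×2 average-pool downsampling and nearest-neighbour upsampling stand in for the bicubic interpolation used in the paper:

```python
import numpy as np

def interp_recover(feat, mask):
    """Interpolation baseline (II), simplified sketch.

    feat: (N, N, C) feature map with pruned positions zero-padded;
    mask: (N, N) bool, True where tokens were retained.
    """
    N = feat.shape[0]
    # downsample to (N/2, N/2) by 2x2 average pooling
    down = feat.reshape(N // 2, 2, N // 2, 2, -1).mean(axis=(1, 3))
    # upsample back to (N, N) by nearest-neighbour repetition
    up = down.repeat(2, axis=0).repeat(2, axis=1)
    # keep retained token values fixed; use interpolated values elsewhere
    return np.where(mask[..., None], feat, up)
```

The sketch also makes the failure mode visible: a pruned position whose entire neighbourhood is pruned receives a value pooled from zeros, i.e., nearly nothing is recovered.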
Due to the high pruning rates (usually larger than 50%), most tokens that represent the background get pruned, leaving many pruned tokens that are surrounded by other pruned tokens rather than by retained ones. Interpolation algorithms assign nearly zero values to these tokens. (III) Direct copy. Another possible method is to use the corresponding values from before pruning is applied (i.e., before being processed by the following attention layers) to fill the pruned tokens. The problem with this method is that the value distribution changes significantly after being processed by multiple attention layers, so the copied values are far from what these tokens would have been had they not been pruned and instead been processed by the following attention layers. To avoid this distribution shift, we propose the similarity-based copy technique, as shown in Fig. 4. Instead of copying values that have not been processed by attention layers, we select, from the retained tokens, the tokens that are most similar to the pruned ones. We use the self-attention map to determine the source of the highest attention received by each pruned token and treat it as the most similar one. This is based on the intuition that the attention from token x_a to token x_b, A_{a,b}, is determined by two factors: (1) the importance of token x_b, i.e., s(x_b), and (2) the similarity between tokens x_a and x_b. If we observe the attention that x_b receives, i.e., compare {A_{i,b}}_{i∈N}, then since s(x_b) is fixed, the index i = η that maximizes {A_{i,b}}_{i∈N} is the index of the most similar token, i.e., x_η.

Figure 5. Variance of attention maps in different denoising steps. We divide the denoising steps into four typical regions: (I) Very early steps: the variance of attention maps is small and increases rapidly. (II) Mid-early steps: the variance is large and increases slowly. (III) Middle steps: the variance is large and almost constant. (IV) The last several steps.
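The similarity-based selection rule described above can be sketched as follows (an illustrative helper under our own naming, not the authors' code; `attn` is the head-averaged self-attention map):

```python
import numpy as np

def similarity_based_copy(tokens, attn, mask):
    """Recover pruned tokens by copying their most similar retained token.

    tokens: (N, C) token features (pruned rows hold placeholder values);
    attn:   (N, N) self-attention map averaged across heads, where
            attn[a, b] is the attention from token a to token b;
    mask:   (N,) bool, True for retained tokens.
    """
    out = tokens.copy()
    recv = attn.copy()
    recv[~mask, :] = -np.inf             # "delete rows of pruned tokens" so a
                                         # pruned token is never the source
    for b in np.where(~mask)[0]:
        eta = int(np.argmax(recv[:, b])) # retained token sending the highest
                                         # attention to pruned token b
        out[b] = tokens[eta]             # copy x_eta to fill x_b
    return out
```

After this refill, the token sequence is square again and can be reshaped into a spatially complete feature map for the ResNet block.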
Finally, we copy the value of token x_η to fill (i.e., recover) the pruned token x_b.

3.3. Part II: Denoising-Steps-Aware Pruning

Early denoising steps determine the layout of generated images and are thus crucial. On the contrary, late denoising steps aim at refining the generated image and inherently involve redundant computations, since many regions of the image do not need refinement. In addition, early denoising steps have a weaker ability to differentiate unimportant tokens, whereas late denoising steps yield informative attention maps and differentiate unimportant tokens better. To support this claim, we investigate the variance of attention maps in different denoising steps, as shown in Fig. 5. It indicates that attention maps in early steps are more uniform: they assign similar attention scores to both important and unimportant tokens, making it harder to precisely identify and prune unimportant tokens in early steps. Based on these intuitions, we propose DSAP, which employs a prune-less schedule in early denoising steps by leaving some of the layers unpruned. The Prune-Less Schedule. In SD-XL, each down-stage includes two attention blocks and each up-stage includes three attention blocks (except for stages without attention). The mid-stage also includes one attention block. Each attention block includes 2-10 attention layers. In our prune-less schedule, we select some attention blocks to not perform token pruning. Since previous works [13, 19] indicate that the mid-stage contributes much less to the generated image quality than the up-stages and down-stages, we do not select the attention block in the mid-stage. Based on the ablation study, we choose to leave the first attention block in each down-stage and the last attention block in each up-stage unpruned. We use this prune-less schedule for the first τ denoising steps. We explore setting τ in the different regions shown in Fig. 5 and find τ = 15 to be the optimal choice.
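Under our own naming assumptions (the stage labels and block indices below are ours, for illustration), the prune-less rule reduces to a simple predicate:

```python
def skip_pruning(step, stage, block_idx, n_blocks, tau=15):
    """Sketch of the DSAP prune-less rule: for the first `tau` denoising
    steps, skip token pruning in the first attention block of each
    down-stage and the last attention block of each up-stage. Mid-stage
    blocks are never selected for the prune-less set."""
    if step >= tau:
        return False                     # normal (full) schedule after tau steps
    if stage == "down" and block_idx == 0:
        return True
    if stage == "up" and block_idx == n_blocks - 1:
        return True
    return False
```

Since the overall FLOPs budget is kept fixed, skipping these blocks in early steps implies pruning the remaining blocks slightly more aggressively.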
We present all the related ablative experimental results in Section 4.4. A detailed description of the less aggressive pruning schedule is provided in Supplementary Material. To further consolidate our intuitions, we also investigate a more aggressive pruning schedule in early denoising steps and find it is inferior to our current approach (see Supplementary Material). 4. Experimental Results In this section, we evaluate AT-EDM and ToMe on SD-XL. We provide both visual and quantitative experimental results to demonstrate the advantages of AT-EDM over ToMe. 4.1. Experimental Setup Common Settings. We implement both our AT-EDM method and ToMe on the official repository of SD-XL and evaluate their performance. The resolution of generated images is 1024\u00d71024 pixels and the default FLOPs budget for each denoising step is assumed to be 4.1T, which is 38.8% smaller than that of the original model (6.7T) unless otherwise noted. The default CFG-scale for image generation is 7.0 unless otherwise noted. We set the total number of sampling steps to 50. We use the default sampler of SD-XL, i.e., EulerEDMSampler. AT-EDM. For a concise design, we only insert a pruning layer after the first attention layer of each attention block and set the pruning ratio for that layer to \u03c1. To meet the FLOPs budget of 4.1T, we set \u03c1 = 63%. For the DSAP setting, we choose to leave the first attention block in each down-stage and the last attention block in each up-stage unpruned. We use this prune-less schedule for the first \u03c4 = 15 denoising steps. ToMe. The SD-XL architecture has changed significantly compared to previous versions of SDMs (see Supplementary Material). Thus, the default setting of ToMe does not lead to enough FLOPs savings. To meet the FLOPs budget, it is necessary to use a more aggressive merging setting. 
Therefore, we expand the application range of token merging (1) from attention layers at the highest feature level to all attention layers, and (2) from self-attention to self-attention, cross-attention, and the feed-forward network. We set the merging ratio r = 50% to meet the FLOPs budget of 4.1T. Evaluations. We first compare the generated images with manually designed challenging prompts in Section 4.2. Then, we report FID and CLIP scores of zero-shot image generation on the MS-COCO 2017 validation dataset [20] in Section 4.3. Tested models generate 1024×1024 px images based on the captions of 5k images in the validation set. We provide ablative experimental results and analyze them in Section 4.4 to justify our design choices. We provide more implementation details in the Supplementary Material.

Figure 6. Comparing AT-EDM to the state-of-the-art approach, ToMe [2], on four challenging prompts (e.g., “Ultra realistic illustration of an old man cyborg, cyberpunk, sci-fi fantasy” and “A single beam of light enters the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.”). While the full-size SD-XL [26] (Col. a) consumes 6.7 TFLOPs, we compare the accelerated models (Col. b-e) at the same budget of 4.1 TFLOPs. Compared to ToMe, we find that AT-EDM's token pruning algorithm provides clearer generated objects with sharper details and finer textures, as well as better text-image alignment, where it better retains the semantics in the prompt (see the fourth row). Moreover, we find that DSAP provides a better structural layout of the generated images, which is effective for both ToMe and our approach.
AT-EDM combines the novel token pruning algorithm with the DSAP schedule (Col. e), outperforming the state of the art.

4.2. Visual Examples for Qualitative Analysis

We use manually designed challenging prompts to evaluate ToMe and our proposed AT-EDM framework. The generated images are compared in Fig. 6; we compare more generated images in the Supplementary Material. The visual examples indicate that, under the same FLOPs budget, AT-EDM demonstrates better main-object preservation and text-image alignment than ToMe. For instance, in the first example, AT-EDM preserves the main object, the face of the old man, much better than ToMe does. AT-EDM's strong ability to preserve the main object is also exhibited in the second example: ToMe loses high-frequency features of the main object, such as texture and hair, while AT-EDM retains them well, even without DSAP. The third example again illustrates the advantage of AT-EDM over ToMe in preserving the rapper's face. The fourth example uses a relatively complex prompt that describes relationships between multiple objects. ToMe misunderstands “a Rembrandt painting of a raccoon” as being a random painting on the easel and a painting of a raccoon on the wall. On the contrary, the image generated by AT-EDM understands and preserves these relationships very well, even without DSAP. As a part of our AT-EDM framework, DSAP is not only effective in AT-EDM but also beneficial to ToMe in improving image quality and text-image alignment.

Figure 7. FID-CLIP score curves. The CFG scales used are [1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 12.0, 15.0]. The figure is zoomed in to the bottom-right corner to show the comparison between the best trade-off points. AT-EDM outperforms ToMe by a clear margin. See the complete curves in the Supplementary Material.
When we deploy DSAP in ToMe, we select the corresponding attention blocks to not perform token merging, while keeping the FLOPs cost fixed.

4.3. Quantitative Evaluations

FID-CLIP Curves. We explore the trade-off between the CLIP and FID scores through various Classifier-Free Guidance (CFG) scales. We show the results in Fig. 7. AT-EDM† does not deploy pruning at the second feature level (see Supplementary Material). The figure indicates that for most CFG scales, AT-EDM not only lowers the FID score but also yields higher CLIP scores than ToMe, implying that images generated by AT-EDM have both better quality and better text-image alignment. Specifically, when the CFG scale equals 7.0, AT-EDM results in [FID, CLIP] = [28.0, 0.321], which is almost the same as the full-size model ([27.3, 0.323] at CFG scale 4.0). For comparison, ToMe results in [35.3, 0.320] with a CFG scale of 7.0. Thus, AT-EDM reduces the FID gap from 8.0 (35.3 − 27.3) to 0.7 (28.0 − 27.3). Various FLOPs Budgets. We deploy ToMe and AT-EDM on SD-XL under various FLOPs budgets and quantitatively compare their performance in Table 1. The FLOPs cost in this table refers to the average FLOPs cost of a denoising step. Table 1 indicates that AT-EDM achieves better image quality than ToMe (lower FID scores) under all FLOPs budgets. When the FLOPs budget is extremely low (less than 50% of the full model), ToMe achieves higher CLIP scores than AT-EDM.

Table 1. Deploying ToMe and AT-EDM in SD-XL under different FLOPs budgets. We generate all images with a CFG-scale of 7.0, except for SD-XL†, for which we use a CFG-scale of 4.0.

Model      FID    CLIP    TFLOPs
SD-XL      31.94  0.3284  6.7
SD-XL†     27.30  0.3226  6.7
ToMe-a     58.76  0.2954  2.9
AT-EDM-a   52.00  0.2784  2.9
ToMe-b     40.94  0.3154  3.6
AT-EDM-b   29.80  0.3095  3.6
ToMe-c     35.27  0.3198  4.1
AT-EDM-c   28.04  0.3209  4.1
ToMe-d     32.46  0.3235  4.6
AT-EDM-d   27.23  0.3245  4.5
When the FLOPs saving is 30-40%, AT-EDM achieves not only better image quality (lower FID scores) but also better text-image alignment (higher CLIP scores) than ToMe. Note that under the same CFG-scale, AT-EDM achieves a lower FID score than the full-size model while reducing FLOPs by 32.8%. When it trades text-image alignment for image quality (by reducing the CFG scale to 4.0), AT-EDM achieves not only a lower FID score but also a higher CLIP score than the full-size model while reducing FLOPs by 32.8%. We provide more visual examples under various FLOPs budgets in the Supplementary Material. Latency Analysis. SD-XL uses the Fused Operation (FO) library, xformers [16], to speed up its generation. The Current Implementation (CI) of xformers does not provide attention maps as intermediate results; hence, we need to additionally calculate the attention maps. We discuss the sampling latency in three cases: (I) without FO, (II) with FO under CI, and (III) with FO under the Desired Implementation (DI), which provides attention maps as intermediate results. Table 2 shows that with FO, the cost of deploying pruning at the second feature level exceeds the latency reduction it brings. Hence, AT-EDM† is faster than AT-EDM. Fig. 8 shows the extra latency incurred by the different pruning steps shown in Fig. 3. With a negligible quality loss, AT-EDM achieves 52.7%, 15.4%, and 17.6% speed-ups in terms of latency w/o FO, w/ FO under CI, and w/ FO under DI, respectively, outperforming the state-of-the-art work by a clear margin. We present the memory footprint of AT-EDM in the Supplementary Material.

Table 2. Comparison of sampling latency in different cases. † means not deploying pruning at the second feature level.

Model            SD-XL   ToMe    AT-EDM  AT-EDM†
Ave. FLOPs/step  6.7 T   4.1 T   4.1 T   4.5 T
w/o FO           31.0s   21.0s   20.3s   22.1s
w/ FO under CI   18.0s   17.7s   18.3s   15.6s
w/ FO under DI   18.0s   17.7s   16.3s   15.3s

Figure 8. Latency incurred by the different pruning steps shown in Fig. 3, measured w/ FO under CI. Note that under DI, the latency of Step 1 (get the attention map) is eliminated.

4.4. Ablation Study

Self-Attention (SA) vs. Cross-Attention (CA). G-WPR can potentially use attention maps from self-attention (SA-based WPR) or cross-attention (CA-based WPR). We provide a detailed comparison between the two implementations. We visualize their pruning masks and provide generated image examples for a visual comparison in Fig. 9. This figure indicates that SA-based WPR outperforms CA-based WPR. The reason is that CA-based WPR prunes too many background tokens, making it hard to recover the background via similarity-based copy.

Figure 9. Comparison between different implementations of G-WPR: CA-based WPR and SA-based WPR. In general, CA-based WPR may remove too many background tokens, making the background unrecoverable, while SA-based WPR preserves the image quality better.

Similarity-based Copy. We provide comparisons between different methods to fill the pruned pixels in Fig. 10, which demonstrate the advantages of our similarity-based copy method. Images generated with bicubic interpolation are quite similar to those generated by padding zeros, because interpolation usually assigns near-zero values to pruned tokens that are surrounded by other pruned tokens and can hardly recover them. Direct copy means directly copying the corresponding token values from before the first pruning layer in the attention block to recover the pruned tokens, so the following attention layers do not process the copied values. Thus, the copied values cannot recover the information in the pruned tokens and even negatively affect the retained tokens.
On the contrary, similarity-based copy uses the attention maps and the retained tokens to recover the pruned tokens, providing significantly higher image quality.

Figure 10. Different methods to recover the pruned tokens. Zero padding (Col. b), bicubic interpolation (Col. c), and direct copy (Col. d) can hardly recover pruned tokens and result in noticeable image degradation with a blurry background (incomplete moon). On the other hand, similarity-based copy (Col. e) provides better image quality and keeps the complete moon of the original image. Better viewed when zoomed in.

Figure 11. Comparison between different numbers of early prune-less steps, where 0 steps is the same as without DSAP. We find that pruning less in the first 15 steps achieves the best quality.

Denoising-Steps-Aware Pruning. We explore different design choices for DSAP. (1) The prune-less schedule selects one attention block from each down-stage and up-stage in the U-Net and skips token pruning in it. According to the ablation results shown in the Supplementary Material, F-L (First-Last) appears to be the best option, i.e., leaving the first attention block of down-stages and the last attention block of up-stages unpruned in early denoising steps. (2) We then explore how the number of early prune-less denoising steps affects the generated image quality in Fig. 11. Note that we keep the FLOPs budget fixed and adjust the pruning rate accordingly when we change the number of prune-less steps. The figure shows that the setting of 15 early prune-less steps provides the best image quality. Note that the setting of zero prune-less steps is identical to the setting without DSAP, and 5, 15, 30, and 45 prune-less steps correspond to setting the boundary in Regions I, II, III, and IV of Fig. 5, respectively.
The results indicate that placing the boundary between the prune-less and normal schedule in Region II performs best. This meets our expectation because the variance of attention maps becomes high enough to identify unimportant tokens well in Region II. 9 \f5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05259v1.json b/abs_9K/test_abstract_short_2405.05259v1.json new file mode 100644 index 0000000000000000000000000000000000000000..84cbe1493085cf5642d66f290e2c270c926a6229 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05259v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05259v1", + "title": "OpenESS: Event-based Semantic Scene Understanding with Open Vocabularies", + "abstract": "Event-based semantic segmentation (ESS) is a fundamental yet challenging task\nfor event camera sensing. The difficulties in interpreting and annotating event\ndata limit its scalability. While domain adaptation from images to event data\ncan help to mitigate this issue, there exist data representational differences\nthat require additional effort to resolve. In this work, for the first time, we\nsynergize information from image, text, and event-data domains and introduce\nOpenESS to enable scalable ESS in an open-world, annotation-efficient manner.\nWe achieve this goal by transferring the semantically rich CLIP knowledge from\nimage-text pairs to event streams. To pursue better cross-modality adaptation,\nwe propose a frame-to-event contrastive distillation and a text-to-event\nsemantic consistency regularization. Experimental results on popular ESS\nbenchmarks showed our approach outperforms existing methods. Notably, we\nachieve 53.93% and 43.31% mIoU on DDD17 and DSEC-Semantic without using either\nevent or frame labels.", + "authors": "Lingdong Kong, Youquan Liu, Lai Xing Ng, Benoit R. 
Cottereau, Wei Tsang Ooi", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Event-based semantic segmentation (ESS) is a fundamental yet challenging task\nfor event camera sensing. The difficulties in interpreting and annotating event\ndata limit its scalability. While domain adaptation from images to event data\ncan help to mitigate this issue, there exist data representational differences\nthat require additional effort to resolve. In this work, for the first time, we\nsynergize information from image, text, and event-data domains and introduce\nOpenESS to enable scalable ESS in an open-world, annotation-efficient manner.\nWe achieve this goal by transferring the semantically rich CLIP knowledge from\nimage-text pairs to event streams. To pursue better cross-modality adaptation,\nwe propose a frame-to-event contrastive distillation and a text-to-event\nsemantic consistency regularization. Experimental results on popular ESS\nbenchmarks showed our approach outperforms existing methods. Notably, we\nachieve 53.93% and 43.31% mIoU on DDD17 and DSEC-Semantic without using either\nevent or frame labels.", + "main_content": "Introduction Event cameras, often termed bio-inspired vision sensors, stand distinctively apart from traditional frame-based cameras and are often merited by their low latency, high dynamic range, and low power consumption [28, 44, 76]. The realm of event-based vision perception, though nascent, has rapidly evolved into a focal point of contemporary research [99]. Drawing parallels with frame-based perception and recognition methodologies, a plethora of task-specific applications leveraging event cameras have burgeoned [25]. Event-based semantic segmentation (ESS) emerges as one of the core event perception tasks and has gained increasing attention [2, 6, 38, 79]. 
ESS inherits the challenges of traditional image segmentation [11, 12, 19, 39, 58], while also contending with the unique properties of event data [2], which opens up a plethora of opportunities for exploration. Although accurate and efficient dense predictions from event cameras are desirable for practical applications, the learning and annotation of the sparse, asynchronous, and high-temporal-resolution event streams pose several challenges [47, 49, 61]. Stemming from the image segmentation community, existing ESS models are trained on densely annotated events within a fixed and limited set of label mappings [2, 79]. Such closed-set learning from expensive annotations inevitably constrains the scalability of ESS systems. An obvious approach is to make use of the image domain and transfer knowledge to event data for the same vision tasks. Several recent attempts [30, 61, 79] resort to unsupervised domain adaptation to avoid the need for paired image and event data annotations for training. These methods demonstrate the potential of leveraging frame annotations to train a segmentation model for event data. However, transferring knowledge across frames and events is not straightforward and requires intermediate representations such as voxel grids, frame-like reconstructions, and bio-inspired spikes. Meanwhile, it is also costly to annotate dense frame labels for training, which limits their usage. A recent trend inclines toward the use of multimodal foundation models [13, 50, 67, 69, 94] to train task-specific models in an open-vocabulary and zero-shot manner, removing dependencies on human annotations. This paper continues such a trend. We propose a novel open-vocabulary framework for ESS, aiming at transferring pre-trained knowledge from both image and text domains to learn better representations of event data for the dense scene understanding task.
Observing the large domain gap between heterogeneous inputs, we design two cross-modality representation learning objectives that gradually align the event streams with images and texts. As shown in Fig. 1, given raw events and text prompts as the input, the learned feature representations from our OpenESS framework exhibit promising results for known and unknown class segmentation and can be extended to more open-ended texts such as "adjectives", "fine-grained", and "coarse-grained" descriptions. To sum up, this work poses key contributions as follows:
• We introduce OpenESS, a versatile event-based semantic segmentation framework capable of generating open-world dense event predictions given arbitrary text queries.
• To the best of our knowledge, this work represents the first attempt at distilling large vision-language models to assist event-based semantic scene understanding tasks.
• We propose a frame-to-event (F2E) contrastive distillation and a text-to-event (T2E) consistency regularization to encourage effective cross-modality knowledge transfer.
• Our approach sets up a new state of the art in annotation-free, annotation-efficient, and fully-supervised ESS settings on the DDD17-Seg and DSEC-Semantic benchmarks.

2. Related Work

Event-based Vision. The microsecond-level temporal resolution, high dynamic range (typically 140 dB vs. 60 dB of standard cameras), and power consumption efficiency of event cameras have posed a paradigm shift from traditional frame-based imaging [25, 60, 77, 108]. A large variety of event-based recognition, perception, localization, and reconstruction tasks have been established, encompassing object recognition [18, 29, 48, 68], object detection [27, 31, 103, 109], depth estimation [17, 36, 42, 62, 65, 70], optical flow [7, 20, 33, 34, 53, 81, 105], intensity-image reconstruction [23, 24, 73, 98, 107], visual odometry and SLAM [43, 56, 72], stereoscopic panoramic imaging [4, 75], etc.
In this work, we focus on the recently emerged task of event-based semantic scene understanding [2, 79]. Such a pursuit is anticipated to tackle sparse, asynchronous, and high-temporal-resolution events for dense predictions, which is crucial for safety-critical in-drone or in-vehicle perception. Event-based Semantic Segmentation. The focus of ESS is on categorizing events into semantic classes for enhancing scene interpretation. Alonso et al. [2] contributed the first benchmark based on DDD17 [5]. Subsequent works are tailored to improve the accuracy while mitigating the need for extensive event annotations [30]. EvDistill [84] and DTL [83] utilized aligned frames to enhance event-based learning. EV-Transfer [61] and ESS [79] leveraged domain adaptation to transfer knowledge from existing image datasets to events. Recently, HALSIE [6] and HMNet [38] innovated ESS in cross-domain feature synthesis and memory-based event encoding. Another line of research pursues the use of spiking neural networks for energy-efficient ESS [10, 49, 63, 90]. In this work, different from previous pursuits, we aim to train ESS models in an annotation-free manner by distilling pre-trained vision-language models, hoping to address scalability and annotation challenges. Open-Vocabulary Learning. Recent advances in vision-language models open up new possibilities for visual perception [13, 88, 106]. Such trends encompass image-based zero-shot and open-vocabulary detection [26, 52, 89, 96], as well as semantic [35, 51, 55, 97, 100], instance [45, 87], and panoptic [21, 41, 93] segmentation. As far as we know, only three works studied the adaptation of CLIP for event-based recognition. EventCLIP [92] proposed to convert events to a 2D grid map and use an adapter to align event features with CLIP's knowledge. E-CLIP [102] uses a hierarchical triple contrastive alignment that jointly unifies the event, image, and text feature embeddings.
Ev-LaFOR [18] designed category-guided attraction and category-agnostic repulsion losses to bridge events with CLIP. Differently, we present the first attempt at adapting CLIP for dense predictions on sparse and asynchronous event streams. Our work is also close to superpixel-driven contrastive learning [46, 74], where pre-processed superpixels are used to establish contrastive objectives with modalities from other tasks, e.g., point cloud understanding [57], remote sensing [37], medical imaging [82], and so on. In this work, we propose OpenESS to explore superpixel-to-event representation learning. Extensive experiments verify that such an approach is promising for annotation-efficient ESS.
Figure 2. Architecture overview of the OpenESS framework. We distill off-the-shelf knowledge from vision-language models to event representations (cf. Sec. 3.1). Given a calibrated event Ievt and a frame Iimg, we extract their features from the event network F^evt_{\u03b8e} and the densified CLIP\u2019s image encoder F^clip_{\u03b8c}, which are then combined with the text embedding from CLIP\u2019s text encoder F^txt_{\u03b8t} for open-world prediction (cf. Sec. 3.2). To better serve cross-modality knowledge transfer, we propose a frame-to-event (F2E) contrastive objective (cf. Sec. 3.3) via superpixel-driven distillation and a text-to-event (T2E) consistency objective (cf. Sec. 3.4) via scene-level regularization.
3.
Methodology Our study serves as an early attempt at leveraging vision-language foundation models like CLIP [69] to learn meaningful event representations without accessing ground-truth labels. We start with a brief introduction of the CLIP model (cf. Sec. 3.1), followed by a detailed elaboration on our proposed open-vocabulary ESS (cf. Sec. 3.2). To encourage effective cross-modal event representation learning, we introduce a frame-to-event contrastive distillation (cf. Sec. 3.3) and a text-to-event consistency regularization (cf. Sec. 3.4). An overview of the OpenESS framework is shown in Fig. 2. 3.1. Revisiting CLIP CLIP [69] learns to associate images with textual descriptions through a contrastive learning framework. It leverages a dataset of 400 million image-text pairs, training an image encoder (based on a ResNet [39] or Vision Transformer [22]) and a text encoder (using a Transformer architecture [80]) to project images and texts into a shared embedding space. Such a training paradigm enables CLIP to perform zero-shot classification tasks, identifying images based on textual descriptions without specific training on those categories. To achieve annotation-free classification on a custom dataset, one needs to combine class label mappings with hand-crafted text prompts as the input to generate the text embedding. In this work, we aim to leverage the semantically rich CLIP feature space to assist open-vocabulary dense prediction on sparse and asynchronous event streams. 3.2. Open-Vocabulary ESS Inputs. Given a set of N event streams acquired by an event camera, we aim to segment each event ei among the temporally ordered event stream \u03b5i, which is encoded by the pixel coordinates (xi, yi), the microsecond-level timestamp ti, and the polarity pi \u2208 {\u22121, +1}, which indicates either an increase or a decrease of the brightness.
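The zero-shot classification paradigm described above reduces to a nearest-text-embedding lookup: a feature is assigned the class whose prompt embedding it is most similar to. A minimal pure-Python sketch (function names and toy 3-d vectors are illustrative assumptions, not CLIP\u2019s actual API):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def zero_shot_classify(feature, text_embeddings):
    """Assign the class whose prompt embedding is most similar to `feature`.

    `text_embeddings` maps a class name to the embedding of a hand-crafted
    prompt for that class (e.g., the output of a text encoder).
    """
    return max(text_embeddings, key=lambda c: cosine(feature, text_embeddings[c]))

# Toy example with hand-made 3-d "embeddings".
texts = {"road": [1.0, 0.1, 0.0], "sidewalk": [0.0, 1.0, 0.2]}
print(zero_shot_classify([0.9, 0.2, 0.1], texts))  # prints: road
```

The same lookup extends to dense prediction once per-pixel (or per-event) features are available, which is what the densification step below provides.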
Each event camera pixel generates a spike whenever it perceives a change in logarithmic brightness that surpasses a predetermined threshold. Meanwhile, a conventional camera captures gray-scale or color frames Iimg i \u2208 R3\u00d7H\u00d7W, which are spatially aligned and temporally synchronized with the events, or can be aligned and synchronized to the events via sensor calibration, where H and W are the spatial resolutions. Event Representations. Due to the sparsity, high temporal resolution, and asynchronous nature of event streams, it is common to convert raw events \u03b5i into more regular representations Ievt i \u2208 RC\u00d7H\u00d7W as the input to the neural network [25], where C denotes the number of embedding channels, which depends on the event representation itself. Some popular choices of such embedding include spatiotemporal voxel grids [29, 104, 105], frame-like reconstructions [73], and bio-inspired spikes [49]. We investigate these three methods and show an example of taking voxel grids as the input in Fig. 2. More analyses and comparisons using reconstructions and spikes are in later sections. Specifically, with a predefined number of events, each voxel grid is built from non-overlapping windows as: I^{evt}_i = \\sum_{\\mathbf{e}_j \\in \\varepsilon_i} p_j \\delta(\\mathbf{x}_j - \\mathbf{x}) \\delta(\\mathbf{y}_j - \\mathbf{y}) \\max\\{1 - |t^{*}_j - t|, 0\\}, (1) where \u03b4 is the Kronecker delta function; t\u2217 j = (B \u2212 1)(tj \u2212 t0)/\u2206T is the normalized event timestamp with B as the number of temporal bins in an event stream; \u2206T is the time window and t0 denotes the time of the first event in the window. Cross-Modality Encoding. Let Fevt \u03b8e : RC\u00d7H\u00d7W \u2192 RD1\u00d7H1\u00d7W1 be an event-based segmentation network with trainable parameters \u03b8e, which takes as input an event embedding Ievt i and outputs a D1-dimensional feature of downsampled spatial sizes H1 and W1.
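The voxel-grid construction of Eq. (1) amounts to splatting each event\u2019s polarity into its nearest temporal bins with a triangular kernel. A minimal pure-Python sketch (nested lists stand in for the B x H x W tensor; a real implementation would use a tensor library):

```python
def event_voxel_grid(events, B, H, W, t0, dT):
    """Build a bilinear-in-time voxel grid, following Eq. (1).

    events: iterable of (x, y, t, p) tuples with polarity p in {-1, +1}.
    B: number of temporal bins; t0: time of the first event; dT: time window.
    Returns a B x H x W nested list.
    """
    grid = [[[0.0] * W for _ in range(H)] for _ in range(B)]
    for x, y, t, p in events:
        t_star = (B - 1) * (t - t0) / dT          # normalized timestamp
        for b in range(B):
            w = max(1.0 - abs(t_star - b), 0.0)   # triangular (max-kernel) weight
            if w > 0.0:
                grid[b][y][x] += p * w
    return grid
```

Each event thus contributes to at most two adjacent bins, preserving sub-bin timing information that a simple histogram would discard.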
Meanwhile, we integrate CLIP\u2019s image encoder Fclip \u03b8c : R3\u00d7H\u00d7W \u2192 RD2\u00d7H2\u00d7W2 into our framework and keep the parameters \u03b8c fixed. The output is a D2-dimensional feature of sizes H2 and W2. Our motivation is to transfer general knowledge from Fclip \u03b8c to Fevt \u03b8e , such that the event branch can learn useful representations without using dense event annotations. To enable open-vocabulary ESS predictions, we leverage CLIP\u2019s text encoder Ftxt \u03b8t with pre-trained parameters \u03b8t. The input of Ftxt \u03b8t comes from predefined text prompt templates and the output will be a text embedding extracted from CLIP\u2019s rich semantic space. Densifications. CLIP was originally designed for image-based recognition tasks and does not provide per-pixel outputs for dense predictions. Several recent attempts explored the adaptation from global, image-level recognition to local, pixel-level prediction, via either model structure modification [100] or fine-tuning [51, 71, 97]. The former directly reformulates the value-embedding layer in CLIP\u2019s image encoder, while the latter uses semantic labels to gradually adapt the pre-trained weights to generate dense predictions. In this work, we implement both solutions to densify CLIP\u2019s outputs and compare their performances in our experiments. Up until now, we have presented a preliminary framework capable of conducting open-vocabulary ESS by leveraging knowledge from the CLIP model. However, due to the large domain gap between the event and image modalities, a na\u00efve adaptation is sub-par in tackling the challenging event-based semantic scene understanding task. 3.3. F2E: Frame-to-Event Contrastive Distillation Since our objective is to encourage effective cross-modality knowledge transfer for holistic event scene perception, it thus becomes crucial to learn meaningful representations for both thing and stuff classes, especially their boundary information.
However, the sparsity and asynchronous nature of event streams inevitably impede such objectives. Superpixel-Driven Knowledge Distillation. To pursue more informative event representation learning at higher granularity, we propose to first leverage calibrated frames to generate coarse, instance-level superpixels and then distill knowledge from a pre-trained image backbone to the event segmentation network. Superpixels group pixels into conceptually meaningful atomic regions, which can be used as the basis for higher-level perception [1, 54, 85]. The semantically coherent frame-to-event correspondences can thus be found using pre-processed or online-generated superpixels. Such correspondences tend to bridge the sparse events to dense frame pixels in a holistic manner without involving extra training or annotation efforts. Superpixel & Superevent Generation. We resort to the following two ways of generating the superpixels. The first way is to leverage heuristic methods, e.g., SLIC [1], to efficiently group pixels from frame Iimg i into a total of Mslic segments with good boundary adherence and regularity as Isp i = {I1 i , I2 i , ..., IMslic i }, where Mslic is a hyperparameter that needs to be adjusted based on the inputs. The generated superpixels satisfy I1 i \u222a I2 i \u222a ... \u222a IMslic i = {1, 2, ..., H \u00d7 W}. For the second option, we use the recent Segment Anything Model (SAM) [50], which takes Iimg i as the input and outputs Msam class-agnostic masks. For simplicity, we use M to denote the number of superpixels used during knowledge distillation, i.e., {Isp i = {I1 i , ..., Ik i }|k = 1, ..., M}, and show more comparisons between SLIC [1] and SAM [50] in later sections. Since Ievt i and Iimg i have been aligned and synchronized, we can group events from Ievt i into superevents {V sp i = {V1 i , ..., Vl i}|l = 1, ..., M} by using the known event-pixel correspondences. Frame-to-Event Contrastive Learning.
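The first step of this contrastive learning is to pool per-pixel (or per-event) features into one vector per superpixel/superevent. A minimal pure-Python sketch of that grouping step (a dict-based stand-in; the projection layers of the actual framework are omitted):

```python
def pool_by_segment(features, segment_ids):
    """Average the features that fall inside each superpixel/superevent.

    features: list of feature vectors (one per pixel or event location).
    segment_ids: parallel list of segment labels for those locations.
    Returns {segment_id: mean feature vector}.
    """
    sums, counts = {}, {}
    for f, s in zip(features, segment_ids):
        if s not in sums:
            sums[s] = [0.0] * len(f)
            counts[s] = 0
        sums[s] = [a + b for a, b in zip(sums[s], f)]
        counts[s] += 1
    return {s: [v / counts[s] for v in sums[s]] for s in sums}
```

Because events and frame pixels share segment labels through calibration, applying this pooling to both branches yields the matched superevent/superpixel feature pairs used by the contrastive objective.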
To encourage better superpixel-level knowledge transfer, we leverage a pre-trained image network Fimg \u03b8f : R3\u00d7H\u00d7W \u2192 RD3\u00d7H3\u00d7W3 as the teacher and distill information from it to the event branch Fevt \u03b8e . The parameters of Fimg \u03b8f , which can come from either CLIP [69] or other pretext task pre-trained backbones such as [8, 15, 64], are kept frozen during the distillation. With Fevt \u03b8e and Fimg \u03b8f , we generate the superevent and superpixel features as follows: \\mathbf{f}^{evt}_i = \\frac{1}{|V^{sp}_i|} \\sum_{l \\in V^{sp}_i} \\mathcal{P}_{\\omega_e}^{evt}(\\mathcal{F}^{evt}_{\\theta_e}(I^{evt}_i)_l), (2) \\mathbf{f}^{img}_i = \\frac{1}{|I^{sp}_i|} \\sum_{k \\in I^{sp}_i} \\mathcal{P}_{\\omega_f}^{img}(\\mathcal{F}^{img}_{\\theta_f}(I^{img}_i)_k), (3) where Pevt \u03c9e and Pimg \u03c9f are projection layers with trainable parameters \u03c9e and \u03c9f, respectively, for the event branch and frame branch. In the actual implementation, Pevt \u03c9e and Pimg \u03c9f consist of linear layers which map the D1- and D3-dimensional event and frame features to the same shape. The following contrastive learning objective is applied to the event prediction and the frame prediction: \\mathcal{L}_{F2E}(\\theta_e, \\omega_e, \\omega_f) = \\sum_i \\log\\left[\\frac{e^{(\\langle \\mathbf{f}^{evt}_i, \\mathbf{f}^{img}_i \\rangle / \\tau_1)}}{\\sum_{j \\neq i} e^{(\\langle \\mathbf{f}^{evt}_i, \\mathbf{f}^{img}_j \\rangle / \\tau_1)}}\\right], (4) where \u27e8\u00b7, \u00b7\u27e9 denotes the scalar product between the superevent and superpixel embeddings; \u03c41 > 0 is a temperature coefficient that controls the pace of knowledge transfer. Role in Our Framework.
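This superevent/superpixel objective is an InfoNCE-style loss over matched pairs. A minimal pure-Python sketch, using the standard InfoNCE form in which the positive term is also included in the denominator (a common implementation choice; the paper\u2019s Eq. (4) sums only over the negatives):

```python
import math

def f2e_contrastive_loss(evt_feats, img_feats, tau=0.07):
    """InfoNCE-style frame-to-event contrastive objective (illustrative).

    Matched pairs (evt_feats[i], img_feats[i]) are positives; every other
    superpixel feature in the batch acts as a negative.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    loss = 0.0
    n = len(evt_feats)
    for i in range(n):
        pos = math.exp(dot(evt_feats[i], img_feats[i]) / tau)
        neg = sum(math.exp(dot(evt_feats[i], img_feats[j]) / tau)
                  for j in range(n) if j != i)
        loss += -math.log(pos / (pos + neg))  # small when pairs are well aligned
    return loss / n
```

Minimizing the loss pulls each superevent embedding toward its matched superpixel embedding and pushes it away from the other superpixels in the batch.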
Our F2E contrastive distillation establishes an effective pipeline for transferring superpixel-level knowledge from dense, visually informative frame pixels to sparse, irregular event streams. Since we are targeting the semantic segmentation task, the learned event representations should be able to reason in terms of instances and instance parts at and in between semantic boundaries. 3.4. T2E: Text-to-Event Consistency Regularization Although the aforementioned frame-to-event knowledge transfer provides a simple yet effective way of transferring off-the-shelf knowledge from frames to events, the optimization objective might encounter unwanted conflicts. Intra-Class Optimization Conflict. During the model pre-training, the superpixel-driven contrastive loss takes the corresponding superevent and superpixel pair in a batch as the positive pair, while treating all remaining pairs as negative samples. Since heuristic superpixels only provide a coarse grouping of conceptually coherent segments (kindly refer to our Appendix for a more detailed analysis), it is thus inevitable to encounter self-conflict during the optimization. That is to say, in hindsight, there is a chance that superpixels belonging to the same semantic class could be involved in both positive and negative samples. Text-Guided Semantic Regularization. To mitigate the possible self-conflict in Eq. (4), we propose a text-to-event semantic consistency regularization mechanism that leverages CLIP\u2019s text encoder to generate semantically more consistent text-frame pairs {Iimg i , Ti}, where Ti denotes the text embedding extracted from Ftxt \u03b8t . Such a paired relationship can be leveraged via CLIP without additional training. We then construct event-text pairs {Ievt i , Ti} by propagating the alignment between events and frames.
Specifically, the paired event and text features are extracted as follows: \\mathbf{h}^{evt}_i = \\mathcal{Q}_{\\omega_q}^{evt}(\\mathcal{F}^{evt}_{\\theta_e}(I^{evt}_i)), \\quad \\mathbf{h}^{txt}_i = \\mathcal{F}^{txt}_{\\theta_t}(T_i), (5) where Qevt \u03c9q is a projection layer with trainable parameters \u03c9q, which is similar to that of Pevt \u03c9e . Now, assuming there are a total of Z classes in the event dataset, the following objective is applied to encourage the consistency regularization: \\mathcal{L}_{T2E}(\\theta_e, \\omega_q) = \\sum_{z=1}^{Z} \\log\\left[\\frac{\\sum_{T_i \\in z, I^{evt}_i} e^{(\\langle \\mathbf{h}^{evt}_i, \\mathbf{h}^{txt}_i \\rangle / \\tau_2)}}{\\sum_{j \\neq i, T_i \\in z, T_i \\not\\in I^{evt}_i} e^{(\\langle \\mathbf{h}^{evt}_j, \\mathbf{h}^{txt}_i \\rangle / \\tau_2)}}\\right], (7) where \u03c42 > 0 is a temperature coefficient that controls the pace of knowledge transfer. The overall optimization objective of our OpenESS framework is to minimize L = LF2E + \u03b1LT2E, where \u03b1 is a weight balancing coefficient. Role in Our Framework. Our T2E semantic consistency regularization provides a global-level alignment to compensate for the possible self-conflict in the superpixel-driven frame-to-event contrastive learning. As we will show in the following sections, the two objectives work synergistically in improving the performance of open-vocabulary ESS. Inference-Time Configuration. Our OpenESS framework is designed to pursue segmentation accuracy in annotation-free and annotation-efficient manners, without sacrificing event processing efficiency. As can be seen from Fig. 2, after the cross-modality knowledge transfer, only the event branch will be kept. This guarantees that there will be no extra latency or power consumption added during the inference, which is in line with the practical requirements. 4. Experiments 4.1.
Settings Datasets. We conduct experiments on two popular ESS datasets. DDD17-Seg [2] is a widely used ESS benchmark consisting of 40 sequences acquired by a DAVIS346B. In total, 15950 training and 3890 testing events of spatial size 352 \u00d7 200 are used, along with synchronized gray-scale frames provided by the DAVIS camera. DSEC-Semantic [79] provides semantic labels for 11 sequences in the DSEC [32] dataset. The training and testing splits contain 8082 and 2809 events of spatial size 640 \u00d7 440, accompanied by color frames (with sensor calibration parameters available) recorded at 20 Hz. More details are in the Appendix. Benchmark Setup. In addition to the conventional fully-supervised ESS, we establish two open-vocabulary ESS settings for annotation-free and annotation-efficient learning, respectively. The former aims to train an ESS model without using any dense event labels, while the latter assumes an annotation budget of 1%, 5%, 10%, or 20% of events in the training set. We treat the first few samples from each sequence as labeled and the remaining ones as unlabeled. Implementation Details. Our framework is implemented using PyTorch [66]. Based on the choice of event representation, we form frame2voxel, frame2recon, and frame2spike settings, where the event branch adopts E2VID [73], ResNet-50 [39], and SpikingFCN [49], respectively, trained with an AdamW [59] optimizer and a cosine learning rate scheduler. The frame branch uses a pre-trained ResNet-50 [8, 9, 15] and is kept frozen. The number of superpixels involved in the calculation of the F2E contrastive loss is set to 100 for DSEC-Semantic [79] and 25 for DDD17-Seg [2]. For evaluation, we extract the feature embedding for each text prompt offline from a frozen CLIP text encoder using pre-defined templates. For linear probing, the pre-trained event network Fevt \u03b8e is kept frozen, followed by a trainable point-wise linear classification head. Due to space limits, kindly refer to our Appendix for additional details.
Table 1. Comparative study of existing ESS approaches under the annotation-free, fully-supervised, and open-vocabulary ESS settings, respectively, on the test sets of the DDD17-Seg [5] and DSEC-Semantic [79] datasets. All scores are in percentage (%). The best score from each learning setting is highlighted in bold. Columns: Method | Venue | DDD17 (Acc, mIoU) | DSEC (Acc, mIoU); rows listing fewer scores report results on only one of the two benchmarks.
Annotation-Free ESS:
MaskCLIP [100] | ECCV\u201922 | 81.29, 31.90 | 58.96, 21.97
FC-CLIP [97] | NeurIPS\u201923 | 88.66, 51.12 | 79.20, 39.42
OpenESS | Ours | 90.51, 53.93 | 86.18, 43.31
Fully-Supervised ESS:
Ev-SegNet [2] | CVPRW\u201919 | 89.76, 54.81 | 88.61, 51.76
E2VID [73] | TPAMI\u201919 | 85.84, 48.47 | 80.06, 44.08
Vid2E [30] | CVPR\u201920 | 90.19, 56.01
EVDistill [84] | CVPR\u201921 | 58.02
DTL [83] | ICCV\u201921 | 58.80
PVT-FPN [86] | ICCV\u201921 | 94.28, 53.89
SpikingFCN [49] | NCE\u201922 | 34.20
EV-Transfer [61] | RA-L\u201922 | 51.90, 15.52 | 63.00, 24.37
ESS [79] | ECCV\u201922 | 88.43, 53.09 | 84.17, 45.38
ESS-Sup [79] | ECCV\u201922 | 91.08, 61.37 | 89.37, 53.29
P2T-FPN [91] | TPAMI\u201923 | 94.57, 54.64
EvSegformer [47] | TIP\u201923 | 94.72, 54.41
HMNet-B [38] | CVPR\u201923 | 88.70, 51.20
HMNet-L [38] | CVPR\u201923 | 89.80, 55.00
HALSIE [6] | WACV\u201924 | 92.50, 60.66 | 89.01, 52.43
Open-Vocabulary ESS:
MaskCLIP [100] | ECCV\u201922 | 90.50, 61.27 | 89.81, 55.01
FC-CLIP [97] | NeurIPS\u201923 | 90.68, 62.01 | 89.97, 55.67
OpenESS | Ours | 91.05, 63.00 | 90.21, 57.21
4.2. Comparative Study Annotation-Free ESS. In Tab. 1, we compare OpenESS with MaskCLIP [100] and FC-CLIP [97] in the absence of event labels. Our approach achieves zero-shot ESS results of 53.93% and 43.31% on DDD17-Seg [2] and DSEC-Semantic [79], much higher than the two competitors and even comparable to some fully-supervised methods. This validates the effectiveness of conducting ESS in an annotation-free manner for practical usage.
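The Acc and mIoU scores reported in these comparisons follow the usual confusion-matrix definitions (overall pixel accuracy, and intersection-over-union averaged over classes). A minimal pure-Python sketch:

```python
def acc_and_miou(preds, gts, num_classes):
    """Pixel accuracy and mean IoU from flat prediction/label lists."""
    # confusion[c][p]: number of pixels of true class c predicted as class p
    conf = [[0] * num_classes for _ in range(num_classes)]
    for p, g in zip(preds, gts):
        conf[g][p] += 1
    correct = sum(conf[c][c] for c in range(num_classes))
    acc = correct / len(gts)
    ious = []
    for c in range(num_classes):
        tp = conf[c][c]
        fp = sum(conf[g][c] for g in range(num_classes)) - tp
        fn = sum(conf[c][p] for p in range(num_classes)) - tp
        denom = tp + fp + fn
        if denom > 0:                      # skip classes absent from both
            ious.append(tp / denom)
    return acc, sum(ious) / len(ious)
```

Because mIoU averages over classes, it weights rare classes (e.g., poles, traffic signs) as heavily as dominant ones, which is why it is the headline metric in these benchmarks.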
Meanwhile, we observe that a fine-tuned CLIP encoder [97] could generate much better semantic predictions than the structure adaptation method [100], as mentioned in Sec. 3.2. Comparisons to State-of-the-Art Methods. As shown in Tab. 1, the proposed OpenESS sets up several new state-of-the-art results in the two ESS benchmarks. Compared to the previously best-performing methods, OpenESS is 1.63% and 2.21% better in terms of mIoU scores on DDD17-Seg [2] and DSEC-Semantic [79], respectively. It is worth mentioning that in addition to the performance improvements, our approach can generate open-vocabulary predictions that go beyond the closed sets of predictions of existing methods, which is more in line with practical usage.
Figure 3. Ablation study on the number of superpixels (provided by either SAM [50] or SLIC [1]) involved in calculating the frame-to-event contrastive loss. Models after pre-training are fine-tuned with 1% annotations. All mIoU scores are in percentage (%).
Annotation-Efficient Learning. We establish a comprehensive benchmark for ESS under limited annotation scenarios and show the results in Tab. 3. As can be seen, the proposed OpenESS contributes significant performance improvements over random initialization under linear probing, few-shot fine-tuning, and fully-supervised learning settings. Specifically, using either the voxel grid or the event reconstruction representation, our approach achieves > 30% relative gains in mIoU on both datasets under linear probing and around 2% higher mIoU than prior art with full supervision. We also observe that using voxel grids to represent raw event streams tends to yield overall better ESS performance.
Qualitative Assessment. Fig. 4 provides visual comparisons between OpenESS and other approaches on DSEC-Semantic [79]. We find that OpenESS tends to predict more consistent semantic information from sparse and irregular event inputs, especially at instance boundaries. We include more visual examples and failure cases in the Appendix.
Figure 4. Qualitative comparisons of state-of-the-art ESS approaches on the test set of DSEC-Semantic [79]. Each color corresponds to a distinct semantic category (Background, Building, Fence, Person, Pole, Road, Sidewalk, Vegetation, Car, Wall, Traffic-Sign). GT denotes the ground truth semantic maps. Best viewed in color and zoomed-in for additional details.
Open-World Predictions. One of the core advantages of OpenESS is the ability to predict beyond the fixed label set of the original training sets. As shown in Fig. 1, our approach can take arbitrary text prompts as inputs and generate semantically coherent event predictions without using event labels. This is credited to the alignment between event features and CLIP\u2019s knowledge in T2E. Such a flexible way of prediction enables a more holistic event understanding. Other Representation Learning Approaches. In Tab. 2, we compare OpenESS with recent reconstruction-based [3, 40, 95, 101] and contrastive learning-based [14, 16] pre-training methods. As can be seen, the proposed OpenESS achieves competitive results over existing approaches. It is worth highlighting again that our framework is distinct from prior arts in supporting open-vocabulary learning.
Table 2. Comparative study of different representation learning methods applied on event data. OV denotes whether open-vocabulary predictions are supported. All mIoU scores are in percentage (%). The best score from each dataset is highlighted in bold. Columns: Method | Venue | Backbone | OV | DDD17 | DSEC.
Random | \u2013 | ViT-S/16 | \u2717 | 48.76 | 40.53
MoCoV3 [16] | ICCV\u201921 | ViT-S/16 | \u2717 | 53.65 | 49.21
IBoT [101] | ICLR\u201922 | ViT-S/16 | \u2717 | 49.94 | 42.53
ECDP [95] | ICCV\u201923 | ViT-S/16 | \u2717 | 54.66 | 47.91
Random | \u2013 | ViT-B/16 | \u2717 | 43.89 | 38.24
BeiT [3] | ICLR\u201922 | ViT-B/16 | \u2717 | 52.39 | 46.52
MAE [40] | CVPR\u201922 | ViT-B/16 | \u2717 | 52.36 | 47.56
Random | \u2013 | ResNet-50 | \u2717 | 56.96 | 57.60
SimCLR [14] | ICML\u201920 | ResNet-50 | \u2717 | 57.22 | 59.06
ECDP [95] | ICCV\u201923 | ResNet-50 | \u2717 | 59.15 | 59.16
Random | \u2013 | ResNet-50 | \u2717 | 55.56 | 52.86
OpenESS | Ours | ResNet-50 | \u2713 | 57.01 | 55.01
Random | \u2013 | E2VID | \u2717 | 61.06 | 54.96
OpenESS | Ours | E2VID | \u2713 | 63.00 | 57.21
4.3. Ablation Study Cross-Modality Representation Learning. Tab. 4 provides a comprehensive ablation study on the frame-to-event (F2E) and text-to-event (T2E) learning objectives in OpenESS using three event representations. We observe that both F2E and T2E contribute to an overt improvement over random initialization under linear probing and few-shot fine-tuning settings, which verifies the effectiveness of our proposed approach.
Figure 5. Cross-dataset representation learning results of comparing OpenESS pre-training using in-distribution (ID) and out-of-distribution (OOD) data between the DDD17-Seg [5] and DSEC-Semantic [79] datasets. Models after pre-training are fine-tuned with 1%, 5%, 10%, and 20% annotations, respectively.
Once again, we find that the voxel grids tend to achieve better performance than other representations. The spike-based methods [49], albeit being computationally more efficient, show sub-par performance compared to voxel grids and reconstructions. Superpixel Generation. We study the utilization of SLIC [1] and SAM [50] in our frame-to-event contrastive distillation and show the results in Fig. 3. Using either frame networks pre-trained by DINO [9], MoCoV2 [15], or SwAV [8], the SAM-generated superpixels consistently exhibit better performance for event representation learning. The number of superpixels involved in the calculation tends to affect the effectiveness of contrastive learning. A preliminary search to determine this hyperparameter is required. We empirically find that setting M to 100 for DSEC-Semantic [79] and 25 for DDD17-Seg [2] will likely yield the best possible segmentation performance in our framework.
Table 3. Comparative study of different open-vocabulary semantic segmentation methods [97, 100] under the linear probing (LP), few-shot fine-tuning, and full supervision (Full) settings, respectively, on the test sets of the DDD17-Seg [5] and DSEC-Semantic [79] datasets. All mIoU scores are given in percentage (%). The best mIoU scores from each learning configuration are highlighted in bold. Columns: Method | Configuration | DSEC-Semantic (LP, 1%, 5%, 10%, 20%, Full) | DDD17-Seg (LP, 1%, 5%, 10%, 20%, Full).
Random | Voxel Grid | 6.70, 26.62, 31.22, 33.67, 41.31, 54.96 | 12.30, 52.13, 54.87, 58.66, 59.52, 61.06
MaskCLIP [100] | Voxel Grid | 33.08, 33.89, 37.03, 38.83, 42.40, 55.01 | 31.91, 53.91, 56.27, 59.32, 59.97, 61.27
FC-CLIP [97] | Voxel Grid | 43.00, 39.12, 43.71, 44.09, 47.77, 55.67 | 54.07, 56.38, 58.50, 60.05, 60.85, 62.01
OpenESS (Ours) | frame2voxel | 44.26, 41.41, 44.97, 46.25, 48.28, 57.21 | 55.61, 57.58, 59.07, 61.03, 61.78, 63.00
Improve \u2191 | | +33.56, +14.79, +13.75, +12.58, +6.97, +2.25 | +43.31, +5.45, +4.20, +2.37, +2.26, +1.94
Random | Reconstruction | 6.22, 23.95, 30.42, 34.11, 39.25, 52.86 | 13.89, 45.30, 52.03, 53.02, 54.05, 55.56
MaskCLIP [100] | Reconstruction | 27.09, 30.73, 36.33, 40.13, 43.37, 52.97 | 29.81, 49.02, 53.65, 54.11, 54.75, 56.12
FC-CLIP [97] | Reconstruction | 40.08, 38.99, 43.34, 45.35, 47.18, 53.05 | 52.17, 51.01, 54.09, 54.99, 55.05, 56.34
OpenESS (Ours) | frame2recon | 44.08, 43.17, 45.58, 48.94, 49.74, 55.01 | 53.61, 52.02, 55.11, 55.66, 56.07, 57.01
Improve \u2191 | | +37.86, +19.22, +15.16, +14.83, +10.49, +2.15 | +39.72, +6.72, +3.08, +2.64, +2.02, +1.45
Table 4. Ablation study of OpenESS under linear probing (LP) and few-shot fine-tuning settings from three learning configurations on the test set of DDD17-Seg [5]. F2E denotes the frame-to-event contrastive learning. T2E denotes the text-to-event semantic regularization. All mIoU scores are given in percentage (%). Columns: Configuration | F2E | T2E | DDD17-Seg (LP, 1%, 5%, 10%, 20%).
Voxel Grid, Random | | | 12.30, 52.13, 54.87, 58.66, 59.52
frame2voxel | \u2713 | | 52.60, 55.41, 57.07, 59.77, 60.21
frame2voxel | | \u2713 | 54.11, 56.77, 58.95, 60.12, 60.99
frame2voxel | \u2713 | \u2713 | 55.61, 57.58, 59.07, 61.03, 61.78
Reconstruction, Random | | | 13.89, 45.30, 52.03, 53.02, 54.05
frame2recon | \u2713 | | 50.21, 50.96, 53.67, 54.21, 54.92
frame2recon | | \u2713 | 52.62, 51.63, 54.27, 55.00, 55.17
frame2recon | \u2713 | \u2713 | 53.61, 52.02, 55.11, 55.66, 56.07
Spike, Random | | | 12.04, 10.01, 20.02, 25.81, 26.03
frame2spike | \u2713 | | 15.07, 14.31, 21.77, 26.89, 27.07
frame2spike | | \u2713 | 16.11, 14.67, 22.61, 27.97, 29.01
frame2spike | \u2713 | \u2713 | 16.27, 14.89, 23.54, 28.51, 29.98
Cross-Dataset Knowledge Transfer. Since we are targeting annotation-free representation learning, it is thus intuitive to examine the cross-dataset adaptation effect. As shown in Fig. 5, pre-training on OOD datasets also brings appealing improvements over the random initialization baseline. This result highlights the importance of conducting representation learning for an effective transfer to downstream tasks. Figure 6.
Single-modality OpenESS representation learning study on the DSEC-Semantic [79] dataset. The results are from models of random initialization (\u25a0 \u25a1), recon2voxel pre-training (\u25a0 \u25a1), and frame2voxel pre-training (\u25a0 \u25a1), respectively, after linear probing (LP) and annotation-efficient fine-tuning. Framework with Event Camera Only. Lastly, we study the scenario where the frame camera becomes unavailable. We replace the input to the frame branch with event reconstructions [73] and show the results in Fig. 6. Since the limited visual cues from the reconstruction tend to degrade the quality of representation learning, its performance is subpar compared to the frame-based knowledge transfer. 5." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05330v1.json b/abs_9K/test_abstract_short_2405.05330v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e640ee39b5cbccb0e7901c783ea071d41689662c --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05330v1.json @@ -0,0 +1,19 @@ +{ + "url": "http://arxiv.org/abs/2405.05330v1", + "title": "Chemo-dynamical Evolution of Simulated Satellites for a Milky Way-like Galaxy", + "abstract": "The chemical abundances of Milky Way's satellites reflect their star\nformation histories (SFHs), yet, due to the difficulty of determining the ages\nof old stars, the SFHs of most satellites are poorly measured. Ongoing and\nupcoming surveys will obtain around ten times more medium-resolution spectra\nfor stars in satellites than are currently available. To correctly extract SFHs\nfrom large samples of chemical abundances, the relationship between chemical\nabundances and SFHs needs to be clarified. Here, we perform a high-resolution\ncosmological zoom-in simulation of a Milky Way-like galaxy with detailed models\nof star formation, supernova feedback, and metal diffusion. 
We quantify SFHs,\nmetallicity distribution functions, and the $\\alpha$-element (Mg, Ca, and Si)\nabundances in satellites of the host galaxy. We find that star formation in\nmost simulated satellites is quenched before infalling to their host. Star\nformation episodes in simulated satellites are separated by a few hundred Myr\nowing to supernova feedback; each star formation event produces groups of stars\nwith similar [$\\alpha$/Fe] and [Fe/H]. We then perform a mock observation of\nthe upcoming Subaru Prime Focus Spectrograph (PFS) observations. We find that\nSubaru PFS will be able to detect distinct groups of stars in [$\\alpha$/Fe] vs.\n[Fe/H] space, produced by episodic star formation. This result means that\nepisodic SFHs can be estimated from the chemical abundances of $\\gtrsim$ 1,000\nstars determined with medium-resolution spectroscopy.", + "authors": "Yutaka Hirai, Evan N. Kirby, Masashi Chiba, Kohei Hayashi, Borja Anguiano, Takayuki R. Saitoh, Miho N. Ishigaki, Timothy C. Beers", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA", + "astro-ph.HE", + "astro-ph.IM", + "astro-ph.SR" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The chemical abundances of Milky Way's satellites reflect their star\nformation histories (SFHs), yet, due to the difficulty of determining the ages\nof old stars, the SFHs of most satellites are poorly measured. Ongoing and\nupcoming surveys will obtain around ten times more medium-resolution spectra\nfor stars in satellites than are currently available. To correctly extract SFHs\nfrom large samples of chemical abundances, the relationship between chemical\nabundances and SFHs needs to be clarified. Here, we perform a high-resolution\ncosmological zoom-in simulation of a Milky Way-like galaxy with detailed models\nof star formation, supernova feedback, and metal diffusion. 
We quantify SFHs,\nmetallicity distribution functions, and the $\alpha$-element (Mg, Ca, and Si)\nabundances in satellites of the host galaxy. We find that star formation in\nmost simulated satellites is quenched before infalling to their host. Star\nformation episodes in simulated satellites are separated by a few hundred Myr\nowing to supernova feedback; each star formation event produces groups of stars\nwith similar [$\alpha$/Fe] and [Fe/H]. We then perform a mock observation of\nthe upcoming Subaru Prime Focus Spectrograph (PFS) observations. We find that\nSubaru PFS will be able to detect distinct groups of stars in [$\alpha$/Fe] vs.\n[Fe/H] space, produced by episodic star formation. This result means that\nepisodic SFHs can be estimated from the chemical abundances of $\gtrsim$ 1,000\nstars determined with medium-resolution spectroscopy.", + "main_content": "INTRODUCTION Satellite galaxies of the Milky Way (MW) are crucial for understanding galaxy formation (e.g., Bullock & Boylan-Kolchin 2017). Many satellites possess ancient stars; the histories of star formation and galaxy assembly are imprinted in the chemo-dynamical properties of such satellites. In our Galaxy, thanks to their relatively close distances from the Sun (\u224820\u2013200 kpc), we can observe the chemo-dynamical properties of individual stars in the MW\u2019s satellites (e.g., Tolstoy et al. 2004; Battaglia et al. 2006; Kirby et al. 2011b). Over 100 dwarf galaxies are identified within 3 Mpc (e.g., McConnachie 2012; Simon 2019). (\u2217JSPS Research Fellow.) Galaxies with stellar masses (M\u2217) less than \u224810^9 M\u2299 are typically categorized as dwarf galaxies. Among them, gas-free dwarf galaxies with M\u2217\u227310^5 M\u2299 are called classical dwarf spheroidal galaxies (dSphs), while those with M\u2217\u227210^5 M\u2299 are identified as ultrafaint dwarf galaxies (UFDs).
Many of the dwarf galaxies in the Local Group are satellites of the MW or M31; interactions with their more massive hosts could affect the chemo-dynamical properties of these satellites (Genina et al. 2019; Kvasova et al. 2024). Satellites exhibit a wide variety of star formation histories (SFHs) and chemical abundances. The SFHs of Local Group dwarf galaxies can be derived from color-magnitude diagrams (CMDs, e.g., de Boer et al. 2012a,b; Weisz et al. 2014; Ren et al. 2024). Weisz et al. (2014) comprehensively studied SFHs in the Local Group dwarf galaxies. They found that more massive systems tend to have more extended SFHs. They also showed that MW or M31 satellites have a shorter duration of star formation than those in the field populations. Chemical abundances reflect the SFHs and nucleosynthesis pathways in satellites (e.g., Tolstoy et al. 2009; Kirby et al. 2010, 2011a,b; Ishigaki et al. 2014; Hill et al. 2019; Sk\u00falad\u00f3ttir et al. 2024). Kirby et al. (2011b) analyzed metallicity distribution functions (MDFs) of the MW\u2019s satellites observed with Keck/DEIMOS using a chemical evolution model. They found that the MDFs of more-luminous systems are well-fit with their Extra Gas Model, which assumes gas infall. However, their best-fit effective yields suggested that gas outflow also played an important role in the chemical evolution of less-luminous systems. Thanks to the difference in the delay times between core-collapse supernovae (CCSNe) and type Ia supernovae (SNe Ia), the ratios of \u03b1-elements (e.g., Mg, Ca, and Si) to Fe are often used as an indicator for the rate of chemical evolution. For example, Hill et al. (2019) reported high-resolution spectroscopy of 99 stars in the Sculptor dSph. They found that the decreasing trend of [\u03b1/Fe] toward higher metallicity starts at [Fe/H] = \u22121.8.
This metallicity is lower than the start of this trend in the MW, indicating that the chemical evolution of Sculptor dSph proceeded more slowly. Numerical simulations have been performed to understand the SFHs and chemical evolution of dwarf galaxies (e.g., Revaz et al. 2009; Okamoto et al. 2010; Revaz & Jablonka 2012, 2018; Hirai et al. 2015, 2017, 2018, 2019; Jeon et al. 2017; Escala et al. 2018; Simpson et al. 2018; Garrison-Kimmel et al. 2019; Applebaum et al. 2021; Di Cintio et al. 2021; Samuel et al. 2022; Rodr\u00edguez et al. 2022). Di Cintio et al. (2021) found that 25% of their simulated satellite dwarf galaxies exhibit an enhancement of star formation after infall to their host. In contrast, the star formation in satellites with little gas or small pericentric distances is quenched after infall due to ram pressure stripping. Escala et al. (2018) introduced the process of metal diffusion in cosmological zoom-in simulations of the Feedback in Realistic Environment (FIRE) project (Hopkins et al. 2014), and analyzed chemical abundances in their simulated dwarf galaxies. They found that the MDFs and intrinsic scatter in [\u03b1/Fe] are similar in satellite and isolated dwarf galaxies, suggesting that internal chemical evolution plays a more important role than environmental effects. (Here, [X/Y] = log(N_X/N_Y) \u2212 log(N_X/N_Y)\u2299, where N_X and N_Y are the number densities of elements X and Y, respectively.)
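The bracket abundance notation used throughout ([X/Y]) reduces to two logarithms; a minimal sketch, where the example number densities are illustrative placeholders and not values from the paper:

```python
import math

def bracket(n_x, n_y, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X/N_Y) - log10(N_X/N_Y)_sun."""
    return math.log10(n_x / n_y) - math.log10(n_x_sun / n_y_sun)

# A star with one tenth the solar Fe-to-H number ratio has [Fe/H] = -1
# (the placeholder densities below are invented for illustration).
print(round(bracket(1.0e-5, 1.0, 1.0e-4, 1.0), 10))  # -> -1.0
```

Only the ratio relative to the Sun matters, so any consistent normalization of the number densities gives the same bracket value.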
This potentially could yield medium-resolution (R \u223c 5,000) spectroscopy of the member stars in some of these galaxies from their centers to their outskirts. The upcoming Subaru Prime Focus Spectrograph (PFS) will target 7 Local Group dwarf galaxies in its Galactic Archaeology survey (Takada et al. 2014). Thanks to its wide field of view (1.25 square degrees) and massively multiplexed spectroscopic capability (2,394 fibers), it can obtain medium-resolution (R \u223c 5,000) spectroscopy for stars with magnitudes g \u227223 in these galaxies. The Subaru PFS will yield radial velocities, [Fe/H], carbon, \u03b1-element, and nickel abundance measurements in each galaxy for \u22481,000 to 14,000 stars, more than ten times the current numbers of stars with these measurements. Comparisons between these observations and cosmological zoom-in simulations will greatly advance our understanding of the chemo-dynamical properties of dwarf galaxies. This study aims to understand the relationship between star formation and chemical evolution in satellite galaxies. With our high-resolution cosmological zoom-in simulation of a MW-like galaxy, we examine SFHs, MDFs, and \u03b1-element abundances in satellites with M\u2217 \u223c 10^5\u201310^7 M\u2299, corresponding to the mass ranges of satellite dSphs of the MW. We show how SFHs are reflected in MDFs and \u03b1-element abundances using our simulation. We then evaluate the capability of upcoming surveys to reconstruct the SFHs from the chemical abundances of dwarf galaxies. This paper is organized as follows. Section 2 describes our code, the adopted initial conditions, and the procedures used for carrying out mock observations. In Section 3, we describe the chemo-dynamical properties of our simulated satellites. Section 4 discusses how SFHs are reflected in chemical abundances, and how these can be observed in future surveys. Our conclusions are presented in Section 5. 2. METHODS 2.1.
Code We have computed the evolution of satellite galaxies in a cosmological zoom-in simulation of a MW-like galaxy performed by Hirai et al. (2022). In this simulation, we adopted the N-body/density-independent smoothed particle hydrodynamics code asura (Saitoh et al. 2008, 2009; Saitoh & Makino 2013, 2016). For cooling and heating calculations, we adopted cloudy ver. 13.05 (Ferland et al. 2013). Gas particles probabilistically form stars if they are in a region with a number density of hydrogen atoms higher than 100 cm^\u22123, the temperature is lower than 1,000 K, and there are converging flows (\u2207 \u00b7 v < 0, e.g., Hirai et al. 2021). Each star particle is treated as a simple stellar population (SSP) with the initial mass function (IMF) of Chabrier (2003) from 0.1 M\u2299 to 100 M\u2299. Star particles with ages less than 10 Myr heat the surrounding gas to 10^4 K (Fujii et al. 2021). We implemented momentum-based supernova feedback following Hopkins et al. (2018a). Metal diffusion was incorporated following Hirai & Saitoh (2017). The cosmic ultra-violet (UV) heating was implemented following Haardt & Madau (2012). Reionization is assumed to occur at a redshift (z) of 8.5. We also assumed the self-shielding model of Rahmati et al. (2013). We adopted the nucleosynthetic yields compiled in the Chemical Evolution Library (celib, Saitoh 2017). CCSNe and SNe Ia are the dominant contributors to the evolution of the [\u03b1/Fe] ratios. For CCSNe, we use the yields of Nomoto et al. (2013) for progenitors from 13 M\u2299 to 40 M\u2299. Given the mass of the star particle, we integrated the IMF downward from its maximum stellar mass until the cumulative number of stars in the integration range became unity. This approach enabled the tracking of the contribution from CCSNe with different progenitor masses in sufficiently high-resolution simulations.
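The top-down IMF integration for discretizing CCSN progenitors described above can be sketched as follows. This is an illustrative reimplementation, not the asura code: a single Salpeter-like power law (slope 2.35) stands in for the Chabrier (2003) IMF, and the bisection depth is arbitrary.

```python
# Assign discrete CCSN progenitor masses to a star particle by walking the
# IMF down from its maximum mass until each bin holds exactly one star.
ALPHA = 2.35                # assumed power-law slope, dN/dm ~ m**-ALPHA
M_MIN, M_MAX = 0.1, 100.0   # IMF mass range (Msun), as in the paper

def _integral(expnt, a, b):
    """Integral of m**expnt from a to b (expnt != -1)."""
    p = expnt + 1.0
    return (b**p - a**p) / p

def ccsn_progenitors(m_particle, m_ccsn_lo=13.0):
    """Return representative progenitor masses, one per expected star."""
    # Normalization A such that the total stellar mass equals m_particle.
    A = m_particle / _integral(1.0 - ALPHA, M_MIN, M_MAX)
    masses, hi = [], M_MAX
    while hi > m_ccsn_lo:
        # Bisect for lo such that the expected star count in [lo, hi] is 1.
        a, b = M_MIN, hi
        for _ in range(60):
            mid = 0.5 * (a + b)
            if A * _integral(-ALPHA, mid, hi) < 1.0:
                b = mid
            else:
                a = mid
        lo = 0.5 * (a + b)
        if lo <= m_ccsn_lo and A * _integral(-ALPHA, M_MIN, hi) < 1.0:
            break  # fewer than one star remains in the whole IMF
        masses.append(0.5 * (lo + hi))  # representative bin mass
        hi = lo
    return masses

progs = ccsn_progenitors(4.5e3)  # star-particle mass used in the paper
print(len(progs), round(progs[0], 1))
```

With the Chabrier IMF and the paper's binning the count differs (100 bins over 13–40 M⊙); the sketch only illustrates the walk-down logic.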
When the stellar particle mass (m\u2217) was 4.5 \u00d7 10^3 M\u2299, the IMF for CCSNe (13\u201340 M\u2299) was divided into 100 bins. For SNe Ia, we assumed a delay-time distribution with a power-law index of \u22121 and a minimum delay time of 40 Myr, following Maoz et al. (2012). We also included the contribution of asymptotic giant branch (AGB) stars for stars with 1 to 8 M\u2299 (Karakas 2010; Doherty et al. 2014). We adopted the solar abundances of Asplund et al. (2009). 2.2. Initial Conditions A MW-like halo was selected from the cosmological simulation with a box size of (36 h^\u22121 Mpc)^3. We adopted cosmological parameters of \u2126m = 0.308, \u2126\u039b = 0.692, \u2126b = 0.0484, and H0 = 67.8 km s^\u22121 Mpc^\u22121 (Planck Collaboration et al. 2016). An initial condition for the zoom-in simulation was generated by music (Hahn & Abel 2011). We used the Amiga Halo Finder (ahf, Gill et al. 2004; Knollmann & Knebe 2009) to find the target halo.

Table 1. List of Simulated Satellite Galaxies at z = 0.

Halo ID | Mhalo (M\u2299) | M\u2217 (M\u2299)  | \u27e8[Fe/H]\u27e9 | \u03c3[Fe/H] (dex) | d (kpc)
9       | 7.5 \u00d7 10^9  | 7.5 \u00d7 10^6 | \u22121.95    | 0.23          | 204.2
12      | 4.7 \u00d7 10^9  | 2.1 \u00d7 10^7 | \u22121.08    | 0.58          | 148.8
36      | 2.2 \u00d7 10^9  | 1.1 \u00d7 10^7 | \u22121.43    | 0.37          | 54.5
38      | 2.5 \u00d7 10^9  | 1.3 \u00d7 10^5 | \u22121.52    | 0.52          | 198.7
40      | 2.3 \u00d7 10^9  | 3.9 \u00d7 10^6 | \u22121.52    | 0.46          | 57.9
150     | 5.9 \u00d7 10^8  | 2.4 \u00d7 10^5 | \u22122.53    | 0.24          | 190.7
151     | 6.0 \u00d7 10^8  | 7.7 \u00d7 10^4 | \u22122.89    | 0.43          | 167.2
167     | 5.2 \u00d7 10^8  | 3.1 \u00d7 10^4 | \u22124.34    | 0.14          | 206.6
199     | 4.2 \u00d7 10^8  | 2.8 \u00d7 10^4 | \u22123.42    | 0.28          | 169.2

Note\u2014From left to right, the columns are the Halo ID, the total halo mass within the virial radius (Mhalo), the total stellar mass (M\u2217), the mean [Fe/H] (\u27e8[Fe/H]\u27e9), the dispersion of [Fe/H] (\u03c3[Fe/H]), and the distance from the center of the central galaxy (d). M\u2217, \u27e8[Fe/H]\u27e9, and \u03c3[Fe/H] are computed within the half-mass radius.
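The SN Ia delay-time distribution adopted above (rate \u221d t^\u22121 beyond a 40 Myr minimum delay) integrates to a logarithm; a hedged sketch, where the absolute Ia productivity per unit stellar mass is an assumed placeholder, not a value from the paper:

```python
import math

T_MIN, T_MAX = 0.04, 13.8   # Gyr: 40 Myr minimum delay, Hubble-time cutoff
N_IA_PER_MSUN = 1.3e-3      # assumed total Ia productivity (illustrative)

def n_ia(m_formed, t1, t2):
    """Expected SNe Ia from m_formed (Msun) exploding between t1 and t2 (Gyr).

    For a t**-1 delay-time distribution the integral over [t1, t2] is
    proportional to log(t2/t1).
    """
    t1, t2 = max(t1, T_MIN), min(t2, T_MAX)
    if t2 <= t1:
        return 0.0
    norm = N_IA_PER_MSUN / math.log(T_MAX / T_MIN)
    return m_formed * norm * math.log(t2 / t1)

# Half of all SNe Ia explode before the geometric mean of the two limits
# (~0.74 Gyr), a direct consequence of the logarithmic cumulative count.
early = n_ia(4.5e3, 0.0, math.sqrt(T_MIN * T_MAX))
total = n_ia(4.5e3, 0.0, 20.0)
print(round(early / total, 3))  # -> 0.5
```

The logarithmic cumulative count is why SNe Ia from an old burst keep polluting gas a gigayear later, as the text invokes for the quenching episodes.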
In this simulation, the initial masses of each particle in the finest region were 7.2 \u00d7 10^4 M\u2299 for dark matter, 1.3 \u00d7 10^4 M\u2299 for gas, and 4.5 \u00d7 10^3 M\u2299 for stars. We set the gravitational softening length (\u03f5g) to 85 pc for dark matter and 82 pc for gas and stars. We performed the simulation from z = 100 to 0. In this simulation, we picked out satellites orbiting the central galaxy. We only considered those with a minimum of 10^4 dark matter particles and 10 star particles, and made sure that they were not false substructures introduced by the contamination from low-resolution particles. Table 1 lists the simulated satellite galaxies selected for this study. 2.3. Mock Observations We performed mock observations for Subaru PFS (see Section 4.2). (Sanderson et al. 2020 also discussed mock observations of galaxy simulations in detail.) For the mock observation, we computed the magnitudes of simulated stars. First, SSP particles were divided into individual stars. In this model, stars from 0.1 M\u2299 to 100 M\u2299 were probabilistically generated from SSP particles, following a Chabrier (2003) IMF. Stars were generated until the total generated stellar mass exceeded the particle\u2019s mass. Then, the magnitudes of each star were computed using the isochrone table obtained from cmd 3.7 (Girardi et al. 2000, and updates thereof). We generated isochrones with ages from 4 Gyr to 13.8 Gyr and [M/H] from \u22122.0 to 0.0 based on the PARSEC-COLIBRI stellar-evolutionary tracks (Bressan et al. 2012; Chen et al. 2014, 2015; Tang et al. 2014; Marigo et al. 2017; Pastorelli et al. 2019, 2020). With this tool, we computed apparent V-band magnitudes for stars in Halos 12 and 40. We assume Halos 12 and 40 are located at 147 kpc and 86 kpc from an observer to compare with the Fornax and Sculptor dSphs, respectively (McConnachie 2012). We then applied the Subaru PFS spectral synthesis pipeline (roughly based on Kirby et al.
2010; Escala et al. 2019) to compute observed uncertainties. The pipeline adopts synthetic spectra of K-giants and G-dwarfs for \u22124.0 \u2264 [Fe/H] \u2264 \u22120.5. We calculated wavelength-dependent continuum signal-to-noise ratios with the Subaru PFS Exposure Time Calculator (https://github.com/Subaru-PFS/spt_ExposureTimeCalculator) using the simulated stars\u2019 V-band magnitudes, assuming a three-hour exposure in the Subaru PFS median-resolution mode for K giants. Then, we computed uncertainties on [Fe/H] and [\u03b1/Fe] by resampling the synthetic spectra hundreds of times from Gaussian-distributed per-pixel noise inversely proportional to the estimated signal-to-noise ratios. The simulated chemical abundances of stars are varied within those estimated uncertainties. 3. RESULTS 3.1. Structures and Star Formation Histories This paper mainly discusses the chemo-dynamical evolution of Halos 12, 40, and 150, listed in Table 1. The [\u03b1/Fe] ratios as a function of [Fe/H] for Halos 9 and 36 are shown in the Appendix. We select these three simulated dwarf galaxies based on their stellar mass (Halo 12: 2.1 \u00d7 10^7 M\u2299, Halo 40: 3.9 \u00d7 10^6 M\u2299, and Halo 150: 2.4 \u00d7 10^5 M\u2299). These values are similar to those of the Fornax (2.0 \u00d7 10^7 M\u2299), Sculptor (2.3 \u00d7 10^6 M\u2299), and Draco (2.9 \u00d7 10^5 M\u2299) dSphs (McConnachie 2012). Also, Halos 12, 40, and 150 currently contain no gas. Figure 1 shows the stellar mass distribution of Halos 12, 40, and 150 at z = 0. The half-mass (light) radii of these galaxies are 1,334 pc (Halo 12), 874 pc (Halo 40), and 1,346 pc (Halo 150), respectively. The somewhat larger radii than the observed ones (Fornax: 710 pc, Sculptor: 283 pc, Draco: 221 pc, McConnachie 2012) are due to the spatial resolution of this simulation (\u03f5g = 85 pc). (cmd is available at http://stev.oapd.inaf.it/cgi-bin/cmd . Here, [M/H] = log(Z/X) \u2212 log(Z/X)\u2299, where X and Z are the mass fractions of hydrogen and metals, respectively.)
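The SSP-to-stars step of the mock observation (drawing individual stellar masses until the particle mass is exceeded) can be sketched like this; a single power law again stands in for the Chabrier (2003) IMF, so the star counts are only illustrative:

```python
import random

ALPHA, M_MIN, M_MAX = 2.35, 0.1, 100.0  # assumed power-law IMF stand-in

def draw_mass(rng):
    """Inverse-transform sample of dN/dm ~ m**-ALPHA on [M_MIN, M_MAX]."""
    p = 1.0 - ALPHA
    u = rng.random()
    return ((1.0 - u) * M_MIN**p + u * M_MAX**p) ** (1.0 / p)

def spawn_stars(m_particle, seed=0):
    """Draw stars until the generated mass exceeds the particle's mass."""
    rng = random.Random(seed)
    stars, total = [], 0.0
    while total < m_particle:
        m = draw_mass(rng)
        stars.append(m)
        total += m
    return stars

stars = spawn_stars(4.5e3)  # the paper's star-particle mass (4.5e3 Msun)
print(len(stars), round(sum(stars), 1))
```

With a steep IMF the mean stellar mass is a fraction of a solar mass, so a single 4.5 × 10^3 M⊙ particle yields thousands of individual stars to feed into the magnitude and noise pipeline.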
The simulated satellite dwarf galaxies exhibit various SFHs. Figure 2 shows the cumulative SFHs of all satellites listed in Table 1. The SFHs of satellite galaxies are affected by SN feedback, cosmic reionization, and interactions with the host galaxy. This figure shows that more massive satellites tend to have extended SFHs, while less massive halos quench star formation earlier. Star formation in halos with masses < 10^9 M\u2299 (Halos 150, 151, 167, and 199) is quenched at < 2 Gyr from the beginning of the simulation by cosmic reionization and SN feedback, while halos with \u226510^9 M\u2299 form stars after the reionization epoch. Gas accreted before reionization in halos with \u226510^9 M\u2299 is self-shielded against the UV background, allowing these halos to survive reionization (e.g., O\u00f1orbe et al. 2015; Wheeler et al. 2019). Hereafter, we focus on three satellites: Halos 12, 40, and 150. The mass and the cosmic infall time also affect the SFHs. Figure 3 shows the orbits (top panels), mass evolution (middle panels), and SFHs (bottom panels) of Halos (a) 12, (b) 40, and (c) 150. Halo 12 has the most recent infall time. The first pericentric passage (5 kpc) of this galaxy is 0.7 Gyr prior to the end of the simulation (Figure 3 (a), top panel). Prior to pericentric passage, this galaxy experienced two star formation events separated by 2.9 Gyr (Figure 3 (a), bottom panel). The first period of star formation starts at 0.1 Gyr and ends at 3.3 Gyr from the beginning of the simulation. During this period, stars are formed along with the accretion of material (Figure 3 (a), middle panel). After SNe expel the gas away from the halo, the infall of the gas forms new stars. This interplay episodically forms stars for 3.2 Gyr. The second star formation event begins when the accretion of a halo brings additional material to the halo at 6.2 Gyr. As with the first period of star formation, it is regulated by SN feedback.
The star formation is quenched when feedback from CCSNe from the recent star formation (t \u227210 Myr ago) and SNe Ia from previous star formation (t \u223c1 Gyr ago) expel the gas from the galaxy at 9.5 Gyr. Halo 40 has a shorter total duration of star formation, mainly due to the earlier infall time than that of Halo 12. Halo 40 crosses the main halo\u2019s virial radius (Rvir) at 7.4 Gyr, while Halo 12 experiences its closest pericenter passage at 12.6 Gyr. Due to the early infall, repeated gas removal by ram pressure stripping prevents additional star formation in the later phase. The evolution of gas mass after the first infall is due to our analysis method. The increase in the tidal radius of the halo around the apocenter accretes more diffuse gas around the galaxy, resulting in the increase of the detected gas mass of this halo.

Figure 1. Stellar distribution of simulated satellite dwarf galaxies for (a) Halo 12, (b) Halo 40, and (c) Halo 150. The color scale depicts each grid\u2019s log scale stellar-mass fraction. Most stars are spherically distributed at the center of their dark matter halo.

Figure 2. Cumulative SFHs of simulated dwarf satellites, as listed in Table 1. Less massive halos (e.g., Halos 151, 167, and 199) tend to quench star formation earlier than more massive halos (e.g., Halos 9, 12, and 36).
Although gas mass evolution is shown here, these gas particles are not eligible to form stars. Halo 40 experienced star formation in the first 2.8 Gyr. As shown in the bottom panel of Figure 3 (b), there are five peaks of star formation, separated by 0.40 to 0.97 Gyr. The SFH in this halo is also mainly regulated by SN feedback. As shown in the bottom panel of Figure 3 (b), stars are formed during cosmic reionization. After the star formation is quenched at 0.83 Gyr, an additional gas supply resumes star formation at 1.79 Gyr. Eventually, star formation is halted at 2.76 Gyr. This quenching is mainly caused by the heating by CCSNe from the recent star formation and SNe Ia from the previous star formation, due to their delay times. Since Halo 40 is located at a distance five times larger than the virial radius of the main halo at 2.76 Gyr, ram pressure stripping is unlikely to be the main cause responsible for the suppression of star formation. Halo 150 has the shortest duration of star formation among the halos shown in Figure 3. Figure 3 (c) shows the cosmic time evolution of Halo 150. The top panel shows that this halo experienced at least two pericenter passages. Note that we cannot follow the mass evolution before 4.84 Gyr, because the progenitor halos are undetected by the halo finder. As shown in the bottom panel of Figure 3 (c), the first episode of star formation lasts 0.47 Gyr, and is quenched by cosmic reionization. In this episode, 80% of its stars are formed. The second star formation event occurs at 1.66 Gyr, possibly because of the gas infall, but it is quenched quickly.

Figure 3.
Cosmic time evolution of (a) Halo 12, (b) Halo 40, and (c) Halo 150. Top sub-panels: The orbital distance (blue) and the time evolution of the virial radius of the main halo (orange). Middle sub-panels: The dark matter (blue-solid) and gas (orange-dashed) mass evolution. Bottom sub-panels: star formation histories. The grey line represents the epoch of reionization (z = 8.5). The light-grey shaded region in panel (c) means the halo finder cannot follow the mass evolution.

3.2. Chemical Abundances The MDFs of stellar systems reflect their histories of star formation, gas infall, and gas outflow; Figure 4 shows MDFs of Halos 12, 40, and 150. We also plot the observed MDFs of the Fornax, Sculptor, and Draco dSphs (Kirby et al. 2010). It should be noted that the purpose of our study is not to reproduce the MDFs of the observed dSphs. Rather, we compare simulated and observed MDFs in Section 4.1. The MDF of Halo 12 exhibits a bimodal distribution, reflecting two major star formation events (the bottom panel of Figure 3 (a)). All stars with [Fe/H] < \u22121.5 are formed within 3.3 Gyr from the beginning of the simulation. These stars are mainly located in the outskirts of the galaxy. For stars with [Fe/H] < \u22121.5, 28.5% of them are within rh, while 71.5% of stars with [Fe/H] \u2265\u22121.5 are within rh. As shown by the green-dashed line in Figure 4 (a), the fraction of stars with [Fe/H] < \u22121.5 in the MDF is significantly decreased for stars within rh. Stars around [Fe/H] = \u22121.2 and [Fe/H] = \u22120.8 are associated with star formation events around 8.0 Gyr and 9.5 Gyr, respectively. As shown in the middle panel of Figure 3 (a), these stars are formed from gas infall. Figure 4 (b) shows the MDF of Halo 40. The MDF is broadly distributed over \u22123.0 \u2272[Fe/H] \u2272\u22121.0. Stars around [Fe/H] = \u22122.3, \u22121.8, and \u22121.3 reflect star formation at different cosmic times.
For [Fe/H] < \u22122.5, all stars formed before 1.0 Gyr from the beginning of the simulation. For stars with \u22122.5 < [Fe/H] < \u22122.0, half of them are formed at t < 1.0 Gyr, while others are formed at 1.7 < t/(Gyr) < 2.3, simultaneously with stars with \u22122.0 < [Fe/H] < \u22121.5. Stars with [Fe/H] > \u22121.5 have younger ages. All of these stars are formed after 2.2 Gyr. Although there is an overlap in the ages of each peak, the peaks in the MDFs indicate star formation at different cosmic times. In Figure 4 (b), we also plot the MDF for stars within rh. Unlike for Halo 12, the MDFs are not largely affected by the spatial distribution of stars. Figure 4 (c) shows the MDF for Halo 150. As shown in Figure 3 (c), Halo 150 exhibits two star formation events. The second peak of star formation produces stars with \u22122.3 < [Fe/H] < \u22122.1, while the first star formation event mainly forms stars with [Fe/H] < \u22124.0. The number fraction of these ultra metal-poor (UMP) stars is 75.6% for all UMP stars and 64.9% for UMP stars within rh. These stars largely affect the median metallicity of this galaxy. The median metallicity is [Fe/H] = \u22124.36 for all stars, but [Fe/H] = \u22122.16 for stars with [Fe/H] > \u22124.0. Stars with different ages clearly differ in the [\u03b1/Fe] vs. [Fe/H] space.

Figure 4. Simulated (blue-solid line) and observed (orange-dashed line) MDFs for (a) Halo 12 and Fornax, (b) Halo 40 and Sculptor, and (c) Halo 150 and Draco. The green-dashed line represents the MDFs for stars within rh. The simulated data do not include simulated observational errors. Observed data are taken from Kirby et al. (2010).

Figure 5 (a) shows [\u03b1/Fe], as a function of [Fe/H], in Halo 12. This galaxy has two major star formation events (Figure 3 (a)). The first event (13.7 Gyr to 10.5 Gyr ago) forms the decreasing trend of [\u03b1/Fe] from [Fe/H] = \u22122.5 to [Fe/H] = \u22121.0. Also, there is roughly a \u223c1 dex scatter in the [\u03b1/Fe] ratios. The episodic star formation creates these features during the first major star formation event. The first star formation episode (\u226513 Gyr ago) forms the high-\u03b1 ([\u03b1/Fe] > +0.3) component. The interstellar medium (ISM)\u2019s inhomogeneity results in a widely distributed metallicity (\u22123.0 < [Fe/H] < \u22121.5). The low-\u03b1 (\u22120.3 < [\u03b1/Fe] < \u22120.1) and very metal-poor (\u22122.5 < [Fe/H] < \u22122.2) components come from another dwarf galaxy accreted onto Halo 12. The subsequent star formation episodes (12.0 Gyr to 10.5 Gyr ago) produce the decreasing trend of [\u03b1/Fe] ratios due to the substantial contribution from SNe Ia. In contrast, the second star formation event (7.6 Gyr to 4.3 Gyr ago) produces an increasing trend of the [\u03b1/Fe] ratios for [Fe/H] > \u22121.5. This trend suggests that stars are preferentially formed from the ejecta of CCSNe. During the second major star formation event, stars are mainly produced at the galaxy\u2019s center. Young stars give rise to CCSNe mainly at the center, while SNe Ia, owing to their delay times, occur in a more extended region, farther from the star-forming region. This difference in the spatial distribution results in the formation of stars reflecting the yields of CCSNe. Since Si also exhibits a similar behavior, AGB stars are unlikely to contribute to forming this trend.
Figure 5 (b) shows [\u03b1/Fe], as a function of [Fe/H], in Halo 40. From inspection, five peaks of star formation (Figure 3 (b)) produce groups of stars with different [Fe/H] and [\u03b1/Fe] ratios. The first peak of star formation (13.4 Gyr ago) produces stars with [Fe/H] < \u22122.3 and [\u03b1/Fe] > +0.3. Since it is the earliest phase of the star formation, CCSNe are the dominant contributor to the enrichment, resulting in a flat trend of [\u03b1/Fe] as a function of [Fe/H]. A few stars with [Fe/H] > \u22122.0 and [\u03b1/Fe] \u22480.2 are formed from the ejecta of Population III CCSNe. The second peak of star formation (13.0 Gyr ago) forms stars with \u22122.5 < [Fe/H] < \u22122.0 and +0.1 < [\u03b1/Fe] < +0.5. The contribution of SNe Ia from the stars produced in the first peak of star formation makes this second group of stars, with lower [\u03b1/Fe] and higher [Fe/H] than the first group. Subsequent star formation and the contributions of SNe Ia from the previous peaks of star formation produce groups of stars with lower [\u03b1/Fe] and higher [Fe/H]. The third peak of star formation (12.0 Gyr ago) creates groups of stars with \u22122.5 < [Fe/H] < \u22121.7 and \u22120.3 < [\u03b1/Fe] < +0.2. This group has the lowest [\u03b1/Fe] ratios because of the contribution of SNe Ia from the previous two star formation peaks. The fourth peak of star formation (11.6 Gyr ago) produces stars with the same [Fe/H] range but higher [\u03b1/Fe] ratios (0.0 < [\u03b1/Fe] < +0.4). This group of stars reflects the ejecta from CCSNe formed in the third peak of star formation. The final star formation event (11.0 Gyr ago) forms stars with \u22121.5 < [Fe/H] < \u22121.0 and \u22120.2 < [\u03b1/Fe] < +0.2. Because of its short duration (\u223c100 Myr), stars are mainly formed from the ejecta of CCSNe. Figure 5 (c) shows [\u03b1/Fe] as a function of [Fe/H] in Halo 150.
Although stars are too few to discuss the trend of the [\u03b1/Fe] ratios, stars formed at different times exhibit distinct differences in [\u03b1/Fe] ratios. Stars formed \u226513.4 Gyr ago show [\u03b1/Fe] > +0.2, reflecting the yields of CCSNe. Different [\u03b1/Fe] ratios originate from CCSNe with different progenitor masses. A clear separation of star formation events (1.24 Gyr, Figure 3 (c)) yields stars formed in the second star formation peak with lower [\u03b1/Fe] ratios owing to the contribution of SNe Ia. The dispersion of the [\u03b1/Fe] ratios reflects the degree of the ISM\u2019s inhomogeneity. We quantified the scatter for [\u03b1/Fe] in \u22123 < [Fe/H] < \u22120.5 following Escala et al. (2018). These authors defined the intrinsic scatter as the standard deviation of the distribution of distances between the stars\u2019 [\u03b1/Fe] ratios and a cubic-spline fit to the data. For Halos 12 and 40, the intrinsic scatter of [\u03b1/Fe] is 0.18 dex and 0.16 dex, respectively. These are similar to the estimated intrinsic scatter (Escala et al. 2018) of the Fornax (0.14 dex) and Sculptor dSphs (0.078 dex), meaning that the simulated and observed satellites have ISM inhomogeneity that gives rise to scatter \u22720.2 dex for the [\u03b1/Fe] ratios. The radial metallicity distribution reflects spatial variations in star formation. Star formation in the inner region of Halos 12 and 40 lasts longer than that in the outer region. Figures 6 (a) and (b) show radial [Fe/H] distributions in Halos 12 and 40, respectively. Both galaxies have a negative slope of [Fe/H], as a function of the distance from the center, reflecting the difference in the spatial distribution of the stars with different ages. The youngest stars in these galaxies are located within 3 kpc, while stars with ages of > 13 Gyr have a more extended spatial distribution, out to 5 kpc. The radial [\u03b1/Fe] distribution exhibits positive slopes (Figures 6 (c) and (d)).
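The intrinsic-scatter measurement just described can be sketched on synthetic data; note that a binned running mean stands in for the cubic-spline fit of Escala et al. (2018), and the mock abundances (0.15 dex injected scatter) are invented for illustration:

```python
import math
import random

def intrinsic_scatter(feh, afe, nbins=10):
    """Std. dev. of [alpha/Fe] residuals about a binned-mean trend curve."""
    lo, hi = min(feh), max(feh)
    width = (hi - lo) / nbins or 1.0
    sums, counts = [0.0] * nbins, [0] * nbins
    for x, y in zip(feh, afe):
        i = min(int((x - lo) / width), nbins - 1)
        sums[i] += y
        counts[i] += 1
    trend = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    resid = [y - trend[min(int((x - lo) / width), nbins - 1)]
             for x, y in zip(feh, afe)]
    mean = sum(resid) / len(resid)
    return math.sqrt(sum((r - mean) ** 2 for r in resid) / len(resid))

rng = random.Random(1)
feh = [rng.uniform(-3.0, -0.5) for _ in range(2000)]
# Declining [alpha/Fe] trend plus 0.15 dex Gaussian scatter (synthetic).
afe = [0.4 - 0.2 * (x + 3.0) / 2.5 + rng.gauss(0.0, 0.15) for x in feh]
print(round(intrinsic_scatter(feh, afe), 2))
```

Subtracting the trend first is what makes this an *intrinsic* scatter: the smooth chemical-evolution track is removed, leaving only the bin-to-bin inhomogeneity.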
Because newer stars located in the center of the galaxies are more affected by SNe Ia, the average [\u03b1/Fe] ratio near the galactic center is lower than in the outskirts. These radial [Fe/H] and [\u03b1/Fe] gradients are caused by old and metal-poor populations in the outskirts. This result highlights the importance of measuring the chemical abundances of stars in the outer regions of dwarf satellites.

Figure 5. The \u03b1-element distributions for (a) Halo 12, (b) Halo 40, and (c) Halo 150. The color bars indicate the ages of the stars. The simulated data do not include simulated observational errors.

The kinematics of stars also differ among stars with different metallicities. Figures 7 (a) and (b) show the line-of-sight velocities (vlos) as a function of [Fe/H]. We computed vlos assuming that Halos 12 and 40 are located in the equatorial coordinates of Fornax and Sculptor (Hayashi et al. 2020), respectively, i.e., we observe Halos 12 and 40 at the positions of the Fornax and Sculptor dSphs, as seen from the position of the Sun in the Milky Way. The dispersion of vlos for [Fe/H] \u2264\u22121.5 is 19.3 km s^\u22121 in Halo 12 and 19.2 km s^\u22121 in Halo 40. On the other hand, stars with [Fe/H] > \u22121.5 have smaller dispersion: 15.0 km s^\u22121 (Halo 12) and 16.8 km s^\u22121 (Halo 40). These results confirm the existence of kinematically distinct populations in satellites (e.g., Tolstoy et al. 2004; Battaglia et al. 2006).
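The metallicity-split velocity-dispersion comparison can be sketched in the same spirit; the mock sample below is entirely synthetic, with input dispersions chosen to mimic the Halo 12 numbers quoted above:

```python
import math
import random

def sigma(vs):
    """Population standard deviation of a list of velocities."""
    mu = sum(vs) / len(vs)
    return math.sqrt(sum((v - mu) ** 2 for v in vs) / len(vs))

# Synthetic two-population sample: a metal-poor component with a larger
# velocity dispersion and a metal-rich one with a smaller dispersion.
rng = random.Random(2)
stars = ([(-2.0 + rng.gauss(0.0, 0.3), rng.gauss(55.0, 19.3))
          for _ in range(1500)] +
         [(-1.0 + rng.gauss(0.0, 0.2), rng.gauss(55.0, 15.0))
          for _ in range(1500)])

# The split at [Fe/H] = -1.5 follows the text.
poor = [v for feh, v in stars if feh <= -1.5]
rich = [v for feh, v in stars if feh > -1.5]
print(round(sigma(poor), 1), round(sigma(rich), 1))
```

Because the two metallicity distributions overlap slightly, the recovered dispersions are pulled a little toward each other, which is also why a clean [Fe/H] cut only approximately isolates the kinematic populations.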
Simulated Dwarf Satellites of the Milky Way. Figure 6. Radial [Fe/H] distributions for (a) Halo 12 (slope \u22120.22 \u00b1 0.01 dex per kpc) and (b) Halo 40 (\u22120.13 \u00b1 0.01 dex per kpc), and radial [\u03b1/Fe] distributions for (c) Halo 12 (+0.05 \u00b1 0.002 dex per kpc) and (d) Halo 40 (+0.08 \u00b1 0.01 dex per kpc). The color bars indicate the ages of the stars. The simulated data do not include simulated observational errors. The red line is the least squares linear fit for the data. The slope is shown in each panel. 4. DISCUSSION 4.1. Chemo-dynamical Evolution of Satellites Here, we discuss the chemo-dynamical evolution of the MW\u2019s satellites by comparing simulations and observations. The relationship between orbits and SFHs has been argued to explain the variety of observed SFHs seen in the MW\u2019s satellites. Miyoshi & Chiba (2020) computed the orbital motions of MW satellites, including Fornax, Leo I, Sculptor, and Draco, with a time-varying gravitational potential based on the Gaia Data Release 2 (Gaia Collaboration et al. 2018) proper motions, and compared them with SFHs. They found that the infall times of classical dSphs coincide well with the peaks of their star formation rates (SFRs), while UFDs had already been quenched before their infall times. Simulated satellites have some similarities to the galaxies analyzed by Miyoshi & Chiba (2020). Halo 12 is similar to the Fornax dSph in terms of its stellar mass and SFH.
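The slopes quoted in the Figure 6 panels come from ordinary least squares lines with a 1-sigma uncertainty on the slope; a minimal sketch follows (the function name and the textbook OLS standard-error formula are illustrative assumptions, not from the paper):

```python
def linear_fit_with_error(x, y):
    # ordinary least squares y = a + b*x; returns slope b and its 1-sigma error
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    # residual variance with n - 2 degrees of freedom
    s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b, (s2 / sxx) ** 0.5
```

Applied to ([Fe/H], distance) pairs this returns a gradient in dex per kpc together with its quoted uncertainty.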
Both galaxies have intermediate-age (4\u20138 Gyr) and old (> 10 Gyr) stellar populations. The orbit of Halo 12 is similar to that of Leo I. Both Halo 12 and Leo I experienced one pericenter passage throughout their orbits. Stellar mass, orbits, and SFHs are similar between Halo 40 and the Sculptor dSph. These galaxies formed most of their stars prior to infall. Halo 150 is similar to the Draco dSph regarding stellar mass, orbits, and SFHs. These galaxies also comprise old (> 10 Gyr) stellar populations. These results suggest that star formation in the intermediate-age and old stars of these galaxies was regulated by SN feedback and gas inflow, as we have argued in Section 3.1. The major difference between our simulation and the MW\u2019s satellites is the star formation after infall. Our simulation does not exhibit the enhancement of the SFR at the time of infall that has been observed by Miyoshi & Chiba (2020). Hirai et al. Figure 7. Line-of-sight velocities (vlos) as a function of [Fe/H] in (a) Halo 12 and (b) Halo 40. The simulated data do not include simulated observational errors. The orange dashed line shows the standard deviation of vlos as a function of [Fe/H]. Di Cintio et al. (2021) showed that galaxies should satisfy two conditions to enhance star formation after infall: (1) galaxies must have cold gas with at least 10^\u22122 times the virial mass of the halo at the time of infall, and (2) the pericentric distance should be larger than 10 kpc. None of the galaxies analyzed in this study satisfy these conditions. The strength and treatment of SN feedback strongly affect the SFHs and gas outflows of simulated dwarf galaxies. Since galaxy formation simulations cannot resolve the evolution of SN remnants, we need to rely on subgrid feedback models (e.g., Naab & Ostriker 2017; Hopkins et al. 2018a).
Revaz & Jablonka (2012) performed isolated dwarf galaxy simulations with different strengths of SN feedback. Their simulations showed that star formation lasted < 1 Gyr in their strongest feedback case, while stars formed continuously over 14 Gyr if they adopted a level of feedback 100 times weaker than the strongest one (also see Hazenfratz et al. 2024). Xu et al. (2022) suggested that the mass-loading factors (the ratio of the outflow rate to the star formation rate) in dwarf galaxies (M\u2217 \u223c 10^4\u201310^7 M\u2299) observed in the extremely metal-poor representatives explored by the Subaru survey project (e.g., Kojima et al. 2020; Matsumoto et al. 2022; Isobe et al. 2023; Nishigaki et al. 2023; Xu et al. 2024) are \u223c10 to 100 times lower than those predicted by galaxy formation simulations. These results highlight the importance of studying the effects of feedback on the SFHs of dwarf galaxies. MDFs reflect the SFHs and gas infall/outflow of dwarf galaxies. Kirby et al. (2011b) showed that the Fornax dSph has a narrow MDF with \u03c3 = 0.36 dex. The Leo I dSph exhibits a similar MDF. Their chemical evolution model suggested that these galaxies experienced gas infall that shaped the narrow MDF. Halo 12 also exhibits a narrow MDF (\u03c3 = 0.20 dex) for stars with [Fe/H] > \u22121.5. As described in Section 3, these stars are formed by gas infall. These results suggest that gas infall plays an important role in the chemical evolution of the Fornax and Leo I dSphs. The Sculptor dSph has a broader MDF (\u03c3 = 0.46 dex) than those of the Fornax and Leo I dSphs (Kirby et al. 2013). Kirby et al. (2011b) found that none of their chemical evolution models reproduce Sculptor\u2019s MDF. This problem is resolved if the chemical evolution model is altered to a more appropriate choice of the SFH and of the parameters for SNe Ia (Kirby et al. 2011a; Homma et al. 2015). Homma et al. (2015) interpreted Sculptor\u2019s SFH derived by de Boer et al.
(2012a) with a chemical evolution model similar to that of Kirby et al. (2011b). They found that dSphs with a larger fraction of stars formed in the early phase have a more elongated low-metallicity tail in the MDF. Halo 40 in our simulation also exhibits a broad MDF (\u03c3 = 0.46 dex) similar to Sculptor\u2019s MDF. This broad MDF is formed by episodic star formation (Figure 3 (b)), rather than the continuous SFH assumed in the one-zone chemical evolution models (Kirby et al. 2011a; Homma et al. 2015). From inspection of Figure 4 (b), there are at least three distinct peaks in Halo 40\u2019s MDF formed by episodic star formation. If this is the case, upcoming spectroscopic surveys of dSphs could confirm whether or not the Sculptor dSph has an episodic SFH (see Section 4.2). 4.2. Prospects for Future Surveys Identifying whether the MW\u2019s satellites have episodic star formation is critical to understanding the effects of SN feedback on their chemo-dynamical evolution and the nature of dark matter (e.g., Aparicio et al. 2001; Bettinelli et al. 2019; Rusakov et al. 2021). Pontzen & Governato (2012) showed that large-scale bulk motion of gas caused by episodic star formation transforms the cusped density profile of dark matter into a cored one (also see Mashchenko et al. 2008; Wheeler et al. 2019). The dependence of SFHs on the dark matter profiles of observed satellites is not well understood (e.g., Hayashi et al. 2020, 2023). We therefore need additional indicators to identify episodic star formation. As we have found in Figure 5, episodic star formation creates groups of stars with similar [\u03b1/Fe] and [Fe/H]. We need to search for this feature in observations. Upcoming wide-field spectroscopic surveys will be able to measure chemical abundances for a sufficiently large number of stars to detect signatures of an episodic SFH from chemical abundances (e.g., Takada et al. 2014; Cooper et al. 2023).
For example, Subaru PFS will measure Fe and \u03b1-element abundances for 14,000 and 6,900 stars in Fornax and Sculptor, respectively. In this subsection, we discuss how the simulated [\u03b1/Fe] vs. [Fe/H] distribution (Figure 5) could be observed by Subaru PFS. Figure 8 shows Subaru PFS mock observations of [\u03b1/Fe] vs. [Fe/H] for Halos 12 and 40. Procedures for the mock observations are described in Section 2.3. Typical observational uncertainties added to the simulated data are \u03c3 \u2248 0.13 dex and 0.14 dex for the [\u03b1/Fe] and [Fe/H] ratios, respectively. Compared to Figure 5, the scatter in the [\u03b1/Fe] ratios has increased. Nevertheless, we can still identify groups of stars with similar [\u03b1/Fe] and [Fe/H] associated with episodic star formation. The top panel of Figure 8 compares the mock observed abundances of Halo 12 and the Fornax dSph. With Keck/DEIMOS, Kirby et al. (2011a) found scatter in the [\u03b1/Fe] ratios of Fornax and a lack of correlation with [Fe/H]. Their results suggested that such scatter could arise from bursty star formation or inhomogeneity of the ISM. The mock observed [\u03b1/Fe] ratios in Halo 12 also exhibit scatter for stars with [Fe/H] > \u22121.5. Due to the observational uncertainties, the detailed structures in the [\u03b1/Fe] ratios seen in Figure 5 (a) cannot be recovered, and these structures appear as scatter. As we have argued in Section 3.2, the scatter in the [\u03b1/Fe] ratios likely comes from the enhanced contribution of CCSNe, due to bursty star formation and inhomogeneous chemical abundances in the ISM. This result is consistent with the suggestion by Kirby et al. (2011a). Stars with [Fe/H] < \u22121.5 in Figure 8 (top) highlight the importance of observing the Fornax dSph with a wide-field multiplexed spectrograph. In Figure 4 (a), we have shown that most stars with [Fe/H] < \u22121.5 are located outside of rh.
Even after applying the observational uncertainties, we can still see the decreasing trend of [\u03b1/Fe] as a function of [Fe/H] and the scatter associated with the peaks of episodic star formation. Figure 8. Subaru PFS mock observations (black dots) of [\u03b1/Fe] vs. [Fe/H] for Halos 12 (top panel) and 40 (bottom panel). Red symbols are the abundances for Fornax (top panel) and Sculptor (bottom panel) observed with Keck/DEIMOS (Kirby et al. 2011a). Since the current sample (Kirby et al. 2011b) is limited to the center of the Fornax dSph (\u2272400 pc), we cannot constrain the chemical evolution in the outskirts of this galaxy. We will be able to investigate the most metal-poor tail of the MDF and the [\u03b1/Fe] ratios by obtaining spectroscopy out to the tidal radius (2,078 pc; Irwin & Hatzidimitriou 1995) of the Fornax dSph. There are limitations on the ability of medium-resolution spectroscopy to identify dwarf galaxies accreted onto the Fornax dSph from their [\u03b1/Fe] ratios. In Figure 5, we find a low-\u03b1 (\u22120.3 < [\u03b1/Fe] < \u22120.1) and very metal-poor (\u22122.5 < [Fe/H] < \u22122.2) component, which comes from an accreted dwarf galaxy. However, this component is not clearly distinguishable, due to the observational uncertainties in Figure 8 (top). This result suggests that measuring the velocity distribution (Figure 7) and obtaining high-resolution spectroscopy of the chemical abundances of stars in the outskirts is necessary to distinguish accreted components. For example, most stars with [Fe/H] \u2264 \u22122.5 in Halo 12 come from accreted dwarf galaxies. Their line-of-sight velocity dispersion is 22.3 km s^\u22121, while that of stars with [Fe/H] > \u22122.5 is 16.7 km s^\u22121 (Figure 7). This difference in velocity dispersion could be measured in future surveys. The bottom panel of Figure 8 compares the mock observed [\u03b1/Fe], as a function of [Fe/H], in Halo 40 and the Sculptor dSph.
In the mock observation, the groups of stars with similar [\u03b1/Fe] and [Fe/H] formed in episodic star formation remain identifiable. For [Fe/H] < \u22122.0, these groups are typically separated by 0.5 dex in [Fe/H] and 0.4 dex in [\u03b1/Fe]. However, the number of stars (375) observed with Keck/DEIMOS (Kirby et al. 2011b) is insufficient to identify such groups of stars. With Subaru PFS, we expect to measure [\u03b1/Fe] and [Fe/H] for 6,900 stars in the Sculptor dSph. As shown in this mock observation, the planned survey will allow us to confirm whether episodic star formation occurs every few hundred Myr by identifying chemical clumps. In this subsection, we have shown that [\u03b1/Fe] vs. [Fe/H] measured by medium-resolution spectroscopy for \u22731,000 stars can confirm signatures of episodic star formation in the Fornax and Sculptor dSphs. Thanks to our high-resolution cosmological zoom-in simulation, we can discuss the detailed chemo-dynamical structures of satellite galaxies with \u227310^6 M\u2299. However, due to the resolution limit, we cannot constrain the SFHs and chemical abundances of galaxies with \u227210^5 M\u2299. The SFHs of poorly resolved galaxies tend to be more bursty, because there are too many synchronized SNe from a single star particle (e.g., Hopkins et al. 2018b; Garrison-Kimmel et al. 2019). Hopkins et al. (2018b) showed that simulated galaxies should have > 100 star particles for their SFHs to converge. This result means that simulations of MW-like galaxies with a mass resolution of \u223c10 M\u2299 are required to resolve the SFHs of the smallest satellites (\u227210^3 M\u2299). Such simulations could be achieved by resolving the computational scaling issue using deep learning (Hirashima et al. 2023). We expect that comparisons between upcoming wide-field spectroscopic surveys and high-resolution cosmological simulations will improve our capability to reconstruct the chemo-dynamical evolution of satellites from their chemical abundances. 5."
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05380v1.json b/abs_9K/test_abstract_short_2405.05380v1.json new file mode 100644 index 0000000000000000000000000000000000000000..07fb5b54475bf41a8a4401bbdcf1caba70065c15 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05380v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05380v1", + "title": "Excluded volume effects on tangentially driven active ring polymers", + "abstract": "The conformational and dynamical properties of active ring polymers are\nstudied by numerical simulations. The two-dimensionally confined polymer is\nmodeled as a closed bead-spring chain, driven by tangential forces, put in\ncontact with a heat bath described by the Brownian multiparticle collision\ndynamics. Both phantom polymers and chains comprising excluded volume\ninteractions are considered for different bending rigidities. The size and\nshape are found to be dependent on persistence length, driving force, and bead\nmutual exclusion. The lack of excluded volume interactions is responsible for a\nshrinkage of active rings when increasing driving force in the flexible limit\nwhile the presence induces a moderate swelling of chains. Internal dynamics of\nflexible phantom active rings shows activity-enhanced diffusive behavior at\nlarge activity values while, in the case of self-avoiding active chains, it is\ncharacterized by active ballistic motion not depending on stiffness. The\nlong-time dynamics of active rings is marked by rotational motion whose period\nscales as the inverse of the applied tangential force, irrespective of\npersistence length and beads self-exclusion.", + "authors": "A. 
Lamura", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cond-mat.soft", + "cats": [ + "cond-mat.soft" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The conformational and dynamical properties of active ring polymers are\nstudied by numerical simulations. The two-dimensionally confined polymer is\nmodeled as a closed bead-spring chain, driven by tangential forces, put in\ncontact with a heat bath described by the Brownian multiparticle collision\ndynamics. Both phantom polymers and chains comprising excluded volume\ninteractions are considered for different bending rigidities. The size and\nshape are found to be dependent on persistence length, driving force, and bead\nmutual exclusion. The lack of excluded volume interactions is responsible for a\nshrinkage of active rings when increasing driving force in the flexible limit\nwhile the presence induces a moderate swelling of chains. Internal dynamics of\nflexible phantom active rings shows activity-enhanced diffusive behavior at\nlarge activity values while, in the case of self-avoiding active chains, it is\ncharacterized by active ballistic motion not depending on stiffness. The\nlong-time dynamics of active rings is marked by rotational motion whose period\nscales as the inverse of the applied tangential force, irrespective of\npersistence length and beads self-exclusion.", + "main_content": "INTRODUCTION Last twenty years registered a growing interest towards active matter [1\u20134]. This is made of out-of-equilibrium interacting units capable of absorbing energy from their environment and transforming it into motion. An interesting example is provided by active polymer-like structures where the presence of active noise and/or internal propulsion, interacting with deformability, is responsible for intriguing new phenomena, investigated both theoretically and numerically [5\u201318]. 
Nature provides numerous realizations showing how activity is crucial in determining both structural and dynamical properties. Among others, actin \ufb01laments and microtubules are prototypes of \ufb01lamentous structures, subject to local active forces exerted by biomolecular motors, capable of performing di\ufb00erent activities at the biological level [19, 20]. For example, microtubules placed on kinesin motility assays can undergo active self-organization to obtain more ordered structures such as bundles [21] and rings [22]. Such closed structures are very common and can be observed in chromosomes inside bacteria [23], in DNA and RNA arranging in loops [24, 25], in actomyosin rings [26], and in microtubules on dynein-coated surfaces [27] whose dynamics is greatly a\ufb00ected by the circular form [28]. Very recently some studies have investigated structures and dynamic behaviors of active rings. In three spatial dimensions active Brownian [29] and tangentially driven [30, 31] ring polymer models have been considered. In the former case it is found that the action of local active random forces enhances conformal \ufb02uctuations [29] while in the latter one, the local tangent force causes small rings to swell and large rings to collapse with an arrested dynamics in the case of \ufb02exible rings [30]. Neglecting excluded volume interactions allows an analytical study of the dynamics of semi\ufb02exible active polar ring polymers [31] which reveals that conformations are independent on activity and characterized by a rotational motion. This resembles the tank-treading motion observed for passive rings [32\u201335] and vesicles [36, 37] when subject to an external shear \ufb02ow. The interplay of local polar and long-range activities on the swelling and collapse of \ufb02exible ring polymers has been also considered [38]. In the two-dimensional case very few studies addressed the behavior of active ring polymers. 
Active Brownian models have been adopted to mimic mammalian cells [39] and to investigate the motion of active rings in porous media [40]. Despite this, the problem is interesting since several experiments showed that it is possible to assemble microtubules, on a motor protein-fixed surface [22, 28, 41], into ring shapes which are characterized by rotational motion [27]. Due to this peculiar dynamic behavior, it appears very appealing to understand such patterns, which strongly depend on the topological constraints in two dimensions. This is precisely the aim of the present study, where the effects of excluded volume interactions are explicitly considered in the case of active polymer rings. We introduce a discrete model of a closed semiflexible polymer whose beads are subject to a force tangentially oriented with respect to the polymer backbone. Excluded volume interactions are taken into account in order to highlight their role in the observed dynamics, since these forces are known to be relevant in the case of two-dimensional passive rings in the limit of small bending rigidity [42, 43]. Hydrodynamic interactions are ignored due to the strong interaction between rings and substrates in two dimensions, thus allowing the use of the free-draining approximation. For this reason the polymer is placed in contact with a Brownian heat bath and its dynamics is numerically studied by using the Brownian version [44] of the multiparticle collision dynamics [45, 46]. We find that the size and shape, measured by the radius of gyration and by the asphericity, respectively, depend on persistence length, excluded volume interactions, and active force. In the limit of flexible rings, phantom chains decrease in size with increasing activity while rings with conserved topology present a moderate swelling, becoming more roundish in both cases.
In the opposite limit of stiff rings, excluded volume interactions are not crucial in determining conformations, which are independent of activity. Flexible phantom active rings show enhanced diffusive dynamics while self-avoiding active chains display ballistic dynamic behavior not depending on stiffness. The long-time dynamics is characterized by a reptation motion for all bending rigidities which, in the case of stiff rings, resembles the tank-treading motion observed for two-dimensional sheared vesicles [47\u201349]. The rotational period is found to scale as the inverse of the active force. The numerical model for the polymer and the Brownian heat bath is introduced in Sec. II. The results for the conformations and the dynamics are reported in Sec. III. Finally, Sec. IV is devoted to discussing the main findings and presenting some conclusions. II. MODEL AND METHOD A closed chain of length L is considered in two spatial dimensions. It is composed of N beads, each having mass M, whose internal interactions are due to different contributions. Consecutive beads interact via the harmonic potential Ubond = (\u03bah/2) \u2211_{i=1}^{N} (|r_{i+1} \u2212 r_i| \u2212 l)^2, (1) where \u03bah is the spring constant, r_i indicates the position vector of the i-th bead (i = 1, . . . , N) with r_{N+1} = r_1 and r_0 = r_N, and l is the average bond length. A bending potential is considered to enforce chain stiffness and is given by Ubend = \u03ba \u2211_{i=1}^{N} (1 \u2212 cos \u03b8_i), (2) where \u03ba controls the bending rigidity and \u03b8_i is the angle between two consecutive bond vectors. In the following, chain stiffness is characterized in terms of the length Lp = 2\u03bal/(kBT), which corresponds to the polymer persistence length in the worm-like chain limit [50]. Here kBT is the thermal energy, T is the temperature, and kB is Boltzmann\u2019s constant.
Excluded volume interactions between non-bonded beads are modeled by the truncated and shifted Lennard-Jones potential Uex = 4\u01eb [(\u03c3/r)^12 \u2212 (\u03c3/r)^6 + 1/4] \u0398(2^{1/6}\u03c3 \u2212 r), (3) where \u01eb is the volume-exclusion energy, r is the distance between two non-connected beads, and \u0398(x) is the Heaviside function (\u0398(x) = 0 for x < 0 and \u0398(x) = 1 for x \u2265 0). This potential avoids chain self-crossings so as to preserve the ring topology. Finally, an active force F^a_i (i = 1, . . . , N) is applied tangentially to the filament at the position of each bead. In the present paper we adopt a push-pull type force [6, 8, 13, 31]. By assuming that molecular motors are homogeneously distributed along a bond, it is reasonable to consider that each bond is subject to a constant force along its direction, given by f^a (r_i \u2212 r_{i\u22121})/l (i = 1, . . . , N) [6]. This force has magnitude f^a since the bond length |r_i \u2212 r_{i\u22121}| is constrained to be l by using a very high value of the spring constant \u03bah in (1). The force on each bond is then equally distributed between the adjacent beads so that, say, on bead i there is a contribution f^a (r_i \u2212 r_{i\u22121})/(2l) along the inward bond and a contribution f^a (r_{i+1} \u2212 r_i)/(2l) along the outward bond. The total net force acting on the i-th bead is the sum of these two terms: F^a_i = (f^a/(2l)) (r_{i+1} \u2212 r_{i\u22121}), i = 1, . . . , N. (4) The expression (4) is such that the sum of the active forces along the discrete ring, \u2211_{i=1}^{N} F^a_i, is zero [31]. Moreover, the magnitude of the force (4) depends on the relative positions of beads i \u22121 and i + 1, varying between 0, when the two consecutive bonds are antiparallel, and f^a, when the bonds are parallel. In other studies a constant tangent force, acting on all the beads, has been considered [30, 51, 52].
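Equation (4) is straightforward to implement for a closed chain; below is a minimal 2D sketch (the function name and the tuple-of-coordinates layout are assumptions for illustration). Periodic indexing encodes the ring topology, and the forces sum to zero over the ring, as stated above:

```python
def active_forces(pos, fa, l):
    # tangential push-pull force, Eq. (4): F_i = fa/(2l) * (r_{i+1} - r_{i-1})
    # pos is a list of (x, y) bead positions on a closed ring; indices wrap around
    n = len(pos)
    out = []
    for i in range(n):
        xp, yp = pos[(i + 1) % n]
        xm, ym = pos[(i - 1) % n]
        out.append((fa / (2 * l) * (xp - xm), fa / (2 * l) * (yp - ym)))
    return out
```

For a regular circular ring each force has magnitude f^a cos(\u03c0/N), consistent with the bound between 0 (antiparallel bonds) and f^a (parallel bonds).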
The strength of the total active force is quantified by the P\u00e9clet number Pe = f^a N L/(kBT) [8, 31]. An alternative definition of the P\u00e9clet number, Pe\u2217 = f^a l/(kBT) = Pe/N^2, since L = Nl, is sometimes used in the literature [30]. Newton\u2019s equations of motion of the beads are integrated by the velocity-Verlet algorithm with time step \u2206tp [53, 54]. The ring is kept in contact with a Brownian heat bath which is modeled by making use of the Brownian multiparticle collision (MPC) method [44, 46, 55], where hydrodynamics is ignored. Every bead interacts with \u03c1 virtual solvent particles of mass m in order to simulate the interaction with a fluid volume. Since it is not necessary to keep track of the positions of the solvent particles in the present algorithm [44], it is sufficient to couple each bead with an effective virtual solvent particle with momentum sampled from a Maxwell-Boltzmann distribution of variance \u03c1mkBT and zero mean. The interaction process proceeds via the stochastic rotation dynamics of the MPC method [46, 56, 57]. The relative velocity of each polymer bead, with respect to the center-of-mass velocity of the bead and its corresponding virtual solvent particle, is randomly rotated by angles \u00b1\u03b1. Collisions are then executed at time intervals \u2206t, with \u2206t > \u2206tp. It has been shown that the evolution equation of the MPC model for the solute particle takes the form of a discretized Langevin equation for which the expression of the friction coefficient has been obtained [55]. Simulations are carried out with the choices \u03b1 = 130\u00b0, \u2206t = 0.1 tu, with time unit tu = (ml^2/(kBT))^{1/2}, M = \u03c1m with \u03c1 = 5, \u03bah l^2/(kBT) = 10^4, \u03c3/l = 1, N = L/l = 50, and \u2206tp = 10^\u22122 \u2206t. In some cases, longer rings with N = 100, 200 beads have also been considered.
A larger value of the ratio \u03c3/l, which might be experimentally relevant, would cause the overlap of neighboring beads with a smoothing of the interaction potential and, eventually, only minor quantitative changes in the following results. The value of \u03bah is such as to ensure that bond length fluctuations are negligible in any non-equilibrium condition. III. NUMERICAL RESULTS We consider rings with persistence lengths ranging from the flexible limit (Lp/L = 0) to the stiff one (Lp/L = 40). The active force f^a is varied to access a wide interval of P\u00e9clet number (0 \u2264 Pe \u2264 5 \u00d7 10^4, i.e., 0 \u2264 Pe\u2217 \u2264 20). Finally, in order to incorporate excluded volume effects, the value \u01eb = kBT is used, referring to the model as a self-avoiding active ring (SAR). To point up topological effects, a comparison with self-crossing rings is also carried out by setting \u01eb = 0. In this latter case we refer to the model as a phantom active ring (PAR). For the considered set of parameters, the friction coefficient \u03be [55] acting on each bead is such that M/\u03be \u2272 2.0 \u00d7 10^\u22126 \u03c4r, 8.5 \u00d7 10^\u22125 \u03c4r for self-avoiding and phantom rings, respectively. This ensures that the dynamics is close to the overdamped one so that inertial effects are negligible for the results in the following. Here and in the rest of the paper, \u03c4r denotes the polymer relaxation time in the passive case and is determined by considering the time decay of the ring-diameter autocorrelation function (see details when discussing Fig. 10). It results to be \u03c4r \u2243 6.5 \u00d7 10^4 tu, 1.5 \u00d7 10^3 tu for self-avoiding and phantom flexible rings, respectively, and \u03c4r \u2243 1.6 \u00d7 10^5 tu when Lp/L = 40, where there are no differences between the two models. Polymers are initialized in a circular shape and equilibrated up to time 10^6 tu, much longer than any polymer relaxation time.
Then, data are collected in single runs for every parameter set over time intervals of duration \u2243 50\u03c4r, and averaged. In the case of the PAR model with Lp/L = 0.4 at Pe = 2.5 \u00d7 10^4, averages are obtained from three different realizations, each of duration up to 150\u03c4r. A. Polymer conformations By varying activity and stiffness, rings can attain different configurations. In order to characterize the observed patterns, the gyration tensor G_{\u03b1\u03b2} = (1/N) \u2211_{i=1}^{N} \u2206r_{i,\u03b1} \u2206r_{i,\u03b2} (5) is computed. Here \u2206r_{i,\u03b1} is the position of the i-th bead in the center-of-mass reference frame of the polymer and the Greek index indicates the Cartesian component. The two eigenvalues \u03bb1 and \u03bb2, with \u03bb1 > \u03bb2, of the tensor (5) are extracted to calculate the gyration radius R_g^2 = \u03bb1 + \u03bb2, (6) which measures the total size of the ring. The asphericity A = (\u03bb1 \u2212 \u03bb2)^2/(\u03bb1 + \u03bb2)^2 (7) is also computed to provide information about the shape, with 0 \u2264 A \u2264 1: A = 0 for a circle and A = 1 for a rod. The computed values of \u27e8R_g^2\u27e9^{1/2}, normalized to the radius of gyration Rc = L/(2\u03c0) of a rigid circle, are depicted versus the P\u00e9clet number in Fig. 1 for different values of the persistence length Lp in the case of the SAR and PAR models. The left panel shows data in the flexible regime, corresponding to chains for which the values of the gyration radius in the passive limit, Pe \u2192 0, are different for self-avoiding (filled symbols) and phantom (empty symbols) rings [43]. The difference in the radii is due to the circular topology conserved in the SAR model thanks to self-avoidance. In this model polymers show larger sizes with respect to the PAR model. On the contrary, the bonds of phantom rings overlap to maximize the configurational entropy because of flexibility [43], thus producing more compact structures.
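Equations (5)-(7) translate directly into code; the following minimal 2D sketch (function name assumed for illustration) diagonalizes the 2x2 gyration tensor in closed form and returns both R_g^2 and the asphericity A:

```python
def gyration_shape(pos):
    # gyration tensor (Eq. 5), Rg^2 = l1 + l2 (Eq. 6), asphericity A (Eq. 7)
    n = len(pos)
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    gxx = sum((p[0] - cx) ** 2 for p in pos) / n
    gyy = sum((p[1] - cy) ** 2 for p in pos) / n
    gxy = sum((p[0] - cx) * (p[1] - cy) for p in pos) / n
    # eigenvalues of the symmetric 2x2 tensor in closed form
    tr = gxx + gyy
    disc = ((gxx - gyy) ** 2 + 4 * gxy ** 2) ** 0.5
    l1 = (tr + disc) / 2
    l2 = (tr - disc) / 2
    return l1 + l2, ((l1 - l2) / (l1 + l2)) ** 2
```

Beads placed on a circle of radius R give R_g^2 = R^2 and A = 0, while collinear beads give A = 1, reproducing the two limiting shapes quoted in the text.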
Radii increase with the persistence length in both models while the relative differences reduce. Activity does not produce any significant change in the radius of gyration up to Pe \u2243 10^3. For values Pe \u2273 10^4, the behavior varies with the considered model and the conformations depend on activity. Some typical configurations are reported in the bottom part of Fig. 1. This latter range of activity is experimentally relevant: for example, in the case of microtubules of length L = 1 \u00b5m with N = 10 active motors, each with force f^a = 6 pN, it would be Pe \u2243 10^4 at room temperature [31, 58]. Phantom rings tend to shrink while self-avoiding rings swell. In the case of fully flexible chains (Lp/L = 0) at Pe = 5 \u00d7 10^4, the root mean-square radius of gyration is reduced by approximately 25% for the PAR model and increased by approximately 15% for the SAR model with respect to the equilibrium values. We note here that the shrinkage of phantom chains in two dimensions is larger than the value (\u2243 10%) found in three dimensions [31] using a similar discrete model at the same P\u00e9clet number, thus pointing out the relevance of space dimensionality for conformations. The probability distribution functions P(Rg/Rc) of the radius of gyration are shown for the PAR and SAR models with Lp/L = 0 in panels (a) and (c), respectively, of Fig. 2 for different values of activity. In both models, the mode of the distribution increases with Pe and the width becomes narrower, suggesting that fluctuations are suppressed by activity (see Movie 1 given in the supplementary material). With increasing stiffness, the variations of \u27e8R_g^2\u27e9^{1/2} with respect to the equilibrium value reduce and become negligible in the case of self-avoiding rings, for which only a very small contraction (\u2243 3%) can be appreciated when Lp/L = 0.2. At values of the bending rigidity such that Lp/L \u2243 0.4, the stiff regime is entered.
In the passive limit Pe → 0, the values of the gyration radius appear indistinguishable at fixed bending rigidity, irrespective of excluded-volume interactions, as a consequence of the mechanical constraints exerted by stiffness (see Fig. 1(b)). The global size of rings increases with stiffness, becoming comparable to that of a rigid ring for very stiff chains (L_p/L = 40). When active polymers are considered, they show negligible variations in size except in the case of phantom active rings with L_p/L = 0.4. In this latter case, the gyration radius displays a non-monotonic dependence on the Péclet number, due to the different conformations which can be assumed by the ring. This is reflected in the probability distribution function of R_g, shown in Fig. 2(b), which becomes multimodal in the cases Pe = 2.5 × 10^4 and 5 × 10^4. Without the topological constraint enforced by excluded-volume interactions, activity is able to deform the chain despite its bending rigidity. The interplay with fluctuations produces different configurations of variable duration, observable during very long time dynamics. Typical patterns, corresponding to the three peaks of P(R_g/R_c) at Pe = 2.5 × 10^4, are illustrated in Fig. 3. In the case of self-avoiding active rings with L_p/L = 0.4, activity does not change the global size. However, the distribution functions become skewed (see Fig. 2(d)), since rings continuously shrink and swell during their dynamics (see Movie 2 in the supplementary material). This effect reduces when increasing the bending rigidity, so that rings behave as rigid circles. Indeed, when L_p/L ≳ 1, no appreciable difference can be observed between the PAR and SAR models, since self-exclusion becomes irrelevant. This is due to the fact that bonds are kept apart by the high bending rigidity of stiff polymers.
More details about the dynamics will be provided in the following Section. In order to gain further insight into the observed patterns of active rings, the equal-time bond correlation function is computed. It is defined as
$$\langle \cos\theta(s) \rangle = \frac{\langle \mathbf{t}_{i+s} \cdot \mathbf{t}_i \rangle}{l^2} \qquad (8)$$
where t_i = r_{i+1} − r_i is the bond vector and s is the contour separation. The closed topology guarantees the property ⟨cos θ(s)⟩ = ⟨cos θ(N − s)⟩. Figure 4 depicts the bond correlation function for the persistence lengths L_p/L = 0, 0.4 at Pe = 2.5 × 10^4. Flexible phantom rings show a very fast decay at small separations, followed by anti-correlation over a distance of about two bonds, before reaching complete decorrelation at a contour separation of about six bonds. This suggests the presence of small wraps of a few beads that favor the contraction in size. In contrast, flexible self-avoiding active rings manifest a larger directional correlation at short distance, due to excluded-volume effects that restrict the possible conformations. Owing to the preserved circular topology, the correlation function becomes negative at separations s/N ≈ 1/2. As already observed, stiffness is responsible for increasing the size of rings. In the case of self-avoiding active rings with L_p/L = 0.4, this produces a larger correlation between bonds, which are strongly anti-correlated at distances s/N ≈ 1/2, as in the case of rigid passive rings [42]. When considering semiflexible phantom active rings, the presence of the structure with two interlaced rings, shown in Fig. 3(c), determines bond anti-correlation at separations s/N ≈ 1/4 and small correlation at s/N ≈ 1/2. In order to better evaluate the effect of activity on the shape of active rings, the average asphericity is plotted in Fig. 5 for the flexible (panel (a)) and stiff (panel (b)) regimes.
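The bond correlation of Eq. (8) is straightforward to estimate from a single configuration; a minimal sketch (our own helper, not the authors' code), using periodic bond indexing so the ring stays closed:

```python
import numpy as np

def bond_correlation(pos):
    """<cos theta(s)> of Eq. (8) for a closed ring, with bond vectors
    t_i = r_{i+1} - r_i taken modulo N so the last bond closes the ring."""
    t = np.roll(pos, -1, axis=0) - pos           # bond vectors of the closed ring
    l2 = np.mean(np.sum(t * t, axis=1))          # mean squared bond length
    N = len(t)
    return np.array([np.mean(np.sum(t * np.roll(t, -s, axis=0), axis=1))
                     for s in range(N)]) / l2

# For a rigid N-bead circle the bond direction advances by 2*pi/N per bond,
# so the correlation should be exactly cos(2*pi*s/N).
N = 100
phi = 2 * np.pi * np.arange(N) / N
ring = np.stack([np.cos(phi), np.sin(phi)], axis=1)
corr = bond_correlation(ring)
```

The rigid-circle result is negative at s/N ≈ 1/2 and satisfies the symmetry ⟨cos θ(s)⟩ = ⟨cos θ(N − s)⟩ noted in the text.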
In the former case, asphericity presents a non-monotonic dependence on stiffness when Pe → 0, as observed in Ref. [43], with self-exclusion warranting more circular shapes. The effect of activity is to make rings more roundish in both models, with the exception of the PAR model with L_p/L = 0.2, for which activity favors elongated structures with respect to the passive limit. As long as the bending rigidity is negligible, our results give ⟨A⟩ ≈ 0.26 in the passive case, as predicted in the Gaussian limit [59]. The observed small wraps at high activity favor local back-folding, so that rings are able to attain even more compact conformations (see Fig. 1(a)) while reducing their asphericity with respect to the passive case. Once bending rigidity comes into play (at values L_p/L ≈ 0.2), phantom active rings can still reduce the gyration radius due to self-crossing, while assuming a more eccentric elliptical shape. The corresponding probability distributions P(A) are highly skewed, with a maximum at A = 0 and long tails, as can be seen in Fig. 6(a,c) for flexible rings (L_p/L = 0). The effect of activity is to increase the height of the maximum of the distributions while slightly shortening the tails. For stiff active rings (Fig. 5(b)), it is possible to observe that activity induces slightly more elongated shapes with respect to the passive case, though this effect reduces with increasing stiffness. Only for phantom active rings with L_p/L = 0.4 is a non-monotonic dependence on activity visible, due to the observed conformations (see Fig. 3) and the peculiar dynamics previously discussed. This is also reflected in the probability distributions shown in Fig. 6(b) for L_p/L = 0.4. The distribution P(A) is characterized by a linear decay as long as Pe ≲ 10^4. For larger values of activity, longer tails and pronounced shoulders appear in P(A). In the case of self-avoiding active rings (Fig.
6(d)), the role played by activity is to produce slightly longer tails while barely affecting the behavior at small values of A. B. Dynamical behavior In this Section we describe and characterize the dynamical behavior of active rings once the steady state has been reached. When Pe ≲ 1, there are no effects induced by the applied tangential force, and rings behave as in the passive case, with diffusive translational motion of the center of mass (see the following discussion). By increasing activity, rings are set in a slow rotational motion due to the applied force, though this rotation is not continuous in time. In order to illustrate and quantify the described behavior, it is useful to consider the ring diameter, defined as R_d = r_{N/2+1} − r_1. The time dependence of the x-component R_{dx} is reported in Fig. 7 in the case of a flexible self-avoiding ring at different values of activity. Once Pe ∼ O(10^2), a steady rotation of active rings can be observed. During steady rotation, the vector R_d rotates continuously, so that its components oscillate periodically in time. This behavior can be used to infer the characteristic rotation frequency ω, which is determined by a spectral analysis (see the inset of Fig. 7(d)) of the time series R_{dx}(t). The computed periods of rotation, T = 2π/ω, are shown in Fig. 8 for different persistence lengths and rings of lengths L = 50l, 100l, 200l. It is evident that at high activity the period follows a power-law decay proportional to (Pe/L^3)^{−1}, irrespective of the bending rigidity and of the ring's average size. Our results confirm what was analytically predicted for three-dimensional phantom active rings that undergo active tank-treading motion with frequency ω = (Pe/L^3)(2πl k_B T/ξ) = f^a/(R_c ξ), which is proportional to the tangential velocity f^a/ξ and independent of the effective ring size [31].
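The period extraction from a diameter time series amounts to locating the dominant peak of its power spectrum. The sketch below is a simplified stand-in for the spectral analysis of Fig. 7 (the sampling interval `dt` and the synthetic signal are our own illustrative choices):

```python
import numpy as np

def rotation_period(x, dt):
    """Dominant period of a real time series x sampled every dt,
    taken from the peak of its power spectrum (zero frequency excluded)."""
    x = np.asarray(x) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(power[1:])      # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic check: a noisy oscillation with true period T = 5.
t = np.arange(0.0, 200.0, 0.01)
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * t / 5.0) + 0.1 * rng.standard_normal(len(t))
T_est = rotation_period(signal, dt=0.01)
```

In practice one would apply this to R_{dx}(t); the peak is sharp only once the rotation is steady, consistent with the broad spectra reported for weakly driven phantom rings.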
Moreover, here we find evidence that the period does not depend on excluded-volume interactions in two dimensions. In the case of the phantom flexible chain, a compact conformation is observed at Pe ≈ 10^2, and thermal noise deeply influences the ring rotation, so that the observed spectrum of frequencies is quite broad. Phantom active rings require larger values of activity or of stiffness with respect to self-excluding active rings in order to establish a uniform rotational motion. Sizes and shapes of active rings in the steady state show a weak dependence on the applied force as long as Pe ≲ 10^4, as already discussed in the previous Section. However, when entering the regime of experimentally relevant Péclet numbers, rings undergo large morphological deviations with respect to equilibrium. Phantom active rings, despite the initial circular configuration, can be driven, through intermediate structures (see panel (b) of Fig. 3), into more compact configurations (see panel (c) of Fig. 3). Simulations for the PAR model have been conducted at Pe = 2.5 × 10^4 for different values of the persistence length. It appears that when 0.3 ≲ L_p/L ≲ 0.45, rings spontaneously assume the double-ring conformation with R_g/R_c ≈ 0.52 (corresponding to the typical value of R_g for the conformation of Fig. 3(c)). This latter structure can spontaneously disentangle, with a lifetime that is longest at L_p/L ≈ 0.4. This behavior can be observed in the time dependence of the gyration radius and of the asphericity in Fig. 9 for the PAR model with L_p/L = 0.4 at Pe = 2.5 × 10^4, on a very long run of duration 150τ_r ≈ 7 × 10^4 T. Starting from the initial circular shape, phantom rings can self-cross, assuming conformations similar to that of Fig. 3(b), with an elongated shape resembling the number eight.
This is possible only in a narrow range centered at L_p/L ≈ 0.4, since the "eight configuration" is compatible with this value of the persistence length. Due to thermal fluctuations, it can happen that one of the two sub-rings moves towards the other, trespassing the mutual crossing point to give the double-ring conformation. Although this costs a strong local bending, the double ring is always observed at L_p/L = 0.4 in all the considered runs at very high Péclet number. In the case of active rings comprising excluded-volume interactions, activity is responsible for inducing temporary elongated configurations, as illustrated in Fig. 9 by the peaks of asphericity corresponding to the reduction of the radius of gyration (see also Movie 2 in the supplementary material). In order to further characterize the rotational behavior, it is useful to consider the normalized time-correlation function of the ring diameter, ⟨R_d(t) · R_d(0)⟩/⟨R_d^2(0)⟩. In the left panel of Fig. 10 the normalized autocorrelation function is plotted for a flexible self-avoiding ring at different values of activity. In the passive case, the function exhibits an exponential decay, exp(−t/τ_r), which is used to determine the polymer relaxation time τ_r. When Pe = 10, no relevant difference can be appreciated with respect to equilibrium on time scales comparable to the relaxation time. The increase of activity is responsible for producing an oscillatory behavior which is modulated in time by the same decay as for the passive ring. The damped oscillatory pattern, with a shorter period, is maintained when the Péclet number is further increased. The comparison of the behavior of the autocorrelation function of the ring diameter between the PAR and SAR models is reported in panel (b) of Fig. 10 for different bending rigidities at Pe = 10^3.
In the case of flexible phantom active rings, the correlation function shows an exponential decay, since the observed compact structure, due to the lack of any bending rigidity, requires larger values of activity for oscillations to be observed. On the contrary, self-avoiding active rings present the damped oscillatory behavior thanks to excluded-volume effects that preserve the circular topology, avoiding any collapse of the chain while it rotates. Oscillations are clearly observable in the correlation functions of semiflexible active rings, both phantom and self-excluding. The amplitudes are larger in the latter case, due to the longer relaxation times, and increase with bending rigidity to become indistinguishable between the two models in the limit of stiff rings. As long as oscillations are well defined, the numerical data of the autocorrelation function are very well approximated (see Fig. 10(b)) by the theoretical prediction [31]
$$\frac{\langle \mathbf{R}_d(t) \cdot \mathbf{R}_d(0) \rangle}{\langle R_d^2(0) \rangle} \approx \cos(2\pi t/T)\, \exp(-t/\tau_r), \qquad (9)$$
where the values of T and τ_r computed in the present simulations are used. Finally, the beads' mean-square displacement (MSD), ⟨(r_i(t) − r_i(0))^2⟩, is computed, which allows the characterization of the translational motion of the ring. Due to the ring topology, the beads' MSD is independent of the bead location and receives one contribution from the center-of-mass motion, ⟨Δr_cm^2(t)⟩, and another from the internal dynamics, ⟨Δr^2(t)⟩, so that one can write ⟨(r_i(t) − r_i(0))^2⟩ = ⟨Δr_cm^2(t)⟩ + ⟨Δr^2(t)⟩. Since the sum of all internal and active forces over the whole ring vanishes, the center-of-mass motion is purely diffusive, depending only on thermal fluctuations and not on activity.
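For reference, the damped-cosine prediction of Eq. (9) is simple to tabulate once T and τ_r are known; a sketch with illustrative parameter values of our own, not the fitted values from the simulations:

```python
import numpy as np

def diameter_autocorrelation(t, T, tau_r):
    """Normalized ring-diameter autocorrelation of Eq. (9):
    <R_d(t)·R_d(0)>/<R_d^2(0)> ~ cos(2*pi*t/T) * exp(-t/tau_r)."""
    return np.cos(2 * np.pi * t / T) * np.exp(-t / tau_r)

# Illustrative parameters: rotation period T = 2, relaxation time tau_r = 10.
t = np.linspace(0.0, 50.0, 2001)
acf = diameter_autocorrelation(t, T=2.0, tau_r=10.0)
```

The curve starts at 1, oscillates with period T, and is bounded by the passive envelope exp(−t/τ_r), which is exactly the behavior described for the damped oscillatory pattern in Fig. 10.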
In this way the quantity ⟨Δr^2(t)⟩, namely the beads' MSD relative to the center of mass, provides information on the ring's internal dynamics. The MSD ⟨Δr^2(t)⟩ for self-avoiding flexible (L_p/L = 0) and stiff (L_p/L = 40) active rings at different activities is reported in Fig. 11. In the case without any stiffness (panel (a)), the sub-diffusive exponent 0.6 is found in the time range t ≪ τ_r, when thermal effects prevail over active contributions, as predicted by the Rouse model of two-dimensional flexible polymers with excluded-volume interactions [60]. For large Péclet numbers, Pe ≳ 10^4, an active ballistic time regime is observed with ⟨Δr^2(t)⟩ ∼ t^2. At longer times, oscillations due to the active tank-treading appear in the MSD, which then reaches a plateau when t ≳ τ_r. This behavior, due to the mutual repulsion among beads, differs from what is found for flexible phantom rings. In that case the sub-diffusive behavior t^{1/2} holds when t ≪ τ_r. The MSD shows the activity-enhanced linear time regime at high values of activity (Pe ≈ 10^4), followed by oscillations at longer times, as predicted in three dimensions [31]. The MSD of stiff polymers (panel (b)) exhibits an initial time dependence t^{0.7}. The exponent 0.7 slightly underestimates the predicted value 3/4 [61], due to the finite ring length [60]. A linear time dependence [62] is then observed at late times when Pe ≲ 1. Strong activity induces the active ballistic time regime, followed in time by oscillations. In this case we find that the numerical values of ⟨Δr^2(t)⟩ are very well described (see Fig. 11(b)) by the theoretical prediction [31]
$$\langle \Delta r^2(t) \rangle / L^2 \approx \left[ 1 - \cos(2\pi t/T)\, e^{-t/\tau_r} \right] / (2\pi^2), \qquad (10)$$
where the computed values of T and τ_r are used. IV.
DISCUSSION AND" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05433v1.json b/abs_9K/test_abstract_short_2405.05433v1.json new file mode 100644 index 0000000000000000000000000000000000000000..063e02c5a3bb1bd4dd511bc34ed5feb4716fbcdb --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05433v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05433v1", + "title": "Robust Reward Placement under Uncertainty", + "abstract": "Reward placement is a common optimization problem in network diffusion\nprocesses, where a number of rewards are to be placed in a network so as to\nmaximize the total reward obtained as agents move randomly in it. In many\nsettings, the precise mobility network might be one of several possible, based\non parameters outside our control, such as the weather conditions affecting\npeoples' transportation means. Solutions to the reward placement problem must\nthus be robust to this uncertainty, by achieving a high utility in all possible\nnetworks. To study such scenarios, we introduce the Robust Reward Placement\nproblem (RRP). Agents move randomly on a Markovian Mobility Model that has a\npredetermined set of locations but its precise connectivity is unknown and\nchosen adversarialy from a known set $\\Pi$ of candidates. Network optimization\nis achieved by selecting a set of reward states, and the goal is to maximize\nthe minimum, among all candidates, ratio of rewards obtained over the optimal\nsolution for each candidate. We first prove that RRP is NP-hard and\ninapproximable in general. We then develop $\\Psi$-Saturate, a pseudo-polynomial\ntime algorithm that achieves an $\\epsilon$-additive approximation by exceeding\nthe budget constraint by a factor that scales as $O(ln|\\Pi|/\\epsilon)$. In\naddition, we present several heuristics, most prominently one inspired from a\ndynamic programming algorithm for the max-min 0-1 Knapsack problem. 
We\ncorroborate our theoretical findings with an experimental evaluation of the\nmethods in both synthetic and real-world datasets.", + "authors": "Petros Petsinis, Kaichen Zhang, Andreas Pavlogiannis, Jingbo Zhou, Panagiotis Karras", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.MA", + "cats": [ + "cs.MA", + "cs.SI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Reward placement is a common optimization problem in network diffusion\nprocesses, where a number of rewards are to be placed in a network so as to\nmaximize the total reward obtained as agents move randomly in it. In many\nsettings, the precise mobility network might be one of several possible, based\non parameters outside our control, such as the weather conditions affecting\npeoples' transportation means. Solutions to the reward placement problem must\nthus be robust to this uncertainty, by achieving a high utility in all possible\nnetworks. To study such scenarios, we introduce the Robust Reward Placement\nproblem (RRP). Agents move randomly on a Markovian Mobility Model that has a\npredetermined set of locations but its precise connectivity is unknown and\nchosen adversarialy from a known set $\\Pi$ of candidates. Network optimization\nis achieved by selecting a set of reward states, and the goal is to maximize\nthe minimum, among all candidates, ratio of rewards obtained over the optimal\nsolution for each candidate. We first prove that RRP is NP-hard and\ninapproximable in general. We then develop $\\Psi$-Saturate, a pseudo-polynomial\ntime algorithm that achieves an $\\epsilon$-additive approximation by exceeding\nthe budget constraint by a factor that scales as $O(ln|\\Pi|/\\epsilon)$. In\naddition, we present several heuristics, most prominently one inspired from a\ndynamic programming algorithm for the max-min 0-1 Knapsack problem. 
We\ncorroborate our theoretical findings with an experimental evaluation of the\nmethods in both synthetic and real-world datasets.", "main_content": "Introduction In many graph optimization problems, a stakeholder has to select locations in a network, such as a road, transportation, infrastructure, communication, or web network, where to place reward-generating facilities such as stores, ads, sensors, or utilities, so as to best service a population of moving agents such as customers, autonomous vehicles, or bots [Zhang and Vorobeychik, 2016; Ostachowicz et al., 2019; Zhang et al., 2020; Rosenfeld and Globerson, 2016; Amelkin and Singh, 2019]. [Figure 1: Moving agent under two settings, sunny and rainy. Each table contains the number of steps and the initial probabilities.] Such problems are intricate due to the uncertainty surrounding agent mobility [Krause et al., 2008; Chen et al., 2016; He and Kempe, 2016; Horčík et al., 2022]. For instance, consider outdoor ad placement. We represent the road map as a probabilistic network in which agents move. If every agent follows the same movement pattern regardless of environmental conditions, then the problem of placing ads to maximize the expected number of ad views admits a greedy algorithm with an approximation ratio [Zhang et al., 2020]. Still, the problem becomes more involved under malleable environmental conditions that alter movement patterns. As a toy example, Figure 1 shows a probabilistic network. A moving agent randomly starts from an initial location and takes two steps following the probabilities shown on edges representing street segments, under two environmental settings: sunny and rainy. Suppose a stakeholder has a budget to place an ad billboard at a single location. Under the sunny setting, the best choice of ad placement is B, since the agent certainly passes by that point regardless of its starting position.
On the other hand, under the rainy setting, the agent will necessarily pass by D within two steps, hence that location is the most preferable. Under such uncertainty, a rational stakeholder would prefer the location that yields, in the worst case, a reward having the highest ratio to the best feasible one. For instance, if a stakeholder selects B (resp. D), then under the rainy (resp. sunny) setting the expected reward is 0.6. However, the optimal strategy for a risk-averse stakeholder would be the selection of C, as the expected reward is higher in both settings, equal to 0.9. In this paper, we introduce the problem of robust reward placement (RRP) in a network, under uncertainty about the environment, whereby an agent is moving according to any of several probabilistic mobility settings. We express each such setting by a Markov Mobility Model (MMM) noted as π ∈ Π. The cumulative reward a stakeholder receives grows whenever the agent passes by one of the reward states S_R. RRP seeks to select a set of such states S*_R within a budget that maximizes the worst-case ratio, across all settings Π, of the collected reward F(S_R|π) over the highest reward that can be collected under the same setting, F(S*_π|π). More formally, S*_R = arg max_{S_R} min_{π∈Π} F(S_R|π) / F(S*_π|π). The max-min ratio objective is used in risk-averse portfolio optimization and advertising [Ordentlich and Cover, 1998; Li and Yang, 2020]. Our Contribution. Our contributions stand as follows: 1. We introduce the problem of Robust Reward Placement (RRP) over a set of Markov Mobility Models, which has real-world applications across various domains. 2. We study the properties of RRP and show that it is NP-hard (Theorem 1). Due to the additivity and monotonicity properties of the reward function (Lemma 3), RRP admits an optimal solution in pseudo-polynomial time under a single setting, i.e.
|Π| = 1 (Lemma 4), yet it is inapproximable when |Π| > 1 unless we exceed the budget constraint by a factor O(ln |Π|) (Theorem 2). 3. We adapt techniques from robust influence maximization to develop Ψ-Saturate, a pseudo-polynomial time algorithm suited to the challenge of RRP. Ψ-Saturate computes a solution within ε distance of the optimal, i.e. OPT − ε, while exceeding the budget constraint by a factor O(ln |Π|/ε) (Lemma 6). 4. We present several heuristics as alternative solutions, the most prominent one based on a dynamic programming algorithm for the max-min 0-1 KNAPSACK problem, to which RRP can be reduced (Lemma 5). We corroborate our theoretical findings with an experimental comparison of our solution against a suite of heuristics on both synthetic and real-world data. Due to space constraints, the proofs of Lemmas 5, 6 and 9 appear in Appendix A. 2 Related Work The Robust Reward Placement problem relates to the broad area of robust maximization of a spread function in a network, with some distinctive differences. Some works [Du et al., 2013; He and Kempe, 2016; Chen et al., 2016; Logins et al., 2020, 2022] study problems of selecting a seed set of nodes that robustly maximizes the expected spread of a diffusion process over a network. However, in those models, such as Independent Cascade and Linear Threshold [Kempe et al., 2003], the diffusion process underlying the spread function is generative: an item such as an idea, fashion, message, or meme propagates in the network by producing unlimited replicas of itself. On the other hand, we study a non-generative spread function, whereby the goal is to reach as many as possible out of a population of network-resident agents.
Our spread function is similar to the one studied in the problem of Geodemographic Influence Maximization [Zhang et al., 2020], yet there the goal is to select a set of network locations that achieves high spread over a mobile population under a single environmental setting. We study the more challenging problem of achieving competitive spread in the worst case, under uncertainty regarding the environment. Several robust discrete optimization problems [Kouvelis and Yu, 2013] address uncertainty in decision-making by optimizing a max-min or min-max function under constraints. The robust MINIMUM STEINER TREE problem [Johnson et al., 2000] seeks to minimize the worst-case cost of a tree that spans a graph; the min-max and min-max regret versions of the KNAPSACK problem [Aissi et al., 2009] have a modular function as a budget constraint; other works examine robust versions of submodular functions [Krause and Golovin, 2014; He and Kempe, 2016], which describe several diffusion processes [Adiga et al., 2014; Krause et al., 2008]. To our knowledge, no prior work considers the objective of maximizing the worst-case ratio of an additive function over its optimal value subject to a knapsack budget constraint. 3 Preliminaries Markov Mobility Model (MMM). We denote a discrete-time MMM as π = (S, I, T, M), where S is a set of n states, I is a vector of n elements in [0, 1] expressing an initial probability distribution over the states in S, T is an n × n right-stochastic matrix, where T[s, s′] is the probability of transition from state s ∈ S to another state s′ ∈ S, and M is an n × K matrix with elements in [0, 1], where K is the maximum number of steps and M[s, k] expresses the cumulative probability that an agent starting from state s ∈ S takes k′ ∈ [k, K] steps.
Notably, a MMM describes multiple agents and movements, whose starting positions are expressed via the initial distribution I and whose step counts via M. Rewards. Given a MMM, we select a set of states to be reward states. We use a reward vector R ∈ {0, 1}^n to indicate whether state s ∈ S is a reward state and denote the set of reward states as S_R = {s ∈ S | R[s] = 1}. At each timestamp t, an agent at state s may move to state s′ and retrieve reward R[s′]. For a set of reward states S_R with reward vector R and a given MMM π, the cumulative reward F(S_R|π) of an agent equals:
$$F(S_R \mid \pi) = \sum_{k \in [K]} F_\pi(S_R \mid k) \qquad (1)$$
$$F_\pi(S_R \mid k) = R^\top \left( T^k (I \circ M_k) \right), \qquad (2)$$
where F_π(S_R|k) is the expected reward at the k-th step, M_k is the k-th column of M, and ∘ denotes the Hadamard product. Note that as K → ∞, Equation 2 yields the steady-state distribution of the model. Equation 2 is a general formulation of PageRank scores [Brin and Page, 1998], as it accounts for different initial and step distributions via I and M, respectively. 4 Problem Formulation In this section we model the uncertain environment in which individuals navigate and introduce the Robust Reward Placement (RRP) problem over a set of Markov Mobility Models (MMMs), extracted from real movement data, that express the behavior of individuals under different settings. Setting. Many real-life applications generate data on the point-to-point movements of agents over a network, along with a distribution over their total number of steps. Using aggregate statistics on this information, we formulate, without loss of generality, the movement of a population by means of a single agent moving probabilistically over the states of a MMM π = (S, I, T, M). Due to environment uncertainty, the agent's movement pattern may follow any of |Π| different settings¹ Π = {π_1, π_2, . . . , π_|Π|}. Robust Reward Placement Problem.
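Equations (1)-(2) translate directly into NumPy. The sketch below is our own toy illustration, following the paper's matrix notation literally; the two-state model and its numbers are made up for demonstration:

```python
import numpy as np

def cumulative_reward(R, T, I, M):
    """F(S_R | pi) of Eqs. (1)-(2): sum over k of R^T (T^k (I o M_k)),
    where M[:, k-1] is the cumulative probability of taking >= k steps."""
    K = M.shape[1]
    return sum(R @ np.linalg.matrix_power(T, k) @ (I * M[:, k - 1])
               for k in range(1, K + 1))

# Toy 2-state model: the agent alternates deterministically between states.
T = np.array([[0.0, 1.0], [1.0, 0.0]])   # right-stochastic transition matrix
I = np.array([0.5, 0.5])                 # uniform initial distribution
M = np.ones((2, 2))                      # every agent takes all K = 2 steps
F_state1 = cumulative_reward(np.array([0.0, 1.0]), T, I, M)
```

With these numbers, placing the reward on state 1 collects an expected 0.5 per step over K = 2 steps, and rewarding both states collects the sum of the two single-state rewards, in line with the additivity of Lemma 3.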
Several allocation problems can be formulated as an optimization problem over a MMM π, where reward states S_R correspond to the placement of resources. Given a budget L and a cost function c : S → N⁺, the Reward Placement (RP) problem seeks the set of reward states S*_R ⊆ S that maximizes the cumulative reward F(S*_R|π) collected by a moving agent, that is:
$$S^*_R = \arg\max_{S_R} F(S_R \mid \pi) \quad \text{s.t.} \quad \sum_{s \in S_R} c[s] \le L.$$
However, in reality the agent's movements follow an unknown distribution sampled from a set of settings Π = {π_1, π_2, . . . , π_|Π|}, represented as different MMMs. Under this uncertainty, given a set of MMMs Π, the Robust Reward Placement (RRP) problem seeks a set of reward states S_R, within a budget, that maximizes the ratio of the agent's cumulative reward over the optimal one when the model π ∈ Π is unknown. In particular, given a budget L and a cost function c : S → N⁺, we seek a reward placement S*_R ⊆ S such that:
$$S^*_R = \arg\max_{S_R} \min_{\pi \in \Pi} \frac{F(S_R \mid \pi)}{F(S^*_\pi \mid \pi)} \quad \text{s.t.} \quad \sum_{s \in S_R} c[s] \le L, \qquad (3)$$
where S*_π = arg max_{S_R} F(S_R|π) is the optimal reward placement for a given model π ∈ Π within budget L. This formulation is equivalent to minimizing the maximum regret ratio of F(S_R|π), i.e., 1 − F(S_R|π)/F(S*_π|π). The motivation arises from the fact that stakeholders are prone to compare what they achieve with what they could optimally achieve. The solution may also be interpreted as the optimal placement when the model π ∈ Π in which agents are moving is chosen by an omniscient adversary, i.e., an adversary that chooses the setting π after observing the set of reward states S_R. 5 Hardness and Inapproximability Results In this section we examine the optimization problem of RRP and show that it is NP-hard in general.
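Although RRP is NP-hard, on small instances the objective of Eq. (3) can still be solved exactly by exhaustive search, which is handy for sanity-checking heuristics. The sketch below uses a simplified interface of our own, where the reward oracle F is an arbitrary additive set function and the per-state values echo the toy example of Figure 1 (the numbers are illustrative, not taken from the paper's data):

```python
from itertools import combinations

def robust_ratio_bruteforce(states, cost, L, models, F):
    """Exhaustive solver for Eq. (3): over all budget-feasible reward sets S_R,
    maximize min over pi of F(S_R|pi) / F(S*_pi|pi).
    Assumes every model admits a feasible set with positive reward."""
    feasible = [frozenset(S) for r in range(len(states) + 1)
                for S in combinations(states, r)
                if sum(cost[s] for s in S) <= L]
    opt = {pi: max(F(S, pi) for S in feasible) for pi in models}  # F(S*_pi|pi)
    def worst_ratio(S):
        return min(F(S, pi) / opt[pi] for pi in models)
    return max(feasible, key=worst_ratio)

# Illustrative per-state expected rewards under the two settings of Figure 1.
value = {"sunny": {"B": 1.0, "C": 0.9, "D": 0.6},
         "rainy": {"B": 0.6, "C": 0.9, "D": 1.0}}
F = lambda S, pi: sum(value[pi][s] for s in S)   # additive reward oracle
best = robust_ratio_bruteforce(["B", "C", "D"], {"B": 1, "C": 1, "D": 1},
                               1, ["sunny", "rainy"], F)
```

With budget 1, picking B or D yields a worst-case ratio of 0.6, while C yields 0.9 in both settings, so the brute-force search selects C, reproducing the risk-averse choice from the introduction.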
First, in Theorem 1 we prove that even for a single model (|Π| = 1) the optimal solution cannot be found in polynomial time, via a reduction from the 0-1 KNAPSACK problem [Karp, 1972]. Theorem 1. The RRP problem is NP-hard even for a single model, that is, |Π| = 1. Proof. In the 0-1 KNAPSACK problem [Karp, 1972] we are given a set of items U, each item u ∈ U having a cost c(u) and, w.l.o.g., an integer value F(u), and we seek a subset V ⊆ U that has total cost Σ_{v∈V} c(v) no more than a given budget L and maximum total value Σ_{v∈V} F(v). In order to reduce 0-1 KNAPSACK to RRP, we set a distinct state s ∈ S for each item u ∈ U with the same cost, i.e., S = U, assign to each state a self-loop with transition probability 1, let each state be a reward state, and set a uniform initial distribution of agents over states equal to 1/|S| and step probabilities M[s, k] = 1, ∀k ∈ [1, . . . , F(u)]. For a single setting, an optimal solution to the RRP problem of Equation (3) is also optimal for the 0-1 KNAPSACK problem, which is NP-hard. (¹We use the terms "setting" and "model" interchangeably.) Theorem 2 proves that RRP is inapproximable in polynomial time within a constant factor, by a reduction from the HITTING SET problem, unless the budget constraint is exceeded. Theorem 2. Given a budget L and a set of models Π, it is NP-hard to approximate the optimal solution to RRP within a factor of Ω(1/n^{1−ε}), for any constant ε > 0, unless the cost of the solution is at least βL, with β ≥ ln |Π|. Proof. We reduce the HITTING SET problem [Karp, 1972] to RRP and show that an approximation algorithm for RRP implies one for HITTING SET. In the HITTING SET problem, given a collection of X items, C = {c_1, c_2, . . . , c_X}, and a set of M subsets thereof, B_i ⊆ C, i ∈ {1, . . .
, M}, we seek a hitting set C′ ⊆ C such that B_i ∩ C′ ≠ ∅ for all i ∈ {1, . . . , M}. Given an instance of HITTING SET, we reduce it to RRP as follows. First, for each subset B_i we set an MMM π_i (|Π| = M) over the same set of states S = S_l ∪ S_r with S_l ∩ S_r = ∅. For each subset B_i we set a state s^l_i ∈ S_l, and for each item c_i we set a state s^r_i ∈ S_r. We set the initial probabilities I as uniform over all states in S_l, equal to 1/|S_l|, for all models. For each model π_i ∈ Π, there are transition probabilities 1 from each state s^l_j to state s^l_i, with i ≠ j, and uniform transition probabilities from s^l_i to each state s^r_j if and only if c_j ∈ B_i. States in S_r are absorbing, i.e., each has a self-loop with probability 1. Figure 2 shows a small example of a HITTING SET instance and its RRP equivalent. We set the cost of the absorbing states in S_r to 1 and let each node in S_l have a cost exceeding L. By this construction, if the reward placement S_R does not form a hitting set, then there exists at least one subset B_i such that B_i ∩ S_R = ∅, hence min_π F(S_R|π)/F(S*_π|π) = 0. Conversely, if S_R forms a hitting set, it holds that min_π F(S_R|π)/F(S*_π|π) ≥ 1/|S_r| > 0. Thus, a hitting set exists if and only if min_π F(S_R|π)/F(S*_π|π) > 0. In effect, if we obtained an approximation algorithm for RRP by increasing the budget to βL, for β > 1, then we would also approximate, with a budget increased by a factor of β, the HITTING SET problem, which is NP-hard for β < (1 − δ) ln |Π| and δ > 0 [Dinur and Steurer, 2014]. Figure 2: HITTING SET (left) and RRP reduction (right). 6 Connections to Knapsack Problems In this section, we establish connections between RRP and KNAPSACK problems, which are useful in our solutions. Monotonicity and Additivity.
Lemma 3 establishes that the cumulative reward function F(S_R|π) is monotone and additive with respect to the reward states S_R. These properties are crucial for evaluating the reward function while taking advantage of pre-computations. Lemma 3. The cumulative reward F(S_R|π) in Equation (1) is a monotone and additive function of the reward states S_R. Proof. Equation (1) yields the monotonicity of the cumulative reward function F(·|π): given a model π ∈ Π and two sets of reward states A ⊆ B ⊆ S, every term of F(A|π) is no greater than its corresponding term of F(B|π), due to Equation (2). For the additivity property, it suffices to show that any two sets of reward states A, B ⊆ S satisfy: F(A|π) + F(B|π) = F(A∪B|π) + F(A∩B|π). At time t = 0, r^0_A + r^0_B = r^0_{A∩B} + r^0_{A∪B}, where r^t_X denotes the cumulative reward at time t for the set of reward states X. Assume wlog that the equality holds at time t; it suffices to prove that the additivity property holds at t+1. At timestamp t+1, an agent at state s ∈ S moves to s′ ∈ S. We distinguish three cases: 1. If s′ ∉ A∪B, then s′ ∉ A∩B, s′ ∉ A and s′ ∉ B, thus additivity holds. 2. If s′ ∈ A∪B and s′ ∉ A∩B, then either s′ ∈ A or s′ ∈ B. Assume wlog that s′ ∈ A; then r^{t+1}_A = r^t_A + T[s, s′], r^{t+1}_{A∪B} = r^t_{A∪B} + T[s, s′], r^{t+1}_B = r^t_B and r^{t+1}_{A∩B} = r^t_{A∩B}. 3. If s′ ∈ A∩B, then s′ ∈ A and s′ ∈ B, so r^{t+1}_A = r^t_A + T[s, s′], r^{t+1}_B = r^t_B + T[s, s′], r^{t+1}_{A∪B} = r^t_{A∪B} + T[s, s′], and r^{t+1}_{A∩B} = r^t_{A∩B} + T[s, s′]. In all cases the cumulative reward function is additive.
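As a quick sanity check of Lemma 3, the modular identity F(A|π) + F(B|π) = F(A∪B|π) + F(A∩B|π) holds for any additive set function; the toy per-state rewards below are our own illustrative values, not data from the paper:

```python
# Sanity check of Lemma 3: an additive reward satisfies
# F(A) + F(B) == F(A | B) + F(A & B). Toy per-state rewards are ours.
f = {0: 2.0, 1: 5.0, 2: 1.0, 3: 4.0}

def F(S):
    # additive set function: sum of per-state rewards
    return sum(f[s] for s in S)

A, B = {0, 1, 2}, {1, 3}
print(F(A) + F(B), F(A | B) + F(A & B))  # -> 17.0 17.0
```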
Next, Lemma 4 states that RRP under a single model π (|Π| = 1), i.e., the maximization of F(S_R|π) within a budget L, is solved in pseudo-polynomial time, thanks to the additivity property in Lemma 3 and a reduction from the 0–1 KNAPSACK problem [Karp, 1972]. Lemma 4 also implies that we can find the optimal reward placement with the maximum expected reward by using a single expected setting π. Lemma 4. For a single model π (|Π| = 1) and a budget L, there is an optimal solution for RRP that runs in pseudo-polynomial time O(Ln). Proof. For each state s_i ∈ S we set an item u_i ∈ U with cost c(u_i) = c[s_i] and value F(u_i) = F({s_i}|π). Since the reward function is additive (Lemma 3), it holds that F(S_R|π) = Σ_{s_i∈S_R} F({s_i}|π) = Σ_{u_i : s_i∈S_R} F(u_i). Thus, we can optimally solve single-setting RRP in pseudo-polynomial time by using the dynamic programming solution for 0–1 KNAPSACK [Martello and Toth, 1987]. In the MAX–MIN 0–1 KNAPSACK problem (MNK), given a set of items U, each item u ∈ U having a cost c(u), and a collection of scenarios X, each scenario x ∈ X having a value F_x(u), we aim to determine a subset V ⊆ U that has total cost no more than a given budget L and maximizes the minimum total value across scenarios, i.e., arg max_V min_{x∈X} Σ_{u∈V} F_x(u). The following lemma reduces the RRP problem to MAX–MIN 0–1 KNAPSACK [Yu, 1996] in pseudo-polynomial time. Lemma 5. RRP is reducible to MAX–MIN 0–1 KNAPSACK in O(|Π|Ln) time. 7 Approximation Algorithm Here, we introduce Ψ-Saturate, a pseudo-polynomial time binary-search algorithm based on the Greedy-Saturate method [He and Kempe, 2016]. For any ϵ > 0, Ψ-Saturate returns an ϵ-additive approximation of the optimal solution by exceeding the budget constraint by a factor O(ln |Π|/ϵ). The Ψ-Saturate Algorithm. Algorithm 1 presents the pseudocode of Ψ-Saturate.
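A minimal sketch of the single-model solver of Lemma 4 (the Knapsack subroutine invoked in Lines 1-2 of Algorithm 1): a standard 0–1 knapsack dynamic program over the per-state rewards F({s_i}|π). The function name and toy data are ours, not the paper's code:

```python
# Standard 0-1 knapsack DP, as in Lemma 4: item i has cost c[s_i] and
# value F({s_i}|pi). Runs in O(Ln) time for budget L and n states.

def knapsack(costs, values, L):
    n = len(costs)
    dp = [0.0] * (L + 1)                 # dp[j]: best reward with budget j
    take = [[False] * (L + 1) for _ in range(n)]
    for i in range(n):
        for j in range(L, costs[i] - 1, -1):
            if dp[j - costs[i]] + values[i] > dp[j]:
                dp[j] = dp[j - costs[i]] + values[i]
                take[i][j] = True
    S, j = set(), L                      # backtrack to recover S*_pi
    for i in range(n - 1, -1, -1):
        if take[i][j]:
            S.add(i)
            j -= costs[i]
    return dp[L], S

best, S = knapsack([2, 3, 4], [3.0, 4.0, 5.0], L=5)
print(best, S)  # -> 7.0 {0, 1}
```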
As a first step, in Lines 1-2, the algorithm finds the optimal reward placement S*_π for each model π ∈ Π; this is needed to evaluate the denominator of the RRP objective in Equation (3). By Lemma 4, S*_π is computed in pseudo-polynomial time using the dynamic programming algorithm for the KNAPSACK problem. Then, in Lines 5-18, the algorithm executes a binary search over the range of the min–max objective ratio (Line 4). In each iteration, the algorithm makes a guess η of the optimal min–max objective value (Line 6), and then seeks a set of reward states S_R (Line 7) of minimum cost with score at least η (Line 8), within distance ϵ > 0. Finding an S_R of minimum cost would imply an optimal solution for the NP-hard RRP problem. In Lines 9-10, Ψ-Saturate therefore evaluates the function min(η, F(S_R|π)/F(S*_π|π)), which, for fixed π and η, is monotone and submodular, using the Greedy approximation algorithm of Wolsey [1982]. If the formed solution exceeds the budget constraint, the algorithm decreases the upper bound of the search scope (Lines 12-13); otherwise it increases the lower bound and updates the best solution S*_R (Lines 14-16). Finally, it returns the best solution found (Line 19). Following a proof analogous to Theorem 3 in the work of He and Kempe [2016], we derive Lemma 6, which states that Ψ-Saturate approximates the optimal value within distance ϵ when it exceeds the budget by a factor O(ln |Π|/ϵ), i.e., it offers a bicriteria approximation. Lemma 6. For any constant ϵ > 0, let β = 1 + ln(3|Π|/ϵ). Ψ-Saturate finds a reward placement S_R of cost at most βL with min_π F(S_R|π)/F(S*_π|π) ≥ min_π F(S*_R|π)/F(S*_π|π) − ϵ = OPT − ϵ, where S*_R = arg max_{S_R} min_π F(S_R|π)/F(S*_π|π) s.t.
Σ_{s∈S_R} c[s] ≤ L. Algorithm 1: Ψ-Saturate (Ψ stands for "pseudo", from the Greek word "ψευδής"). Input: MMMs Π, max steps K, budget L, precision ϵ, extra size parameter β. Output: reward placement S*_R of cost at most βL. 1: for π ∈ Π do 2: S*_π ← Knapsack(π, L) 3: end for 4: η_min ← 0, η_max ← 1, S*_R ← ∅ 5: while (η_max − η_min) ≥ ϵ do 6: η ← (η_max + η_min)/2 7: S_R ← ∅ 8: while Σ_{π∈Π} min(η, F(S_R|π)/F(S*_π|π)) < (η·|Π| − η·ϵ/3) do 9: s ← arg max_{s∈S\S_R} Σ_{π∈Π} (1/c(s)) · (min(η, F(S_R∪{s}|π)/F(S*_π|π)) − min(η, F(S_R|π)/F(S*_π|π))) 10: S_R ← S_R ∪ {s} 11: end while 12: if Σ_{s∈S_R} c[s] > βL then 13: η_max ← η 14: else 15: η_min ← η·(1 − ϵ/3) 16: S*_R ← S_R 17: end if 18: end while 19: return S*_R. Note that the minimum of the constant function η and the monotone additive function F(S_R|π)/F(S*_π|π) (Lemma 3) is monotone and submodular; the term F(S*_π|π) is constant, as it has been computed in Line 2. In contrast to the pseudo-polynomial time dynamic programming algorithm we opt for (Knapsack, Line 2), the Greedy-Saturate algorithm [He and Kempe, 2016] uses a simple Greedy algorithm to approximate the optimal reward placement S*_π (Lines 1-2). Greedy provides a 1/2-approximation of the optimal solution for a monotone additive function over a knapsack constraint [Johnson and Garey, 1979]. The reward function is monotone and additive (Lemma 3), thus the following corollary holds. Corollary 7.
For any constant ϵ > 0, let β = 1 + ln(3|Π|/ϵ). Greedy-Saturate finds a reward placement S_R of cost at most βL with min_π F(S_R|π)/F(S*_π|π) ≥ (1/2)·min_π F(S*_R|π)/F(S*_π|π) − ϵ = (1/2)·OPT − ϵ, where S*_R = arg max_{S_R} min_π F(S_R|π)/F(S*_π|π) s.t. Σ_{s∈S_R} c[s] ≤ L. (The Greedy algorithm iteratively selects the element, within the budget, that offers the maximal marginal gain divided by its cost.) Notably, for β = 1, Ψ-Saturate returns a non-constant approximation of the optimal solution within the budget constraint L. In particular, the next corollary holds. Corollary 8. For any constant ϵ > 0, let γ = 1 + ln(3|Π|/ϵ). For β = 1, Ψ-Saturate satisfies the budget constraint and returns a (1/γ)·(OPT′ − ϵ) approximation of the optimal solution, with OPT′ = max_{S_R} min_π F(S_R|π)/F(S*_π|π) s.t. Σ_{s∈S_R} c[s] ≤ L/γ. It is important to note that the approximation in Corollary 8 is non-constant, meaning that it can be arbitrarily small; this also follows from the inapproximability result of Theorem 2. However, the corollary indicates that if the optimal value for a smaller budget constraint is non-zero, then Ψ-Saturate with β = 1 provides an approximation of the optimal solution within the initial budget constraint L. 8 Heuristic Solutions Inspired by previous works on node selection in networks [He and Kempe, 2016; Zhang et al., 2020] and the connection of RRP with Knapsack problems, we propose four heuristic methods. For a single model (|Π| = 1) and under uniform costs (c[s] = c ∀s ∈ S), these four heuristics find an optimal solution. However, contrary to the Ψ-Saturate algorithm (Lemma 6), they may perform arbitrarily badly in the general multi-model case, even when exceeding the budget constraint.
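Under the simplifying assumptions of additive rewards and unit costs, the binary search of Algorithm 1 can be sketched as follows. This is our own illustrative Python, not the released implementation: `f[pi][s]` plays the role of F({s}|π), `opt[pi]` of F(S*_π|π), and with unit costs the check of Line 12 reduces to comparing |S_R| against βL:

```python
# Illustrative sketch of the Psi-Saturate binary search (Algorithm 1),
# assuming additive rewards and unit costs c[s] = 1, so beta*L bounds |S_R|.
# f[pi][s] stands for F({s}|pi); opt[pi] stands for F(S*_pi|pi).

def psi_saturate(f, opt, states, L, beta=1.0, eps=1e-3):
    n_models = len(f)

    def ratio(S, pi):
        return sum(f[pi][s] for s in S) / opt[pi]

    def truncated(S, eta):  # sum over models of min(eta, ratio)
        return sum(min(eta, ratio(S, pi)) for pi in range(n_models))

    lo, hi, best = 0.0, 1.0, set()
    while hi - lo >= eps:
        eta = (lo + hi) / 2  # guess of the optimal min-max value
        S = set()
        # greedily raise the truncated objective to eta*(|Pi| - eps/3)
        while truncated(S, eta) < eta * (n_models - eps / 3) and S != states:
            s_best = max(states - S,
                         key=lambda s: truncated(S | {s}, eta))
            S.add(s_best)
        if len(S) > beta * L:      # too costly: eta was too ambitious
            hi = eta
        else:                      # feasible: keep S and raise the bound
            lo, best = eta * (1 - eps / 3), S
    return best

f = [{0: 3.0, 1: 1.0, 2: 0.8}, {0: 0.4, 1: 1.0, 2: 3.0}]
opt = [4.0, 4.0]                   # per-model optima for budget L = 2
print(psi_saturate(f, opt, {0, 1, 2}, L=2))
```

On this toy instance the search settles on the robust placement {0, 2}, whose worst-case ratio (0.85) beats any placement tailored to a single model.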
To accelerate the selection process, we use the Lazy Greedy technique, which updates values selectively [Minoux, 1978], in all heuristics except the one using dynamic programming. All Greedy. The All Greedy method optimally solves the RRP problem for each model π ∈ Π separately, using the Knapsack dynamic programming algorithm (Lemma 4), and then picks, among the collected solutions, the one yielding the best value of the objective in Equation (3). All Greedy is optimal for a single model with an arbitrary cost function. Myopic. A greedy algorithm that iteratively chooses the reward state s* ∈ S, within the budget, that offers the maximal marginal gain ratio to the RRP objective divided by the cost, that is, s* = arg max_{s∈S\S_R} min_{π∈Π} (1/c[s]) · (F(S_R∪{s}|π) − F(S_R|π)) / F(S*_π|π). Best Worst Search (BWS). This algorithm initially assigns a score H(s) to each state s ∈ S, defined as the minimum cumulative reward when S_R = {s}, that is, H(s) = min_π F({s}|π). As a final step, BWS iteratively chooses the reward state s*, within the budget, that offers the maximal marginal gain to the score divided by the cost, that is, s* = arg max_{s∈S\S_R} (H(S_R∪{s}) − H(S_R)) / c[s]. Dynamic Programming (DP-RRP). In Lemma 5 we reduced RRP to MAX–MIN 0–1 KNAPSACK (MNK) in pseudo-polynomial time. While MNK admits an optimal solution using a pseudo-polynomial time dynamic programming algorithm, its running time grows exponentially with the number of settings |Π| [Yu, 1996]. To overcome this time overhead, we propose a more efficient, albeit non-optimal, dynamic-programming algorithm for the RRP problem, denoted DP-RRP. For a reward placement S_R, we denote the cumulative reward across settings as the following |Π|-tuple: g(S_R) = (F(S_R|π_1), F(S_R|π_2), . . . , F(S_R|π_|Π|)).
We use an (n + 1) × (L + 1) matrix M whose entries are |Π|-tuples of the form g(·). Let min g(S_R) = min_{π_i} F(S_R|π_i) be the minimum reward across the |Π| settings. We define the maximum of two entries g(S_R1) and g(S_R2) as arg max_{S_R∈{S_R1, S_R2}} min g(S_R), i.e., the one holding the largest minimum reward. We initialize M[·, 0] = M[0, ·] = (0, 0, . . . , 0) and recursively compute M[i, j] as follows: M[i, j] = max{M[i−1, j], M[i−1, j−c[i]] + g({i})}, (4) where M[i, j] stands for a solution using the first i states, in some arbitrary order, and j units of budget. In the recursion of Equation (4), the first option stands for not choosing state s_i as a reward state, while the latter stands for doing so, paying cost c[i] and gaining the additive reward g({i}). We compute M[n, L] as above in space and time complexity Θ(|Π|Ln) and backtrack over M to retrieve the selected reward states in the final solution. Note that, for a single model, i.e., |Π| = 1, and an arbitrary cost function, Equation (4) returns an optimal solution. Worst Case Performance. While all heuristics approach the optimal solution under a single setting, they may perform arbitrarily badly with multiple settings. In Lemma 9 we prove that this holds even when exceeding the budget constraint, in contrast to the Ψ-Saturate algorithm (Lemma 6). Lemma 9. The heuristics for RRP may perform arbitrarily badly even when they exceed the budget constraint from L to βL, with β = 1 + ln(3|Π|/ϵ) and ϵ > 0. Extensions. All algorithms work, without any modification, with rewards of arbitrary non-negative values, i.e., R[·] ∈ ℝ+, and when a partial solution is already given. 9 Experimental Analysis In this section we evaluate the running time and performance of the algorithms on synthetic and real-world data. We use different problem parameters; the default values of all parameters are indicated in Table 1.
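The recursion of Equation (4) above can be sketched as follows (our own illustrative Python; `g` holds the per-state reward tuples and the toy values are assumptions, not data from the paper). Because the max keeps only the tuple with the larger minimum, the DP is a heuristic, not exact, in the multi-model case, exactly as the text notes:

```python
# Sketch of DP-RRP (Equation 4): knapsack-style DP over |Pi|-tuples,
# keeping at each cell the tuple with the largest minimum reward.
# g[i] is the per-state tuple (F({s_i}|pi_1), ..., F({s_i}|pi_m)).

def dp_rrp(g, costs, L):
    n, m = len(g), len(g[0])
    zero = tuple(0.0 for _ in range(m))
    M = [[zero] * (L + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(L + 1):
            M[i][j] = M[i - 1][j]                     # skip state s_i
            if costs[i - 1] <= j:                     # or take it
                take = tuple(a + b for a, b in
                             zip(M[i - 1][j - costs[i - 1]], g[i - 1]))
                if min(take) > min(M[i][j]):
                    M[i][j] = take
    return M[n][L]

g = [(3.0, 1.0), (1.0, 1.0), (1.0, 3.0)]   # three states, two models
best = dp_rrp(g, [1, 1, 1], 2)
print(best, min(best))  # -> (4.0, 4.0) 4.0
```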
To satisfy the budget constraint, for the Ψ-Saturate algorithm we fix β = 1 as in Corollary 8 and precision ϵ = (|Π| · 10^3)^{-1}. We set the budget L as a percentage of the total cost Σ_{s∈S} c[s]. To benefit from the additivity property of Lemma 3, we precompute the cumulative reward F({s}|π) for each state s ∈ S and model π ∈ Π. We implemented all methods in C++17 (code: https://anonymous.4open.science/r/RRP-F6CA) and ran the experiments on a 376GB server with 96 CPUs @2.6GHz. 9.1 Generated Data We use two different types of synthetic datasets to represent the stochastic networks (MMMs). In each type we generate the graph and then sample edge-weights using a normal distribution to create different settings. In more detail: Erdős-Rényi: We generate 6 directed graphs of different sizes, as shown in Table 1. In all datasets, we preserve the same out-degree (default is 6), thus we modify the probability of creating an edge accordingly. Scale-Free: We generate 6 directed scale-free graphs of different sizes, as shown in Table 1. We use the model of Bollobás et al. [2003], which introduces three parameters to construct the network: p_α (p_γ) is the probability of adding a new node connected to an existing node chosen randomly by its in-degree (out-degree), and p_β is the probability of adding an edge (u, v), with u and v selected by in-degree and out-degree respectively. Table 1: Setting parameters — n: 2500, 5000, 7500, 10000 (default), 12500; |Π|: 2, 5, 10, 15, 20; K: 2, 4, 6, 8, 10; L: 10%, 25%, 50%, 75%; Erdős-Rényi ⟨d⟩: 3, 6, 9, 12 (default 6); Scale-Free p_β: 0.6, 0.7, 0.8 (default), 0.9. In all datasets we tune p_β (default is 0.8), such that p_α + p_β + p_γ = 1 and p_α = 2p_γ. For a fixed graph structure we further generate |Π| = 20 distinct settings corresponding to different models.
To do that, we sample edge-weights using 20 different normal distributions (one for each setting) with the same mean value, set as 1/(# of out-neighbors), while varying the standard deviation. Whenever a sampled value is negative, we arbitrarily set the edge-weight to zero. Each of the graphs is directed and edge-weighted, thus the transition probabilities T correspond to the normalized edge-weights. Moreover, we set the initial probabilities I proportional to the sum of out-weights. Finally, we set the cost of each node as the average number of its in-neighbors (rounded down) across the distinct settings. Time and Performance. Figure 3 plots the average preprocessing and running time over 20 runs for all algorithms as the size of the graph and the budget increase. Notably, precomputations take superlinear time with respect to the size n, as the time complexity needed for the power iteration is O(Kn) in sparse networks, where K is the maximum number of steps. Moreover, the linear growth of the runtime with graph size (n) and budget indicates the efficiency of all algorithms except the DP-RRP approach, whose time complexity is at least quadratic in n (Θ(|Π|Ln) when L = Ω(n)). Figure 3: Preprocessing and Running Time vs. n, L for Erdős-Rényi (left) and Scale-Free (right) datasets. Figure 4 shows how the algorithms perform on average over 20 runs as different parameters vary. We observe that, for the Erdős-Rényi dataset, the Ψ-Saturate algorithm outperforms all
heuristic techniques in all settings, while on the Scale-Free graphs DP-RRP has the best overall performance. Figure 4: Performance on Erdős-Rényi (top) and Scale-Free (bottom) datasets. For a single setting, |Π| = 1, all heuristics find an almost optimal solution. As expected, the performance of all algorithms decreases as we increase the number of models |Π|, since the adversary has a larger pool of models to select from. By analogy, an increase in the number of steps increases the cumulative reward, as it expands the feasible movements of the agents, leading to a decrease of the min ratio. In contrast, the larger the budget is, the higher the score algorithms can achieve. Intuitively, more expensive reward placements offer higher cumulative reward, and consequently the min-ratio objective is determined by those scores. This is more evident in the increase in performance on the Scale-Free dataset, as there are fewer nodes with high in-degree. This is experimentally confirmed, as the growth of ⟨d⟩ results in random networks with uniform degree distribution, while simultaneously the cumulative reward increases. In addition, larger p_β values result in networks with a more skewed power-law in-degree distribution, rendering the problem more intricate. 9.2 Real-World Data To further validate our algorithms, we create graphs using real-world movement data. We gathered movement records from Baidu Map, covering Xuanwu District in Nanjing from July 2019 to September 2019.
These records consist of sequential Points of Interest (POIs) with timestamps, allowing us to calculate the probability of transitioning between POIs under the Markovian assumption. Using these probabilities, we construct graphs where nodes represent POIs and edges depict the transition probabilities. Each graph represents a 7-day period, resulting in a total of 13 graphs. The combined dataset contains a total of 51,943 distinct nodes. Our study also introduces practicality by considering data-driven ad placement in the city: we assign costs to POIs based on their visit frequency and a fixed value, c[x] = ⌊frequency(x)/25 + 50⌋. The initial and steps probabilities follow the same default setup as the synthetic datasets. (Nanjing: https://en.wikipedia.org/wiki/Nanjing) Figure 5: Time, Score vs. L for the Xuanwu Dataset. Time and Performance. The preprocessing time for the Xuanwu dataset (N = 51,943, |Π| = 13) takes 118 seconds. Figure 5 presents the running time and the performance of the algorithms as the budget constraint increases. DP-RRP is the most time-consuming, followed by the Ψ-Saturate method, while the BWS strategy remains the most efficient solution. The score plot does not follow an uptrend as the budget grows, since our min-ratio objective function of Equation (3) is not a monotone function of the budget. In terms of effectiveness, DP-RRP consistently outperforms all algorithms under all budget constraints, indicating its ability to uncover high-quality solutions even in hard scenarios, while the performance of Ψ-Saturate and the other heuristic algorithms fluctuates.
10" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05615v1.json b/abs_9K/test_abstract_short_2405.05615v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c169ac17cdb4cd587ca7a79cad45fa259e5fa06c --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05615v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.05615v1", + "title": "Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning", + "abstract": "Current solutions for efficiently constructing large vision-language (VL)\nmodels follow a two-step paradigm: projecting the output of pre-trained vision\nencoders to the input space of pre-trained language models as visual prompts;\nand then transferring the models to downstream VL tasks via end-to-end\nparameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits\ninefficiency since it significantly increases the input length of the language\nmodels. In this paper, in contrast to integrating visual prompts into inputs,\nwe regard visual prompts as additional knowledge that facilitates language\nmodels in addressing tasks associated with visual information. Motivated by the\nfinding that Feed-Forward Network (FFN) of language models acts as \"key-value\nmemory\", we introduce a novel approach termed memory-space visual prompting\n(MemVP), wherein visual prompts are concatenated with the weights of FFN for\nvisual knowledge injection. Experimental results across various VL tasks and\nlanguage models reveal that MemVP significantly reduces the training time and\ninference latency of the finetuned VL models and surpasses the performance of\nprevious PEFT methods. 
Code: https://github.com/JieShibo/MemVP", + "authors": "Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Current solutions for efficiently constructing large vision-language (VL)\nmodels follow a two-step paradigm: projecting the output of pre-trained vision\nencoders to the input space of pre-trained language models as visual prompts;\nand then transferring the models to downstream VL tasks via end-to-end\nparameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits\ninefficiency since it significantly increases the input length of the language\nmodels. In this paper, in contrast to integrating visual prompts into inputs,\nwe regard visual prompts as additional knowledge that facilitates language\nmodels in addressing tasks associated with visual information. Motivated by the\nfinding that Feed-Forward Network (FFN) of language models acts as \"key-value\nmemory\", we introduce a novel approach termed memory-space visual prompting\n(MemVP), wherein visual prompts are concatenated with the weights of FFN for\nvisual knowledge injection. Experimental results across various VL tasks and\nlanguage models reveal that MemVP significantly reduces the training time and\ninference latency of the finetuned VL models and surpasses the performance of\nprevious PEFT methods. Code: https://github.com/JieShibo/MemVP", + "main_content": "Introduction Recently, the investigation of pre-trained foundation models has achieved remarkable success in the fields of both computer vision and natural language processing (Touvron et al., 2023; OpenAI, 2023; Tang et al., 2024; Radford et al., 2021), thereby fostering advancements in vision-language (VL) models. 
1 School of Intelligence Science and Technology, Peking University; 2 Huawei Noah's Ark Lab; 3 National Key Laboratory of General Artificial Intelligence. Correspondence to: Yunhe Wang, Zhi-Hong Deng, Kai Han. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). It has been found that VL models can be
efficiently constructed upon off-the-shelf pre-trained vision encoders and language models (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Li et al., 2023; Liu et al., 2023b). Figure 1. Illustration of PEFT methods using (a) the conventional input-space visual prompting and (b) our memory-space visual prompting. MemVP outperforms previous paradigms in terms of performance, training speed, and inference speed. The de-facto paradigm to combine them involves projecting the outputs of vision encoders, i.e., image features, to visual prompts within the input space of the language models via a linear projection or a resampler. Subsequently, the language models concatenate the visual prompts with the text embedding tokens and process them as a whole. Nevertheless, the scale of both vision models and language models is experiencing exponential growth, e.g., ViT-G (Zhai et al., 2022) has 1.8B parameters and LLaMA (Touvron et al., 2023) has up to 70B parameters. Therefore, both pre-training and fine-tuning their combinations, with a vast number of parameters, for downstream VL tasks become prohibitively expensive in terms of training and storage resources. To mitigate this challenge, parameter-efficient fine-tuning (PEFT) methods incorporate lightweight modules (e.g., adapters (Houlsby et al., 2019), LoRA (Hu et al., 2022)) into the models, and/or select a small subset of pre-trained parameters (e.g., bias, normalization). During fine-tuning, only these modules and selected parameters are updated. Prior studies (Sung et al., 2022; Luo et al., 2023) have demonstrated that, even without resource-intensive VL pre-training, the combinations of vision encoders and language models can still be transferred to downstream VL tasks via PEFT while matching the performance of full fine-tuning.
1 arXiv:2405.05615v1 [cs.CV] 9 May 2024 \fMemory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning Although such \u201cinput-space visual prompting & PEFT\u201d paradigm proves efficient for training and storage, its mechanism of visual prompts still limits the inference and training efficiency. For instance, the average length of the text inputs is only 6.3 in VQAv2 (Goyal et al., 2017) dataset and 81 in ScienceQA (Lu et al., 2022) dataset, whereas the number of visual tokens can be up to 256 in LLaVA (Liu et al., 2023b). Consequently, in many scenarios, the input tokens of the language models are mostly visual tokens, thereby significantly amplifying the computation cost during training and inference. In this paper, we aim to explore an alternative manner for integrating the visual information into language models for downstream VL tasks, which is intended to not only be parameter-efficient but also facilitate fast training and inference. Existing research (Geva et al., 2021) has found that, the Feed-Forward Network (FFN) of language models acts as key-value memory that stores factual association as knowledge, e.g., \u201cStrawberries are red\u201d could be such knowledge stored in FFNs. Inspired by this, we infer that the visual information also contains vision-related factual association that is not included in the memory of language models, e.g., the language models do not realize \u201cThe fruits in the image are red\u201d. Therefore, it is necessary to inject such external knowledge into language models to enable them to tackle vision-related tasks. Since FFN is the main carrier of knowledge, we can put the visual information in the memory space of language models, i.e., weights of FFN, instead of input space, thus avoiding extending the input length. Based on this motivation, we propose Memory-Space Visual Prompting (MemVP), a PEFT framework for adapting pretrained vision encoders and language models to downstream VL tasks. 
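At a high level, the memory-space injection can be sketched as follows. This is our own toy pseudo-implementation, not the released MemVP code; the exact parameterization (e.g., whether the same position-embedded prompt serves as both the extra key and the extra value, and the activation used) is a simplifying assumption:

```python
# Toy sketch of memory-space visual prompting: position-embedded visual
# prompts are appended to the FFN weights as extra key-value memory slots,
# so the token sequence itself never grows (illustrative, not MemVP code).

def ffn(x, keys, values):
    """FFN as key-value memory: out_i = sum_j values[j][i] * relu(keys[j]·x)."""
    acts = [max(0.0, sum(k_i * x_i for k_i, x_i in zip(k, x))) for k in keys]
    return [sum(values[j][i] * acts[j] for j in range(len(acts)))
            for i in range(len(x))]

d = 4                                        # toy hidden size
W1 = [[0.10] * d for _ in range(8)]          # frozen pretrained keys
W2 = [[0.05] * d for _ in range(8)]          # frozen pretrained values

# visual prompts: projected image features + trainable position embeddings
vis = [[0.20] * d for _ in range(2)]
pos = [[0.01] * d for _ in range(2)]
prompts = [[v + p for v, p in zip(vp, pp)] for vp, pp in zip(vis, pos)]

# MemVP idea: concatenate the prompts to both the keys and the values
keys = W1 + prompts
values = W2 + prompts

x = [1.0, 0.5, -0.5, 0.25]
y = ffn(x, keys, values)
assert len(y) == len(x)   # hidden size, and hence input length, is unchanged
```

The design point this illustrates: the visual information enters through the FFN's memory slots rather than through extra input tokens, so the attention cost over the sequence is unaffected.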
As shown in Figure 1, MemVP first projects the features extracted by vision encoders to the dimension of language models as visual prompts. The position-embedded visual prompts are concatenated with the weight matrices of the fully-connected (FC) layers in each FFN block of the language models. During fine-tuning, we freeze most parameters of the vision encoders and language models; only the VL projection layers and position embeddings are tunable. Without extending the inputs, MemVP introduces only a very small amount of extra parameters and computation to the language models, and is thus more efficient during training and inference.

To evaluate the efficiency and effectiveness of MemVP, we conduct experiments across various downstream VL benchmarks, including visual question answering on VQAv2, GQA (Hudson & Manning, 2019), and ScienceQA, and image captioning on COCO Captions (Chen et al., 2015). Additionally, we evaluate MemVP on language models with different scales and architectures, including BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) with an encoder-decoder architecture, as well as decoder-only LLaMA-7B and LLaMA-13B. MemVP demonstrates superior performance compared to previous PEFT baselines, while achieving remarkable acceleration for both training and inference.

2. Related Work

2.1. Vision-Language Models

In the field of VL learning, many different model architectures have been proposed to meet the requirements of different VL tasks, such as dual-encoder (Radford et al., 2021), fusion-encoder (Tan & Bansal, 2019; Li et al., 2021; Kim et al., 2021; Dou et al., 2022), encoder-decoder (Cho et al., 2021; Wang et al., 2022b; Chen et al., 2023; Wang et al., 2022a; Li et al., 2022; 2023; Liu et al., 2023b), etc.
Recently, the rapid advancement of large language models has prompted a growing number of researchers to regard VL tasks as a process of visual-conditioned text generation, and to focus on how to involve visual information in off-the-shelf pre-trained language models. For example, BLIP (Li et al., 2022) and Flamingo (Alayrac et al., 2022) insert new cross-attention layers into the language models to interact with visual features; Frozen (Tsimpoukelli et al., 2021), LLaVA (Liu et al., 2023b), and PaLI (Chen et al., 2023) use the vision encoder to generate visual prompts as the inputs of language models. BLIP-2 (Li et al., 2023) also uses a large Q-former as a resampler to reduce the length of visual prompts.

2.2. Parameter-Efficient Fine-Tuning for VL Alignment

PEFT has already been widely studied in the fields of vision (Rebuffi et al., 2017; Chen et al., 2022; Zhang et al., 2022; Lian et al., 2022; Jie & Deng, 2023; Jie et al., 2023), language (Houlsby et al., 2019; Pfeiffer et al., 2021; Hu et al., 2022; Zaken et al., 2022; Liu et al., 2021), and multimodality (Sung et al., 2022; Hu et al., 2023; Luo et al., 2023; Zhang et al., 2023b; Jiang & Zheng, 2023; Lu et al., 2023). In particular, based on pre-trained vision encoders and language models, VL models can be trained in a parameter-efficient manner. Many studies focus on PEFT of such assembled VL models on downstream tasks. VL-Adapter (Sung et al., 2022) and VL-PET (Hu et al., 2023) project the image features as visual prompts, and fine-tune the projector and PEFT modules inserted in the T5 or BART models. Differently, LLaMA-Adapter (Zhang et al., 2023a) concatenates the visual prompts with the hidden states of LLaMA's intermediate layers. LaVIN (Luo et al., 2023) inserts adapters in both the vision encoder and LLaMA, and introduces a routing mechanism for adapters.
Through PEFT, it becomes possible to train VL models using off-the-shelf uni-modal models with less time and GPU memory. However, it is noteworthy that these studies do not take computation efficiency into account, which is one of the main contributions of our paper.

Figure 2. Training and inference time of LLaMA-7B on a single V100. The training process adopts PEFT in which we only tune LoRA modules. The training batch size and inference batch size are 4 and 16, respectively, to maximize utilization of GPU memory. We also highlight the position when the text token length is 64 w/ and w/o input-space visual prompts (2.6× slower training and 4.8× slower inference with visual prompts). The length of visual prompts is 256 as in LLaVA. We fix the output length to 1.

2.3. Memory of Language Models

Geva et al. (2021) discover that the FFN of pre-trained language models is essentially a key-value memory which stores factual associations. Based on this finding, Dai et al. (2022) locate and edit knowledge in language models by replacing certain rows of the matrices of FFN with the embedding of the object. Meng et al. (2022) edit the located factual knowledge by adding new key-value pairs to FFN. Dai et al. (2023) expand the size of FFN with extra keys and values as a knowledge bank. Cheng et al. (2023) replace FFN in language models with differentiable plug-in key-value memory for interpretability. However, current works only focus on pure language models, without exploring the potential of visual information as external factual knowledge.

3.
Revisiting Visual Prompts in VL Models

Current VL models mostly adopt a common architecture, including a pre-trained vision encoder, a pre-trained language model, and a module that bridges the two components. An efficient bridging module could be one or several FC layers that project the features of the images into the input space of the language model as visual prompts. Although the VL projection of visual prompts is parameter-efficient, it is not computation-efficient enough for training and inference. To obtain fine-grained local visual information, the visual prompts are usually projected from patch features of images, which contain a considerably large number of tokens. For example, LLaVA (Liu et al., 2023b) uses ViT-L/14 as the vision encoder, which involves 256 tokens to express each image. The additional visual prompts significantly increase the length of the input sequence, leading to more computation during training and inference.

To what extent do the visual prompts affect the computation speed? We show the inference speed across different lengths of input and output on LLaMA-7B in Figure 2. The computational complexity is $O(L^2 d + L d^2)$ for Multi-Head Self-Attention (MHSA) and $O(L d D)$ for FFN, in which $L$, $d$, and $D$ are the length of the token sequence, the dimension of tokens, and the hidden dimension of the FFN, respectively. For example, after applying the visual prompts with 256 tokens to LLaMA-7B as in LLaVA, the training and inference latency of the language model part increases to 2.6× and 4.8× on text with an input length of 64 and an output length of 1.

Are there alternative solutions that use fewer visual tokens? BLIP-2 (Li et al., 2023) uses a Q-former as a resampler to reduce the number of visual tokens, which compresses the length of visual prompts from 256 to 32. Flamingo (Alayrac et al., 2022) uses a single token as the visual prompt, and inserts a new resampler and cross-attention layers to interact with visual features.
Although reducing the sequence length, these methods introduce hundreds of millions, or even billions, of new parameters, which necessitate large-scale VL pre-training. Therefore, we have to perform expensive VL pre-training again when switching to new pre-trained vision encoders or language models. Moreover, since the new modules are large, the training process cannot be parameter-efficient enough to reduce memory and time costs. Also, the large new modules still bring considerably more computation. Overall, to obtain VL models that are efficient during both training and inference, we need a new paradigm for concatenating pre-trained vision encoders and language models, which i) introduces negligible new parameters and extra computation; and ii) performs well under PEFT on downstream VL tasks.

4. Memory-Space Visual Prompting

4.1. Preliminary: Reformulation of FFN

The standard FFN of language models is composed of two FC layers with a non-linear activation in between. Supposing $x \in \mathbb{R}^d$ is an input token of the FFN, the FFN can be formulated as:

$\mathrm{FFN}(x) = \phi(x W_1) W_2,$   (1)

in which $\phi$ is an activation such as ReLU or GELU, and $W_1 \in \mathbb{R}^{d \times D}$ and $W_2 \in \mathbb{R}^{D \times d}$ are the weight matrices of the two FC layers.
Note that $W_1$ and $W_2$ can be rewritten as:

$W_1 = (k_1, k_2, \ldots, k_D), \quad W_2 = (v_1, v_2, \ldots, v_D)^\top,$   (2)

in which $k_i \in \mathbb{R}^d$ and $v_i \in \mathbb{R}^d$ are entries of keys and values, respectively. Then, the FFN can be rewritten as

$\mathrm{FFN}(x) = \sum_{i=1}^{D} \phi(\langle x, k_i \rangle) \cdot v_i.$   (3)

Therefore, the FFN can be interpreted as using the input $x$ as a query to calculate its similarity with keys, and gathering values based on the similarity. Previous work has found that the FFN acts as a key-value memory storing factual knowledge (Geva et al., 2021).

Figure 3. Overview of the mainstream paradigms to concatenate vision encoder and language model. (a) Concatenating visual prompts with the text tokens as inputs of the language model is not computation-efficient, e.g., LLaVA, VL-Adapter, VL-PET. (b) Using cross-attention layers to incorporate the visual information from visual tokens is not parameter-efficient, e.g., Flamingo, BLIP. (c) Our MemVP injects visual prompts into the FFN blocks of language models, achieving both parameter and computation efficiency.

4.2. FFN with Visual Prompting

As illustrated in Figure 3, in conventional input-space visual prompting, the image features are projected to the prefix of the input as context for text generation. Since increasing the input length leads to inefficiency, we avoid using extra visual tokens, and thus all the visual information needs to be contained in textual tokens. A solution to incorporating visual information is to let the textual tokens retrieve information from the visual features.
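The key-value reading of Eq (3) is just the standard two-layer FFN of Eq (1) written column by column. A minimal NumPy sketch (toy dimensions and random weights, not the paper's models) can confirm the equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 32                       # token dim and FFN hidden dim (toy sizes)
x = rng.standard_normal(d)         # one input token
W1 = rng.standard_normal((d, D))   # columns k_1..k_D act as keys
W2 = rng.standard_normal((D, d))   # rows v_1..v_D act as values

relu = lambda t: np.maximum(t, 0.0)

# Standard form, Eq (1): FFN(x) = phi(x W1) W2
ffn_out = relu(x @ W1) @ W2

# Memory view, Eq (3): sum_i phi(<x, k_i>) * v_i
mem_out = sum(relu(x @ W1[:, i]) * W2[i] for i in range(D))

assert np.allclose(ffn_out, mem_out)
```

The same identity holds for any element-wise activation, which is what licenses reading the FFN as a soft key-value lookup.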
Previous works like Flamingo and BLIP perform retrieval via cross-attention layers, which can be formulated as

$\mathrm{XAttn}(x) = \mathrm{softmax}\left(\frac{x W_q W_k^\top Z^\top}{\sqrt{d}}\right) Z W_v W_o^\top,$   (4)

in which $x \in \mathbb{R}^d$ is a textual token and $Z = (z_1, z_2, \ldots, z_n)^\top \in \mathbb{R}^{n \times d'}$ is the visual features. However, the cross-attention layer introduces a large amount of new parameters, i.e., $W_{q/k/v/o}$, which is far from parameter efficiency and brings considerable additional computation. Note that the cross-attention essentially performs a soft look-up using the query $x W_q$ from the key-value pairs $(Z W_k, Z W_v)$ and outputs the weighted average of the retrieved values. Inspired by the fact that the FFN also performs a similar retrieval from its key-value memory, we consider a more simplified and efficient retrieval process for visual features:

$\mathrm{Retrieval}(x) = \sum_{i=1}^{n} \phi(\langle x, K(z_i) \rangle) \cdot V(z_i),$   (5)

in which $K(z_i), V(z_i) \in \mathbb{R}^d$ are the key and value corresponding to $z_i$. This formulation shares a similar form with Eq (3). Since the size $D$ of the FFN's key-value memory is usually much larger than the number of visual features $n$ ($D = 11008$ in LLaMA-7B and $n = 256$ for ViT-L/14), the computation of retrieving visual features is insignificant. Therefore, we do not introduce new cross-attention layers as in previous work, but perform such retrieval along with the FFN instead. From the perspective of the FFN, we regard the $(K(z_i), V(z_i))$ as new memory entries that complement vision-related knowledge that language models used to lack. The new visual key-value entries are inserted into the memory:

$\mathrm{FFN}(x) = \sum_{i=1}^{D} \phi(\langle x, k_i \rangle) \cdot v_i + \sum_{i=1}^{n} \phi(\langle x, K(z_i) \rangle) \cdot V(z_i).$   (6)

As for $K$ and $V$, they should realize two key functions: i) aligning the dimension between the visual feature $z_i \in \mathbb{R}^{d'}$ and the textual token $x \in \mathbb{R}^d$, and ii) identifying the position of each entry in the visual input.
We use a projector $f$, which could be one or several FC layers, to project the visual features to the dimension of the textual token as a visual prompt. The projector is shared between $K$ and $V$ for parameter efficiency. The projected visual features are then added with position embeddings,

$K(z_i) = \lambda f(z_i) + p^k_i, \quad V(z_i) = \lambda f(z_i) + p^v_i,$   (7)

in which $\lambda$ is a hyperparameter and $p^k, p^v \in \mathbb{R}^{n \times d}$ are position embeddings for visual prompts inserted into keys and values, respectively.

To implement Eq (6), the position-embedded visual prompts are inserted into the memory as new key-value entries. For the FFN block, the weight matrices are modified to

$W'_1 = (k_1, k_2, \ldots, k_D, \lambda f(z_1) + p^k_1, \ldots, \lambda f(z_n) + p^k_n),$
$W'_2 = (v_1, v_2, \ldots, v_D, \lambda f(z_1) + p^v_1, \ldots, \lambda f(z_n) + p^v_n)^\top.$   (8)

Since the visual prompts are concatenated with the FFN weights, which actually act as memories, we call the proposed new paradigm memory-space visual prompting (MemVP). Besides the standard FFN above, which is widely used in small and middle-scale language models, large language models usually adopt Gated Linear Units (GLU) to enhance the FFN for better performance. For instance, LLaMA uses SwiGLU in the FFN:

$\mathrm{FFN}(x) = (\mathrm{SiLU}(x W_1) \otimes x W_3) W_2.$   (9)

Supposing $W_3 = (g_1, \ldots, g_D)$, Eq (9) can be rewritten as

$\mathrm{FFN}(x) = \sum_{i=1}^{D} \mathrm{SiLU}(\langle x, k_i \rangle) \cdot \langle x, g_i \rangle \cdot v_i,$   (10)

where $\langle x, g_i \rangle$ can be viewed as matching the query with another key.
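Concatenating the visual entries into the weight matrices as in Eq (8) is arithmetically the same as adding the retrieval term of Eq (6) to the original FFN output. A small NumPy sketch (toy sizes; random stand-ins for $\lambda f(z_i) + p^k_i$ and $\lambda f(z_i) + p^v_i$) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, n = 8, 32, 4                 # token dim, FFN hidden dim, number of visual entries (toy)
x = rng.standard_normal(d)         # one textual token
W1 = rng.standard_normal((d, D))   # pre-trained keys k_1..k_D (columns)
W2 = rng.standard_normal((D, d))   # pre-trained values v_1..v_D (rows)
K = rng.standard_normal((n, d))    # visual keys, stand-ins for lambda*f(z_i) + p^k_i
V = rng.standard_normal((n, d))    # visual values, stand-ins for lambda*f(z_i) + p^v_i

relu = lambda t: np.maximum(t, 0.0)

# Eq (6): original FFN output plus retrieval from the visual entries
separate = relu(x @ W1) @ W2 + relu(x @ K.T) @ V

# Eq (8): the same visual entries concatenated into the FFN weight matrices
W1_new = np.concatenate([W1, K.T], axis=1)   # d x (D + n)
W2_new = np.concatenate([W2, V], axis=0)     # (D + n) x d
merged = relu(x @ W1_new) @ W2_new

assert np.allclose(separate, merged)
```

This is why MemVP needs no architectural change to the language model at inference time: the visual prompts simply widen the FFN's hidden dimension from $D$ to $D + n$.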
For an FFN using GLU, we simply let the second key entry corresponding to each visual prompt be $\frac{x}{\|x\|_2^2}$, i.e., we modify $W_3$ to

$W'_3 = \left(g_1, g_2, \ldots, g_D, \frac{x}{\|x\|_2^2}, \ldots, \frac{x}{\|x\|_2^2}\right),$   (11)

which is equivalent to omitting the second key when looking up the visual knowledge, so as to avoid involving more parameters, i.e.,

$\mathrm{FFN}(x) = \sum_{i=1}^{D} \mathrm{SiLU}(\langle x, k_i \rangle) \cdot \langle x, g_i \rangle \cdot v_i + \sum_{i=1}^{n} \mathrm{SiLU}(\langle x, \lambda f(z_i) + p^k_i \rangle) \cdot (\lambda f(z_i) + p^v_i).$   (12)

In this paradigm, only the projector and position embeddings are newly introduced, which are negligible compared with the large size of the pre-trained models. During fine-tuning, we can freeze the parameters of the vision encoders and language models, and fine-tune only these new parameters. From another perspective, the added key and value entries can be regarded as the two fully-connected layers of a vision-conditioned adapter for PEFT. Therefore, in practice, we also adopt some design philosophy of adapters (Luo et al., 2023). First, we set the length of the position embedding as a hyperparameter to control the number of trainable parameters. We allow the length of the position embedding to be longer than the visual prompts, in which case we simply zero-pad the visual prompt to align their lengths. Second, we add another scaling factor to the retrieval results as a hyperparameter to control their magnitude.

4.3. Complexity Analysis

We consider a language model layer that is composed only of MHSA and FFN blocks. For simplicity, we omit the bias terms and normalization layers. Let $L$, $d$, and $n$ denote the length of the token sequence, the dimension of tokens, and the length of visual prompts, respectively. The FLOPs of MHSA and FFN are $8Ld^2 + 4L^2d$ and $16Ld^2$, respectively. We use $\mathrm{FLOPs}_{\mathrm{LM}}$, $\mathrm{FLOPs}_{\mathrm{VP}}$, and $\mathrm{FLOPs}_{\mathrm{MemVP}}$ to denote the FLOPs of a single transformer layer in the language model without visual prompts, with input-space visual prompts, and with memory-space visual prompts, respectively.
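The choice of $\frac{x}{\|x\|_2^2}$ in Eq (11) works because its inner product with $x$ is exactly 1, so the extra GLU gate becomes a no-op for the visual entries and Eq (10) collapses to the ungated form of Eq (12). A one-line NumPy check (arbitrary nonzero vector) confirms this:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0, 0.7])    # any nonzero token vector
g_vis = x / (np.linalg.norm(x) ** 2)   # second key assigned to visual entries, Eq (11)

# The gate <x, g_vis> evaluates to 1, so SiLU(<x, k>) * <x, g_vis> * v reduces to SiLU(<x, k>) * v
assert np.isclose(x @ g_vis, 1.0)
```

Note the chosen key depends on the current token $x$, so it is computed per token rather than stored as a fixed weight; the point of the construction is precisely that no new trainable parameters are needed for the gate path.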
Then we have

$\mathrm{FLOPs}_{\mathrm{LM}} = 4Ld(6d + L).$   (13)

For the previous manner, which uses input-space visual prompting, the length of the input sequence becomes $L + n$. The additional FLOPs of a layer are then

$\mathrm{FLOPs}_{\mathrm{VP}} - \mathrm{FLOPs}_{\mathrm{LM}} = 4nd(6d + n + 2L).$   (14)

Whereas for MemVP, the length of the input is unchanged, and only the hidden dimension of the FFN is increased. The additional FLOPs of a layer are

$\mathrm{FLOPs}_{\mathrm{MemVP}} - \mathrm{FLOPs}_{\mathrm{LM}} = 4ndL.$   (15)

Since current VL models basically satisfy $d \gg n$, and for VL tasks we have $n > L$ in most cases, $\mathrm{FLOPs}_{\mathrm{VP}}$ is multiple times $\mathrm{FLOPs}_{\mathrm{LM}}$, but the difference between $\mathrm{FLOPs}_{\mathrm{LM}}$ and $\mathrm{FLOPs}_{\mathrm{MemVP}}$ can be ignored. For other architectures such as encoder-decoder models, MemVP mainly reduces the FLOPs of the encoder part. Overall, MemVP is computation-efficient for VL tasks on various language model architectures.

5. Experiments

In all the experiments, we follow prior works (Sung et al., 2022; Hu et al., 2023; Luo et al., 2023) in adopting a fast and economic adaptation setting, i.e., the resource-intensive VL pre-training stage is not incurred. Although VL pre-training is already widely used nowadays, our setting has practical significance since it enables low-cost deployment on new foundation models, considering the rapid evolution of language models.

5.1. Experiments on BART & T5

Datasets and Baselines. For visual question answering, we evaluate our method on VQAv2 (Goyal et al., 2017) and GQA (Hudson & Manning, 2019); for image captioning, we evaluate on COCO Captions (Chen et al., 2015). All these tasks are regarded as text generation tasks which directly output the answers in an open-ended space. Note that, different from previous work (Sung et al., 2022; Hu et al., 2023) using a multi-task learning setting where the VQA tasks benefit from the concurrently trained captioning data, we fine-tune MemVP and all the baselines on each dataset individually. We compare MemVP with baselines using the previous input-space visual prompting, including the current state-of-the-art PEFT methods on BART and T5, VL-Adapter (Sung et al., 2022) and VL-PET (Hu et al., 2023), as well as representative PEFT methods designed for language models: Compacter (Mahabadi et al., 2021) and LoRA (Hu et al., 2022). We also report the results of fully fine-tuning the language models with input-space visual prompting.

Table 1. Results on VQAv2, GQA, and COCO Captions. "FLOPs" denotes the average FLOPs in language models on the test set. We report average performance over three runs on the Karpathy test split for VQAv2 and COCO Captions, and on the test-dev split for GQA. All the baseline results are reproduced using the official code of VL-PET (Hu et al., 2023).

Method | #Trainable Params (M/task) | VQAv2 Score | VQAv2 FLOPs (G) | GQA Score | GQA FLOPs (G) | COCO CIDEr | COCO FLOPs (G) | Average Score
BART-base:
Full Fine-Tuning | 141.16 | 65.4 | 4.8 | 53.1 | 5.3 | 110.6 | 6.4 | 76.4
Compacter | 3.87 | 64.2 | 4.9 | 52.3 | 5.4 | 115.3 | 6.5 | 77.3
LoRA | 3.92 | 64.8 | 4.8 | 52.2 | 5.3 | 115.1 | 6.4 | 77.4
VL-Adapter | 3.87 | 65.5 | 4.9 | 53.7 | 5.4 | 114.3 | 6.5 | 77.8
VL-PET | 3.84 | 65.3 | 5.0 | 53.9 | 5.5 | 120.3 | 6.6 | 79.8
MemVP (Ours) | 3.78 | 65.2 | 1.2 | 55.1 | 1.8 | 120.2 | 2.8 | 80.2
T5-base:
Full Fine-Tuning | 224.54 | 64.3 | 9.4 | 52.0 | 10.8 | 112.6 | 12.9 | 76.3
Compacter | 6.11 | 65.5 | 9.6 | 53.6 | 11.0 | 113.4 | 13.2 | 77.5
LoRA | 6.05 | 63.3 | 9.4 | 50.8 | 10.8 | 113.9 | 12.9 | 76.0
VL-Adapter | 6.10 | 65.6 | 9.6 | 54.4 | 11.0 | 113.4 | 13.2 | 77.8
VL-PET | 6.07 | 65.4 | 9.8 | 54.6 | 11.3 | 121.2 | 13.4 | 80.4
MemVP (Ours) | 6.00 | 65.7 | 2.3 | 56.0 | 3.8 | 120.8 | 5.8 | 80.8

Figure 4. Left: Training time, training memory, and inference time of T5-base on VQAv2. The per-GPU batch sizes for training and inference are 64 and 512, respectively. Measured on V100 GPUs. Right: Average score vs. FLOPs of BART-base on the three datasets. The visual prompts of VL-PET are downsampled to reduce the input length.

Implementation Details.
Following previous work (Sung et al., 2022; Hu et al., 2023), we use a ResNet-101 pre-trained via CLIP (Radford et al., 2021) to pre-extract image features. The resolution of input images is 224 × 224. The visual encoder is frozen during fine-tuning, and the PEFT modules are only inserted into the language model. For the language part, we use BART-base (Lewis et al., 2020) and T5-base (Raffel et al., 2020) with an encoder-decoder architecture. For our MemVP, the grid features before global average pooling are projected to visual prompts via a single FC layer, and the visual prompts are only injected into the FFN blocks of the language encoders. Additionally, we also unfreeze the layer normalization of the language models. We train on each dataset for 20 epochs with batch size 8 × 64 and report performance on the test set. The hyperparameters of all methods are summarized in the Appendix.

Results and Analyses. As shown in Table 1, our MemVP achieves average performance better than the current state-of-the-art PEFT method, VL-PET, and much better than the other baselines. However, the FLOPs in the language models of MemVP are only 23%-44% of those of the other baselines. To exhibit the advantage of shorter inputs, we compare the training speed, training memory, and inference speed of all methods on VQAv2 in Figure 4 (left). Compared with VL-PET, MemVP is 1.7× faster during training and 1.4× faster during inference, while using only 56% of the training memory. Although PEFT only unlocks a small number of parameters for training, the gradient still needs to be propagated back through the whole language model, leading to considerable time and memory consumption. Therefore, the time and memory costs during training and inference are profoundly affected by the FLOPs in the language models. MemVP relieves the time and memory burden of fine-tuning by directly reducing FLOPs, suggesting that computation efficiency is also crucial in designing PEFT methods.
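The FLOPs expressions in Section 4.3 make the gap concrete. A quick sketch with LLaVA-like numbers (here we assume $d = 4096$ for a LLaMA-7B-sized model, $n = 256$ visual tokens, and $L = 64$ text tokens; these values are illustrative, not measured) shows why input-space prompting dominates the per-layer cost while the memory-space overhead is marginal:

```python
# Per-layer FLOPs from Eqs (13)-(15): MHSA = 8Ld^2 + 4L^2d, FFN = 16Ld^2
d, n, L = 4096, 256, 64   # token dim (LLaMA-7B-like), visual prompt length, text length

flops_lm = 4 * L * d * (6 * d + L)           # Eq (13): no visual prompts
extra_vp = 4 * n * d * (6 * d + n + 2 * L)   # Eq (14): input-space visual prompts
extra_memvp = 4 * n * d * L                  # Eq (15): memory-space visual prompts

print(f"input-space overhead:  {extra_vp / flops_lm:.2f}x of the base layer cost")
print(f"memory-space overhead: {extra_memvp / flops_lm:.4f}x of the base layer cost")
```

With these numbers the input-space overhead is several times the base layer cost, while the memory-space overhead stays around one percent of it, consistent with the $d \gg n > L$ regime the paper describes.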
Furthermore, we compare MemVP with a straightforward strategy to reduce the length of the visual prompt: 2D adaptive pooling. As illustrated in Figure 4 (right), after pooling the visual prompt, the input-space prompting methods suffer from obvious performance degradation, implying that the fine-grained local information is lost in this process. By contrast, MemVP can use long visual prompts without extending the input, thus outperforming the baselines in terms of efficiency.

Figure 5. Visual knowledge locating. The similarity values between bold text tokens and keys of visual knowledge are averaged over all layers.

Figure 6. Visual knowledge distortion. Left: Inputs of the model; Middle: Original similarity between text tokens and keys of visual knowledge; Right: Distorted similarity. The values in the red rectangle are set to 0.

Visualization. We conduct experiments to verify our main motivation, i.e., that visual information can be inserted into the memories of language models as external knowledge. If the model acquires the visual knowledge successfully, we should observe that i) the visual knowledge related to the text inputs is retrieved, and ii) when the model fails to retrieve the correct knowledge under manual distortion, it outputs the corresponding wrong contents. In Figure 5, we visualize the similarity between queries and keys, i.e., $\phi(\langle x, \lambda f(z_i) \rangle)$ in Eq (6), of BART-base fine-tuned on VQAv2. We find that the text tokens have a high similarity with the keys of related visual knowledge entries, implying that the corresponding values are retrieved. For instance, when asking the model "What is in the sky?", the model retrieves knowledge entries around the plane; when asked "What is the color of the sky?", the model retrieves knowledge entries of the background. Moreover, we find that different words in the input sentence have different preferences, e.g., when asking the model "What is the man on the far right side wearing?", the "man" token retrieves the knowledge entries that contain men, and the "right" token retrieves the entries on the right side of the image. Next, we try distorting the knowledge by editing the query-key similarity.
As in the example in Figure 6, when asking the model "How many black horses do you see?", the model mainly retrieves the entries containing the black horse. Then, we manually block the retrieval of the two most responsive entries by setting $\phi(\langle x, \lambda f(z_i) \rangle) = 0$. As a result, the model outputs "0" since it fails to obtain knowledge about the existence of the black horse. Overall, these observations verify that the visual information is actually inserted into the memory and directs the outputs of language models.

5.2. Experiments on LLaMA

Datasets and Baselines. We use a challenging VQA task, ScienceQA (Lu et al., 2022), to evaluate our method. ScienceQA is a large-scale science question-answering dataset compiled from diverse knowledge domains. We compare MemVP with other LLaMA-based fine-tuned models with input-space visual prompting, including LLaVA (Liu et al., 2023b), LLaMA-Adapter (Zhang et al., 2023a), and LaVIN (Luo et al., 2023). We also provide results of LLaVA equipped with LoRA. All these methods adopt a one-stage paradigm, i.e., directly generating the answers end-to-end without multi-stage chain-of-thought (CoT) prompting (Zhang et al., 2023c). We adopt the training recipe used by Luo et al. (2023) and train each method for 20 epochs. All these methods use a ViT-L/14 pre-trained via CLIP as the visual encoder. We also report zero-shot results of GPT-4 (OpenAI, 2023).

Implementation Details. Following LLaVA (Liu et al., 2023b), MemVP and LLaVA-LoRA use the 256 patch features before the last layer of ViT-L/14 and project them as visual prompts. Differently, LaVIN and LLaMA-Adapter stack 6 global features (i.e., [CLS] tokens of ViT) selected from different intermediate layers as much shorter visual prompts. The projectors of MemVP, LaVIN, and LLaVA-LoRA are two FC layers with a non-linear activation in between.
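The distortion probe amounts to zeroing the activations of the most responsive visual entries before their values contribute to the FFN output. A minimal sketch of that intervention (the similarity scores below are made up for illustration, not the paper's data):

```python
import numpy as np

def block_top_entries(sims, k=2):
    """Zero the k most responsive visual-knowledge activations, as in the Figure 6 probe."""
    blocked = sims.copy()
    blocked[np.argsort(sims)[-k:]] = 0.0   # set phi(<x, lambda f(z_i)>) = 0 for the top-k entries
    return blocked

sims = np.array([0.1, 2.3, 0.0, 1.7, 0.2])  # hypothetical phi(<x, lambda f(z_i)>) values, n = 5
blocked = block_top_entries(sims, k=2)

# the two largest activations (2.3 and 1.7) are removed; the remaining ones are untouched
assert blocked.max() <= 0.2 and np.isclose(blocked.sum(), 0.3)
```

In the full model, the zeroed activations would then multiply the corresponding values $V(z_i)$ in Eq (6), removing exactly those entries' contribution to generation.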
Since LaVIN also inserts adapters in the visual encoder, we adopt a comparable strategy for MemVP and LLaVA-LoRA for a fair comparison. Specifically, we introduce parallel adapters to the FFN of the vision encoder following previous work (Chen et al., 2022). Moreover, since LLaMA has many more layers and a larger dimension than BART and T5, we also share the position embedding of MemVP across different layers for parameter efficiency. For the samples that do not have image inputs, we simply set the visual prompts of MemVP to zero tensors, and only insert the position embedding.

Results and Analyses. As shown in Table 2, our MemVP significantly outperforms all the baseline PEFT methods on both LLaMA-7B and LLaMA-13B. LLaVA-LoRA performs better than LaVIN and LLaMA-Adapter, indicating that VL models benefit from the local visual information in longer visual prompts.

Table 2. Accuracy on the ScienceQA test set. Question categories: NAT = natural science, SOC = social science, LAN = language science, TXT = w/ text context, IMG = w/ image context, NO = no context, G1-6 = grades 1-6, G7-12 = grades 7-12. † denotes our reproduced results. Other results are quoted from their original papers.

Method | #Trainable Params | Language Model | VL Pre-Train | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | Average
Human | - | - | - | 90.23 | 84.97 | 87.48 | 89.60 | 87.50 | 88.10 | 91.59 | 82.42 | 88.40
GPT-4 (0-shot) | - | GPT-4 | - | 84.06 | 73.45 | 87.36 | 81.87 | 70.75 | 90.73 | 84.69 | 79.10 | 82.69
LLaVA | 7B | Vicuna-7B | √ | - | - | - | - | - | - | - | - | 89.84
LLaVA | 13B | Vicuna-13B | × | - | - | - | - | - | - | - | - | 85.81
LLaVA | 13B | Vicuna-13B | √ | 90.36 | 95.95 | 88.00 | 89.49 | 88.00 | 90.66 | 90.93 | 90.90 | 90.92
PEFT methods:
LLaMA-Adapter | 1.8M | LLaMA-7B | × | 84.37 | 88.30 | 84.36 | 83.72 | 80.32 | 86.90 | 85.83 | 84.05 | 85.19
LLaVA-LoRA† | 4.4M | LLaMA-7B | × | 91.70 | 94.60 | 86.09 | 91.25 | 90.28 | 88.64 | 91.52 | 89.65 | 90.85
LaVIN | 3.8M | LLaMA-7B | × | 89.25 | 94.94 | 85.24 | 88.51 | 87.46 | 88.08 | 90.16 | 88.07 | 89.41
MemVP (Ours) | 3.9M | LLaMA-7B | × | 94.45 | 95.05 | 88.64 | 93.99 | 92.36 | 90.94 | 93.10 | 93.01 | 93.07
LaVIN | 5.4M | LLaMA-13B | × | 90.32 | 94.38 | 87.73 | 89.44 | 87.65 | 90.31 | 91.19 | 89.26 | 90.50
MemVP (Ours) | 5.5M | LLaMA-13B | × | 95.07 | 95.15 | 90.00 | 94.43 | 92.86 | 92.47 | 93.61 | 94.07 | 93.78

Table 3. Training and inference time. Measured on 8 × A800 GPUs without memory-saving or speed-up techniques (e.g., flash attention). The per-GPU batch size is 4 for training and 64 for inference.

Method | Length of Visual Prompt | #Trainable Params | Training Time (s/batch) | Inference Time (s/batch)
LLaVA-LoRA 7B | 256 | 4.4M | 0.49 | 3.42
LaVIN 7B | 6 | 3.8M | 0.39 | 2.06
MemVP 7B | 256 | 3.9M | 0.28 | 1.88
MemVP 13B | 256 | 5.5M | 0.46 | 3.07

Table 4. Ablation experiments on ScienceQA. "Average" and "IMG" denote the accuracy on the whole test set and on the IMG subset, respectively.

Settings | Average | IMG | #Trainable Params (M)
MemVP 7B | 93.07 | 92.36 | 3.9
w/o visual prompts | 85.33 | 76.05 | 3.3
visual features: local → global | 89.01 | 84.18 | 3.9
position embedding: add → concat | 89.79 | 86.07 | 3.9
insert visual prompts in keys only | 91.94 | 90.23 | 3.9
insert visual prompts in values only | 92.78 | 92.36 | 3.9
Notably, MemVP also beats LLaVA, a fully fine-tuned model with VL pre-training, in average accuracy as well as on 7 out of 8 subsets. Besides, we also compare the training and inference speed of different PEFT methods in Table 3. In spite of the long visual prompts, MemVP is still 1.4× faster than LaVIN during training, since the routing mechanism of LaVIN slows training. LLaVA-LoRA, which also uses local visual prompts in the input space, is 1.75× and 1.8× slower than MemVP in training and inference, respectively. Overall, memory-space prompting exhibits a remarkable advantage in computational efficiency. To demonstrate the effectiveness of the components of MemVP, we conduct comprehensive ablation experiments. As in Table 4, when we insert the position embedding without adding visual prompts into the language model, its performance on the IMG subset degrades significantly, since the language model cannot obtain the visual knowledge. We note that using global features as in LaVIN leads to a drop in performance due to the loss of local information. We also attempt to concatenate the position embedding with the visual prompts instead of adding to them, where the visual prompts will not acquire hard-coded position information but the number of trainable parameters remains unchanged. The degraded performance indicates the importance of position information for visual prompts since the text inputs may be location-related. When only inserting visual prompts in keys or values, the model performs worse in both cases."
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05691v1.json b/abs_9K/test_abstract_short_2405.05691v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d6f2870f27821f2fb5b07baa5aa7ebd461c9abda --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05691v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05691v1", + "title": "StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework", + "abstract": "Thanks to the powerful generative capacity of diffusion models, recent years\nhave witnessed rapid progress in human motion generation. Existing\ndiffusion-based methods employ disparate network architectures and training\nstrategies. The effect of the design of each component is still unclear. In\naddition, the iterative denoising process consumes considerable computational\noverhead, which is prohibitive for real-time scenarios such as virtual\ncharacters and humanoid robots. For this reason, we first conduct a\ncomprehensive investigation into network architectures, training strategies,\nand inference process. Based on the profound analysis, we tailor each\ncomponent for efficient high-quality human motion generation. Despite the\npromising performance, the tailored model still suffers from foot skating which\nis a ubiquitous issue in diffusion-based solutions. To eliminate footskate, we\nidentify foot-ground contact and correct foot motions along the denoising\nprocess. By organically combining these well-designed components together, we\npresent StableMoFusion, a robust and efficient framework for human motion\ngeneration. Extensive experimental results show that our StableMoFusion\nperforms favorably against current state-of-the-art methods.
Project page:\nhttps://h-y1heng.github.io/StableMoFusion-page/", + "authors": "Yiheng Huang, Hui Yang, Chuanchen Luo, Yuxi Wang, Shibiao Xu, Zhaoxiang Zhang, Man Zhang, Junran Peng", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Thanks to the powerful generative capacity of diffusion models, recent years\nhave witnessed rapid progress in human motion generation. Existing\ndiffusion-based methods employ disparate network architectures and training\nstrategies. The effect of the design of each component is still unclear. In\naddition, the iterative denoising process consumes considerable computational\noverhead, which is prohibitive for real-time scenarios such as virtual\ncharacters and humanoid robots. For this reason, we first conduct a\ncomprehensive investigation into network architectures, training strategies,\nand inference process. Based on the profound analysis, we tailor each\ncomponent for efficient high-quality human motion generation. Despite the\npromising performance, the tailored model still suffers from foot skating which\nis a ubiquitous issue in diffusion-based solutions. To eliminate footskate, we\nidentify foot-ground contact and correct foot motions along the denoising\nprocess. By organically combining these well-designed components together, we\npresent StableMoFusion, a robust and efficient framework for human motion\ngeneration. Extensive experimental results show that our StableMoFusion\nperforms favorably against current state-of-the-art methods.
Project page:\nhttps://h-y1heng.github.io/StableMoFusion-page/", + "main_content": "Introduction Human motion generation aims to generate natural, realistic, and diverse human motions, which could be used for animating virtual characters or manipulating humanoid robots to imitate vivid and rich human movements without long-time manual motion modeling and professional skills [1, 4, 37]. (*Corresponding authors: Man Zhang (zhangman@bupt.edu.cn) and Junran Peng (jrpeng4ever@126.com).) It shows great potential in the fields of animation, video games, film production, human-robot interaction, etc. Recently, the application of diffusion models to human motion generation has led to significant improvements in the quality of generated motions [3, 28, 37].

Table 1. StableMoFusion achieves superior performance on motion generation compared to other state-of-the-art methods. Lower FID and higher R Precision are better.
Method | FID↓ | R Precision (top3)↑
MDM [28] | 0.544 | 0.611
MLD [3] | 0.473 | 0.772
MotionDiffuse [37] | 0.630 | 0.782
ReMoDiffuse [38] | 0.103 | 0.795
StableMoFusion (Ours) | 0.098 | 0.841

Figure 1. Comparison of the inference time costs on motion generation. The closer the model is to the origin, the better.

Despite the notable progress made by diffusion-based motion generation methods, their development is still hindered by several fragmented and underexplored issues: 1) Lack of Systematic Analysis: these diffusion-based motion generation works usually employ different network architectures and training pipelines, which hinders cross-method integration and the adoption of advancements from related domains. 2) Long Inference Time: due to the time-consuming iterative sampling process, most existing methods are impractical for applications with virtual characters and humanoid robots, where real-time responsiveness is crucial. 3) Footskate Issue: foot skating (footskate) in generated motions remains a major concern.
This significantly undermines the quality of generated motions and limits their practical applicability. Therefore, in order to fill these research gaps and enhance the effectiveness and reliability of diffusion-based motion generation in practical applications, our study conducts a comprehensive and systematic investigation into network architectures, training strategies, and the inference process. Our investigation is specifically directed towards text-conditional motion generation, as text prompts are arguably the most promising format for practical application and the most convenient input modality among various conditional signals. Ultimately, we present a robust and efficient framework for diffusion-based motion generation, called StableMoFusion, as illustrated in Figure 2. In StableMoFusion, we use a Conv1D UNet with AdaGN and linear cross-attention as the motion-denoising network, and improve its generalization capability with a GroupNorm tweak. During training, we employ two effective strategies to enhance the network's ability to generate motion. During inference, we use four training-free acceleration tricks to achieve efficient inference. Furthermore, we present a footskate cleanup method based on a mechanical model and optimization. Extensive experiments demonstrate that StableMoFusion achieves an excellent trade-off between text-motion consistency and motion quality compared to other state-of-the-art methods, as shown in Table 1. Meanwhile, StableMoFusion's efficient inference process notably reduces the minimum number of iterations required for generation from 1,000 to 10 and yields shorter inference times than methods of comparable performance, achieving an average inference time of 0.5 seconds on the HumanML3D test set, as shown in Figure 1. In addition, our footskate cleanup method within the diffusion framework largely resolves the foot skating problem of motion generation, as shown in Section 5.4.
Our major contributions can be summarized as follows:
• We perform a systematic evaluation and analysis of the design of each component in the diffusion-based motion generation pipeline, including network architectures, training strategies, and the inference process.
• We propose an effective mechanism to eliminate foot skating, which is a common issue in current methods.
• By consolidating these well-designed components, we present a robust and efficient diffusion-based motion generation framework named StableMoFusion. Extensive experiments demonstrate its superiority in text-motion consistency and motion quality.
2. Related Work 2.1. Motion Diffusion Generation In recent years, the application of diffusion models to human motion generation has led to significant improvements in the quality of generated motions. MotionDiffuse [37] softly fuses text features into diffusion-based motion generation through cross-attention. MDM [28] experimented with separate Transformer encoder, Transformer decoder, and GRU denoising networks, respectively. PhysDiff [35] incorporates physical constraints to generate more realistic motions; Prior MDM [24] uses diffusion priors to allow the model to be applied to specific generative tasks; MLD [3] utilizes the latent space of a VAE to speed up diffusion generation; ReMoDiffuse [38] uses a retrieval mechanism to enhance the motion diffusion model. All of these methods use Transformer-based network structures, while MoFusion [4] and GMD [13] use a Conv1D UNet for motion diffusion generation. Our work moves towards a more robust and efficient diffusion-based motion generation framework through a comprehensive investigation into network architectures, training strategies, and the inference process. It also addresses the practical application challenges of long inference time and the footskate phenomenon. 2.2. Training-Free Sampling To reduce the inference time with a trained network, there have been many advanced samplers to accelerate DDPM [8]. Song et al.
[26] show that the Stochastic Differential Equation (SDE) used for sampling has a marginally equivalent probability flow Ordinary Differential Equation (ODE). Then, DDIM [25] constructs a class of non-Markovian diffusion processes that realize skip-step sampling. PNDM [16] uses pseudo numerical methods to accelerate the deterministic sampling process. DEIS [39] and DPM-Solver [17] improve upon DDIM by numerically approximating the score functions within each discretized time interval. Meanwhile, several works have focused on speeding up stochastic sampling. For example, Gotta Go Fast [11] utilizes adaptive step sizes to speed up SDE sampling, and Lu et al. [18] convert the higher-order ODE solver into an SDE sampler to address the instability issue. While these samplers have demonstrated efficacy in image generation, their impact on motion diffusion models remains unexplored. In this work, we evaluate them to find the most appropriate one for motion generation. 2.3. Footskate Cleanup In order to generate realistic motions in computer animation, various methods have been developed to mitigate the footskate issue. EDGE [29] embeds a foot contact term into the motion representation for training and applies a Contact Consistency Loss as a constraint to keep motions physically plausible. RFC [34], Drop [10] and PhysDiff [35] use reinforcement learning to constrain the physical states of motions, such as ground reaction forces and collisions, to obtain realistic motion. UnderPressure [19] and GroundLink [7] each collect datasets of foot forces during motion. UnderPressure [19] also utilizes this dataset to train a network capable of predicting vertical ground reaction forces. Based on this, UnderPressure proposes a foot skating cleanup method. 3.
Preliminaries The pipeline of diffusion models [8] involves three interconnected processes: a forward process that gradually diffuses noise into a sample, a reverse process that optimizes a network to eliminate the above perturbation, and an inference process that utilizes the trained network to iteratively denoise a noisy sample. Specifically, a motion-denoising network is first trained to predict the original motion x_0 from the noisy motion x_t: we randomly select a ground-truth motion x_0 and a diffusion timestep t ∼ U[0, T], with T being the maximum timestep. The noisy motion x_t after t diffusion steps is then obtained by Equation 1,

x_t = √(ᾱ_t) x_0 + √(1 − ᾱ_t) ε  (1)

where ε is a Gaussian noise, and √(ᾱ_t) and √(1 − ᾱ_t) are the strengths of the signal and the noise, respectively. When √(ᾱ_t) is small enough, we can approximate x_t ∼ N(0, I). Next, given a motion-denoising model G_θ(x_t, t, c) for predicting the original sample, parameterized by θ, the optimization can be formulated as follows:

min_θ E_{t∼U[0,T], x_0∼p_data} ||G_θ(x_t, t, c) − x_0||²₂  (2)

In the inference process, a trained motion-denoising network can progressively generate samples from noise with various samplers. For instance, DDPM [8] iteratively denoises the noisy data from t to a previous timestep t′, as shown in Algorithm 1.

Algorithm 1 Inference
Given a text prompt c
x_T ∼ N(0, I)
for t = T to 1 do
  x̃_0 = G(x_t, t, c)
  ε ∼ N(0, I) if t > 1, else ε = 0
  x_{t−1} = (√(ᾱ_{t−1}) β_t / (1 − ᾱ_t)) x̃_0 + (√(α_t)(1 − ᾱ_{t−1}) / (1 − ᾱ_t)) x_t + √(β̃_t) ε
end for
return x_0

4. Method Through comprehensive exploratory experiments conducted on diffusion-based motion generation, we propose a novel diffusion framework, named StableMoFusion, as illustrated in Figure 2, to facilitate robust and efficient motion generation.
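The forward process (Equation 1) and the DDPM denoising loop (Algorithm 1) can be sketched in a few lines of NumPy. This is a toy, unconditional illustration with a stand-in x_0-predicting network, not the paper's implementation; the linear β schedule follows the values given in Section 5.2:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative products, abar_t

def forward_diffuse(x0, t):
    """Equation 1: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def ddpm_sample(G, shape, c=None):
    """Algorithm 1: iterative denoising with an x0-predicting network G."""
    x = rng.standard_normal(shape)
    for t in range(T - 1, -1, -1):
        x0_hat = G(x, t, c)
        abar_prev = alpha_bars[t - 1] if t > 0 else 1.0
        beta_tilde = (1.0 - abar_prev) / (1.0 - alpha_bars[t]) * betas[t]
        eps = rng.standard_normal(shape) if t > 0 else 0.0
        x = (np.sqrt(abar_prev) * betas[t] / (1.0 - alpha_bars[t]) * x0_hat
             + np.sqrt(alphas[t]) * (1.0 - abar_prev) / (1.0 - alpha_bars[t]) * x
             + np.sqrt(beta_tilde) * eps)
    return x
```

With this schedule, ᾱ_T is small enough that x_T is effectively pure Gaussian noise, matching the approximation x_T ∼ N(0, I) above.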
This section begins with our investigation on the architecture of motion-denoising networks. Next, we discuss several training strategies pivotal for enhancing model performance in Section 4.2. Subsequently, we introduce our improvements in the inference process in Section 4.3, tailored to enable efficient inference. Lastly, we discuss and present a solution to the footskate issue in Section 4.4. 4.1. Model Architecture Most existing works use Transformer [30]-based architectures as the motion-denoising network; however, it remains questionable whether these architectures are best for diffusion-based motion generation. In this subsection, we present three network architectures fine-tuned for the motion generation task: Conv1D UNet [4, 13], Diffusion Transformer (DiT) [20], and the latest Retentive Network (RetNet) [27]. 4.1.1 Conv1D UNet Baseline We choose the Conv1D UNet with AdaGN [5] and skip connections in GMD [13] as the Conv1D UNet baseline and modify the structure to a canonical UNet, which consists of four downsampling stages. The motion length is successively reduced from N to ⌊N/8⌋, and the corresponding up-sampling phase is then used to upsample. There are two residual Conv1D blocks in each down-sampling or up-sampling stage, with a single block shown in Figure 3 (a). Block Adjustment We introduce Residual Linear Multi-Head Cross-Attention after each block to effectively integrate textual cues, and dropout is incorporated into the original Conv1D block to enhance model generalization, as shown in Figure 3 (b).

Figure 2. Overview of StableMoFusion, which is composed of a diffusion forward process, a reverse process on the CondUNet1D motion-denoising network, and an efficient inference process. The colors of the arrows indicate different stages: blue for training, red for inference, and black for both.

Figure 3. Visualization of the block structures and their adjustments for Conv1D UNet, DiT and RetNet. Pink blocks indicate structures that have been added or modified.

In the baseline block, text prompts are encoded with timesteps and integrated into motion coding using a simple formula: x · (1 + scale) + shift. However, this approach doesn't effectively incorporate textual cues into motion sequences because it applies uniform operations across the entire sequence. In diffusion pipelines, noise uniformly affects the entire sequence, resulting in consistent mappings between motion frames and timesteps. However, since each frame's motion corresponds to distinct textual cues, a straightforward “scale and shift” approach is insufficient for injecting textual information. Our solution employs an attention mechanism to dynamically focus each motion frame on its associated textual information. Residual connections help mitigate potential computation biases introduced by cross-attention. GroupNorm Tweak We rearrange the data before and after applying Group Normalization, as depicted in Figure 3 (b), to minimize the impact of padded data during network forward propagation.
When testing the adapted Conv1D UNet on datasets like KIT-ML with varying sequence lengths, we noticed a significant performance drop. This suggests that the model struggles with datasets containing extensive padding. Further investigation revealed that implementing Group Normalization within the baseline block caused this issue. Since Conv1D operates along the temporal dimension, directly applying Group Normalization to the input disrupted the differentiation between padded and non-padded data, affecting loss computation and gradient descent. 4.1.2 Diffusion Transformer Baseline To explore the effectiveness of the DiT structure for motion generation, we replace the Vision Transformer used for images in DiT with self-attention over motion data as the baseline, with the basic block structure shown in Figure 3 (c). For text-to-motion generation, we embed text prompts via the CLIP [22] encoder and project them into a token concatenated with motion embeddings for self-attention. It scales and shifts the motion embedding before and after each autoregressive computation using the timestep, which keeps the motion-denoising trajectory closely aligned with the timestep. Block Adjustment We have also tried to incorporate Linear Multi-Head Cross-Attention into the DiT framework, as shown in Figure 3 (d). This adjustment allows for a more nuanced fusion of textual cues with motion dynamics than fusing all the text information into a one-dimensional text embedding as in the baseline, which enhances the coherence and relevance of generated motion sequences. 4.1.3 Retentive Network Baseline Our RetNet baseline follows a straightforward implementation similar to MDM, where the timestep encoding is concatenated with the textual projection to form tokens, which are then fed along with motion embeddings into RetNet, with its basic block shown in Figure 3 (e).
RetNet incorporates a gated multi-scale retention mechanism, which enhances information retention and processing capabilities, thereby enabling nuanced comprehension and generation of motion sequences. Through our investigation, we aim to ascertain the feasibility of leveraging RetNet for motion generation tasks. Block Adjustment To further integrate textual information, we also employ Linear Multi-Head Cross-Attention between the retention and FFN modules, as shown in Figure 3 (f). By segregating temporal and textual features, our approach aims to preserve the distinct characteristics of each modality and allow the model to independently learn and leverage relevant cues for motion generation. This separation enhances the model's interpretability and flexibility, enabling it to better capture the intricacies of both temporal dynamics and semantic context. 4.1.4 Final Model Architecture Ultimately, we choose the Conv1D UNet with the block adjustment and GroupNorm tweak as the motion-denoising model of StableMoFusion, as shown in Figure 2. We call this network CondUNet1D. Both DiT and RetNet use the idea of attention to activate a global receptive field in the temporal dimension, which benefits the modeling of long-range dependency. The receptive field of the Conv1D UNet is mainly in the convolution kernel window, promoting coherent and smooth transitions between frames. We tend to prioritize smoother generation in current applications of motion generation. In our StableMoFusion, we set the base channel and channel multipliers of the UNet to 512 and [2,2,2,2], respectively. For the text encoder, we leverage pre-trained CLIP [22] token embeddings, augmenting them with four additional transformer encoder layers, the same as MotionDiffuse [37], with a latent text dimension of 256. The timestep encoder is implemented using position encoding and two linear layers, the same as Stable Diffusion [23], with a latent time dimension of 512. 4.2.
Training Strategies Recent research has shown that key factors in the training strategies of diffusion models affect the learning pattern and the generative performance [2]. In this subsection, we analyze the impact of two empirically valid training strategies on diffusion-based motion generation: exponential moving average and classifier-free guidance. 4.2.1 Exponential Moving Average An Exponential Moving Average (EMA) calculates a weighted average of a series of model weights, giving more weight to recent data. Specifically, denoting the weights of the model at iteration t as θ_t, the EMA is formulated as: v_t = β · v_{t−1} + (1 − β) · θ_t, where v_t denotes the average of the network parameters over the first t iterations (v_0 = 0), and β is the weighting coefficient. During the training of the motion-denoising network, the network parameters change with each iteration, and the motion modeling oscillates between text-motion consistency and motion quality. Therefore, the use of EMA can smooth out the evolution of these parameters, reduce abrupt changes and oscillations, and help to improve the stability of the motion-denoising model. 4.2.2 Classifier-Free Guidance To further improve the generation quality, we use Classifier-Free Guidance (CFG) to train the motion-denoising generative model. By training the model to learn both conditioned and unconditioned distributions (e.g., setting c = ∅ for 10% of the samples), CFG ensures that the model can effectively capture the underlying data distribution across various conditions. At inference, we can trade off text-motion consistency and fidelity using the guidance scale s:

G_s(x_t, t, c) = G(x_t, t, ∅) + s · (G(x_t, t, c) − G(x_t, t, ∅))  (3)

This ability to balance text-motion consistency and fidelity is crucial for producing varied yet realistic outputs, enhancing the overall quality of generated motion. 4.3.
Efficient Inference Time-consuming inference remains a major challenge for diffusion-based approaches. To address this problem, we improve inference speed by integrating four efficient and training-free tricks into the inference process: 1) an efficient sampler, 2) an embedded-text cache, 3) parallel CFG computation, and 4) low-precision inference. 4.3.1 Efficient Sampler We integrate the SDE variant of the second-order DPM-Solver++ sampler (SDE DPM-Solver++ 2M) into diffusion-based motion generation to reduce denoising iterations. DPM-Solver is a high-order solver for diffusion stochastic differential equations (SDEs), which implies that additional noise is introduced during the iterative sampling. Thereby, the stochasticity of its sampling trajectories helps to reduce the cumulative error [33], which is crucial for the realism of generated motion. In addition, we adopt Karras sigmas [12] to set the discrete timesteps. This method leverages the theory of constant-velocity thermal diffusion to determine optimal timesteps, thereby maximizing the efficiency of motion denoising within a given number of iterations. 4.3.2 Embedded-text Cache We integrate an embedded-text cache mechanism into the inference process to avoid redundant calculations. In diffusion-based motion generation, the text prompt remains unchanged across iterations, resulting in the same embedded text in each computation of the denoising network. Specifically, we compute the text embedding once at the beginning and subsequently utilize the embedded text directly in each network forward pass, thereby reducing computational redundancy and speeding up inference.

Figure 4. Red: the foot joints at the 0th frame. Green: the corresponding keypoints. At the 5th frame, the offset between the red and green points indicates the footskate phenomenon.

4.3.3 Parallel CFG Computation We implement the inference process of CFG in parallel to speed up the single-iteration calculation while maintaining model generation performance.
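A batched form of this parallel CFG step can be sketched as follows (an illustrative NumPy sketch with a stand-in denoiser `G`, not the released code; `null_emb` denotes a hypothetical embedding of the empty condition ∅):

```python
import numpy as np

def parallel_cfg_step(G, x_t, t, text_emb, null_emb, s=2.5):
    """One guided denoising step with the conditional and unconditional
    branches batched into a single forward pass of the denoiser G."""
    x_pair = np.stack([x_t, x_t])            # duplicate the noisy motion
    c_pair = np.stack([text_emb, null_emb])  # condition vs. empty condition
    out_cond, out_uncond = G(x_pair, t, c_pair)  # one batched call
    # Equation 3: G_s = G(∅) + s * (G(c) - G(∅))
    return out_uncond + s * (out_cond - out_uncond)
```

Running both branches in one batch keeps per-iteration latency close to a single forward pass on hardware with spare batch capacity, which is the point of the trick.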
Due to the CFG mechanism (Equation 3), each iterative step during inference requires executing a conditional and an unconditional denoising pass with the trained motion network and then combining the results; batching the two passes parallelizes this computation. 4.3.4 Low-precision Inference We utilize half-precision floating point (FP16) computation during inference to accelerate processing. Newer hardware supports enhanced arithmetic logic units for lower-precision data types. By applying parameter quantization, we convert FP32 computations to lower-precision formats, effectively reducing the computational demands, parameter size, and memory usage of the model. 4.4. Footskate Reduction Figure 4 shows an example of the foot skating phenomenon. The motion frame rate is 20, so the two frames in the figure have a time difference of 0.25 s. In practice, it is difficult to complete a motion and return to the original pose within 0.25 s. Although the foot postures in the two frames remain unchanged, the positions of the joints change, as observed from the variations in joint position values and their distances relative to the red points. For this motion, what we expect is that the feet are anchored at the same point. Typically, choosing the foot position of the middle frame during foot skating as the fixed point minimizes the impact on adjacent frames. The key to eliminating foot skating is to first identify the foot joints and frame ranges where foot skating occurs, and then anchor those keypoints at their positions p in the intermediate frames. We formulate this constraint as the loss term shown in Equation 4, where j indexes joints and f indexes frame ranges.

L_foot = Σ_{j ∈ J_skating} Σ_{f ∈ F_skating} (P_j − p)  (4)

J_skating contains all the joints where foot skating may occur, specifically the right ankle, right toes, left ankle and left toes. F_skating is the collection of all skating frame ranges of joint j. P_j denotes the positions of joint j.
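Equation 4 can be sketched as a loss over skating joints and their frame ranges. This is an illustrative NumPy sketch, not the paper's code: the summand is written here as a squared norm so the term can drive gradient descent (the paper's notation leaves the norm implicit), and the data layout and names are hypothetical:

```python
import numpy as np

def footskate_loss(P, skating_ranges, anchors):
    """Equation 4 (sketch): for every skating joint j and each of its
    skating frame ranges f, penalize the deviation of the joint's
    positions from the chosen anchor point p (e.g. the mid-frame
    position, which minimizes the impact on adjacent frames).

    P: (F, J, 3) joint positions over F frames.
    skating_ranges: {joint_index: [(start, end), ...]}.
    anchors: {(joint_index, start, end): p} with p of shape (3,)."""
    loss = 0.0
    for j, ranges in skating_ranges.items():
        for (s, e) in ranges:
            p = anchors[(j, s, e)]
            loss += np.sum((P[s:e, j] - p) ** 2)
    return loss
```

In the pipeline described above, the joint indices and frame ranges would come from the vGRF-based contact detection, and this term is minimized by gradient descent together with the pose, trajectory and vGRF losses.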
We incorporate Equation 4 into a gradient descent algorithm to correct foot skating motion. Following UnderPressure [19], we use vertical ground reaction forces (vGRFs) to identify the foot joint j and its skating frames f. The vGRF prediction model of UnderPressure, V_23, requires motion on a predefined 23-joint skeleton S_23, which is different from our motion data. In our work, we utilize HumanML3D [6] with a 22-joint skeleton S_22 and KIT-ML [21] motion with 21 skeletal joints. The subsequent foot skating cleanup primarily focuses on HumanML3D. We transfer the pre-trained weights of V_23 to our own model V_22^θ using the constraint in Equation 5, enabling us to directly predict the vertical ground reaction forces for HumanML3D motions. P is the keypoints of a HumanML3D motion, and P_S23 is the result of retargeting P to skeleton S_23.

min_θ ∥V_22^θ(P) − V_23(P_S23)∥²₂  (5)

L = ω_q L_pose + ω_f L_foot + ω_t L_trajectory + ω_v L_vGRFs  (6)

L_foot = L_foot(P, P̂, V_23, P_S23)  (7)

L_vGRFs = L_vGRFs(P, P̂, V_22^θ)  (8)

Drawing inspiration from UnderPressure [19], we use the foot contact loss L_foot to fix contact joints, the pose loss L_pose and trajectory loss L_trajectory to keep the semantic integrity of the motion, and the vGRF loss L_vGRFs to keep a valid foot pose. Our supplementary material provides detailed definitions of these loss terms. The final definition of our loss function is given in Equation 6 [19], where ω_q, ω_f, ω_t, ω_v are the weights of the loss terms. P is the keypoints of the footskating motion and P̂ is the resulting keypoints after footskate cleanup. Through our method, the footskate cleanup process can be generalized to various skeletal motions. In a few cases, motion corrected by Equation 6 may exhibit unreasonable or unrealistic poses. The diffusion model trained on a large amount of motion data learns prior knowledge of real motions and has the ability to correct such invalid motions.
Therefore, we use our pretrained diffusion model to correct such cases. Motivated by OmniControl [32] and PhysDiff [35], we incorporate the footskate cleanup method into the diffusion denoising process, denoted as StableMoFusion∗. 5. Experiments 5.1. Dataset and Evaluation Metrics We use the HumanML3D [6] and KIT-ML [21] datasets for our experiments. The HumanML3D dataset contains 14,646 motions and 44,970 motion annotations. The KIT Motion-Language dataset contains 3,911 motions and 6,363 natural language annotations. The evaluation metrics can be summarized into four key aspects: 1) Motion realism: Frechet Inception Distance (FID), which evaluates the similarity between generated and ground-truth motion sequences using feature vectors extracted by a pre-trained motion encoder [6]. 2) Text matching: R Precision calculates the average top-k accuracy of matching generated motions with textual descriptions using a pretrained contrastive model [6]. 3) Generation diversity: Diversity measures the average joint differences across generated sequences from all test texts. Multi-Modality quantifies the diversity within motions generated for the same text. 4) Time costs: Average Inference Time per Sentence (AITS) [3] measures the inference efficiency of diffusion models in seconds, with a generation batch size of 1 and without accounting for model or data loading time. In all of our experiments, FID and R Precision are the principal metrics we use to conduct our analysis and draw conclusions. 5.2. Implementation Details For training, we use DDPM [8] with T = 1,000 denoising steps and variances β_t increasing linearly from 0.0001 to 0.02 in the forward process. We use AdamW with an initial learning rate of 0.0002 and a 0.01 weight decay to train the sample-prediction model for 50,000 iterations at batch size 64 on an A100 GPU. The learning rate is decayed by a factor of 0.9 every 5,000 steps, and the gradient norm is clipped to 1. For CFG, we set c = ∅ for 10% of the samples.
For inference, we use the SDE variant of second-order DPM-Solver++ [18] with Karras sigmas [12], sampling in 10 steps. The CFG scale is set to 2.5. 5.3. Quantitative Results We compare our StableMoFusion with several state-of-the-art models, including T2M [6], MDM [28], MLD [3], MotionDiffuse [37], T2M-GPT [36], MotionGPT [9], ReMoDiffuse [38], M2DM [14] and Fg-T2M [31], on the HumanML3D [6] and KIT-ML [21] datasets in Table 2 and Table 3, respectively. Most results are taken from the original papers; we run the evaluation 20 times, and \u00b1 indicates the 95% confidence interval. Our method achieves state-of-the-art results in FID and R-Precision (top k) on the HumanML3D dataset, and also achieves strong results on the KIT-ML dataset: the best R-Precision (top k) and the second-best FID. This
Table 2. Quantitative results on the HumanML3D test set. The right arrow \u2192 means the closer to real motion the better. Red and Blue indicate the best and the second best result.
Method | FID \u2193 | R Precision top1 \u2191 | top2 \u2191 | top3 \u2191 | Diversity \u2192 | Multi-modality \u2191
Real | 0.002\u00b1.000 | 0.511\u00b1.003 | 0.703\u00b1.003 | 0.797\u00b1.002 | 9.503\u00b1.065 | -
T2M [6] | 1.067\u00b1.002 | 0.457\u00b1.002 | 0.639\u00b1.003 | 0.743\u00b1.003 | 9.188\u00b1.002 | 2.090\u00b1.083
MDM [28] | 0.544\u00b1.044 | 0.320\u00b1.005 | 0.498\u00b1.004 | 0.611\u00b1.007 | 9.599\u00b1.086 | 2.799\u00b1.072
MLD [3] | 0.473\u00b1.013 | 0.481\u00b1.003 | 0.673\u00b1.003 | 0.772\u00b1.002 | 9.724\u00b1.082 | 2.413\u00b1.079
MotionDiffuse [37] | 0.630\u00b1.001 | 0.491\u00b1.001 | 0.681\u00b1.001 | 0.782\u00b1.001 | 9.410\u00b1.049 | 1.553\u00b1.042
GMD [13] | 0.212 | - | - | 0.670 | 9.440 | -
T2M-GPT [36] | 0.116\u00b1.004 | 0.491\u00b1.003 | 0.680\u00b1.003 | 0.775\u00b1.002 | 9.761\u00b1.081 | 1.856\u00b1.011
MotionGPT [9] | 0.232\u00b1.008 | 0.492\u00b1.003 | 0.681\u00b1.003 | 0.778\u00b1.002 | 9.528\u00b1.071 | 2.008\u00b1.084
ReMoDiffuse [38] | 0.103\u00b1.004 | 0.510\u00b1.005 | 0.698\u00b1.006 | 0.795\u00b1.004 | 9.018\u00b1.075 | 1.795\u00b1.043
M2DM [14] | 0.352\u00b1.005 | 0.497\u00b1.003 | 0.682\u00b1.002 | 0.763\u00b1.003 | 9.926\u00b1.073 | 3.587\u00b1.072
Fg-T2M [31] | 0.243\u00b1.019 | 0.492\u00b1.002 | 0.683\u00b1.003 | 0.783\u00b1.002 | 9.278\u00b1.072 | 1.614\u00b1.049
StableMoFusion (Ours) | 0.098\u00b1.003 | 0.553\u00b1.003 | 0.748\u00b1.002 | 0.841\u00b1.002 | 9.748\u00b1.092 | 1.774\u00b1.051
Table 3. Quantitative results on the KIT-ML test set. The right arrow \u2192 means the closer to real motion the better. Red and Blue indicate the best and the second best result.
Method | FID \u2193 | R Precision top1 \u2191 | top2 \u2191 | top3 \u2191 | Diversity \u2192 | Multi-modality \u2191
Real Motion | 0.031\u00b1.004 | 0.424\u00b1.005 | 0.649\u00b1.006 | 0.779\u00b1.006 | 11.08\u00b1.097 | -
T2M [6] | 2.770\u00b1.109 | 0.370\u00b1.005 | 0.569\u00b1.007 | 0.693\u00b1.007 | 10.91\u00b1.119 | 1.482\u00b1.065
MDM [28] | 0.497\u00b1.021 | 0.164\u00b1.004 | 0.291\u00b1.004 | 0.396\u00b1.004 | 10.847\u00b1.109 | 1.907\u00b1.214
MLD [3] | 0.404\u00b1.027 | 0.390\u00b1.008 | 0.609\u00b1.008 | 3.204\u00b1.027 | 10.80\u00b1.117 | 2.192\u00b1.071
MotionDiffuse [37] | 1.954\u00b1.062 | 0.417\u00b1.004 | 0.621\u00b1.004 | 0.739\u00b1.004 | 11.10\u00b1.143 | 0.730\u00b1.013
T2M-GPT [36] | 0.514\u00b1.029 | 0.416\u00b1.006 | 0.627\u00b1.006 | 0.745\u00b1.006 | 10.921\u00b1.108 | 1.570\u00b1.039
MotionGPT [9] | 0.510\u00b1.016 | 0.366\u00b1.005 | 0.558\u00b1.004 | 0.680\u00b1.005 | 10.35\u00b1.084 | 2.328\u00b1.117
ReMoDiffuse [38] | 0.155\u00b1.006 | 0.427\u00b1.014 | 0.641\u00b1.004 | 0.765\u00b1.055 | 10.80\u00b1.105 | 1.239\u00b1.028
M2DM [14] | 0.515\u00b1.029 | 0.416\u00b1.004 | 0.628\u00b1.004 | 0.743\u00b1.004 | 11.417\u00b1.97 | 3.325\u00b1.37
Fg-T2M [31] | 0.571\u00b1.047 | 0.418\u00b1.005 | 0.626\u00b1.004 | 0.745\u00b1.004 | 10.93\u00b1.083 | 1.019\u00b1.029
StableMoFusion (Ours) | 0.258\u00b1.029 | 0.445\u00b1.006 | 0.660\u00b1.005 | 0.782\u00b1.004 | 10.936\u00b1.077 | 1.362\u00b1.062
demonstrates the ability of StableMoFusion to generate high-quality motions that align with the text prompts. On the other hand, while some methods excel in diversity and multi-modality, these aspects should be anchored by accuracy (R-Precision) and fidelity (FID) to be persuasive: diversity or multi-modality is meaningless if the generated motion is of poor quality. Overall, our StableMoFusion achieves strong experimental results on both datasets and shows robustness in terms of model performance.
Figure 5. Visualization comparison results before and after our footskate cleanup. The red bounding box shows details of skating feet.
5.4.
Qualitative Results Figure 5 shows the visual results of our footskate cleanup method, StableMoFusion\u2217. In the red bounding box, the footskate motion clearly shows multiple overlapping foot outlines, whereas ours shows only one; the comparison illustrates the effectiveness of our method for cleaning footskate. Directly applying the footskate cleanup method of UnderPressure [19] to our motion would distort the motion, while our method effectively avoids such deformation. Our supplementary material further presents a video comparison between our method and the UnderPressure method. 5.5. Inference Time Following MLD [3], we calculate the AITS of StableMoFusion and ReMoDiffuse [38] on the HumanML3D [6] test set using a Tesla V100; the other results in Figure 1 are taken from [3]. For instance, MDM [28] with CFG requires 24.74s of average inference time; MotionDiffuse [37] without CFG uses a condition-encoding cache and still requires 14.74s. While MLD [3] reduces the average inference time to 0.217s by applying DDIM50 in latent space, we find this approach lacks the ability to edit and control motion by manipulating the model input. To tackle this, we employ 1) an efficient sampler, 2) an embedded-text cache, 3) parallel CFG computation, and 4) low-precision inference to reduce both the iteration count and network latency. As shown in Figure 1, our StableMoFusion significantly shortens the inference time and achieves higher performance within the original motion space. However, StableMoFusion\u2019s inference speed still trails behind that of MLD and, with an average inference time of 0.5s, does not yet meet the industry\u2019s real-time standard. Thus, our future work will focus on acceleration: the inference time of StableMoFusion is currently dominated by network computation, and we will investigate how to scale down the model and how to reduce single-step latency in inference. 5.6.
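The embedded-text cache trick amounts to memoizing the prompt encoding, since the text embedding is identical across all denoising steps of one generation. A minimal sketch with a dummy encoder (the names are illustrative, not the paper's code):

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def encode_text(prompt: str):
    """Stand-in for an expensive text-encoder forward pass."""
    calls["n"] += 1
    return tuple(float(ord(c)) for c in prompt)  # dummy embedding

def denoise_loop(prompt, steps=10):
    emb = None
    for _ in range(steps):
        emb = encode_text(prompt)  # cache hit after the first step
    return emb

denoise_loop("a person walks forward")
assert calls["n"] == 1  # the encoder ran once, not once per denoising step
```

With 10 sampler steps this removes 9 of 10 encoder calls, which is the source of the roughly 0.3s saving reported above.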
Ablation 5.6.1 Network Architecture We evaluate and compare all the architectures mentioned in Section 4.1 with the same training and inference pipeline. For a fair comparison, all methods use the real motion length from the ground truth to clip the generated motion and seed(0) for each evaluation. As Table 4 shows, adding cross-attention improves the performance of every architecture, underscoring its pivotal role in model effectiveness. Among them, the Conv1D UNet achieves the best generation performance. Moreover, fine-tuning the Conv1D UNet\u2019s GroupNorm effectively improves its performance on the KIT-ML dataset, reducing the FID by about 64%. This also indicates that the GroupNorm tweak on the UNet is mainly useful for datasets with dispersed length distributions, such as KIT-ML. Table 4. Comparison of various architectures and adjustments.
Dataset | Network | FID \u2193 | R Precision (top3) \u2191
HumanML3D | Conv1D UNet baseline | 0.245 | 0.780
HumanML3D | + cross-attention | 0.074 | 0.821
HumanML3D | + GroupNorm Tweak | 0.089 | 0.840
HumanML3D | DiT baseline | 0.884 | 0.711
HumanML3D | + cross-attention | 0.113 | 0.787
HumanML3D | RetNet baseline | 1.673 | 0.740
HumanML3D | + cross-attention | 0.147 | 0.853
KIT-ML | Conv1D UNet + cross-attention | 0.658 | 0.756
KIT-ML | + GroupNorm Tweak | 0.237 | 0.780
5.6.2 Effective Inference By using the SDE variant of second-order DPM-Solver++ with Karras sigmas, the inference process of diffusion-based motion generation reduces the minimum number of iterations required for generation from 1000 to 10 while enhancing the quality of the generated motions, making it approximately 99% faster than the original inference process, as shown in Table 5. Embedded-text caching and parallel CFG further reduce the average inference time by about 0.3s and 0.15s, respectively. Our experiments also show that halving the computational precision of the motion-denoising model, from FP32 to FP16, does not adversely affect the generation quality.
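The parallel CFG computation mentioned above can be illustrated as follows: instead of two sequential network calls per sampling step (conditional and unconditional), the two inputs are stacked into one batch and run in a single forward pass. This is a hedged numpy sketch with a dummy model, not the paper's network:

```python
import numpy as np

def model(x_batch, cond_batch):
    """Stand-in denoiser: one batched forward pass."""
    return x_batch * 0.5 + cond_batch  # dummy computation

def parallel_cfg(x, cond, uncond, scale=2.5):
    """Run conditional and unconditional branches in ONE batched call,
    then apply the standard classifier-free guidance formula."""
    x2 = np.stack([x, x])
    c2 = np.stack([uncond, cond])
    eps_u, eps_c = model(x2, c2)
    return eps_u + scale * (eps_c - eps_u)

x = np.ones(4)
cond, uncond = np.full(4, 2.0), np.zeros(4)
out = parallel_cfg(x, cond, uncond, scale=2.5)
# Equivalent to two separate calls combined with the same formula:
seq = model(x, uncond) + 2.5 * (model(x, cond) - model(x, uncond))
assert np.allclose(out, seq)
```

The result is identical to sequential CFG; the saving comes purely from amortizing per-call overhead across the doubled batch.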
This suggests that 32bit precision is redundant for motion generation task. Table 5. The progressive effect of each efficient and training-free trick of StableMoFusion in inference process. Method FID\u2193 R Precision (top3)\u2191 AITS\u2193 Inference Steps\u2193 base (DDPM1000) 1.251 0.760 99.060 1000 + Efficient Sampler 0.076 0.836 1.004(-99%) 10 + Embedded-text Cache 0.076 0.836 0.690(-31%) 10 + Parallel CFG 0.076 0.836 0.544(-21%) 10 + FP16 0.076 0.837 0.499(-8%) 10 6." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05707v1.json b/abs_9K/test_abstract_short_2405.05707v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c19571f01e89638b3d293fda1682a63f2aff82b2 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05707v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05707v1", + "title": "LatentColorization: Latent Diffusion-Based Speaker Video Colorization", + "abstract": "While current research predominantly focuses on image-based colorization, the\ndomain of video-based colorization remains relatively unexplored. Most existing\nvideo colorization techniques operate on a frame-by-frame basis, often\noverlooking the critical aspect of temporal coherence between successive\nframes. This approach can result in inconsistencies across frames, leading to\nundesirable effects like flickering or abrupt color transitions between frames.\nTo address these challenges, we harness the generative capabilities of a\nfine-tuned latent diffusion model designed specifically for video colorization,\nintroducing a novel solution for achieving temporal consistency in video\ncolorization, as well as demonstrating strong improvements on established image\nquality metrics compared to other existing methods. Furthermore, we perform a\nsubjective study, where users preferred our approach to the existing state of\nthe art. Our dataset encompasses a combination of conventional datasets and\nvideos from television/movies. 
In short, by leveraging the power of a\nfine-tuned latent diffusion-based colorization system with a temporal\nconsistency mechanism, we can improve the performance of automatic video\ncolorization by addressing the challenges of temporal inconsistency. A short\ndemonstration of our results can be seen in some example videos available at\nhttps://youtu.be/vDbzsZdFuxM.", + "authors": "Rory Ward, Dan Bigioi, Shubhajit Basak, John G. Breslin, Peter Corcoran", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "While current research predominantly focuses on image-based colorization, the\ndomain of video-based colorization remains relatively unexplored. Most existing\nvideo colorization techniques operate on a frame-by-frame basis, often\noverlooking the critical aspect of temporal coherence between successive\nframes. This approach can result in inconsistencies across frames, leading to\nundesirable effects like flickering or abrupt color transitions between frames.\nTo address these challenges, we harness the generative capabilities of a\nfine-tuned latent diffusion model designed specifically for video colorization,\nintroducing a novel solution for achieving temporal consistency in video\ncolorization, as well as demonstrating strong improvements on established image\nquality metrics compared to other existing methods. Furthermore, we perform a\nsubjective study, where users preferred our approach to the existing state of\nthe art. Our dataset encompasses a combination of conventional datasets and\nvideos from television/movies. In short, by leveraging the power of a\nfine-tuned latent diffusion-based colorization system with a temporal\nconsistency mechanism, we can improve the performance of automatic video\ncolorization by addressing the challenges of temporal inconsistency. 
A short\ndemonstration of our results can be seen in some example videos available at\nhttps://youtu.be/vDbzsZdFuxM.", "main_content": "INTRODUCTION With the rapid increase in the popularity of streaming video in recent years, today\u2019s media consumers have become accustomed to high-definition and vibrant video experiences, in color and on demand. However, there are also many substantial video archives with content that remains available in black and white only. Unlocking the potential of these archives, and infusing them with color, presents an exciting opportunity to engage with modern audiences, and breathe new life into classic movies and television episodes. By seamlessly blending cutting-edge technology with classic content, we not only enhance the visual appeal for contemporary viewers but also ensure that the historical significance of these timeless works is faithfully maintained. A. TRADITIONAL COLORIZATION Colorizing black-and-white multimedia is a formidable challenge characterized by its inherent complexity. It presents a \u2018one-to-many\u2019 scenario, wherein multiple feasible colorization outcomes can be derived for a single black-and-white video, as illustrated by recent research [1]. Traditional approaches for video colorization are manual and labor-intensive, demanding the dedicated efforts of interdisciplinary teams of skilled colorists, rotoscoping animators, artists and historians. These teams invest extensive hours to ensure the production of a convincing and coherent end result. The intricacies of colorization are particularly difficult in the realm of videos, where the sheer volume of frames per second amplifies the complexity [2]. arXiv:2405.05707v1 [cs.CV] 9 May 2024 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 1a. \"Sherlock Holmes and the Woman in Green\" (1945) black-and-white frames. FIGURE 1b.
\"Sherlock Holmes and the Woman in Green\" (1945) LatentColorization output frames. Therefore, automation of the video colorization process is highly desirable. B. AUTOMATIC COLORIZATION Automatic video colorization can be seen as a means to significantly reduce the cost traditionally associated with manually colorizing/restoring vintage movies, an expensive proposition that is often limited to organizations with substantial budgets. Since the labor costs associated with expert colorists are a significant barrier, manual colorization has also been largely limited to popular films or TV shows (e.g., Doctor Who), with numerous other works (social history movies, documentaries, films by lesser-known directors, etc.) omitted where the cost-benefit analysis could not justify their colorization. As a consequence, various research efforts have tackled the need to automate aspects of the colorization process. These efforts span from earlier methods such as histogram matching [3], to more recent interactive approaches such as scribble-based systems [4] and exemplar-based approaches [5], as well as more recent developments in terms of deep learning-based colorization [6]. While the results still lag behind those that can be achieved of an experienced human colorizer, the automated approaches referred to above have made significant advancements in terms of their accuracy over prior systems. In terms of the state-of-the-art, one current benchmark for automatic video colorization is held by Wan et al. [7]. However, it is important to note that their approach not only colorizes but also restores videos, making it a difficult benchmark for systems that are focused solely on colorization. DeOldify [6], provides colorized outputs without image restoration, and therefore can be more easily compared against colorization-only approaches such as the one presented in this paper. 
Recent research [8] has shown the advantages of self-supervised learning methodologies for colorization, removing the resource-intensive need for creating and curating manually labelled datasets for training models. Constructing custom labelled datasets can be a resource-intensive and time-consuming endeavor, particularly when dealing with video content, which has both static- and motion-related information. C. RESEARCH CONTRIBUTION Driven by the recent increase in the adoption of diffusion models [9]\u2013[11], the field of generative modelling has produced a variety of contributions including Stable Diffusion [12], Imagen [13], and DALL\u2022E 2 [14], which have gained attention in both research and the mainstream media. Within the context of video colorization, the majority of techniques are based on GAN-based methods [15], [16], as well as the utilization of transformer-based approaches [17] such as those featured in [7], [18], [19]. Notably, Saharia et al. [20] propose leveraging diffusion models for various image-to-image tasks, including colorization. This paper introduces an innovative approach to video-based colorization, employing a latent-based denoising diffusion model. Our method demonstrates improvements over the state-of-the-art DeOldify [6] method, across a range of standard evaluation metrics including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Fr\u00e9chet Inception Distance (FID), Fr\u00e9chet Video Distance (FVD), and Naturalness Image Quality Evaluator (NIQE). Furthermore, we provide comparative results for Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). It is also worth noting that our method yields an average improvement of approximately 18% when FVD is employed as the evaluation metric. This result is also corroborated by our user study, where LatentColorization is preferred 80% of the time to the
We introduce a novel system for achieving temporal consistency in video colorization through the application of a latent diffusion model. A sample visual, before and after, is given in Figures 1a and 1b. To summarise, the unique contributions of our proposed work are as follows: \u2022 We apply fine-tuned latent diffusion models to the automatic video colorization task with exemplar frame conditioning. \u2022 We ensure temporal consistency in automatic video colorization through the use of our autoregressive conditioning mechanism. \u2022 We build a novel end-to-end video colorization application. \u2022 We achieve state-of-the-art performance over a range of datasets, metrics and evaluations. The structure of this paper is as follows: In \u00a72, we examine related work. \u00a73 provides an in-depth description of our methodology. Then, \u00a74 presents the results of our evaluations, which are further examined in \u00a75. Conclusions are given in \u00a76, and we outline our future research directions in \u00a77. II. RELATED WORK A. CONVENTIONAL DEEP LEARNING APPROACHES Generative adversarial networks, commonly referred to as GANs [15], have emerged as a common technology in the enhancement of existing video content, in domains including sign-language addition [21], low-light enhancement [22], and video colorization [5]. GAN-based methods have also been extensively used for image colorization [23]\u2013[30]. For example, Isola et al. proposed Pix2Pix [23], which has performed well on various benchmarks, including the FID-5K benchmark using the ImageNet Val dataset. In the context of video colorization, DeOldify [6] and more recently GCP [31] stand out as two of the more prominent GAN-based approaches. DeOldify [6] is a self-attention-based Generative Adversarial Network (GAN) [32]. It incorporates NoGAN training [33] and adheres to a Two Time Scale Update Rule [34]. 
While DeOldify is capable of generating credible colorizations, it has a tendency to produce somewhat subdued or less vibrant colors, characteristic of GAN-based systems. GCP [31] leverages color priors encapsulated in a pretrained Generative Adversarial Networks (GAN) for automatic colorization. Specifically, they \u201cretrieve\u201d matched features (similar to exemplars) via a GAN encoder and then incorporate these features into the colorization process with feature modulations. Other works, such as [35]\u2013[37], have also made contributions to the field of video colorization. It is important to note that GANs, due to their reliance on multiple loss functions, are challenging to train, susceptible to mode collapse, and often encounter convergence issues [38]\u2013[40]. Furthermore, only certain GAN-based automatic colorization systems consider temporal consistency, such as Zhao et al. [41]. This means that the systems that do not account for temporal consistency do not maintain coherence across successive frames, which is a crucial aspect of video colorization. Video Colorization with Hybrid Generative Adversarial Network (VCGAN) [41] is an end-to-end recurrent colourization network that prioritises temporal consistency in automatic video colorization. DeepRemaster, as introduced by Iizuka et al. in their work [42], is a Convolutional Neural Network (CNN)-based colorization system. As well as colorization, it also performs super-resolution, noise reduction, and contrast enhancement. Its performance makes it a suitable benchmark for comparison in our work. Transformers, known for their success in diverse machine learning domains, including Natural Language Processing (NLP) and Computer Vision (CV), have achieved stateof-the-art results in various low-resolution computer vision tasks, exemplified by their second-place ranking on the FID5K benchmark using the ImageNet Val dataset. 
However, the computational complexity of their self-attention mechanism scales significantly with higher image resolutions, presenting a challenge for handling high-resolution images [19], [43]. While ongoing research efforts aim to mitigate this challenge, it remains an open area of investigation. Unlike GANs, transformers exhibit greater resilience to mode collapse, thanks to their distinctive attention mechanism. Kumar et al. have introduced ColTran [18], a transformer-based image colorization model that operates through a three-step process. Initially, it colorizes a low-resolution version of the image, as it leverages self-attention, which is computationally demanding for high-resolution photos. Subsequently, it upscales the image and then the colors, yielding high-resolution colorized images. ColTran excels in producing vibrant colorizations, yet it falls short of catering to the specific demands of video colorization, leading to inconsistencies in video colorizations. B. DIFFUSION MODELS Diffusion models, as initially introduced by Sohl-Dickstein et al. [9], operate by learning how to reconstruct data from noise. They encompass two distinctive stages: Forward Diffusion Process: In this phase, Gaussian noise is incrementally incorporated into the data through a stepwise progression spanning multiple timesteps. This gradual introduction of noise transforms the original information until the desired level of diffusion is attained. Reverse Diffusion Process: Subsequently, a learning model is employed to reverse this diffusion process, effectively reconstructing the original data [44], as illustrated in Figure 2. Unlike Generative Adversarial Networks (GANs), diffusion models are resilient to mode collapse, and they have demonstrated success across various domains, including video generation [45], [46], audio generation [47], [48], and image generation [12], [14], [45].
FIGURE 2. Diagram of the Diffusion Process: This diagram illustrates the operation of the diffusion model in both the forward and backward processes. In the forward process, it visually portrays the incremental addition of Gaussian noise to the input image x0 until it becomes visually indistinguishable from Gaussian noise xT (top). Subsequently, it showcases the learned backward diffusion process, where the model gradually removes the Gaussian noise from xT to return to the original image x0 (bottom).
An illustration of the application of diffusion models to still-image colorization can be found in Palette [49], a diffusion model tailored for a variety of image-to-image tasks. Palette attains the top position on the leader-board in the FID5K benchmark using the ImageNet Val dataset. Concurrently, Liu et al. [50] are engaged in research focused on the challenge of achieving temporally consistent video colorization, employing pre-trained diffusion models. A distinction lies in their approach as they utilize text-based conditioning for their system. In contrast, our methodology relies on exemplar frames as the conditioning input. This strategic choice was made based on our belief that using an image for conditioning provides a higher degree of expressive control compared to text-based approaches. A challenge with diffusion models is their demanding computational requirements during both the training and testing phases. Nevertheless, ongoing research endeavors are actively addressing this issue [51]\u2013[53]. Several approaches have emerged to mitigate this challenge: Down-sampling and Super-resolution: Works such as Make-A-Video [54] tackle this issue by initially downsampling the resolution of images in the diffusion process and subsequently restoring the resolution using a super-resolution algorithm.
Latent Diffusion: Another approach, exemplified by Latent Diffusion [12], modifies the diffusion process to operate in the latent space of a trained autoencoder, as opposed to the pixel space. This results in reductions in both inference and training times due to the reduced dimensionality of the data inputted into the diffusion process. The work presented in this paper represents the first instance, where the video colorization task is tackled through the use of an image-to-image latent diffusion model employing exemplar frames. III. METHODOLOGY A. DESIGN CONSIDERATIONS For achieving temporally consistent video colorization, there are two popular methods: Implicit Temporal Consistency: In this approach, the notion of ensuring explicit temporal consistency is considered unnecessary. The belief is that with a sufficiently accurate system and reasonably similar input (e.g., consecutive frames in a video sequence), the colorized output should naturally exhibit similarity and relative consistency. As a result, temporal consistency is managed implicitly. Explicit Temporal Consistency: This project aligns with the second methodology, which emphasizes the explicit addressing of temporal consistency. Rather than relying on the system to learn it implicitly, this approach involves conditioning for temporal consistency explicitly. The advantages of this approach include reduced training time, decreased data requirements, and a lower computational load. However, it necessitates more intricate system engineering to explicitly convey the requirements to the system. Within the realm of implicit temporal consistency methodologies, several approaches are prevalent, with three of the most common being: Optical Flow-Based: Optical flow-based colorization methods operate by conditioning the system to maintain color consistency over time. 
However, it is worth noting that a limitation of this approach is the potentially high computational cost associated with calculating optical flow, making it less practical in certain applications [55]. Exemplar-Based: Exemplar-based methods involve providing the system with a reference image to guide its colorization process. This typically entails human intervention or a database retrieval algorithm with a collection of reference images [56]. Hybrid-Based: Some methods adopt a hybrid approach by combining different methodologies to harness the benefits of multiple systems simultaneously. This strategy, as seen in works like [5], [57], seeks to leverage the strengths of various techniques to enhance overall performance. FIGURE 3. Comparison of 3 consecutive frames with different operations applied: First Row (Ground Truth): This row showcases the original, unaltered images, representing the ground truth reference. Second Row (Diffusion Model): In the second row, you can observe the colorization output generated by our original diffusion model. Third Row (Diffusion Model with Post-Processing): Here, the output of the diffusion model is presented with an additional post-processing procedure applied to enhance the results. Fourth Row (LatentColorization): The final row displays the results obtained from LatentColorization. B. DATA PROCESSING We use the following datasets as part of our experiments: GRID Dataset: The GRID dataset [58] is a collection of video recordings featuring individuals speaking. It encompasses high-quality facial recordings of 1,000 sentences spoken by each of 34 talkers, with a distribution of 18 males and 16 females, resulting in a total of 34,000 sentences.
Among these 54 talkers, 30 are female, and 24 are male, expanding the dataset\u2019s diversity. Sherlock Holmes Movies Dataset: This dataset is a collection of professionally colorized frames extracted from \u2019Sherlock Holmes and the Woman in Green,\u2019 \u2019Sherlock Holmes Dressed to Kill,\u2019 \u2019Sherlock Terror by Night,\u2019 and \u2019Sherlock Holmes and the Secret Weapon.\u2019 These diverse datasets provide a foundation for our research in the field of speaker video colorization and temporally consistent diffusion models. Our dataset consisted of 10,000 frames allocated for training the model, with an additional 700 frames reserved for testing purposes. Each frame was uniformly resized to 128x128 pixels. To ensure the generalizability of our model, the training and testing frames were derived from distinct subjects, mitigating the risk of artificially inflated performance measures that would not extend to real-world scenarios. By conducting tests on benchmark datasets, we were able to compare our approach against previous methods. Furthermore, testing on the Sherlock Holmes-related data provided a valuable means of comparing our results to expert human colorizations. Additionally, training on open-domain videos underscores the potential of these resources in advancing the field of video colorization. C. SYSTEM OVERVIEW 1) Image Diffusion Based Set Up In our initial exploration, we considered adopting a setup akin to Palette [49], incorporating our temporal consistency mechanism and initial frame biasing, which will be elaborated on in Section III-D. However, we observed sub-optimal performance from this configuration, as the system\u2019s outputs exhibited undesired residual speckled noise, as illustrated in Fig. 3. To address the speckled noise in the diffusion colorization outputs, we explored two approaches: Non-Linear Means (nlmeans) Clustering: We initially applied the nlmeans clustering algorithm [60] to the images to mitigate the noise. 
However, this method relies on a hyperparameter that dictates the filter\u2019s strength. A stronger filter results in smoother images but may inadvertently remove high-quality details, such as hair and facial features. Conversely, a weaker filter may leave more residual speckled noise unfiltered. Overlaying Colorized Output with Black-and-White Inputs: As an alternative, we experimented with overlaying the colorized output with the original black-and-white inputs. This approach yielded superior results compared to the nlmeans filter, and it required less parameter tuning for filter strength. We opted to proceed with this approach, referred to as \u2019Diffusion Filtered\u2019. Despite our efforts to optimize noise reduction while preserving critical details, the final output quality still fell short of our improved approach, LatentColorization, which we will detail in the following section. Consequently, our final experiments did not incorporate the Palette-based approach [49]. 2) Latent Diffusion Based Set Up Inspired by Latent Diffusion [12], we devised LatentColorization. LatentColorization comprises three core components: an autoencoder, a latent diffusion model, and a conditioning mechanism, as visually represented in Fig. 4a. FIGURE 4a. The system architecture during training is depicted in the diagram, illustrating the key elements of the network and their interactions: Image Encoder: This component is responsible for encoding the input frames into embedding representations. It generates the ground truth embedding ZGT, the embedding of the current black-and-white frame ZBW, and the embedding of the previous color frame ZP. Denoising UNet: This is a critical part of the architecture, responsible for denoising and refining the embeddings generated by the Image Encoder that have passed through the forward diffusion process.
Conditioning Mechanism: The conditioning mechanism is integral to the network, providing contextual information and conditioning signals to guide the colorization process. It takes into account various embeddings, including ZBW , ZP , and ZT , which represent the black and white input frame, the output of the model at the previous timestep, and the noisy frame to be denoised. Image Decoder: This component is responsible for decoding the predicted frames from their embedding representations. The architecture\u2019s design and interactions are essential for the model\u2019s training process, ensuring that it learns to generate accurate and temporally consistent colorizations over multiple timesteps. The latent diffusion model follows a two-step process, commencing with the forward diffusion phase (formulated in Eqn. 1). During this phase, Gaussian noise is systematically introduced to the data, incrementally transforming it until it becomes indistinguishable from Gaussian noise. During the second phase, the learned backward diffusion process is applied. This is where a neural network is trained to learn the original data distribution, and to draw samples from it by reconstructing the data from Gaussian noise. We represent formulations of this process with conditioning in Eqn. 3 and without conditioning in Eqn. 2. The forward diffusion process, as defined by [9], can be represented by the following formula: q(x_t | x_{t\u22121}) = N(x_t; \u00b5_t = \u221a(1 \u2212 \u03b2_t) x_{t\u22121}, \u03a3_t = \u03b2_t I) (1) In this formulation, the probability distribution q(\u00b7) of the image at each timestep x_t, given the previous timestep x_{t\u22121}, is characterized as a normal distribution N. This distribution is centred on a scaled version of the previous timestep, \u221a(1 \u2212 \u03b2_t) x_{t\u22121}, with added noise. The magnitude of this noise is determined by the noise scheduler \u03b2 at time t and is applied isotropically through the identity matrix I.
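As a sanity check, the per-step transition of Eqn. 1 can be sketched in a few lines of NumPy. The linear schedule endpoints are taken from Table 1; everything else here is illustrative and not the paper's implementation:

```python
import numpy as np

def forward_diffusion_step(x_prev, beta_t, rng):
    """One step of the forward process q(x_t | x_{t-1}) from Eqn. 1:
    a Gaussian centred on sqrt(1 - beta_t) * x_{t-1} with variance beta_t * I."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * noise

# Applying the step repeatedly drives any input toward pure Gaussian noise.
rng = np.random.default_rng(0)
x = np.ones((4, 4))
betas = np.linspace(1.5e-3, 0.0195, 200)  # linear schedule endpoints from Table 1
for beta in betas:
    x = forward_diffusion_step(x, beta, rng)
```

After 200 steps the signal component of `x` has shrunk to roughly a third of its original magnitude and is dominated by accumulated noise, which is the intended behaviour of the forward phase.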
The noise scheduler \u03b2 typically follows a linear pattern, as exemplified in [44], or a cosine pattern, as demonstrated in [61]. The backward diffusion process, in accordance with [10], can be defined as follows: p_\u03b8(x_{t\u22121} | x_t) = N(x_{t\u22121}; \u00b5_\u03b8(x_t, t), \u03a3_\u03b8(x_t, t)) (2) In this definition, the probability distribution p_\u03b8(\u00b7) of the slightly denoised image x_{t\u22121}, given the noisier image x_t, is characterized as a normal distribution N. This distribution has a mean denoted as \u00b5 and a variance represented by \u03a3, both of which are learned and parameterized by the neural network indicated by \u03b8. The diffusion process can be conditioned using the following equation: p_\u03b8(x_{0:T} | y) = p_\u03b8(x_T) \u220f_{t=1}^{T} p_\u03b8(x_{t\u22121} | x_t, y) (3) In this equation, the probability density function p_\u03b8 is akin to the unconditioned diffusion process, but conditioning is introduced at each timestep of the diffusion process, denoted as p_\u03b8(x_{t\u22121} | x_t, y). In our specific scenario, the conditions encompass the previous frame, the grayscale frame, and the current frame during training, as illustrated in Fig. 4a. During inference, the conditions consist of the previous frame, the grayscale frame, and noise, as indicated in Fig. 4b. For a visual representation of our network architecture during training and inference, as well as a breakdown of where each equation is utilized, please refer to Fig 4a and Fig 4b. Additionally, for a more in-depth explanation of these equations and their derivation, you can explore the references provided in [9], [10], [62]. In the training process, the current frame ground truth, the current frame in black and white, and the previous frame are fed into the image encoder. These images are compressed into their respective embeddings, namely ZGT , ZBW , and ZP .
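A minimal sketch of the learned reverse step of Eqn. 2, assuming the common DDPM-style parameterisation in which the network predicts the added noise; the paper's conditioned denoising Unet is replaced here by a stand-in callable, so this is illustrative only:

```python
import numpy as np

def reverse_diffusion_step(x_t, t, predict_noise, betas, rng):
    """One learned denoising step p_theta(x_{t-1} | x_t) (Eqn. 2), with the
    mean computed from the network's noise prediction, DDPM-style."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])
    eps = predict_noise(x_t, t)  # stand-in for the conditioned denoising Unet
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_t)
    if t == 0:
        return mean  # final step is deterministic
    return mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)

# With a dummy network that predicts zero noise, sampling just rescales x_t
# while injecting schedule-sized noise at each step.
rng = np.random.default_rng(1)
betas = np.linspace(1.5e-3, 0.0195, 50)
x = rng.standard_normal((4, 4))
for t in reversed(range(50)):
    x = reverse_diffusion_step(x, t, lambda x_t, t: np.zeros_like(x_t), betas, rng)
```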
The chosen autoencoder for this purpose is a Vector Quantized Variational AutoEncoder (VQ-VAE), as detailed in [63]. FIGURE 4b. During inference, the system architecture remains largely consistent with the training phase, with one significant difference: Gaussian Noise in Place of Ground Truth Frame: Instead of the ground truth frame, the system introduces Gaussian noise as input during the testing phase. This alteration simulates real-world scenarios where the model must colorize frames without the ground truth. The rest of the architecture, including the Image Encoder, Denoising Unet, Conditioning Mechanism, Image Decoder, and their interactions, remains unchanged. This design allows the model to assess its performance under conditions that more closely resemble practical, ground truth-free scenarios. During the forward diffusion process, the current frame\u2019s ground truth embedding ZGT has noise applied to it based on the noise timestep, resulting in ZT . Simultaneously, the ground truth black and white embedding ZBW and the previous frame embedding ZP are concatenated. The noised embedding ZT is then denoised using the Unet and conditioned on ZBW and ZP . During the backward diffusion process, the neural network learns to predict the noise that was added during the forward diffusion process at time step T. Denoising the noised embedding ZT using the predicted noise results in ZT\u22121. We use a simple mean squared error loss between the predicted noise and the actual noise added to the embedding in order to train the network. By employing the previous frame as conditioning, temporal consistency between frames is ensured throughout the video sequence, resulting in coherent colorization. During inference, the same process as the training scheme is followed, with the exception that the model is fed pure Gaussian noise representing frame ZT .
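The noise-prediction training objective described above can be sketched as follows. The `predict_noise` callable, the latent shapes, and the schedule are stand-ins rather than the paper's actual code; only the noise-then-regress structure is taken from the text:

```python
import numpy as np

def training_loss(z_gt, z_bw, z_prev, t, alpha_bars, predict_noise, rng):
    """MSE noise-prediction loss on the latent embedding: noise z_gt to get
    z_t, condition the network on (z_bw, z_prev), and compare the predicted
    noise with the noise that was actually added."""
    eps = rng.standard_normal(z_gt.shape)
    z_t = np.sqrt(alpha_bars[t]) * z_gt + np.sqrt(1.0 - alpha_bars[t]) * eps
    cond = np.concatenate([z_bw, z_prev], axis=0)  # conditioning by concatenation
    eps_pred = predict_noise(z_t, cond, t)          # stand-in for the Unet
    return np.mean((eps_pred - eps) ** 2)

rng = np.random.default_rng(2)
alpha_bars = np.cumprod(1.0 - np.linspace(1.5e-3, 0.0195, 200))
z_gt, z_bw, z_prev = (rng.standard_normal((3, 16, 16)) for _ in range(3))
# A network that predicts zero noise yields a loss near the noise variance (~1).
loss = training_loss(z_gt, z_bw, z_prev, 100, alpha_bars,
                     lambda z, c, t: np.zeros_like(z), rng)
```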
The denoising is then repeated T times, after which the denoised embedding is passed through to the image decoder in order to produce the predicted frame. This process is depicted explicitly in Fig. 4b. The system can be used in two different ways. First, it can be employed in an entirely end-to-end manner, where no additional guidance from the user is needed. In this setup, the system serves as an image colorization tool for the first frame. Then, this initial colorized frame is used in an auto-regressive fashion to guide the colorization of subsequent frames in the video clip. Second, the system can be used interactively, allowing the user to manually colorize the initial frame. This manual colorization becomes the condition for initiating the colorization of the following frames. This second approach provides control over the colorization process but requires the user to provide the initial colorization. D. TEMPORAL CONSISTENCY Temporal consistency was maintained through an autoregressive conditioning mechanism, where the current video frame was conditioned on the previous frame and the grayscale version of the current frame. This approach ensured that colorization remained consistent across the video frames. For a detailed illustration, refer to Fig 4a and Eqn 4. This mechanism is similar to the approaches used in other studies such as [64] and [65], where models were conditioned with information from the previous frame to guarantee temporal consistency in the context of video generation. Essentially, maintaining consistent colors throughout a video sequence becomes more achievable when the model can \"remember\" the colors from the previous frame. C_t = f(C_{t\u22121..t\u2212n}, G_t), \u2200t \u2208 T (4) In Eqn 4, we denote the color image as C \u2286 R^{L,a,b}, the grayscale image as G \u2286 R^{L}, and f() represents the colorization function performed by the neural network.
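The autoregressive conditioning of Eqn. 4 amounts to a simple loop. Here `colorize_frame` is a hypothetical stand-in for the full encode/denoise/decode sampler, and the toy callable only echoes its conditioning to make the data flow visible:

```python
def colorize_video(gray_frames, initial_color_frame, colorize_frame):
    """Autoregressive colorization (Eqn. 4): each frame is conditioned on the
    previous colorized frame and its own grayscale version."""
    # Initial frame biasing: the first color frame is supplied by the user or
    # by a single-image colorization pass.
    colored = [initial_color_frame]
    for gray in gray_frames[1:]:
        colored.append(colorize_frame(prev_color=colored[-1], gray=gray))
    return colored

# Toy stand-in "colorizer" that records its inputs, showing the conditioning chain.
frames = colorize_video(["g0", "g1", "g2"], "c0",
                        lambda prev_color, gray: f"c({prev_color},{gray})")
```

The nesting in the returned strings makes the dependency explicit: frame t carries the entire history of earlier colorizations through its conditioning on frame t-1.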
Here, n signifies the length of the conditioning window frame, T is the total length of the video, and t indicates a specific moment within the video sequence. This equation describes how the colorization process is conditioned on both the previous frame and the grayscale version of the current frame, ensuring temporal consistency across the video frames. Throughout the video sequence, we maintain temporal consistency by providing the colorizer with the previous frame as a reference. However, a challenge arises at the beginning of the sequence, denoted as t0, where there is no previous frame available for conditioning. To address this, we introduce an initial colorized frame at t0. This initial frame is advantageous because it introduces an element of user preference, which can be highly practical. It effectively reduces the video colorization task to that of coloring a single image, which then serves as the starting point for colorizing the entire video with a bias towards the initial frame. This approach offers flexibility and aligns with human-centric AI concepts for video colorization. We refer to this approach as \"initial frame biasing\". Additionally, it provides a clear method for evaluating the system, as ground truth is available for the initial frame, making traditional reference-based metrics such as PSNR, SSIM, FID, and FVD effective for assessment. It also allows for a user study where one can compare performance against the ground truth. E. HYPERPARAMETER AND TRAINING SET UP The hyperparameters used in the experiment are detailed in Table 1. The experiment employed the ADAM optimizer [66], with most of the values being adopted from the specifications of Stable Diffusion [12]. Any additional hyperparameters were determined through a process of empirical testing. Using an image size of 128x128 pixels reduced processing time by roughly a factor of four compared to 256x256 pixels.
Training at 256x256 pixels takes 165 minutes per epoch on an NVIDIA RTX 2080, whereas training at 128x128 pixels takes 38 minutes per epoch. Using 200 diffusion steps for training and 50 for testing resulted in good performance. Input channels must be nine to account for the conditioning: three channels for the previous color frame, three channels for the current image, and three channels for the black-and-white current frame. Having a batch size of 256 and a learning rate of 1.25e\u22127 resulted in convergence and reasonably fast training times.
TABLE 1. The hyperparameter setup provides the values used for both training and testing.
                    Train       Test
Image Size          128x128     128x128
Total Frames        10000       700
Diffusion Steps     200         50
Noise Schedule      Linear      Linear
Linear Start        1.5e\u221203     1.5e\u221203
Linear End          0.0195      0.0195
Input Channels      9           9
Inner Channels      64          64
Channels Multiple   1, 2, 3, 4  1, 2, 3, 4
Res Blocks          2           2
Head Channels       32          32
Drop Out            0           0
Batch Size          256         8
Epochs              350         -
Learning Rate       1.25e\u22127     -
IV. EVALUATION The performance evaluation of the colorization process combines both qualitative and quantitative assessments to gauge its success. Following similar colorization studies [7], our work is compared on standard metrics. The key metrics used for this evaluation are as follows: Peak Signal-to-Noise Ratio (PSNR): This metric measures the quality of colorized images by comparing them to the corresponding ground truth images. It quantifies the difference between the pixel values of the colorized and ground truth images. Higher PSNR values indicate better performance. Structural Similarity Index (SSIM): SSIM evaluates the structural similarity between colorized images and ground truth images. It considers not only pixel values but also the structure and patterns in the images. Higher SSIM values indicate greater similarity to the ground truth.
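Of the metrics above, PSNR is simple enough to sketch directly; SSIM and the remaining metrics need considerably more machinery and are usually taken from a library. This assumes 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(reference, test_img, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a colorized frame and its ground
    truth; higher is better, and identical images score infinity."""
    diff = reference.astype(np.float64) - test_img.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8), dtype=np.uint8)
b = a.copy()
b[0, 0] = 16  # a single differing pixel gives MSE = 16^2 / 64 = 4
score = psnr(a, b)  # 10 * log10(255^2 / 4), about 42.11 dB
```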
Fr\u00e9chet Inception Distance (FID): FID assesses the distance between the distribution of features extracted from colorized images and real images. Lower FID values indicate closer similarity to real images. Fr\u00e9chet Video Distance (FVD): FVD is a video-specific metric that measures the difference between generated and real videos by comparing the mean and covariance of their features. Lower FVD values represent better video colorization quality. Natural Image Quality Evaluator (NIQE): NIQE is a referenceless metric that quantifies the naturalness of colorized images using statistical measures. Lower NIQE values indicate more natural-looking images. Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE): BRISQUE is another referenceless metric that evaluates the quality of colorized images. It learns the characteristics of natural images and quantifies the deviation from these characteristics. Lower BRISQUE values represent better image quality. Mean Opinion Score (MOS): MOS is a weighted average of survey participants\u2019 perceived quality of an image or video. A higher MOS represents a higher opinion of the subjective quality of the media. A combination of these quantitative metrics and visual inspection, see Fig 7, allows for a comprehensive assessment of the colorization process, enabling objective and subjective evaluation of its performance. Evaluating colorization is a very subjective task, and therefore, as well as the metrics used, a survey was conducted to obtain a subjective measure of our performance. This survey was conducted in a similar manner to the survey conducted by Wu et al. [31]. A.
QUALITATIVE ANALYSIS The qualitative results in Figure 7 visually compare the colorization performance of different methods, including DeOldify [6], ColTran [18], DeepRemaster [42], GCP [31], VCGAN [41], LatentColorization without temporal consistency enabled, LatentColorization, and the ground truth. These comparisons are based on image sequences from the GRID [58] and Lombard Grid [59] datasets. Additional qualitative results can also be seen in our appendices. This visual assessment allows for a direct comparison of how well LatentColorization performs in relation to other state-of-the-art methods. Based on the qualitative analysis of the results in Figure 7, the following conclusions can be drawn: DeOldify [6] produces consistent colorizations, but they tend to appear dull and have a halo effect around the subject. ColTran generates colorful images, but it suffers from inconsistencies throughout the sequence. DeepRemaster [42] produces dull, conservative colorizations. GCP [31] produces colorful, consistent colorizations, but they are not faithful to the ground truth. VCGAN [41] seems to mostly apply a blueish filter to the frames. LatentColorization w/o TC produces colorization similar to the ground truth. It is difficult to visually distinguish between LatentColorization w/o TC, LatentColorization and the ground truth itself. LatentColorization impressively colorizes the sequence, maintaining faithfulness to the original, vibrancy in color, and overall consistency. Overall, LatentColorization appears to outperform the other methods in terms of fidelity to the original, colorfulness, and consistency. B. QUANTITATIVE ANALYSIS Quantitative evaluation is an essential aspect of assessing the quality and performance of colorization methods. It helps provide an objective measure of how well these methods perform.
By evaluating colorizations both frame by frame and as a video sequence, you can gain insights into the strengths and weaknesses of each approach and determine how well they maintain consistency and quality throughout the sequence. This quantitative assessment complements the qualitative analysis and provides a more comprehensive understanding of the colorization results. Table 2 provides a quantitative evaluation of the colorization methods, considering various image metrics. It is a useful way to compare the performance of DeOldify [6], DeepRemaster [42], ColTran [18], GCP [31], VCGAN [41], LatentColorization without temporal consistency mechanism, LatentColorization, and human colorization. By assessing metrics such as PSNR, SSIM, FID, FVD, NIQE, and BRISQUE, you can analyze the quality, similarity, and naturalness of the colorized images. This comparison enables a more data-driven and objective assessment of how well each method performs. The results presented in Table 2 indicate that LatentColorization performs well across all of the referenced and non-referenced metrics, surpassing the state-of-the-art DeOldify [6] by an average of \u224818% in terms of FVD. This performance showcases the effectiveness of LatentColorization in achieving high-quality and consistent video colorization results. Comparing LatentColorization against human-level colorization is an important evaluation. Using non-reference image quality assessment metrics like NIQE and BRISQUE to assess the relative performance when no ground truth is available is a valuable approach. These metrics provide insights into how closely the colorization generated by LatentColorization aligns with human-expert colorization in terms of image quality. The results in Table 2 show that LatentColorization outperforms human colorization according to NIQE and BRISQUE, which indicates that the colorizations produced by LatentColorization are of high quality when assessed using these non-reference metrics.
The other methods also perform well on BRISQUE and NIQE scores relative to the Human Colorized version of the video. Colorization is a subjective matter, and therefore, these metrics must be paired with a user survey to evaluate the systems\u2019 performances. C. SURVEY A survey was conducted to get a more subjective view of the performance of LatentColorization. This study aimed to evaluate the difference in performance between our proposed approach, LatentColorization, and its closest competitor in our experiments, DeOldify [6]. Thirty-two participants were shown three sets of three videos and were asked a question on each set. Each dataset had an associated video set. The survey questions can be seen in our appendices. For the Grid [58] dataset, the participants were shown three versions of the same video taken from the dataset side-by-side. One video version had been colorized by LatentColorization, the other by DeOldify [6], and the third was the ground truth. The ground truth video was labelled as such, whereas the LatentColorization and DeOldify [6] versions of the video were anonymous. To distinguish the LatentColorization version of the video from the DeOldify version [6] they were labelled with 1 and 2. After the participants had watched the videos, they were asked which video they thought was closer to the ground truth. The purpose of this question (question 1) was to determine, in a head-to-head comparison, which colorization system was able to produce outputs similar to the ground truth colors of the video. For the Lombard Grid [59] dataset, the participants were shown three versions of an example video taken from the dataset shown side-by-side. Again, one version was colorized by LatentColorization, the other by DeOldify [6], and the third was the ground truth. In contrast to the previous question, the ground truth video was anonymous this time, and the three videos were titled 1, 2 and 3.
After the participants watched the video, they were asked to rank the three videos in terms of which one looked the most realistic. Therefore, this question (question 2) acted as a visual Turing test where
Dataset                 Method                      PSNR \u2191   SSIM \u2191   FID \u2193    FVD \u2193     NIQE \u2193   BRISQUE \u2193
Grid [58]               DeOldify [6]                28.07    0.79     52.67    520.75    44.04    32.47
                        DeepRemaster [42]           27.7     0.77     108.68   927.91    51.19    41.44
                        ColTran [18]                28.08    0.76     91.76    759.32    49.69    34.55
                        GCP [31]                    27.74    0.75     109.75   1555.53   48.44    33.76
                        VCGAN [41]                  27.86    0.83     67.79    951.28    44.24    37.16
                        LatentColorization w/o TC   29.63    0.89     20.92    350.35    46.4     33.73
                        LatentColorization          30.88    0.9      22.26    241.94    41.46    34.68
Lombard Grid [59]       DeOldify [6]                30.69    0.93     17.63    396.2     46.43    33.1
                        DeepRemaster [42]           30.09    0.95     32.9     1382.56   52.36    35.97
                        ColTran [18]                29.96    0.89     37.7     1583.94   51.25    29.71
                        GCP [31]                    29.86    0.91     85.09    432.31    48.73    33.65
                        VCGAN [41]                  30.2     0.96     72.17    2146.79   50.72    31.01
                        LatentColorization w/o TC   30.35    0.92     18.41    490.89    44.53    33.84
                        LatentColorization          30.71    0.93     17.01    375.34    45.79    34.61
Sherlock Holmes Movies  DeOldify [6]                -        -        -        -         42.07    41.15
                        DeepRemaster [42]           -        -        -        -         62.36    42.98
                        ColTran [18]                -        -        -        -         47.15    37.52
                        GCP [31]                    -        -        -        -         49.87    41.95
                        VCGAN [41]                  -        -        -        -         49.84    39.86
                        Human colorized             -        -        -        -         48.43    39.78
                        LatentColorization w/o TC   -        -        -        -         47.13    38.49
                        LatentColorization          -        -        -        -         46.24    41.11
Overall                 DeOldify [6]                29.19    0.86     40.47    520.85    45.22    35.14
                        DeepRemaster [42]           28.90    0.86     70.79    1155.24   55.60    40.13
                        ColTran [18]                29.02    0.83     64.73    1171.63   49.36    33.93
                        GCP [31]                    29.80    0.83     97.42    993.92    49.01    36.45
                        VCGAN [41]                  29.03    0.9      69.98    1549.04   48.27    36.01
                        Human colorized             -        -        -        -         48.43    39.78
                        LatentColorization w/o TC   29.99    0.91     19.67    420.62    46.02    35.35
                        LatentColorization          30.80    0.92     19.64    308.64    44.50    36.80
TABLE 2. The quantitative comparisons provide a detailed evaluation of different colorization methods across various datasets.
These methods include DeOldify, DeepRemaster, ColTran, GCP , VCGAN, Human Colorized, LatentColorization without Temporal Consistency and LatentColorization. The evaluation criteria encompass several metrics, including PSNR, SSIM, FID, FVD, NIQE, and BRISQUE. By comparing these metrics on individual datasets and a combined dataset (consisting of GRID, Lombard Grid, and Sherlock Holmes Movies), the study aims to assess and compare the performance of these colorization methods. This information allows for an evaluation of how LatentColorization compares to other state-of-the-art methods in various scenarios. humans were tested to see if they could tell the difference between a colorization and a ground truth video. The idea behind this is that the better the performance of the colorization system, the more difficult it should be to distinguish between the colorization system and the ground truth. For the Sherlock Holmes dataset, the participants were shown three versions of an example video from the dataset side-by-side. One version had been colorized by LatentColorization, the other by DeOldify [6], and the third was the human-colorized version. This time, the human-colorized version of the video was labelled, and the LatentColorization and DeOldify [6] versions were left anonymous. After the participants had watched the clips, they were asked which of the automatically colorized versions of the clip was closer to the human-colorized version. The purpose of this question (question 3) then was to determine the relative performance of LatentColorization, DeOldify [6] with respect to human expert colorizations. We then collated the survey results and analysed them. The results can be seen visually in fig. 5. The X-axis represents the Mean Opinion Score (MOS) for each question\u2019s methods. The Y axis indicates the relevant question. The color-coded bars represent each of the methods. The mean opinion score was calculated for each method for each question. 
For Question 1 and Question 2, the mean opinion score is simply the tally of each of the votes as it compares two methods. For Question 3, the mean opinion score is the sum of the ratings for each method divided by the number of methods. Interpreting the graph, we can see that overall LatentColorization was preferred to DeOldify [6]. For question 1, DeOldify [6] received seven votes, and LatentColorization received 25 votes, indicating a preference for LatentColorization on this question. For question 2, the ground truth received the highest MOS score of 28.00, followed by LatentColorization at 20.00 and DeOldiy [6] at 13.67. Summarising this result, the ground truth was preferred most of the time, followed by LatentColorization and finally DeOldify [6]. For question 3, LatentColorization was chosen 26 times out of 31, indicating a strong preference for LatentColorization. D. ABLATION STUDY An ablation study was undertaken to evaluate the impact of the temporal consistency mechanism on the LatentColorization system. The results for both LatentColorization and LatentColorization without temporal consistency mechanism are recorded in Table 2. LatentColorization refers to the version of LatentColorization with the temporal consistency mechanism enabled, and LatentColorization w/o TC refers 10 \fWard et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 5. The graph of the results of the survey. Each group represents a particular question. The X-axis represents the Mean Opinion Score (MOS) for each question\u2019s methods. The Y axis indicates the relevant question. The color-coded bars represent each of the methods. to the version of LatentColorization where the temporal consistency metric has been disabled. The results of LatentColorization and LatentColorization without temporal consistency appear similar. 
The main difference is that the FVD values for LatentColorization are roughly 10% lower than LatentColorization without temporal consistency\u2019s FVD values. As a result of this observation, it can be deduced that the temporal consistency mechanism is indeed improving the video quality of the output and, therefore, ensuring temporal consistency. E. FAILURE CASES There were also instances where the system failed to colorize faithfully to the ground truth. This particularly occurred for out-of-distribution data where the videos were from a different domain than speaker videos 6. LatentColorization fails to apply realistic colors to the bedroom scene. It initially manages to separate the walls from the bed as it colorizes the walls blue and the bed orange, see Frame 1. As time passes, see Frame 2 and Frame 3; LatentColorization tends towards a dull grey color. This indicates that LatentColorization is sensitive to the domain that the video is from, and when it does not recognize the contents of a video, it resorts to drab, dull colors. V. DISCUSSION In this section, we discuss our model\u2019s results compared to other approaches from the field and an overview highlighting the main limitations associated with our approach. A. MODEL COMPARISONS The comparison between LatentColorization and nonautoregressive models like ColTran [18] provides insights into the importance of the autoregressive nature of the system FIGURE 6. The comparison of three frames from the system taken from out-of-distribution data. The top row is the black-and-white version of the video, the middle frame is the output of LatentColorization, and the bottom row is the ground truth. It can be seen that LatentColorization has failed to colorize faithfully to the ground truth. in the context of video colorization. Fig 7 demonstrates the difference in consistency between the two approaches. 
The frames colorized by LatentColorization appear more consistent throughout the video sequence, while those generated by ColTran [18] exhibit more variation. This suggests that the autoregressive nature of LatentColorization, where each frame is conditioned on the previous ones, plays a role in maintaining temporal consistency and ensuring that the colorization is coherent across the entire video. In contrast, nonautoregressive approaches like ColTran [18] may struggle to achieve the same level of consistency in colorized sequences. The qualitative assessment of the colorizations in Fig 7 highlights the differences in colorfulness among LatentColorization, and DeOldify [6]. LatentColorization produces colorful results. In contrast, DeOldify [6] appears grey, suggesting that it may suffer from a lack of color diversity. This observation is consistent with the idea that GANs, which DeOldify [6] is based on, can be susceptible to mode collapse, where they produce limited and less diverse color variations. This observation also correlated with the survey results where LatentColorization was preferred to DeOldify [6] 80% of the time. DeepRemaster [42] Vs LatentColorization: DeepRemaster [42] has struggled with the colorization of this material and has resorted to very bland, dull colors, unlike LatentColorization. GCP [31] Vs LatentColorization: it can be seen that LatentColorization is closer to the ground truth than GCP [31]. GCP has produced colorful output, but it is different in color from the ground truth. It has not succumbed to the mode collapse of its GAN-based architecture, especially on the Lombard Grid [59] dataset. This could potentially be a result of its retrieval mechanism. VCGAN [41] Vs LatentColorization: it can be seen that LatentColorization is closer to the ground truth than VCGAN [41]. VCGAN has produced a blue filter type effect on the 11 \fWard et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 7. 
The qualitative comparison of colorization results from various systems, including DeOldify [6], ColTran [18], DeepRemaster [42], GCP [31], VCGAN [41], LatentColorization without the temporal consistency mechanism enabled (LatentColorization w/o TC), LatentColorization and the ground truth, for both the GRID [58] dataset (left) and the Lombard Grid [59] dataset (right) is shown. In the GRID [58] dataset, DeOldify\u2019s [6] colorization, depicted in the first row, exhibits desaturated colors and a halo effect around the subject. ColTran [18], in the second row, produces more colorful results but lacks consistency throughout the sequence. DeepRemaster [42] produces dull, conservative colorizations. GCP [31] produces colorful, consistent colorizations, but they are not faithful to the ground truth. VCGAN [41] produces drab, monotone colorizations. LatentColorization w/o TC produces colorization similar to the ground truth. It is difficult to visually distinguish between LatentColorization w/o TC, LatentColorization and the ground truth itself. The ground truth, represents the original color frames. Similar observations can be made for the Lombard Grid [59] dataset. These visual comparisons demonstrate that LatentColorization consistently delivers colorization results that closely match the original colors, making it a promising technique for automatic video colorization tasks. frames. LatentColorization Vs LatentColorization without temporal consistency: has been investigated in the ablation study. Essentially, it is difficult to visually differentiate between the two, and the main difference can be seen quantitatively in their relative FVD scores. The quantitative evaluation, as shown in Table 2, indicates that LatentColorization achieved scores on the NIQE and BRISQUE metrics that are close to human-level colorization. 
In summary, these results suggest that LatentColorization, in this experiment, is comparable to human-level colorization in terms of the assessed quality metrics. This highlights the effectiveness of the LatentColorization method in generating high-quality colorized videos. This evaluation also correlates with our survey, where LatentColorization received a higher preference from the subjects than DeOldify [6]. The survey also shows a tendency of the users to prefer the ground truth videos over both LatentColorization and DeOldify [6]. B. LIMITATIONS One of the main limitations of our approach is that the datasets we use are specific to speaker videos, causing our model to perform more poorly on out-of-domain data. We intend to address this in our future work by training our model on a more diverse dataset capturing a wide range of scenarios. Our model also performs poorly in terms of inference speed. For instance, colorizing a five-second clip at fifty diffusion steps takes roughly one hundred and fifty seconds on an Nvidia 2080 with 8GB of VRAM. One of the main drawbacks to a diffusion model-based system is its inference time, as the model must sample each frame a number of times equal to the number of diffusion steps chosen. Real-time colorization is beyond the scope of this work, but real-time operation is generally not a requirement in these applications. Ethical concerns must also be considered. Two main worries associated with this type of technology are potential misuse and bias. Defining the potential misuse of colorization systems is a difficult task with various nuances. Opponents of colorization believe that it is unnecessary and defaces the original work. Proponents retort that it makes the material more approachable to wider audiences [67].
In addition to the ethical considerations surrounding the potential misuse of these systems, there are also concerns regarding the bias of these systems [68]. Through experimentation it has been found that these systems tend to produce outputs similar to the data on which they were trained. As datasets can be biased, the models can also inherit this bias and therefore output inaccurate results. In colorization systems, this can present itself in such manners as incorrect skin colors or incorrectly colored uniforms, which may give a distorted view of history. VI." +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05791v1.json b/abs_9K/test_abstract_short_2405.05791v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f8b84cbf96ba4212e28751da3a9085ce8c554be1 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05791v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05791v1", + "title": "Sequential Amodal Segmentation via Cumulative Occlusion Learning", + "abstract": "To fully understand the 3D context of a single image, a visual system must be\nable to segment both the visible and occluded regions of objects, while\ndiscerning their occlusion order. Ideally, the system should be able to handle\nany object and not be restricted to segmenting a limited set of object classes,\nespecially in robotic applications. Addressing this need, we introduce a\ndiffusion model with cumulative occlusion learning designed for sequential\namodal segmentation of objects with uncertain categories. This model\niteratively refines the prediction using the cumulative mask strategy during\ndiffusion, effectively capturing the uncertainty of invisible regions and\nadeptly reproducing the complex distribution of shapes and occlusion orders of\noccluded objects. 
It is akin to the human capability for amodal perception,\ni.e., to decipher the spatial ordering among objects and accurately predict\ncomplete contours for occluded objects in densely layered visual scenes.\nExperimental results across three amodal datasets show that our method\noutperforms established baselines.", + "authors": "Jiayang Ao, Qiuhong Ke, Krista A. Ehinger", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "To fully understand the 3D context of a single image, a visual system must be\nable to segment both the visible and occluded regions of objects, while\ndiscerning their occlusion order. Ideally, the system should be able to handle\nany object and not be restricted to segmenting a limited set of object classes,\nespecially in robotic applications. Addressing this need, we introduce a\ndiffusion model with cumulative occlusion learning designed for sequential\namodal segmentation of objects with uncertain categories. This model\niteratively refines the prediction using the cumulative mask strategy during\ndiffusion, effectively capturing the uncertainty of invisible regions and\nadeptly reproducing the complex distribution of shapes and occlusion orders of\noccluded objects. It is akin to the human capability for amodal perception,\ni.e., to decipher the spatial ordering among objects and accurately predict\ncomplete contours for occluded objects in densely layered visual scenes.\nExperimental results across three amodal datasets show that our method\noutperforms established baselines.", + "main_content": "Introduction Robots often encounter unfamiliar objects in ever-changing unstructured environments such as warehouses or homes [31]. These scenarios require systems capable of manipulating objects based on their complete shape and occlusion relationships rather than their visibility or category [2, 7, 33]. 
However, most state-of-the-art amodal segmentation methods [1, 8, 15, 32], which are usually constrained by the need for class-specific data, struggle to generalize to unseen objects and are susceptible to misclassification. Diffusion probabilistic models specialize in capturing and reproducing complex data distributions with high fidelity [11], making them well-suited for generating the invisible parts of unknown objects. In contrast to traditional convolutional networks that often struggle with the complexity of occlusions [10, 27], diffusion models proficiently reconstruct objects through their iterative refinement process. This process is particularly advantageous for inferring occluded object regions, as it progressively recovers the occluded parts based on visible context and learned possible object shapes. Additionally, while current amodal segmentation methods typically overlook the uncertainty in the shape of the hidden part, diffusion models inherently sample from the learned distribution [25, 38], providing multiple plausible hypotheses for the occluded shape. Given these capabilities, diffusion models present a fitting approach for advancing the field of amodal segmentation. We introduce a novel diffusion model for sequential amodal segmentation that does not rely on object categories. Our approach transcends traditional single or dual-layer prediction limitations [12, 17, 22] by enabling the simultaneous segmentation of unlimited object layers in an image. In addition, our framework generates multiple plausible amodal masks for each arXiv:2405.05791v1 [cs.CV] 9 May 2024 \f2 Figure 1: The cumulative mask and amodal mask predictions for an input image. Our method can generate reliable amodal masks layer by layer and allows multiple objects per layer. 
object from a single input image, contrasting with prior approaches that depend on multiple ground truths to achieve varied results [9, 25, 34]. Tailored to the amodal task, our method requires only a single ground truth per object during training to capture the diversity of occlusions, overcoming the limitations of existing amodal datasets that typically provide only one annotation per object and neglect the variability in invisible regions. Our framework takes an RGB image as input and sequentially predicts the amodal masks for each object, as illustrated in Fig. 1. The iterative refinement process of our proposed algorithm, inspired by human perception mechanisms for invisible regions [28], leverages preceding identified items to infer subsequent occluded items. Specifically, it employs a cumulative mask, which aggregates the masks of previously identified objects. This strategy allows the model to maintain a clear record of areas already segmented, directing its focus toward unexplored regions. By focusing the prediction effort on uncertain or occluded regions, our approach improves the accuracy and reliability of the amodal segmentation process. We validate our approach through comprehensive ablation studies and performance benchmarking across three amodal datasets, demonstrating its superiority in handling complex sequential amodal segmentation challenges. The main contributions of our work are: \u2022 A new sequential amodal segmentation method capable of predicting unlimited layers of occlusion, enabling occlusion modelling in complex visual scenes. \u2022 Occluded shape representation which is not based on labelled object categories, enhancing its applicability in diverse and dynamic settings. \u2022 A diffusion-based approach to generating amodal masks that captures the uncertainty over occluded regions, allowing for diverse segmentation outcomes. 
2 Related Work Amodal segmentation with order perception requires segmentation of entire objects, including both visible and occluded regions, while explicitly resolving the layer order of \f3 all objects in the image. Establishing layering of objects allows for a comprehensive understanding of the scene and the spatial relationships between objects, which is essential for tasks such as autonomous driving, robot grasping, and image manipulation [2, 14, 40]. Current amodal segmentation methods mainly assess occlusion states of individual objects [6, 22, 26, 30] or between pairs [2, 12, 37], but tend to ignore the global order in a complex scene, such as the relationship between independent groups. While some work [1, 40] has begun to address amodal segmentation with perceptible order, they fall short for class-agnostic applications due to design constraints on category-specific dependencies. Class-agnostic segmentation aims to detect masks without relying on pre-learned category-specific knowledge. It is vital for scenarios where comprehensive labelling is resource-intensive or when encountering unseen categories [23, 31]. However, amodal segmentation approaches usually depend on predefined class labels and thus have limited ability to handle unknown objects [15, 19]. While there are a few methods which consider class-agnostic amodal segmentation, [2] is for RGB-D images with depth data rather than RGB images, [5] relies on the bounding box of the object as an additional input to predict amodal masks, [41] treats amodal mask prediction and ordering as separate tasks and thus designs a method for each individually, and others require additional inputs for prediction, such as a visible mask [20, 39]. Segmentation with diffusion models has recently attracted interest for its ability to capture complex and diverse structures in an image that traditional models might miss [4, 16, 35, 36]. 
Particularly in medical imaging, diffusion models are used to generate multiple segmentation masks to simulate the diversity of annotations from different experts [9, 25, 34, 38]. However, these methods are designed for the visible part of images and do not adequately address the diversity of predictions required for the hidden part of objects. In summary, our approach addresses sequential amodal segmentation with two key improvements: First, a novel segmentation technique capable of globally predicting occlusion orders, offering a comprehensive understanding of object occlusion relationships in a scene. Second, a diffusion-based model to provide diverse predictions for amodal masks, especially for the occluded portions. This model uniquely employs cumulative occlusion learning that utilises all preceding masks to provide vital spatial context, thus boosting its ability to segment occluded objects. 3 Problem Definition Our goal is to amodally segment multiple overlapping objects within an image without object class labels, while determining the occlusion order of these objects. Specifically, the task requires inferring complete segmentation masks of all objects, including both the visible and occluded portions, and assigning a layering order to these segments. For a given RGB image I, the goal of our sequential amodal segmentation approach is two-fold. First, to produce a collection of amodal segmentation masks {Mi}N i=1, where each mask Mi represents the full extent of the corresponding object Oi within the scene\u2014this includes both visible and occluded regions. Second, to assign a layer ordering {Li}N i=1 to these objects based on their mutual occlusions, thereby constructing an occlusion hierarchy. The layer variable Li adheres to the occlusion hierarchy defined by [1]. The bi-directional occlusion relationship Z(i, j) indicates if Oi is occluded by Oj, given by: Z(i, j) = ( 1, if object Oi is occluded by object O j, 0, otherwise. 
(1) \f4 The set Si, comprising the indices of the objects occluding Oi, is defined by Si = { j|Z(i, j) = 1}. Subsequently, the layer ordering Li for each object Oi is computed based on: Li = { 1, if Si = \u2205; 1 + max_{j\u2208Si} Lj, otherwise. (2) The ultimate goal is to derive an ordered sequence of amodal masks \u03c4 = \u27e8M1,...,MN\u27e9 that correctly represents the object layers in image I. 4 Methodology The architecture of our proposed model is shown in Fig. 2. Details on the architectural components, the cumulative guided diffusion model and the cumulative occlusion learning algorithm are discussed in Sections 4.1 and 4.2, respectively. Figure 2: Architecture of our model. Our model receives an RGB image as input and predicts multiple plausible amodal masks layer-by-layer, starting with the unoccluded objects and proceeding to deeper occlusion layers. Each layer\u2019s mask synthesis receives as input the cumulative occlusion mask from previous layers, thus providing a spatial context for the diffusion process and helping the model better segment the remaining occluded objects. 4.1 Diffusion-based Framework Denoising diffusion probabilistic models (DDPM) are popular generative models that provide powerful frameworks for learning complex data distributions [11]. Building on the improved DDPMs [21], we introduce a novel approach that extends the capabilities of diffusion models to the domain of amodal segmentation, which involves segmenting visible regions while inferring the shapes of occluded areas. 
This is distinct from existing diffusion models that focus primarily on visible image features, where additional understanding of occlusion structure in an image makes it a unique challenge. Cumulative mask. We introduce the cumulative mask\u2014a critical innovation that incorporates the spatial structures of objects, facilitating the understanding of both visible and occluded object parts. The cumulative mask aggregates the masks of all objects which are in front of (and potentially occluding) the current layer. Specifically, the cumulative mask for an object Oi with layer order Li encompasses the masks of all objects with a layer order lower than Li, thereby representing the cumulative occlusion up to that layer. For each object Oi with its amodal mask Mi and layer order Li, the cumulative mask CMi is formalized as: CMi = \u222a{ j | Lj < Li } Mj. The mask we selected is more suitable for constructing cumulative masks than using the mean mask directly. Failure analysis. A common challenge arises from errors in sequential prediction, particularly determining which of two objects is in front of the other when the overlapping region is occluded by a third object. This may lead to objects being predicted in incorrect layers, as illustrated in Fig. 4 (b). Synthetic images can amplify this challenge due to fewer spatial cues (such as height in the image plane or scene semantics) to disambiguate occluded object order. Our cumulative occlusion learning mitigates the impact of these errors by considering the cumulative mask for all preceding layers. We demonstrate the robustness of our method to such failures through noise introduction experiments in the next section. 5.4 Noise Introduction Experiment in Cumulative Mask Our model leverages the ground truth cumulative mask as input during training, while inference uses the predicted masks from previous layers to build the cumulative mask, as described in Sec. 4.2. 
A common idea is to utilize the predicted cumulative mask in training, mirroring the inference setup. However, this complicates the early stages of training, when all of the predicted masks (and thus the cumulative mask) are similar to random noise. To bridge the gap between training and inference, we conducted experiments in which we introduced controlled noise into the cumulative mask during training, to simulate the types of errors which occur during inference. The experiment was designed to mimic common types of inference errors, such as continuous prediction errors due to layer dependencies or over-segmentation due to boundary ambiguity. This was achieved by selectively omitting instances from a random layer in the cumulative mask while keeping the input RGB image and the prediction mask unchanged. These experiments also simulate and seek to understand the impact of sequential prediction errors on the model\u2019s performance. By introducing noise into the cumulative mask during training, we effectively create scenarios where the model must handle instances segmented into the wrong layer, as happens when the model makes sequential prediction errors. \f11 Specifically, instances from a randomly chosen layer (excluding the fully visible layer) are excluded from the cumulative mask. Mathematically, selecting a random layer index irand from [2, n], the perturbed version of the cumulative mask, denoted as P, is derived by: P = CM \u2212Mirand (12) Where CM is the original cumulative mask, and Mi is the ground truth mask of the ith layer instance (i \u2208[2,n]). The subtraction here is a pixel-wise binary operation. During training, the model will replace CM with P as input at a specified noise level ratio. 
Noise 0% 5% 10% 15% 20% 0% 5% 10% 15% 20% Layer AP IOU 1 57.8 51.7 56.6 56.0 57.6 57.1 50.3 55.8 55.3 56.9 2 45.4 37.5 44.1 40.2 40.3 44.8 35.5 43.2 38.8 39.2 3 30.0 24.6 28.0 24.9 23.5 28.8 21.9 26.8 22.4 20.8 4 14.2 10.7 12.1 10.3 9.2 12.2 7.9 10.3 8.0 6.5 5 3.6 3.3 3.4 3.2 2.9 1.9 1.9 2.2 1.7 1.0 Table 3: Comparison at different noise levels, evaluated with AP and IOU. Noise-free training results in the highest AP across the layers, and the highest IOU for the first four layers and the second highest for the fifth layer. Tab. 3 illustrates the model\u2019s performance in terms of AP and IOU across different layers and noise levels. It was observed that the highest AP was achieved with 0% noise for all layers. Similar to AP, the IOU results also showed that the highest performance was generally observed with 0% noise, except for the 5th layer, where a slight increase was noted at 10% noise level. Overall, this suggests that adding noise in training has very limited benefit. On the contrary, training without noise achieves the best performance in terms of AP or IOU in the vast majority of cases. The results of the experiment provide insight into the model\u2019s robustness to errors in the sequential segmentation process and validate the effectiveness of our cumulative occlusion learning approach. By focusing on the cumulative mask for all preceding layers, our approach avoids the cascading effects of sequential prediction errors, ensuring more reliable performance even in complex occlusion scenarios. Despite the theoretical appeal of mimicking inference conditions during training, the results indicate that using ground truth cumulative masks remains the more effective approach. This strategy consistently yielded superior results across most metrics and layers, showing its suitability to our model training process. Based on these findings, our training strategy uses the ground truth cumulative masks. 
5.5 Comparisons with Other Methods We benchmark against DIS [34], a leading diffusion-based segmentation method. For comparison, we trained distinct DIS models for each layer under the same iterations and evaluated the segmentation results separately for each layer. Tab. 4 comprehensively compares our method and the improved DIS across different layers on three amodal datasets. The performance of the MUVA dataset after five layers is omitted because the performance of both models approaches zero. The superiority of our method is particularly evident in deeper layers, where our method maintains reasonable performances, whereas DIS shows a marked \f12 Layer 1 2 3 4 5 Dataset Method IOU / AP IOU / AP IOU / AP IOU / AP IOU / AP Intra-AFruit DIS 89.5 / 90.7 81.6 / 82.6 52.4 / 52.6 9.8 / 12.4 0.5 / 2.0 Ours 94.3 / 94.7 87.4 / 88.2 76.2 / 77.3 26.7 / 27.6 7.2 / 7.4 ACOM DIS 31.6 / 34.8 26.6 / 28.7 1.6 / 10.2 0.2 / 6.0 0.1 / 2.5 Ours 57.1 / 57.8 44.8 / 45.4 28.8 / 30.0 12.2 / 14.2 1.9 / 3.6 MUVA DIS 68.2 / 71.5 19.3 / 27.3 0.1 / 8.6 0.2 / 3.4 0 / 0.5 Ours 77.0 / 79.3 48.7 / 51.2 25.4 / 27.8 8.5 / 9.9 1.0 / 1.1 Table 4: Comparison with a diffusion-based segmentation model [34] without cumulative occlusion learning. Our method exhibits great improvement in complex, deeper-layer scenes. Dataset Intra-AFruit ACOM MUVA Method Supervision Framework AP w/ Layer AP w/o Layer AP w/ Layer AP w/o Layer AP w/ Layer AP w/o Layer PointRend Supervised CNN-based N/A 70.9 N/A 22.0 N/A 38.9 AISFormer Supervised Transformer-based N/A 70.4 N/A 34.9 N/A 49.7 PLIn Weakly supervised CNN-based 42.2 78.9 3.9 17.0 16.3 47.3 Ours Supervised Diffusion-based 84.6 92.6 45.4 65.5 53.1 55.7 Table 5: Comparison with category-specific segmentation models. PointRend [13], AISFormer [32] and PLIn [1] are trained on category-specific data, whereas our models are trained using class-agnostic data. We evaluate the models by focusing solely on the segmentation quality, disregarding any category information. 
decline, especially in the MUVA dataset. These results highlight the robustness of cumulative occlusion learning in handling layered occlusions across various datasets, particularly in more complex scenarios involving multiple layers of object occlusion. Due to the lack of class-agnostic amodal segmentation methods with layer perception, we compare against category-specific methods like PLIn for amodal segmentation with occlusion layer prediction [1], AISFormer for amodal segmentation without layer perception [32], and PointRend for modal segmentation [13]. We trained these comparison models using category-labelled amodal masks to meet their requirement for category-specific learning, while our model is trained on data without category labels. For evaluation, we ignore category label accuracy for the comparison models, reporting only segmentation accuracy. We present the AP results considering two scenarios in Tab. 5: with layer prediction, where segmentation precision is contingent on correct layer assignment, and without layer prediction, where segmentation is recognized irrespective of layer placement. Despite being trained on class-agnostic data, our method surpasses category-specific models trained on category-labelled data. Furthermore, Fig. 5 visually demonstrates our method\u2019s superiority in amodal mask segmentation. Our approach provides plausible masks even for heavily occluded objects, showcasing its enhanced segmentation capability in complex scenes involving multiple layers of object occlusion. We provide more visualisations of our model\u2019s predictions for the Intra-AFruit [1] (Fig. 6), MUVA [15] (Fig. 7), and ACOM [1] (Fig. 8) test sets. As we can see from the figures, our model performs robustly with different objects and different levels of occlusion. 
\f13 Figure 5: Comparison of predictions on Intra-AFruit (top) and MUVA (bottom) test image by (b) DIS [34] (c) CIMD [25] (d) PLIn [1] (e) PointRend [13] and (a) ours, where (b) and (c) are diffusion-based methods. Dashed circles indicate objects that missed being predicted. Others fail to segment objects or provide less plausible amodal masks compared to ours. Figure 6: Visualisation of the prediction of our model on the Intra-AFruit [1] test set. Each layer\u2019s amodal mask synthesis receives the cumulative mask of the previous layers as input, thus providing a spatial context for the prediction and helping to segment the remaining occluded objects better. We can see that our model can predict amodal masks and occlusion layers well for multiple objects in a given image. \f14 Figure 7: Visualisation of the prediction of our model on the MUVA [15] test set. Figure 8: Visualisation of the prediction of our model on the ACOM [1] test set. 
\f15 6" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05800v1.json b/abs_9K/test_abstract_short_2405.05800v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b94e8a844bb1c5309836f05e25ce9c4ba7893dc9 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05800v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2405.05800v1", + "title": "DragGaussian: Enabling Drag-style Manipulation on 3D Gaussian Representation", + "abstract": "User-friendly 3D object editing is a challenging task that has attracted\nsignificant attention recently. The limitations of direct 3D object editing\nwithout 2D prior knowledge have prompted increased attention towards utilizing\n2D generative models for 3D editing. While existing methods like Instruct\nNeRF-to-NeRF offer a solution, they often lack user-friendliness, particularly\ndue to semantic guided editing. In the realm of 3D representation, 3D Gaussian\nSplatting emerges as a promising approach for its efficiency and natural\nexplicit property, facilitating precise editing tasks. Building upon these\ninsights, we propose DragGaussian, a 3D object drag-editing framework based on\n3D Gaussian Splatting, leveraging diffusion models for interactive image\nediting with open-vocabulary input. This framework enables users to perform\ndrag-based editing on pre-trained 3D Gaussian object models, producing modified\n2D images through multi-view consistent editing. 
Our contributions include the\nintroduction of a new task, the development of DragGaussian for interactive\npoint-based 3D editing, and comprehensive validation of its effectiveness\nthrough qualitative and quantitative experiments.", + "authors": "Sitian Shen, Jing Xu, Yuheng Yuan, Xingyi Yang, Qiuhong Shen, Xinchao Wang", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.GR", + "cats": [ + "cs.GR", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "User-friendly 3D object editing is a challenging task that has attracted\nsignificant attention recently. The limitations of direct 3D object editing\nwithout 2D prior knowledge have prompted increased attention towards utilizing\n2D generative models for 3D editing. While existing methods like Instruct\nNeRF-to-NeRF offer a solution, they often lack user-friendliness, particularly\ndue to semantic guided editing. In the realm of 3D representation, 3D Gaussian\nSplatting emerges as a promising approach for its efficiency and natural\nexplicit property, facilitating precise editing tasks. Building upon these\ninsights, we propose DragGaussian, a 3D object drag-editing framework based on\n3D Gaussian Splatting, leveraging diffusion models for interactive image\nediting with open-vocabulary input. This framework enables users to perform\ndrag-based editing on pre-trained 3D Gaussian object models, producing modified\n2D images through multi-view consistent editing. Our contributions include the\nintroduction of a new task, the development of DragGaussian for interactive\npoint-based 3D editing, and comprehensive validation of its effectiveness\nthrough qualitative and quantitative experiments.", + "main_content": "Introduction 1 2 Background 3 2.1 Preliminaries on 3D Gaussian Splatting .................................................... 3 2.2 Preliminaries on Diffusion Models .......................................................... 
3 2.3 Preliminaries on Multi-view Diffusion Model 4 3 Literature Review 6 3.1 2D Editing 6 3.2 3D Editing 6 3.3 Multi-view Diffusion Model 7 4 Methodology 8 4.1 Interactive 3D Point-based Manipulation 8 4.2 Multi-view Identity Preserving Fine-tuning 10 4.3 Multi-view Consistent Editing 10 4.3.1 Motion Supervision 10 4.3.2 Point Tracking 12 4.3.3 Multi-view Consistent Denoising 12 5 Experiments 13 5.1 Implementation Details 13 5.2 Editing Results 13 ii \f5.3 Ablation Study 14 6 Discussion 17 7 Conclusion 18 iii \fList of Figures 4.1 Pipeline of DragGaussian. 8 4.2 Overview of our UI. 9 4.3 Drawing mask using the brush. 9 4.4 Stages for Multi-view Consistent Editing. 11 5.1 Multi-view consistent editing on a chair. 14 5.2 Editing and masking on a Gaussian chair. 
14 5.3 Drawing mask using the brush. 15 5.4 Editing results on multi-view images generated by MVDream. (a) column shows the initial images, (b) column shows the images after DDIM Inversion and DDIM Sampling process without editing, (c) column shows the images edited by our multi-view image consistent editing pipeline without LoRA finetune, and (d) column shows the editing results with LoRA finetune. 16 iv \fChapter 1 Introduction Due to the lack of large-scale 3D datasets, editing directly on a 3D object or scene without 2D prior often lacks generalizability. 3D editing utilizing 2D generative models [1] has gained extensive attention recently. A well-known framework for 3D editing was proposed in Instruct NeRF-to-NeRF [2], that is, to edit the multi-view projected 2D images from a 3D representation using 2D editing methods, and then use the edited 2D images as new training data to finetune the initial 3D representation. However, the 2D editing algorithm in this work called Instruct Pix2Pix [3] is a semantic guided editing method, which isn\u2019t very user-friendly. Compared with semantic guidance, drag operations can better represent the user\u2019s editing goal. To address this issue, we tried to leverage the widely adopted 2D drag editing approach known as DragGAN [4], renowned for its ability to facilitate interactive point-based image manipulation. Within this framework, users initiate the editing process by selecting pairs of anchor and destination points on an image. Subsequently, the model executes semantically coherent modifications, seamlessly relocating the content from the anchor points to their corresponding destinations. 
Moreover, users have the option to delineate specific editable regions by drawing masks, ensuring targeted alterations while preserving the remainder of the image. However, it\u2019s worth noting that due to its reliance on GAN [5] within its network architecture, this method cannot deal with open-vocabulary inputs, posing a significant drawback on generalizability for its practical application in real-world scenarios. In this case, we turned to DragDiffusion, the first interactive point-based image editing method with diffusion models. Empowered by large-scale pre-trained diffusion models [6], DragDiffusion achieves accurate spatial control in image editing with significantly better generalizability, while allowing openvocabulary input data. 1 \fWhen it comes to 3D representation, Neural radiance field (NeRF) methods [7, 8] have shown great power and synthesizing novel-view images. However, as current 2D diffusion models have the limited ability to localize editing regions, it is hard to generate delicate edited scenes as the unwanted regions are usually changed with diffusion models. Recent 3D Gaussian Splatting [9](3D-GS) has been a groundbreaking work in the field of radiance field, which is the first to achieve a real sense of realtime rendering while enjoying high rendering quality and training speed. Besides its efficiency, we further notice its natural explicit property. 3D-GS has a great advantage for editing tasks as each 3D Gaussian exists individually. It is easy to edit 3D scenes by directly manipulating 3D Gaussians with desired constraints applied, including choosing explicit editing mask and choosing the center of one of the Gaussian points as the editing start points (handle points). Some works [10, 11] have proposed effective semantic guided editing on 3D Gaussian recently. Taking into account the above considerations, we propose a 3D object drag-editing framework based on 3D Gaussian Splatting representation, named DragGaussian. 
In our pipeline, users load pre-trained 3D Gaussian object models into the system through a user interface. Upon visualization, they can designate initial and final points for drag editing or specify regions to edit. These points are projected onto 2D images from various camera angles via the projection module. Subsequently, employing Multi-view Consistent Editing, we produce the modified 2D images. Prior to this, to enhance the adaptability of the pre-trained 2D editing network to diverse inputs, we fine-tune the editing network using an enhanced, multi-view version of LoRA. Finally, we refine the original 3D Gaussian model using the modified 2D images to present the edited appearance of the 3D objects. Our contributions are summarized as follows: 1) We propose a new task: 3D object editing through drag operations on 3D Gaussians. 2) We present a novel 3D object editing method, DragGaussian, the first to achieve interactive point-based 3D editing with diffusion models on the 3D Gaussian representation. 3) Comprehensive qualitative and quantitative experiments demonstrate the effectiveness of our DragGaussian. \fChapter 2 Background 2.1 Preliminaries on 3D Gaussian Splatting 3D Gaussian Splatting [9] is a modern and effective method for 3D visualization. It uses point-based 3D Gaussians, denoted $G = \\{g_1, g_2, \\ldots, g_N\\}$, where each $g_i$ consists of a set $\\{\\mu_i, \\Sigma_i, c_i, \\alpha_i\\}$ with $i$ ranging from 1 to $N$. Here, $\\mu_i$ represents the 3D position of the Gaussian's center, $\\Sigma_i$ is the 3D covariance matrix, $c_i$ is the RGB color value, and $\\alpha_i$ denotes the opacity. This method is notable for its streamlined Gaussian representation and its efficient, differentiable rendering technique, allowing for high-quality, real-time rendering.
The rendering process of splatting is described by the equation
$$C = \\sum_{i \\in N} c_i \\sigma_i \\prod_{j=1}^{i-1} (1 - \\alpha_j), \\quad (2.1)$$
where $\\sigma_i = \\alpha_i e^{-\\frac{1}{2} x_i^T \\Sigma^{-1} x_i}$ indicates the Gaussian's impact on a pixel, with $x_i$ being the pixel's distance from the $i$-th Gaussian's center. 2.2 Preliminaries on Diffusion Models Denoising diffusion probabilistic models (DDPM) [12] constitute a family of latent generative models. Given a data distribution $q(z)$, DDPM approximates $q(z)$ as the marginal $p_\\theta(z_0)$ of the joint distribution between $Z_0$ and a collection of latent random variables $Z_{1:T}$. Specifically,
$$p_\\theta(z_0) = \\int p_\\theta(z_0, z_{1:T}) \\, dz_{1:T}, \\quad (2.2)$$
where $p_\\theta(z_T)$ is a standard normal distribution and the transition kernels $p_\\theta(z_{t-1} \\mid z_t)$ of this Markov chain are all Gaussian conditioned on $z_t$. In our context, $Z_0$ corresponds to image samples given by users, and $Z_t$ corresponds to the latent after $t$ steps of the diffusion process. [13] proposes the Latent Diffusion Model (LDM), which maps data into a lower-dimensional space via a Variational Auto-Encoder (VAE) [14] and models the distribution of the latent embeddings instead. Based on the LDM framework, several powerful pretrained diffusion models have been released publicly, including the Stable Diffusion (SD) model (https://huggingface.co/stabilityai).
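The front-to-back alpha compositing of Eq. (2.1) can be sketched in a few lines. This is a toy per-pixel version under the simplifying assumption that the Gaussians overlapping the pixel are already depth-sorted and that each Gaussian's per-pixel weight $\\sigma_i$ (its opacity times the 2D Gaussian falloff) is precomputed; we also treat $\\sigma_i$ and $\\alpha_i$ as the same effective per-pixel opacity.

```python
# Sketch of the alpha-compositing render in Eq. (2.1): accumulate each
# Gaussian's color, attenuated by the transmittance left by closer Gaussians.
def composite_pixel(colors, sigmas):
    """colors: depth-sorted list of (r, g, b); sigmas: per-Gaussian weights in [0, 1]."""
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # prod_{j<i} (1 - alpha_j), front to back
    for color, sigma in zip(colors, sigmas):
        for ch in range(3):
            pixel[ch] += color[ch] * sigma * transmittance
        transmittance *= (1.0 - sigma)
    return pixel, transmittance
```

A fully opaque front Gaussian (weight 1.0) leaves zero transmittance, so anything behind it contributes nothing, which is the intended occlusion behavior.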
In SD, the network responsible for modeling $p_\\theta(z_{t-1} \\mid z_t)$ is implemented as a UNet [15] that comprises multiple self-attention and cross-attention modules [16]. Building on DDPM, Denoising Diffusion Implicit Models (DDIM) [17] have also been proposed, whose sampling process is shown in Eq. 2.3. DDIM addresses two issues of DDPM: it has a fast sampling speed, and its sampling process is deterministic because $\\sigma$ can be set to 0, so no noise is introduced during sampling.
$$x_{t-1} = \\sqrt{\\bar{\\alpha}_{t-1}} \\left( \\frac{x_t - \\sqrt{1 - \\bar{\\alpha}_t}\\, \\epsilon_t}{\\sqrt{\\bar{\\alpha}_t}} \\right) + \\sqrt{1 - \\bar{\\alpha}_{t-1} - \\sigma^2}\\, \\epsilon_t + \\sigma \\epsilon \\quad (2.3)$$
The formula above can be reversed to derive the calculation of $x_t$ from $x_{t-1}$, as shown in Eq. 2.4; this is called DDIM Inversion. Given the initial image $x_0$, all that is needed is to use a pre-trained U-Net model to predict noise and then iteratively update to obtain $x_t$.
$$x_t = \\frac{\\sqrt{\\bar{\\alpha}_t}}{\\sqrt{\\bar{\\alpha}_{t-1}}} \\left( x_{t-1} - \\sqrt{1 - \\bar{\\alpha}_{t-1}}\\, \\epsilon_t \\right) + \\sqrt{1 - \\bar{\\alpha}_t}\\, \\epsilon_t. \\quad (2.4)$$
2.3 Preliminaries on Multi-view Diffusion Models By leveraging insights from both 2D and 3D datasets, a multi-view diffusion model can attain the generalizability of 2D diffusion models while maintaining the consistency of 3D renderings. In an earlier study, MVDream [18] laid the groundwork by effectively training a multi-view diffusion network.
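The determinism of the inversion/sampling pair (Eqs. 2.3 and 2.4 with $\\sigma = 0$) can be verified on a toy 1-D latent. The stand-in `eps` below replaces the U-Net noise prediction with a constant so that inversion followed by sampling exactly round-trips; the schedule values are illustrative, not the real noise schedule.

```python
import math

# Toy 1-D sketch of deterministic DDIM (sigma = 0), Eqs. (2.3)-(2.4).
def ddim_inversion_step(x_prev, eps, ab_t, ab_prev):
    # Eq. (2.4): map x_{t-1} forward to x_t
    return (math.sqrt(ab_t) / math.sqrt(ab_prev)) * (
        x_prev - math.sqrt(1 - ab_prev) * eps) + math.sqrt(1 - ab_t) * eps

def ddim_sampling_step(x_t, eps, ab_t, ab_prev):
    # Eq. (2.3) with sigma = 0: predict x_0, then re-noise to level t-1
    x0_pred = (x_t - math.sqrt(1 - ab_t) * eps) / math.sqrt(ab_t)
    return math.sqrt(ab_prev) * x0_pred + math.sqrt(1 - ab_prev) * eps

ab = [1.0, 0.9, 0.7, 0.5]      # toy alpha-bar schedule, ab[0] = 1 at t = 0
eps_const = 0.3                # stand-in for the U-Net noise prediction
x = 1.234                      # "image" latent x_0
for t in range(1, len(ab)):            # invert x_0 -> x_T
    x = ddim_inversion_step(x, eps_const, ab[t], ab[t - 1])
for t in range(len(ab) - 1, 0, -1):    # sample back x_T -> x_0
    x = ddim_sampling_step(x, eps_const, ab[t], ab[t - 1])
```

Because each sampling step is the exact algebraic inverse of the corresponding inversion step (for the same noise prediction), `x` returns to its starting value up to floating-point error; in the real pipeline the predictions differ slightly per step, which is why Fig. 5.4 column (b) shows a near- but not pixel-perfect reconstruction.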
This network generates four orthogonal and coherent multi-view images based on a given text prompt and corresponding camera embedding. In our DragGaussian approach, we utilize the U-Net architecture pretrained in MVDream as a foundation model. In the training stage of MVDream, each block of the multi-view network contains a densely connected 3D attention over the four view images, which allows strong interaction in learning the correspondence between different views. To train such a network, it adopts joint training on a dataset rendered from Objaverse [19] and a larger-scale text-to-image (t2i) dataset, LAION-5B [20], to maintain the generalizability of the fine-tuned model. Formally, given a text-image dataset $\\mathcal{X} = \\{x, y\\}$ and a multi-view dataset $\\mathcal{X}_{mv} = \\{x_{mv}, y, c_{mv}\\}$, where $x$ is a latent image embedding from the VAE [14], $y$ is a text embedding from CLIP [21], and $c$ is their self-designed camera embedding, we may formulate the multi-view (MV) diffusion loss as
$$\\mathcal{L}_{MV}(\\theta, \\mathcal{X}, \\mathcal{X}_{mv}) = \\mathbb{E}_{x, y, c, t, \\epsilon} \\left[ \\lVert \\epsilon - \\epsilon_\\theta(x^p_t; y, c^p, t) \\rVert^2 \\right], \\quad (2.5)$$
where
$$(x^p, c^p) = \\begin{cases} (x, \\varnothing) & \\text{with probability } p, \\\\ (x_{mv}, c_{mv}) & \\text{with probability } 1 - p. \\end{cases}$$
Here, $x^p_t$ is the noisy latent generated from a random noise $\\epsilon$ and the image latent, and $\\epsilon_\\theta$ is the multi-view diffusion (MVDiffusion) model parameterized by $\\theta$. \fChapter 3 Literature Review 3.1 2D Editing In the domain of 2D image editing, guidance modalities encompass visual, textual, and other forms such as point-based mouth tracking [22].
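The case split in Eq. (2.5), which mixes plain text-image samples (camera embedding dropped) with multi-view samples at probability $p$, amounts to a simple per-example coin flip at data-loading time. A minimal sketch, with illustrative names rather than MVDream's actual data loader:

```python
import random

# Sketch of the data mixing in Eq. (2.5): with probability p draw a plain
# text-image sample and drop the camera conditioning, otherwise draw a
# multi-view sample together with its camera embedding.
def sample_training_example(t2i_data, mv_data, p, rng=random):
    if rng.random() < p:
        x, y = rng.choice(t2i_data)
        return x, y, None          # no camera conditioning for 2D data
    x_mv, y, c_mv = rng.choice(mv_data)
    return x_mv, y, c_mv
```

Setting `p` between 0 and 1 trades 2D generalizability (large t2i corpus) against 3D consistency (rendered multi-view corpus), which is the balance the joint Objaverse/LAION training aims for.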
Visual guidance leverages pixel-space properties to afford precise control, allowing image synthesis techniques to be repurposed across a range of editing tasks by adjusting visual elements such as semantic maps, as seen in [23, 24]. Textual guidance, in contrast, offers a more dynamic and flexible medium for articulating visual concepts, exemplified by methods like InstructPix2Pix [3]. Recent advancements have spotlighted point-based editing as a method for nuanced image content manipulation, illustrated by [25, 4, 26]. Specifically, DragGAN [4] employs latent-code optimization and point tracking for effective dragging manipulation, though its utility is bounded by the intrinsic limitations of GANs. FreeDrag [27] seeks to enhance DragGAN through a novel point-tracking-free approach. Furthermore, advancements in diffusion models [12] have led to developments such as DragDiffusion [28] and DragonDiffusion [29], which adapt DragGAN's framework to the diffusion-model context, significantly enhancing its versatility and generalization capabilities. 3.2 3D Editing In the realm of NeRF-based 3D editing, the integration of text- and image-guided methodologies marks a significant advancement. Works such as SINE [30], NeRF-Art [31], and TextDeformer [32] employ semantics-driven and geometric manipulation techniques, while CLIP-NeRF [33] innovatively combines text and image inputs for enhanced NeRF manipulation. Building on these foundations, InstructNeRF2NeRF [2] introduces a text-guided scene editing approach using 2D diffusion models, albeit with the caveat of potential global scene impacts due to its reliance on 2D images. In parallel, methods like ED-NeRF [34] and DreamEditor [35] utilize static masking to define editing zones, and Watch Your Steps [36] dynamically localizes edits via relevance mapping during NeRF training, further refining the precision of NeRF-based editing.
Focusing on object-level modifications, FocalDreamer [37] offers meticulous control over object attributes, ensuring detailed and consistent alterations, while Image Sculpting [38] introduces an intuitive platform for geometric adjustments, enabling users to artistically sculpt and modify objects within 3D spaces. APAP [1] offers a novel shape deformation method that enables plausibility-aware mesh deformation and preservation of the original mesh's fine details, while offering an interface that alters geometry by directly displacing a handle along a chosen direction. These approaches exemplify the nuanced control and artistic freedom now achievable in object editing. Expanding the spectrum of 3D editing techniques, Gaussian-based methods, as exemplified by GaussianEditor [11, 10], utilize text directives for the refined manipulation of 3D Gaussians. This underscores the agility, precision, and controllability inherent in Gaussian representations for 3D edits. In a complementary vein, Drag3D merges the capabilities of DragGAN [4] and GET3D [39] to pioneer an interactive 3D dragging-edit demonstration. This mesh-based manipulation technique empowers users with direct, intuitive interactions, leveraging advanced generative and reconstructive technologies to facilitate object transformations, thereby enriching the toolkit available for 3D object editing. However, Drag3D's mesh-based nature can result in less smooth and realistic 3D representations, and its reliance on DragGAN restricts it to predefined semantics, limiting open-ended drag editing. 3.3 Multi-view Diffusion Models Traditional 2D diffusion models [40, 6] are tailored for single-view image generation and lack capabilities for 3D viewpoint adjustment.
In response, recent innovations such as SweetDreamer [41], Wonder3D [42], Zero123++ [43], Viewset Diffusion [44], SyncDreamer [45], MVDream [18], and ImageDream [46] have emerged, refining multi-view diffusion models with 3D data to integrate camera poses, thereby facilitating multi-view image generation of a single object from text prompts or single images. Concurrently, 3D editing techniques like Efficient-NeRF2NeRF [47] have leveraged multi-view enhanced diffusion models to heighten editing precision. \fChapter 4 Methodology In this section, we formally present the proposed DragGaussian pipeline and the whole system. The pipeline is shown in Fig. 4.1. Users upload pre-trained 3D Gaussian object models into the system via a UI, whereupon, after visualization, they can select starting and ending points for drag editing, or editable regions (Sec. 4.1). Through the projection module, we obtain the coordinates of the selected points on 2D images from different camera perspectives. Subsequently, via Multi-view Consistent Editing (Sec. 4.3), we generate the edited 2D images. Prior to this, to better adapt the pre-trained 2D editing network to open-vocabulary input, we fine-tune the editing network using an improved, multi-view version of LoRA (Sec. 4.2). Finally, we further train the original 3D Gaussian model using the edited 2D images to present the edited appearance of the 3D objects. Figure 4.1: Pipeline of DragGaussian. 4.1 Interactive 3D Point-based Manipulation The user interface (UI) overview is illustrated in Fig. 4.2. Building on an existing UI repository, we have developed our custom supersplat tool.
Several buttons, including 'start points', 'end points', and 'brush', have been integrated into this interface. Users have the flexibility to select precisely $n$ Gaussian points as start points for dragging ($\\mathbf{S} = \\{S_1, S_2, \\ldots, S_n\\}$), followed by the selection of corresponding end points anywhere within the 3D space ($\\mathbf{E} = \\{E_1, E_2, \\ldots, E_n\\}$). Subsequently, the coordinates of these points, along with four randomly chosen camera poses $\\mathbf{C} = \\{C_1, C_2, C_3, C_4\\}$, are passed to a projection module $P$. The projection module computes the coordinates of these points on 2D splatted images of the 3D Gaussians for the given poses, denoted as $s = \\{s_1, s_2, \\ldots, s_n\\}$ and $e = \\{e_1, e_2, \\ldots, e_n\\}$, using the following functions:
$$s = P(\\mathbf{S}, \\mathbf{C}), \\quad e = P(\\mathbf{E}, \\mathbf{C}). \\quad (4.1)$$
Figure 4.2: Overview of our UI. (a) Splat size configured to 0. (b) Splat size configured to 1. Unlike other 3D object representation methods, the 3D Gaussian representation is explicit, allowing users to directly specify editable Gaussians in 3D space. In our UI, users can click the 'brush' button to apply a mask to Gaussian points eligible for editing, as illustrated in Fig. 4.3. This set of Gaussian points, $G_{mask}$, which is subject to updates, is then fed into the model. During the subsequent fine-tuning of the pre-trained Gaussians, $G_{mask}$ serves as a control mechanism, ensuring that the parameters of Gaussians not belonging to $G_{mask}$ are not updated. Figure 4.3: Drawing a mask using the brush. (a) Splat size configured to 0. (b) Splat size configured to 1.
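The projection module $P$ of Eq. (4.1) maps each selected 3D point to a pixel coordinate per camera pose. As a minimal sketch we use a plain pinhole model; the rotation `R`, translation `t`, focal length `f`, and principal point `(cx, cy)` are assumptions for illustration, since the real module projects through the splatting camera model.

```python
# Pinhole-projection sketch of the projection module P in Eq. (4.1).
def project_point(p_world, R, t, f, cx, cy):
    # world -> camera coordinates (R is a 3x3 world-to-camera rotation)
    cam = [sum(R[r][k] * p_world[k] for k in range(3)) + t[r] for r in range(3)]
    x, y, z = cam
    # perspective divide onto the image plane
    return (f * x / z + cx, f * y / z + cy)

def project_points(points, cameras, f=100.0, cx=50.0, cy=50.0):
    # mirrors s = P(S, C): one 2D coordinate per (point, camera) pair
    return [[project_point(p, R, t, f, cx, cy) for (R, t) in cameras]
            for p in points]
```

With the identity rotation and zero translation, a point on the optical axis lands at the principal point, which is a quick sanity check for any camera convention.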
\f4.2 Multi-view Identity-Preserving Fine-tuning Before editing the multi-view 2D images, we conduct an extra multi-view identity-preserving fine-tuning of the pre-trained multi-view diffusion U-Net. This stage ensures that the denoising U-Net encodes the features of the multi-view images more accurately than it would without this procedure, thus helping preserve the identity of the multi-view images throughout the editing process. Following the fine-tuning method of DragDiffusion, our fine-tuning is also implemented with LoRA [48], whose objective function is
$$\\mathcal{L}_{ft}(z, \\Delta\\theta) = \\mathbb{E}_{\\epsilon, t} \\left[ \\lVert \\epsilon - \\epsilon_{\\theta + \\Delta\\theta}(\\alpha_t z + \\sigma_t \\epsilon) \\rVert_2^2 \\right], \\quad (4.2)$$
where $\\theta$ and $\\Delta\\theta$ represent the U-Net and LoRA parameters respectively, $z$ is the real image, $\\epsilon \\sim \\mathcal{N}(0, \\mathbf{I})$ is the randomly sampled noise map, $\\epsilon_{\\theta + \\Delta\\theta}(\\cdot)$ is the noise map predicted by the LoRA-integrated U-Net, and $\\alpha_t$ and $\\sigma_t$ are parameters of the diffusion noise schedule at diffusion step $t$. The fine-tuning objective is optimized via gradient descent on $\\Delta\\theta$. During fine-tuning, images of $n$ views of the same 3D Gaussian are combined into a single batch, denoted $\\mathbf{z}$, sharing the same sampled timestep $t$ but with independently sampled noise maps $\\epsilon_i$. For each batch, our loss function can be specified as
$$\\mathcal{L}_{batch}(\\mathbf{z}, \\Delta\\theta) = \\frac{1}{n} \\sum_{i=1}^{n} \\lVert \\epsilon_i - \\epsilon_{\\theta + \\Delta\\theta}(\\alpha_t z_i + \\sigma_t \\epsilon_i) \\rVert_2^2. \\quad (4.3)$$
Following the setting of MVDream, we set $n = 4$ during fine-tuning. 4.3 Multi-view Consistent Editing After multi-view identity-preserving fine-tuning, we add noise to the projected 2D images using DDIM Inversion (Sec. 2.2), and then apply motion supervision and point tracking to optimize the diffusion latent according to the user's directions, achieving the edit on the 2D images. 4.3.1 Motion Supervision We denote the $n$ 2D handle points (start points) at the $k$-th motion-supervision iteration as $\\{h_i^k = (x_i^k, y_i^k) : i = 1, \\ldots, n\\}$ and their corresponding (fixed) target points as $\\{g_i : i = 1, \\ldots, n\\}$. Figure 4.4: Stages for Multi-view Consistent Editing. Each of the four input images is denoted as $x_0$; the $t$-th step latent (the result of $t$ steps of DDIM Inversion) is denoted as $z_t$. We pass $z_t$ through the U-Net architecture extracted from the pre-trained MVDream model. Let $F(z_t)$ represent the output feature maps of the U-Net used for motion supervision, and $F_{h_i^k}(z_t)$ denote the feature vector at pixel location $h_i^k$.
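The per-batch LoRA objective of Sec. 4.2 (Eq. 4.3) reduces to averaging a noise-prediction error over the $n$ views, which share a timestep but draw independent noise. The sketch below works on toy scalar latents and replaces the LoRA-integrated U-Net with a stand-in callable `eps_pred`; all names are illustrative, not the actual training code.

```python
import random

# Toy, scalar sketch of the per-batch LoRA objective in Eq. (4.3).
def batch_loss(z_views, eps_pred, alpha_t, sigma_t, rng=random):
    total = 0.0
    for z_i in z_views:                  # n views share the timestep t
        eps_i = rng.gauss(0.0, 1.0)      # independently sampled noise map
        noisy = alpha_t * z_i + sigma_t * eps_i
        total += (eps_i - eps_pred(noisy)) ** 2
    return total / len(z_views)
```

In the real setup only the low-rank LoRA parameters receive gradients from this loss, so the frozen MVDream U-Net retains its multi-view prior while adapting to the specific object being edited.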
Also, we denote the square patch centered around $h_i^k$ as $\\Omega(h_i^k; r_1) = \\{(x, y) : |x - x_i^k| \\le r_1, |y - y_i^k| \\le r_1\\}$. Then, the motion supervision loss at the $k$-th iteration is defined as
$$\\mathcal{L}_{ms}(\\hat{z}_t^k) = \\sum_{i=1}^{n} \\sum_{q \\in \\Omega(h_i^k; r_1)} \\lVert F_{q + d_i}(\\hat{z}_t^k) - \\mathrm{sg}(F_q(\\hat{z}_t^k)) \\rVert_1 + \\lambda \\lVert (\\hat{z}_{t-1}^k - \\mathrm{sg}(\\hat{z}_{t-1}^0)) \\odot (1 - M) \\rVert_1, \\quad (4.4)$$
where $\\hat{z}_t^k$ is the $t$-th step latent after the $k$-th update, $\\mathrm{sg}(\\cdot)$ is the stop-gradient operator (i.e., the gradient is not back-propagated through the term $\\mathrm{sg}(F_q(\\hat{z}_t^k))$), $d_i = (g_i - h_i^k) / \\lVert g_i - h_i^k \\rVert_2$ is the normalized vector pointing from $h_i^k$ to $g_i$, $M$ is the binary mask specified by the user, and $F_{q + d_i}(\\hat{z}_t^k)$ is obtained via bilinear interpolation, since the components of $q + d_i$ may not be integers.
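Two small geometric ingredients of Eq. (4.4) can be made concrete: the unit drag direction $d_i$ and the square patch $\\Omega(h; r_1)$ over which features are supervised. This is a sketch of those quantities only; the loss itself needs U-Net feature maps and autograd, which are out of scope here.

```python
# Unit drag direction d_i = (g_i - h_i) / ||g_i - h_i||_2 from Eq. (4.4).
def drag_direction(handle, target):
    dx, dy = target[0] - handle[0], target[1] - handle[1]
    norm = (dx * dx + dy * dy) ** 0.5
    return (dx / norm, dy / norm)

# Integer pixel locations of the square patch Omega(h; r1) around a handle.
def patch(handle, r1):
    hx, hy = handle
    return [(x, y)
            for x in range(hx - r1, hx + r1 + 1)
            for y in range(hy - r1, hy + r1 + 1)]
```

With $r_1 = 1$ (the setting in Sec. 5.1), each handle point supervises a 3x3 patch of features, nudged one unit step toward its target per iteration.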
In each iteration, $\\hat{z}_t^k$ is updated by taking one gradient-descent step to minimize $\\mathcal{L}_{ms}$:
$$\\hat{z}_t^{k+1} = \\hat{z}_t^k - \\eta \\frac{\\partial \\mathcal{L}_{ms}(\\hat{z}_t^k)}{\\partial \\hat{z}_t^k}, \\quad (4.5)$$
where $\\eta$ is the learning rate for latent optimization. 4.3.2 Point Tracking Since motion supervision updates $\\hat{z}_t^k$ on the four 2D images, the positions of the handle points on them may also change. Therefore, we need to perform point tracking on each of the four images to update the handle points after each motion-supervision step. To achieve this, we use the multi-view U-Net feature maps $F(\\hat{z}_t^{k+1})$ and $F(z_t)$ to track the new start points. Specifically, we update each start point $h_i^k$ with a nearest-neighbor search within the square patch $\\Omega(h_i^k; r_2) = \\{(x, y) : |x - x_i^k| \\le r_2, |y - y_i^k| \\le r_2\\}$ as follows:
$$h_i^{k+1} = \\arg\\min_{q \\in \\Omega(h_i^k; r_2)} \\lVert F_q(\\hat{z}_t^{k+1}) - F_{h_i^0}(z_t) \\rVert_1. \\quad (4.6)$$
4.3.3 Multi-view Consistent Denoising After we obtain the edited 2D images from several viewpoints (here, four), we come to the stage of denoising them. Using the U-Net pre-trained by MVDream (Sec. 2.3) and fine-tuned with LoRA (Sec.
4.2) to predict noise, and following the DDIM Sampling equation (Eq. 2.3), the four edited latent 2D images can be denoised step by step. As in the sampling process of DragDiffusion, which uses the U-Net from Stable Diffusion [40], we replace the key and value vectors generated from $\\hat{z}_t$ with the ones generated from $z_t$. With this simple replacement technique, the query vectors generated from $\\hat{z}_t$ are directed to query the correlated contents and texture of $z_t$. \fChapter 5 Experiments 5.1 Implementation Details In our experimental setup, we configured the number of steps for DDIM Inversion and Sampling to be 50 and set the guidance scale to 1.0. Specifically, during the DDIM Sampling process, we shared the Key-Value (KV) pairs from layer 8 out of 10, starting from step 0. When editing real images, we do not apply classifier-free guidance (CFG) [49] in either the DDIM Inversion or the DDIM Denoising process. For the LoRA fine-tuning process, we employed a learning rate of 0.0005 for 300 LoRA steps. In the Multi-view Consistent Editing module, we use the Adam optimizer with a learning rate of 0.01 to optimize the latent. The maximum number of optimization steps is 80. The hyperparameters $r_1$ in Eq. 4.4 and $r_2$ in Eq. 4.6 are set to 1 and 3, respectively; $\\lambda$ in Eq. 4.4 is set to 0.1 by default. Our experiments were conducted on 2 NVIDIA RTX 3090 GPUs. 5.2 Editing Results Fig. 5.3 shows the editing results on a pretrained LEGO 3D Gaussian. Images in the left column are rendered from the initial well-trained Gaussians, images in the middle column are 2D editing results with start points (red) and end points (blue), and images in the right column are renderings of the edited 3D Gaussians after fine-tuning for 5000 iterations.
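The point-tracking update of Eq. (4.6) is a brute-force nearest-neighbor search in feature space over an $r_2$-patch. A self-contained sketch on toy features, where `feat` stands in for the U-Net feature map lookup $F(\\cdot)$:

```python
# Sketch of the point-tracking step in Eq. (4.6): find the pixel in the
# r2-patch whose feature is closest (L1) to the handle's original feature.
def track_point(handle, ref_feature, feat, r2):
    hx, hy = handle
    best_q, best_dist = None, float("inf")
    for x in range(hx - r2, hx + r2 + 1):
        for y in range(hy - r2, hy + r2 + 1):
            f = feat((x, y))
            dist = sum(abs(a - b) for a, b in zip(f, ref_feature))  # L1 norm
            if dist < best_dist:
                best_q, best_dist = (x, y), dist
    return best_q
```

Note the reference feature is taken at the original handle location $h_i^0$ on the unedited latent, so drift accumulated over iterations does not corrupt the target of the search; the failure modes of this scheme (miss tracking, ambiguous tracking) are discussed in Chapter 6.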
In both the 2D and 3D editing results, we can clearly see the shrinkage deformation caused by editing the LEGO tractor. However, constrained by the limitations of the 2D drag-editing model, the resolution of the 2D editing results is low, which in turn leads to a blurred visual effect in the edited 3D Gaussian model. Figure 5.1: Multi-view consistent editing on a chair. We also compared our method with Drag3D, a 3D mesh editing method based on dragging operations. Owing to its use of a diffusion model, our method is much more powerful than Drag3D, which extends the idea of DragGAN to GET3D. Moreover, the 3D Gaussian representation provides a much clearer rendering result and also gives users the ability to choose editing areas explicitly. 5.3 Ablation Study Figure 5.2: Editing and masking on a Gaussian chair. To showcase the effectiveness of our editing approach for 2D images using the pre-trained MVDream network, we conducted several preliminary experiments. These experiments involved editing both multi-view 2D images obtained by projecting 3D Gaussian models and multi-view images generated by MVDream itself. In Fig. 5.1, an exemplary editing outcome is presented. The chair data originates from the NeRF Synthetic dataset and was used to pre-train a 3D Gaussian model. Fig. 5.2 demonstrates the user's editing process and mask selection, depicting the use of brush tools for mask creation. The editing outcome on this open-vocabulary data reveals conspicuous traces of editing on the chair's back, along with evident distortions in unmasked areas such as the chair legs. Additionally, the overall size of the chair appears diminished post-editing. The constraints imposed by the pre-trained MVDream U-Net's generalizability become apparent when confronted with unseen data, highlighting limitations in our 2D consistent editing pipeline. Figure 5.3: Drawing a mask using the brush.
Nonetheless, our 2D editing pipeline reliably handles data generated by the U-Net itself. When provided with a text prompt like \u201cA chair,\u201d MVDream generates chair images from four different viewpoints. We then feed these four chairs into DDIM Inversion, the 2D image editing process, and the multi-view denoising stage within DragGaussian's pipeline, as illustrated in Fig. 4.4. To drive the editing process, we add four pairs of manually selected start and end points. The resulting purely 2D edited images are depicted in Fig. 5.4. Figure 5.4: Editing results on multi-view images generated by MVDream. Column (a) shows the initial images, column (b) shows the images after the DDIM Inversion and DDIM Sampling process without editing, column (c) shows the images edited by our multi-view consistent editing pipeline without LoRA fine-tuning, and column (d) shows the editing results with LoRA fine-tuning. To validate the effectiveness of LoRA fine-tuning, we contrast the editing results on multi-view 2D images with and without LoRA fine-tuning. In Fig. 5.4, column (c) shows the images edited by our multi-view consistent editing pipeline without LoRA fine-tuning, and column (d) shows the editing results with LoRA fine-tuning. It is evident that editing outcomes without LoRA fine-tuning fail to maintain the original background color and inadvertently alter parts of the chair that were not intended to be edited. Conversely, applying LoRA fine-tuning to the U-Net of MVDream yields significantly superior results, preserving undisturbed regions more effectively. \fChapter 6 Discussion Miss Tracking Issues The point tracking method used in our DragGaussian is based on a traditional framework for point-based image editing. However, several issues can arise in this tracking framework, as discussed in the recent work FreeDrag [27].
One issue is miss tracking, where the dragging process fails to accurately follow the desired handle points. This problem is especially prevalent in highly curved regions of latent space with large perceptual path length [50]. In such instances, the optimized image undergoes significant alterations, causing handle points in subsequent iterations to fall outside the intended search area. Furthermore, in certain cases, miss tracking can cause handle points to disappear entirely. Notably, during miss tracking, the cumulative error of the motion supervision step progressively increases with further iterations, due to the misalignment of the tracked features. Another issue is ambiguous tracking, where tracked points land in regions that merely resemble the handle points. This occurs when parts of the image share features similar to those of the intended handle points, introducing uncertainty into the tracking process. It presents a significant challenge, as it can mislead the motion supervision in subsequent iterations, resulting in inaccurate or misleading guidance. \fChapter 7
In this work,\nwe perform practical analysis of memorization in text-to-image diffusion\nmodels. Targeting a set of images to protect, we conduct quantitive analysis on\nthem without need to collect any prompts. Specifically, we first formally\ndefine the memorization of image and identify three necessary conditions of\nmemorization, respectively similarity, existence and probability. We then\nreveal the correlation between the model's prediction error and image\nreplication. Based on the correlation, we propose to utilize inversion\ntechniques to verify the safety of target images against memorization and\nmeasure the extent to which they are memorized. Model developers can utilize\nour analysis method to discover memorized images or reliably claim safety\nagainst memorization. Extensive experiments on the Stable Diffusion, a popular\nopen-source text-to-image diffusion model, demonstrate the effectiveness of our\nanalysis method.", + "authors": "Zhe Ma, Xuhong Zhang, Qingming Li, Tianyu Du, Wenzhi Chen, Zonghui Wang, Shouling Ji", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The past few years have witnessed substantial advancement in text-guided\nimage generation powered by diffusion models. However, it was shown that\ntext-to-image diffusion models are vulnerable to training image memorization,\nraising concerns on copyright infringement and privacy invasion. In this work,\nwe perform practical analysis of memorization in text-to-image diffusion\nmodels. Targeting a set of images to protect, we conduct quantitive analysis on\nthem without need to collect any prompts. Specifically, we first formally\ndefine the memorization of image and identify three necessary conditions of\nmemorization, respectively similarity, existence and probability. 
We then\nreveal the correlation between the model's prediction error and image\nreplication. Based on the correlation, we propose to utilize inversion\ntechniques to verify the safety of target images against memorization and\nmeasure the extent to which they are memorized. Model developers can utilize\nour analysis method to discover memorized images or reliably claim safety\nagainst memorization. Extensive experiments on the Stable Diffusion, a popular\nopen-source text-to-image diffusion model, demonstrate the effectiveness of our\nanalysis method.", + "main_content": "INTRODUCTION Diffusion probabilistic models [14, 39] have shown impressive capability in the generation of images [31, 32], videos [7], 3D point cloud [24], etc. These techniques lay the foundation for commercial systems or communities such as Stable Diffusion [32], Midjourney [4], DALL\u00b7E 2/3 [3, 31] and Imagen [33], which have attracted millions of active users. The popularity of diffusion models can be attributed to the hierarchical denoising procedure, which offers high stability when trained on billions of data [36] and scalability to multimodal conditional generation. The large-scale dataset used to train the state-of-the-art textto-image generation models, e.g., the open-source image-caption dataset LAION-5B [36], are widely acknowledged to contain content that will raise concerns about copyright and privacy. For example, as reported, LAION-5B could refer to photographers\u2019 work without authorization [12] and private medical photographs were also found therein [1]. With the uncurated data for training, diffusion models are likely to generate content that infringes the copyright of creators or exposes private information. Caption: Mothers influence on her young hippo Training Image Generated Images Caption: Emma Watson to play Belle in Disney's Beauty and the Beast Figure 1: Examples of memorized images in Stable Diffusion. 
The right four random samples are all the same as the corresponding training image in the first column. In this work, we focus on the problem of memorization in textto-image diffusion models, a worst case of training data misuse. Memorization in text-to-image diffusion models is a failure of generation that, when input with certain prompt but different random seeds, a model always rigidly generates the same data as those in its training set. This type of generation is regarded as failed because a probabilistic generative model is supposed to generate novel and diversified images. Figure 1 illustrates two examples of memorization in Stable Diffusion. Memorization in text-to-image diffusion models is not only a technical problem analogous to mode collapse as Generative Adversarial Networks (GAN) [6], but also a prejudice to the interests of image owners. In terms of copyright protection, even the model developers are authorized to train their model with copyrighted images, the image owners will never expect their images to be replicated to arbitrary users as this would cause arXiv:2405.05846v1 [cs.CR] 9 May 2024 \findisciplinable dissemination. In past years, text-to-image models have been facing lawsuits for generating derivative images that mimic the style of artists. However, compared to derivative generations whose legality is still in pending [35], exact replication of copyrighted images is undisputedly intolerable. For privacy preservation, a series of works [16, 27] have proposed to use synthetic data in place of real data to prevent sharing of private information. For this goal, potential memorization should also be carefully circumvented. The existence of memorization in text-to-image models was first demonstrated by Carlini et al. [8] and Somepalli et al. [40, 41]. They studied the most popular open-source text-to-image diffusion model Stable Diffusion [32] and discovered prompts that trigger the model to generate training images. 
Although text-to-image diffusion models are found to be vulnerable to memorization, a practical analysis method is still a challenging problem. First of all, existing analysis methods [8, 40, 41, 46] are all prompt-based: They first generate massive candidate images using captions from the original training set and then detect risky generations of low diversity [8], search for generated images highly similar to training images [40, 41] or detect prompts with high prediction errors [46]. The prompt-based analysis methods are unable to determine whether an arbitrary image is memorized or not. Actually they are unaware of which images might be memorized only after memorization has been discovered. Besides, for the other images whose training captions seem not trigger memorization phenomena, their safety against memorization is still uncertain and hard to be analyzed by existing methods, because it is impossible to exhaustively test all prompts. To this end, a practical analysis method is expected to be image-based rather than prompt-based. Second, a practical analysis method requires quantitative measurement of memorization. Previous works focus on the discovery of memorized images and lack accurate description of memorization for each instance. Quantitative measurement of memorization not only provides strong evidence for the security risks of memorized images, but allows model developers to responsibly claim safety for normal images to their owners. To cope with the challenges, we consider a practical scenario where the model developers predefine a target set of copyrighted or privacy-preserving images. They aim to perform a security analysis on the target images to decide whether they are memorized by the model and to quantify the extent to which they are memorized. Based on the analysis, developers are able to claim the safety against memorization for the target images to their data providers, or discover memorized images in advance and fix the vulnerability. 
To perform the security analysis, we first formally define image memorization in diffusion models and identify three conditions to say an image is memorized, named similarity, existence and probability. The similarity condition means that generated images should be exactly alike a target image. As mentioned before, this condition reflects the worst case misuse of training data and poses a significant security threat. Instead of calculating the similarity between generated images and target images, we utilize the model\u2019s prediction error as a metric to recognize image replications. This metric is as effective as previous metrics in recognition of image replication. It also enables us to invert the model to find inputs that cause replication, based on which we conduct analysis for the other two conditions. The existence condition requires that there exist a prompt to trigger the replication of a target image. We propose a prompt inversion algorithm to analyze this condition and verify by contradiction the existence of such prompt. The probability condition is fulfilled when a target image are frequently replicated at sampling time. We propose to measure the condition by comparing model\u2019s prediction error on the target image to those of a safe model. If the target image would be replicated with high probability, a significant distribution shift away from the error distribution of the safe model can be observed. We verify by contradiction that the unconditional diffusion models trained on large-scale data are safe from memorization and thus utilized as the safe model. We conduct comprehensive experiments on Stable Diffusion to demonstrate the effectiveness of our analysis method. In summary, we make the following contributions in this paper: \u2022 We perform a more practical analysis on the memorization in text-to-image diffusion models. 
Our analysis method is image-based and does not need to collect massive prompts, which is more reliable than prompt-based analysis methods. • We provide a formal definition of memorization in text-to-image diffusion models and identify three conditions of it. We then propose effective metrics and algorithms to measure each condition and ultimately quantify the extent to which the target images are memorized. • We demonstrate the viability of our analysis method through detailed experiments on Stable Diffusion, which reveals the intrinsic properties of memorization in text-to-image diffusion models. 2 BACKGROUND 2.1 Diffusion Model Diffusion probabilistic models [14, 39] are a class of latent variable models consisting of a hierarchy of denoising autoencoders. The encoder is not learned but replaced by a manually designed diffusion process. Given an input image $x_0$¹ and a total of $T$ steps, the diffusion process is modeled as a Markov chain that gradually adds Gaussian noises $\epsilon_{0:T-1}$ to the input image $x_0$ according to a weight schedule $\alpha_{1:T}$: $$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \quad q(x_t \mid x_{t-1}) = \mathcal{N}(x_t; \sqrt{\alpha_t}\, x_{t-1}, (1 - \alpha_t) I), \quad q(x_t \mid x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1 - \bar{\alpha}_t) I), \quad \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i. \quad (1)$$ $\bar{\alpha}_t$ gradually decreases to almost zero in the last step $T$ so that $x_T$ is close to pure Gaussian noise.
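The closed-form marginal $q(x_t \mid x_0)$ in Eq. (1) lets one jump to any timestep in a single draw. A minimal NumPy sketch, assuming an illustrative linear beta schedule (the constants here are a common DDPM choice, not necessarily Stable Diffusion's actual schedule):

```python
import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product alpha_bar_t of alpha_t = 1 - beta_t for a linear beta schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I) in closed form."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps
```

With these settings $\bar{\alpha}_T$ is on the order of $10^{-5}$, so $x_T$ is essentially pure Gaussian noise, matching the remark above.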
The process of generating image $x_0$ is the reverse of the diffusion process and also a Markov chain, starting at $x_T \sim \mathcal{N}(0, I)$: $$p(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t). \quad (2)$$ ¹In this paper we intentionally conflate the use of $x$ and $x_0$ to denote an image: in contexts related to the diffusion process we use $x_0$, and otherwise $x$. If the diffusion process is divided into sufficiently many steps, each reverse step $p_\theta(x_{t-1} \mid x_t)$ can be approximated by a Gaussian transformation that is trained to match the corresponding diffusion step $q(x_{t-1} \mid x_t, x_0)$. This is implemented by minimizing the following objective: $$\mathcal{L} = \mathbb{E}_{t, x_0, \epsilon_0} \left[ \|\epsilon_0 - \epsilon_\theta(x_t, t)\|_2^2 \right], \quad (3)$$ where $\epsilon_\theta$ is a neural network that predicts the added noise $\epsilon_0$, and $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_0$.
After training, the vanilla sampling procedure starts with a random Gaussian noise $x_T \sim \mathcal{N}(0, I)$ and removes the predicted noise stepwise by $$x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t \mathcal{N}(0, I),$$ where $\sigma_t = \frac{\sqrt{(1 - \alpha_t)(1 - \bar{\alpha}_{t-1})}}{\sqrt{1 - \bar{\alpha}_t}}$ when $t > 1$ and $0$ when $t = 1$. The vanilla sampling algorithm is extremely slow to generate an image, as it must invoke the network $\epsilon_\theta$ $T$ times (e.g., 1000 steps in Stable Diffusion). To mitigate the problem, a variety of efficient sampling algorithms have been proposed, such as the DDIM sampler [42], the PLMS sampler [22], etc. 2.2 Conditional Diffusion Model Diffusion models can be extended to conditional variants to generate images under the guidance of some input condition, e.g., an object class or a textual prompt. Text-to-image models are conditional diffusion models that allow users to input prompts indicating the desired content of generated images. There are mainly two types of guidance, i.e., Classifier Guidance [9] and Classifier-Free Guidance [15]. Classifier Guidance additionally trains a classifier on the noisy image $x_t$ to predict its coupled condition $c$ and utilizes the gradients from the classifier to guide the sampling. Most diffusion models, like Stable Diffusion, choose Classifier-Free Guidance because it does not need to train an extra classifier.
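The training objective of Eq. (3) and a deterministic DDIM-style update can be sketched as follows; `eps_model` is a hypothetical stand-in for the trained network $\epsilon_\theta$, and the schedule values are illustrative:

```python
import numpy as np

def ddpm_loss(eps_model, x0, t, alpha_bar, rng):
    """One-sample Monte-Carlo estimate of Eq. (3): ||eps0 - eps_theta(x_t, t)||^2."""
    eps0 = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps0
    return float(np.sum((eps0 - eps_model(x_t, t)) ** 2))

def ddim_step(eps_model, x_t, t, t_prev, alpha_bar):
    """Deterministic DDIM update: predict x0 from x_t, then re-noise to level t_prev."""
    eps_hat = eps_model(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha_bar[t])
    return np.sqrt(alpha_bar[t_prev]) * x0_hat + np.sqrt(1.0 - alpha_bar[t_prev]) * eps_hat
```

Because the DDIM update is deterministic, a fixed initial noise always yields the same image, which is why the analysis later only needs to consider the initial noise.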
Classifier-Free Guidance implicitly trains two models, an unconditional model $\epsilon_\theta(x_t, t)$ and a conditional model $\epsilon_\theta(x_t, t, c)$. The two models share parameters, and the unconditional model is trained by randomly replacing the input condition $c$ with null (for a textual condition, the unconditional model always receives an empty string). At sampling time, the predicted noise is a linear combination of the unconditional and conditional predictions: $$\hat{\epsilon}_\theta(x_t, t, c) = \epsilon_\theta(x_t, t) + \gamma (\epsilon_\theta(x_t, t, c) - \epsilon_\theta(x_t, t)), \quad (4)$$ where a larger hyperparameter $\gamma$ results in generated images more consistent with the input condition. 2.3 Text-To-Image Diffusion Model An instance of conditional diffusion models, which we will study in this work, is text-to-image diffusion models. To obtain a semantically meaningful condition $c$, the input prompt is first tokenized and projected into a sequence of continuous token embeddings $e = [e_0, e_1, ..., e_{N-1}]$, where $N$ is the number of tokens. The token embeddings are further encoded as the condition $c$ by a pre-trained image-text model, for example, CLIP [29], or a language model, for example, T5 [30].
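The guidance combination of Eq. (4) is a one-line interpolation between the two predictions; a minimal sketch:

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, gamma):
    """Eq. (4): eps_uncond + gamma * (eps_cond - eps_uncond).
    gamma = 0 ignores the condition, gamma = 1 is the plain conditional
    prediction, and gamma > 1 extrapolates past it to strengthen the condition."""
    return eps_uncond + gamma * (eps_cond - eps_uncond)
```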
Depending on the specific modeling, the condition \ud835\udc50is either incorporated into the middle layers of the noise prediction network \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61,\ud835\udc50) via cross-attention [32, 33], or concatenated with a sequence of image tokens, modeling \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61,\ud835\udc50) autoregressively as a single stream [31]. (a) Exactly alike generation (b) Derivative generation Figure 2: Examples of exactly alike generation and derivative generation. Among the advanced text-to-image diffusion models, Stable Diffusion is open-sourced both in implementation and its training data, therefore we will utilize it for our study. To generate high-resolution images, Stable Diffusion first train an autoencoder which encodes an image \ud835\udc65into a lower-dimensional representation \ud835\udc67= E(\ud835\udc65) perceptually equivalent to the data space. The diffusion model is trained in the reduced space. At sampling time, after generating a latent \ud835\udc67\u2032, a high-resolution image \ud835\udc65\u2032 is obtained via the pre-trained decoder \ud835\udc65\u2032 = D(\ud835\udc67\u2032). 3 DEFINITION OF MEMORIZATION We first formalize the definition of memorization and then make comparisons to existing ones: Definition. A training sample \ud835\udc65is memorized if, at sampling time, there exists a prompt, under whose guidance the model will generate samples that are exactly alike \ud835\udc65with a significant probability. Exactly alike does not mean verbatim same or byte-by-byte match in the file system. It is still on the perception level but excludes even a minor transformation such as change in view point and component recombination. Exactly alike training sample \ud835\udc65, existence of a prompt and significant probability are three conditions to say a training sample is memorized. For brevity, we call them the similarity, existence and probability conditions. 
Existing works cover the three conditions to varying degrees. Carlini et al. [8] provide a strict definition of memorization that a training image is eidetic memorized if it has at most \ud835\udc58instances in the training set and is extractable from the model via some prompts. We both count it as memorization if the generated samples are exactly alike or eidetic to training ones (Figure 2a). Other works [40, 41, 46, 48] do not give a formal definition and discuss a wider scope of memorization in the form of derivative generation, such as partial copy and style-like copy (Figure 2b). Restricting memorization to the most extreme case \"exactly alike\" has several advantages over a wider scope. First, lawsuits against derivative actions in image generation models are still in very early stages [35]. It takes time to render decisions on its legality. In contrast, \"exactly alike\" memorization is by no means allowed if the related images are copyrighted or private. Second, from a technical perspective, diffusion models are inherently trained to replicate training samples \fpixel by pixel, as in Equation 4. Therefore, \"exactly alike\" memorization is not only defined at the problem level, but also possible to find evidence in the model itself. This allows us to utilize the internal statistics of the model to measure its memorization problem, rather than relying on external models to match training images and generate images, which is less reliable due to potential risks such as adversarial attack [49]. The existence condition is not a concern for previous works as they analyze memorization in a prompt-based way such that the condition is always satisfied. For our image-based analysis, the condition is important to be able to expose realistic risks, as discussed later. As for the probability condition, Carlini et al. 
do not involve the probability condition explicitly in the definition but in their membership inference attack designed to detect abnormal prompts, which motivates us in our definition. Other works [40, 41, 46, 48] do not place an emphasis on probability. The probability condition is critical for analyzing memorization; as we will show later, any samples can be extracted from diffusion models, but not all are memorized. 4 RECOGNIZING IMAGE REPLICATION We begin the measurement of memorization in diffusion models with a preliminary investigation on the recognition of image replication, which aims to decide the condition that a generated image \ud835\udc65\u2032 replicates the target image \ud835\udc650 (the similarity condition). Effective recognition is the basis for further measurement. Existing works adopted a \"tiled\" \ud835\udc592 distance [8] or SSCD [40, 41] (a pre-trained model for copy detection) representations to calculate the similarity between \ud835\udc65\u2032 and \ud835\udc650. Wen et al. [48]\u2019s metric was designed to detect abnormal prompts and could not be used to identify a replication of \ud835\udc650. Nevertheless, to have an in-depth understanding of training image replication and accurate recognition, a more intrinsic and informative metric is necessary. 4.1 Methodology Suppose that the input prompt is represented as \ud835\udf11(\ud835\udc52), where \ud835\udc52= [\ud835\udc520,\ud835\udc521, ...,\ud835\udc52\ud835\udc41\u22121] is a sequence of token embeddings and \ud835\udf11is a text encoder. To generate an image, a random Gaussian noise \ud835\udf160 \u223cN (0, \ud835\udc3c) is sampled and follows an iterative denoising process as introduced in Section 2.1. Besides the initial noise \ud835\udf160, the vanilla sampling algorithm of diffusion models adds a different Gaussian noise at each step. Therefore, the generated image is determined by an array of noises. 
However, in practice more efficient samplers are utilized, e.g., DDIM sampler [42] and PLMS sampler [22], which only sample once at the beginning and then follow a deterministic denoising process. If the same initial noise is used, then the generated image will be exactly the same. We adopt DDIM sampler [42] in our experiments, therefore only consider the initial noise. To recognize whether a noise-prompt pair $(\epsilon_0, e)$ can replicate the target image $x_0$, we find it strongly correlated with the model's prediction error when we utilize $\epsilon_0$ to blur $z_0 = \mathcal{E}(x_0)$. Instead of the default $\epsilon_0$-prediction error, we consider a more direct and effective $z_0$-prediction error: $$\mathcal{L}(x_0, \epsilon_0, e) = \mathbb{E}_t \left[ \|z_0 - z_\theta(z_t, t, \varphi(e))\|_2^2 \right] = \mathbb{E}_t \left[ \left\| z_0 - \frac{z_t - \sqrt{1 - \bar{\alpha}_t}\, \epsilon_\theta(z_t, t, \varphi(e))}{\sqrt{\bar{\alpha}_t}} \right\|_2^2 \right] = \mathbb{E}_t \left[ \frac{1 - \bar{\alpha}_t}{\bar{\alpha}_t} \|\epsilon_0 - \epsilon_\theta(z_t, t, \varphi(e))\|_2^2 \right], \quad (5)$$ where $z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_0$. The $z_0$-prediction error is equivalent to a reweighted $\epsilon_0$-prediction error. The weight term $\frac{1 - \bar{\alpha}_t}{\bar{\alpha}_t}$ increases with larger $t$, which favors more accurate predictions in earlier sampling steps (later steps in the diffusion process correspond to earlier steps in the generation process).
The intuition is that if the diffusion model can accurately predict \ud835\udc670 out of \ud835\udf160-blurred \ud835\udc67\ud835\udc61at all steps (especially early sampling steps), then the sampling trace starting at \ud835\udf160 will head towards \ud835\udc670 and finally generate \ud835\udc650 = D(\ud835\udc670). Note that L(\ud835\udc650,\ud835\udf160,\ud835\udc52) only performs single-point detection (single noise \ud835\udf160 and single prompt \ud835\udc52) and cannot be readily used to analyze memorization. Aligning the starting point. In Stable Diffusion, the timestep schedule is discrete over a range (1000). The noisy image \ud835\udc67\ud835\udc47= \u221a\u00af \ud835\udefc\ud835\udc47\ud835\udc670 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc47\ud835\udf160 at the last step has minor difference from the Gaussian noise \ud835\udf160, with Signal-to-Noise Ratio (SNR) of 0.0047. However, we have found that the minor difference could exert significant influence over the generation results, i.e., the generated images by \ud835\udc67\ud835\udc47and \ud835\udf160 could be different. The gap between \ud835\udc67\ud835\udc47and \ud835\udf160 is not constrained during diffusion model training; thus the behavior of \ud835\udf160 generation cannot be fully captured by the related loss function. To eliminate the inconsistency, we generate using \ud835\udc67\ud835\udc47= \u221a\u00af \ud835\udefc\ud835\udc47\ud835\udc670 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc47\ud835\udf160, a practice involved in image editing works [26]. This equals to sample from a biased Gaussian distribution N (\u221a\u00af \ud835\udefc\ud835\udc47\ud835\udc670, (1 \u2212\u00af \ud835\udefc\ud835\udc47)\ud835\udc3c). 4.2 Experiment Setup The correlation between our proposed metric L(\ud835\udc650,\ud835\udf160,\ud835\udc52) and the replication of \ud835\udc650 through (\ud835\udc67\ud835\udc47,\ud835\udc52) can be verified through a pair of bidirectional experiments. 
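Eq. (5) reduces to a reweighted $\epsilon_0$-prediction error averaged over timesteps; a sketch with a uniform timestep subsample (mirroring the 50-step estimate used in the experiments), where `eps_model` is a hypothetical stand-in for the trained network $\epsilon_\theta$:

```python
import numpy as np

def z0_prediction_error(eps_model, z0, eps0, cond, alpha_bar, timesteps):
    """Eq. (5) as a reweighted eps-prediction error, averaged over a timestep subsample."""
    errs = []
    for t in timesteps:
        a = alpha_bar[t]
        z_t = np.sqrt(a) * z0 + np.sqrt(1.0 - a) * eps0  # blur z0 with the fixed eps0
        w = (1.0 - a) / a                                # weight favoring early sampling steps
        errs.append(w * np.sum((eps0 - eps_model(z_t, t, cond)) ** 2))
    return float(np.mean(errs))
```

A model that predicts the blurring noise exactly at every step drives the error to zero, which is the intuition above: the sampling trace starting at $\epsilon_0$ then heads straight toward $z_0$.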
4.2.1 The Ability of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) to Recognize Replication. This experiment evaluates that given a realistic dataset {(\ud835\udc65\ud835\udc56 0,\ud835\udf16\ud835\udc56 0,\ud835\udc52\ud835\udc56,\ud835\udc66\ud835\udc56)}\ud835\udc40 \ud835\udc56=1, where \ud835\udc66\ud835\udc56= 1 indicates replication and otherwise not, whether L(\ud835\udc650,\ud835\udf160,\ud835\udc52) is able to accurately recognize replications. We use Stable Diffusion V1.4 for evaluation. To build the dataset, we collect a set of 78 memorized image-prompt pairs found by Webster [46]. Each image is augmented with an additional BLIP [20] generated prompt. The BLIP-generated prompt provides adequate non-replcation samples. This results in 156 image-prompt pairs. For each pair, we randomly sample 50 different Gaussian noises and then manually annotate \ud835\udc66\ud835\udc56for each sample (\ud835\udc65\ud835\udc56 0,\ud835\udf16\ud835\udc56 0,\ud835\udc52\ud835\udc56). Finally, we build a dataset consisting of 7800 samples, where replication occurs in 3645 samples. An accurate estimation of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) requires a traversal of 1000 steps for Stable Diffusion. For efficiency, we uniformly sample 50 steps. Following Wen et al. [48], the detection performance is measured by Area Under Curve (AUC) of the Receiver Operating Characteristic (ROC) and the True Positive Rate at the False Positive Rate of 1% (TPR@1%FPR). 4.2.2 The Ability of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) to Generate Replication. The effectiveness of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) can also be presented in reverse. It can \fTable 1: Recognition results of replication. 
Metric                 | Sample Level (AUC / TPR@1%FPR) | Image Level (AUC / TPR@1%FPR)
Tiled $\ell_2$ [8]     | 0.999 / 0.973                  | 1.000 / 1.000
SSCD [40, 41]          | 1.000 / 0.999                  | 1.000 / 1.000
$z_0$-prediction error | 0.999 / 0.986                  | 1.000 / 0.999
Figure 3: The $z_0$-prediction error at each timestep (left) and its distribution (right), for (a) the image level (the error distribution of one example image) and (b) the sample level.
be shown that a small level of $\mathcal{L}(x_0, \epsilon_0, e)$ is sufficient for generating replications. We study this effect in a tough setting. For the unmemorized (normal) images from LAION-Aesthetics V2 6.5+, a subset of Stable Diffusion's training set with predicted aesthetics scores no less than 6.5, it is generally of low probability to sample an $\epsilon_0 \sim \mathcal{N}(0, I)$ that replicates $x_0$ [40]. However, we are able to invert a feasible $\epsilon_0^*$ that replicates the original $x_0$ by minimizing $\mathcal{L}(x_0, \epsilon_0, e)$: $$\epsilon_0^* = \arg\min_{\epsilon_0} \mathcal{L}(x_0, \epsilon_0, e). \quad (6)$$ The ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to trigger a rare event yields strong evidence for its correlation with replication. For all experiments, we use the Adam optimizer with an initial learning rate of 0.1 and without weight decay. We use a batch size of 32 (timesteps) and optimize for a total of 1K iterations. 4.3 Results 4.3.1 The Ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to Recognize Replication. The performance is evaluated at the sample level and the image level.
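The noise inversion of Eq. (6) is gradient descent on the initial noise. A toy, model-free sketch using finite-difference gradients on a generic loss; the actual method backpropagates through the diffusion model with Adam (learning rate 0.1), and `loss_fn` here is a hypothetical stand-in for $\mathcal{L}(x_0, \cdot, e)$:

```python
import numpy as np

def invert_noise(loss_fn, shape, steps=200, lr=0.1, seed=0):
    """Toy sketch of Eq. (6): minimize loss_fn over the initial noise eps0 by
    finite-difference gradient descent. A real implementation differentiates
    through the diffusion model with autograd instead."""
    rng = np.random.default_rng(seed)
    eps0 = rng.standard_normal(shape)
    h = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(eps0)
        for idx in np.ndindex(*shape):
            d = np.zeros_like(eps0)
            d[idx] = h
            grad[idx] = (loss_fn(eps0 + d) - loss_fn(eps0 - d)) / (2.0 * h)
        eps0 = eps0 - lr * grad
    return eps0
```

Finite differences are only viable for toy dimensions; they stand in for autograd purely to keep the sketch self-contained.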
The samplelevel takes the 7800 samples all together for evaluation. The Imagelevel evalution calculates AUC and TPR@1%FPR respectively for each image and average them. Table 1 presents the recognition results. All the metrics achieve almost perfect performance. Figure 3 shows the distribution of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) for replication samples and normal ones. For each individual sample, there is a clear margin between replication and normal samples across most timesteps (Figure 3a), particularly in later steps. While sample-level distribution shows a large overlap between replication and normal samples (Figure 3b). This indicates that there is not a universal criterion for recognizing replication for all images. What\u2019s more, the normal samples present \ud835\udc670-prediction error with a larger variance (Figure 3 right), which indicates that the normally generated images are more diversified than the memorized generations. 4.3.2 The Ability of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) to Generate Replication. We invert the initial noise \ud835\udf160 for each image with different input prompts, including their training caption, a BLIP-generate caption and an empty string. As shown in Figure 4, for either memorized training images or randomly sampled normal images, for either original training captions, BLIP-genearted new captions or empty captions, minimizing L(\ud835\udc650,\ud835\udf160,\ud835\udc52) produces successful inversion of the input noise \ud835\udf160 that leads to replication of \ud835\udc650. It demonstrates that L(\ud835\udc650,\ud835\udf160,\ud835\udc52) is a strong indicator for training image replication. Compared to normal images, the inversion for memorized images presents relatively more authentic reconstruction, which indicates that memorized images are easier to replicate. Condition 1: similarity. The \ud835\udc670-prediction error meets the similarity condition. 
We directly utilize internal prediction errors of diffusion models as an indicator of the similarity between the generated image and target image. We believe that based on the model\u2019s own function for comparison is more reliable than using a coarse metric [8] or external independently trained models [40, 41]. 5 TRIGGER THE MEMORIZATION Recognizing image replication works after the deployment of diffusion models to prevent possible leakage of training images. The developers of an image generation model also have strong motivation to perform safety analysis on a target set of sensitive images during development of their model. This acts as a proactive defense against memorization. The main goal of the safety analysis against memorization is to determine whether the target images are memorized and to measure the extent to which they are memorized. As a straightforward approach, searching for prompts that are prone to generate the target images is not feasible for safety measurement because it is random and laborious. Instead, we propose an inversion-based analysis without the need to access any prompts. The safety analysis against memorization is accomplished in two steps. First, for each target image, we attempt to invert an input prompt that triggers the model\u2019s memorization behavior on it. We verify by contradiction that if an image is safe, then it is impossible to invert a prompt that triggers its memorization. Second, we perform an analysis on the unconditional diffusion model and find that the unconditional diffusion model trained on \fTraining Caption: Mothers influence on her young hippo. BLIP Caption: There are two hippos standing next to each other near a body of water. Inverse Images Training Caption: A girl reading poster by Johann Georg Meyer. BLIP Caption: Painting of a girl reading a book in a corner of a room. Inverse Images Training Caption: Emma Watson to play Belle in Disney's Beauty and the Beast. 
BLIP Caption: A close up of a woman with a black shirt and tie. Training Caption: Spring Thaw Charles White. BLIP Caption: Painting of a stream in a snowy forest with trees and snow. Training Image Training Image Figure 4: The results of noise inversion for memorized images (left) and normal images (right). In each block, the leftmost image is the training image. The right three are generated images using inverted noises, along with training caption, BLIP-generated caption and an empty string \"\". Overall, each image can be sucessfully inverted by minimizing the \ud835\udc670-prediction error, while memorized images are easier to invert and of higher fidelity. large-scale data is safe from memorization. It thus serves as a guard for measuring the safety of the conditional text-to-image model. In this section, we elaborate how to trigger the memorization of an image. The measurement of memorization is descirbed in the next section. 5.1 Methodology To answer the question that if a target image could be memorized, we attempt to search for a prompt that triggers the generation of the target image. This can be done by minimizing the expectation of conditional prediction error with respect to the input token embeddings \ud835\udc52, \ud835\udc52\u2217= arg min \ud835\udc52 E \ud835\udf160\u223cN(0,\ud835\udc3c) [L(\ud835\udc650,\ud835\udf160,\ud835\udc52)]. (7) However, this straightforward prompt inversion causes overestimation of memorization. Indeed, we are always able to invert an optimal \ud835\udc52\u2217that reduces the prediction error of any target image \ud835\udc650 to a desired low level. As a result, the image appears to be \"memorized\". This is because the pre-trained vocabulary embeddings V only distribute as a finite number of spots in the infinite large embedding space. 
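The expectation in Eq. (7) is estimated by Monte-Carlo averaging over sampled initial noises; a minimal sketch, with `z0_error_fn` a hypothetical stand-in for $\mathcal{L}(x_0, \epsilon_0, e)$:

```python
import numpy as np

def prompt_inversion_objective(z0_error_fn, e, noises):
    """Eq. (7): Monte-Carlo estimate of E_{eps0 ~ N(0, I)} [ L(x0, eps0, e) ],
    the quantity minimized over the token embeddings e."""
    return float(np.mean([z0_error_fn(eps0, e) for eps0 in noises]))
```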
A valid \ud835\udc52\u2217 that reflects the memorization of \ud835\udc650 should not only lead to a low level of prediction error but also be close to the manifold of vocabulary embeddings V. The condition can be fulfilled by adding a regularizer R(\ud835\udc52, V) to Equation 7, \ud835\udc52\u2217= arg min \ud835\udc52 E \ud835\udf160\u223cN(0,\ud835\udc3c) [L(\ud835\udc650,\ud835\udf160,\ud835\udc52)] + \ud835\udf06R(\ud835\udc52, V), (8) where \ud835\udf06 is a hyperparameter controlling the weight of the regularizer. Condition 2: existence. The regularizer meets the existence condition. It works as an adversary to the expectation of conditional prediction error: A target image \ud835\udc650 is memorized if and only if the contradiction between them can be solved. If the regularized objective is not optimizable for a target image, then we can claim that the image is safe from memorization. The reliability of making (a) t-SNE. (b) \ud835\udc592-norm. Figure 5: Pre-trained token embeddings do not present a regular distribution. such a claim is established on trust in the optimizers utilized to minimize Equation 8. For deep neural networks, we believe that modern optimizers [18, 23] are capable of this task. It is challenging to accurately constrain the distance of token embeddings \ud835\udc52 to the manifold of pre-trained vocabulary embeddings, because the pre-trained vocabulary embeddings do not present a regular distribution, as shown in Figure 5a for CLIP (CLIP is used as the text encoder of Stable Diffusion). We devise two regularizers that constrain the \ud835\udc592-norm of optimized token embeddings \ud835\udf16\u2217. This is motivated by the observation that minimizing the prediction error without regularization for normal images typically produces token embeddings with sufficiently large \ud835\udc592-norm. Therefore, the first regularizer is simply an \ud835\udc592-norm regularizer R1(\ud835\udc52, V) = \u2225\ud835\udc52\u22252 2.
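As an illustrative sketch of the regularized inversion objective in Equation 8 (with the R1 penalty), consider a plain gradient-descent loop. This is a hedged toy example rather than the paper's implementation: the quadratic loss stands in for the diffusion prediction error, and the names `invert_embedding`, `e_star`, and all hyperparameter values are hypothetical.

```python
import numpy as np

def invert_embedding(loss_grad, e_init, lam=0.01, lr=0.1, steps=200):
    """Gradient descent on loss(e) + lam * ||e||^2 (the R1-regularized
    objective of Equation 8, with a caller-supplied loss gradient)."""
    e = e_init.copy()
    for _ in range(steps):
        g = loss_grad(e) + 2.0 * lam * e  # gradient of the l2 penalty term
        e -= lr * g
    return e

# Toy stand-in for the diffusion prediction error: a quadratic bowl
# centred at e_star (a hypothetical "memorization-triggering" embedding).
e_star = np.array([1.0, -2.0, 0.5])
loss_grad = lambda e: 2.0 * (e - e_star)

e_opt = invert_embedding(loss_grad, e_init=np.zeros(3), lam=0.01)
```

With a small weight the optimum lands close to the unregularized minimizer but shrunk toward the origin; for images where no low-error, small-norm embedding exists, the two terms stay in conflict, which is exactly the contradiction the analysis exploits.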
R1 seems irrelevant to the vocabulary V but takes advantage of the fact that pre-trained vocabulary embeddings have relatively small \ud835\udc592-norm (see Figure 5b). Another regularizer R2 adds a term to R1 that encourages the learned token embeddings to be as close to any of the pre-trained vocabulary embeddings as possible, R2(\ud835\udc52, V) = \u2225\ud835\udc52\u22252 2 + 1 \ud835\udc41 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 H (\ud835\udc52\ud835\udc56, V), (9) \fwhere H (\ud835\udc52\ud835\udc56, V) is the entropy calculated over the probabilistic distribution on the inner-product distance between the \ud835\udc56-th token and the vocabulary. This regularizer enables searching for realistic hard prompts. 5.2 Experiment Setup We use the 78 memorized images and 100 randomly sampled normal images from LAION as the target image set. For all experiments, we do not access training captions of the target images. We use the Adam optimizer with an initial learning rate of 0.01 without decay. The \ud835\udc592-norm regularization is implemented by Adam\u2019s inner weight decay. \ud835\udf06 is set to 0.01. We use a batch size of 16 and optimize for a total of 500 iterations. Each image is resized and center cropped to 512 \u00d7 512 without augmentations. 5.3 Results Note that a prompt \ud835\udc52 is composed of \ud835\udc41 token embeddings, each of which represents a token. Stable Diffusion\u2019s text encoder by default uses a maximum length of 77 tokens, in which the first and last tokens are padded tokens indicating the start and end of a prompt. The remaining 75 tokens are free to optimize. Figure 6: Examples of generated images using optimized token embeddings. 4 images are randomly generated for each optimization and all examples present the problem of memorization. The last row exhibits an example of partial memorization, where not all generations collapse to the same image.
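The entropy term of R2 in Equation 9 can be sketched as follows. This is a toy illustration under assumptions: `entropy_to_vocab` is a hypothetical helper, the vocabulary is a small identity-like matrix, and a softmax over inner products stands in for the probabilistic distribution over inner-product distances described above.

```python
import numpy as np

def softmax(x):
    z = x - x.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy_to_vocab(e_i, V):
    """Entropy of the softmax over inner products between one token
    embedding e_i and the vocabulary V (rows = vocab embeddings).
    Low entropy means e_i sits close to a single vocabulary token."""
    p = softmax(V @ e_i)
    return -np.sum(p * np.log(p + 1e-12))

V = np.eye(4) * 5.0                          # 4 toy vocabulary embeddings
near_token = np.array([5.0, 0.0, 0.0, 0.0])  # aligned with vocab entry 0
between = np.ones(4)                         # equally far from all entries

# Embeddings near a real token are rewarded (low entropy), those stranded
# between tokens are penalized (high entropy).
assert entropy_to_vocab(near_token, V) < entropy_to_vocab(between, V)
```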
By adjusting the number of tokens to optimize from 1 to 75, we find that out of the 78 memorized images discovered by Webster [46], the memorization of 66 images can be triggered by optimizing only 1 token, 2 images can be triggered by optimizing 2 tokens, and the other 10 images are only partially memorized, no matter how many tokens are optimized, as illustrated in Figure 6. In contrast, the memorization of normal images cannot be triggered with regularization. Figure 7 shows training statistics of memorized images and normal images; it can be seen that the prediction error and regularization term can be simultaneously optimized to small values for memorized images. In contrast, for normal images, only the \ud835\udc592-norm of token embeddings is minimized, while the prediction error of normal images remains high. It demonstrates that for Figure 7: \ud835\udc670-prediction errors and \ud835\udc592-norm of token embeddings during training time. Memorized images present low values for both prediction errors and \ud835\udc592-norm of token embeddings at the end of training. Prompt inversion for normal images can only optimize the \ud835\udc592-norm of token embeddings while prediction errors remain high across the whole training process. Algorithm 1 Hard prompt inversion for memorization.
Input: the target image \ud835\udc650, encoder E, token embeddings \ud835\udc52= [\ud835\udc520,\ud835\udc521, ...,\ud835\udc52\ud835\udc41\u22121], vocabulary embeddings V, weight \ud835\udf06, number of candidate tokens\ud835\udc58, optimization steps \ud835\udc40, batch size \ud835\udc35, learning rate \ud835\udefe, timesteps \ud835\udc47in diffusion model Output: optimal hard prompt \u02c6 \ud835\udc61= [\u02c6 \ud835\udc610, \u02c6 \ud835\udc611, ..., \u02c6 \ud835\udc61\ud835\udc41\u22121] \ud835\udc670 = E(\ud835\udc650) \ud835\udc52\ud835\udc5f\ud835\udc5f= +\u221e for \ud835\udc56= 1 to \ud835\udc40do sample \ud835\udc35\ud835\udf16\u223cN (0, \ud835\udc3c), \ud835\udc61\u223c\ud835\udc48(1,\ud835\udc47) \ud835\udc67\ud835\udc61= \u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc670 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16 \ud835\udc54= \u2207\ud835\udc52L(\ud835\udc67\ud835\udc61,\ud835\udc61,\ud835\udc52) + R2(\ud835\udc52, V) \ud835\udc52= \ud835\udc52\u2212\ud835\udefe\ud835\udc54 end for Sample a test set \ud835\udf16\u223cN (0, \ud835\udc3c), \ud835\udc61\u223c\ud835\udc48(1,\ud835\udc47) \ud835\udc67\ud835\udc61= \u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc670 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16 for [\ud835\udc610,\ud835\udc611, ...,\ud835\udc61\ud835\udc41\u22121] \u2208 top-\ud835\udc58(\ud835\udc520V\ud835\udc47) \u00d7 top-\ud835\udc58(\ud835\udc521V\ud835\udc47) \u00d7 ... 
\u00d7 top-\ud835\udc58(\ud835\udc52\ud835\udc41\u22121V\ud835\udc47) do \ud835\udc52\u2032 = V(\ud835\udc610,\ud835\udc611, ..,\ud835\udc61\ud835\udc41\u22121) if L(\ud835\udc67\ud835\udc61,\ud835\udc61,\ud835\udc52\u2032) < \ud835\udc52\ud835\udc5f\ud835\udc5fthen \u02c6 \ud835\udc61= [\ud835\udc610,\ud835\udc611, ...,\ud835\udc61\ud835\udc41\u22121] end if end for return \u02c6 \ud835\udc61 normal (unmemorized) images, the contradiction between reducing prediction errors and aligning the learned tokens to the pre-trained tokens is unsolvable. Therefore, for the target images to protect, if we cannot optimize token embeddings that follow the pre-trained token embedding distribution to reduce the prediction error, then we can claim that the images are not memorized. \fTraining Caption: The no woman limits business podcast Invert Invert Invert Generate Generate Generate Ours AUTOPROMPT PEZ Training Image limits businesses podcast <|startoftext|> ek <|endoftext|> gar ze ze Training Caption: on her young Mothers influence hippo Invert Invert Invert Generate Generate Generate Ours AUTOPROMPT PEZ Training Image mothers hippo influence \u00e9\u0123 <|endoftext|> \u00f0\u0141\u0131\u00be ah vh eeee Figure 8: Results of hard prompt inversion for memorized images. Our inversion algorithm is able to invert a training image to a prompt that triggers its memorization. Existing hard prompt tuning methods AUTOPROMPT and PEZ are not effective in analyzing memorization. For the valid token embeddings that successfully trigger the memorization of some images, there is still a gap between the learned continuous token embeddings and discrete tokens. A simple regularizer, e.g., the \ud835\udc592-norm regularizer we used, does not guarantee that the learned continuous token embeddings can be projected to realistic tokens.
This is challenging because there are an infinite number of points in the continuous embedding space, a subset of which have lower error than a possible hard prompt. The token embeddings could be over-optimized to areas that produce lower error but do not correspond to any real tokens. What\u2019s more, existing hard prompt tuning methods based on greedy algorithms are not applicable to searching for prompts that trigger the memorization of target images, because we have observed that prompts that trigger memorization do not necessarily have the greedy property. To solve the problem, we propose a simple but effective algorithm to optimize hard prompts that trigger memorization, as in Algorithm 1. Algorithm 1 performs brute-force search in the Cartesian product of \ud835\udc41 sets, each of which contains \ud835\udc58 candidate tokens with the smallest distance to the learned token embeddings. The optimal prompt is the one with minimal prediction error. The effectiveness of the algorithm heavily relies on the initialization, a common problem in hard prompt tuning [38, 47]. We repeat Algorithm 1 for a maximum of 20 runs with different initialization. We compare our algorithm with two hard prompt tuning algorithms AUTOPROMPT [38] and PEZ [47]. The number of tokens to optimize is set to 3. For the 20 inverted prompts, we choose the one with the lowest prediction error for illustration. Figure 8 illustrates 2 successful inversions. Our hard prompt inversion algorithm successfully inverts a prompt that triggers the memorization. It reflects that the memorization is only determined by a few key tokens (3 tokens in the example). It also reflects that the prompts that cause training image replication are not unique. The positions of the key tokens could be different. As shown in the example, the three words \"limits\", \"business\" and \"podcast\" are the 3rd, 4th and 6th tokens, respectively. Shifting them to the head of the prompt, as our inversion does, has no effect.
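The brute-force candidate search at the core of Algorithm 1 (scanning the Cartesian product of per-token top-\ud835\udc58 vocabulary candidates for the tuple with minimal prediction error) can be sketched as below. This is a minimal, hypothetical rendering: `hard_prompt_search` and the toy `loss_fn` are stand-ins for the diffusion prediction error evaluated on a sampled test set.

```python
import itertools
import numpy as np

def hard_prompt_search(e, V, loss_fn, k=3):
    """For each learned embedding e[i], take the k vocabulary tokens with
    the largest inner product, then scan the Cartesian product of the
    candidate sets for the token tuple with the smallest error."""
    scores = e @ V.T                          # (n_tokens, vocab_size)
    topk = np.argsort(-scores, axis=1)[:, :k]
    best, best_err = None, np.inf
    for ids in itertools.product(*topk):
        err = loss_fn(V[list(ids)])           # error of the discrete prompt
        if err < best_err:
            best, best_err = ids, err
    return best, best_err

# Toy example: learned embeddings sit near vocabulary tokens 2 and 4.
V = np.eye(5)
e = np.array([[0.0, 0.1, 1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0, 1.0]])
target = V[[2, 4]]
loss_fn = lambda E: float(np.sum((E - target) ** 2))
best, best_err = hard_prompt_search(e, V, loss_fn, k=3)
```

The exhaustive scan over the product set is what lets this sketch recover prompts that greedy token-by-token methods miss.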
However, the order of tokens does not always have no effect. Permuting the prompt to \"businesses limits podcast\" would fail to trigger memorization. This explains why the hard prompt inversion is sensitive to initialization states. It is hard to constrain the position of inverted tokens simply by gradient descent. In contrast, AUTOPROMPT and PEZ do not work in prompt inversion for memorization. It demonstrates that inverting prompt for memorization is more difficult than semantic understanding tasks as their original applications. We have observed that the prompts that trigger memorization does not have greedy-solvable property, therefore they cannot be found by AUTOPROMPT and PEZ. Specifically, we initialize the prompt to \"limits business <|endoftext|>\" for AUTOPROMPT and PEZ, and run them to search for the third token \"podcast\". If it is greedy-solvable, AUTOPROMPT and PEZ would leave the first two words unchanged and find the last word \"podcast\". However, they gradually change the first two words and do not converge. \fDue to the dilemma, continuous token embeddings are adopted in subsequent measurement. Although the continuous token embeddings do not strictly meet the existence condition for potential memorized images, we would like to clarify that it is reasonable to use them for measurement for two reasons. Firstly, for potential memorized images, continuous token embeddings inverted with regularization are sufficient to indicate that memorization has happened. Secondly, for normal images, it is meaningless to invert hard prompts for them. Projecting the optimized token embeddings to hard prompts anyway will introduce additional error into measurement. 6 MEASURE THE MEMORIZATION We have discussed how to recognize the replication of a training image \ud835\udc650 given a pair of noise and prompt (\ud835\udf160,\ud835\udc52), and how to verify the existence of a prompt to trigger memorization of a training image. 
In this section, we focus on the measurement of memorization and describe how the measurement meets the last \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc4f\ud835\udc4e\ud835\udc4f\ud835\udc56\ud835\udc59\ud835\udc56\ud835\udc61\ud835\udc66 condition. Given previous results, an intuitive method to measure the memorization would be first determining a threshold of the \ud835\udc670-prediction error L(\ud835\udc650,\ud835\udf160,\ud835\udc52) (Section 4) for recognizing replications and then estimate the probability of that L(\ud835\udc650,\ud835\udf160,\ud835\udc52) is no larger than the threshold when the inverted prompt \ud835\udc52\u2217(Section 5) is input. However, the intuitive method is difficult to implement. As demonstrated by Figure 3, there is not a universal threshold applicable to every image, hence a unique threshold must be determined for each image. To accurately locate the threshold, we can either take the upper bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) or the lower bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) for all normal prompts \ud835\udc52. Both options are difficult to implement, because the upper bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) is prone to overestimation (not strictly \ud835\udc592 bounded) and the lower bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) requires to evaluate all potential prompts, which is laborsome. Instead, we avoid deciding the boundary of replication and novel generation but propose an indirect measurement of memorization by comparing the distribution of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) to the distribution of a safe model. Then the measurement of memorization equals how much threat an inverted prompt has introduced into the safe model. Motivated by previous observations [40], we find the unconditional diffusion model trained on large-scale data is safe from memorization and thus could be utilized as the safe model. 
For the remainder of this section, we first verify the safety of the unconditional diffusion model and then describe the measurement. 6.1 Unconditional Model The unconditional model is part of the text-to-image model and used as a penalty at sampling time (see Section 2.2). It can be safe from memorization for the following reasons. First, the unconditional model is trained to maximize the likelihood of the data distribution without any outer guidance (empty string in Stable Diffusion). The memorization can only happen when the unconditional model frequently generates a certain image, a form of representation space collapse. However, one of the advantages of diffusion models is their training stability, where no such collapse has been observed. Second, under the observation that memorization is caused by overfitting to an image-prompt pair [41], the unconditional model has no chance to Figure 9: Results of noise inversion for memorized images in Stable Diffusion. Left: training image. Middle: generated image without regularization. Right: generated image with regularization. Even with regularization, memorized images can be successfully inverted. Figure 10: The distribution of \ud835\udc5d-value, mean and variance of noises inverted for memorized images in Stable Diffusion using their training captions. The minimum \ud835\udc5d-value and desired values of mean and variance are plotted as green dashed lines. Without regularization, straightforward noise inversion drives the inverted noises far from the standard Gaussian distribution. The regularized noise inversion successfully circumvents the over-optimization problem. Unreg: Unregularized, Reg: Regularized. overfit because its training data consists of image-null pairs, which form a many-to-one correspondence. Last, Somepalli et al.
[40] have found that when the number of training data is large enough, unconditional diffusion models would not replicate training images, but only generate similar ones. 6.1.1 Methodology. It is intractable to estimate the probability that the model replicates \ud835\udc650 as it requires to find all the potential \ud835\udf16\u2217 0 and accumulate the probability within their \"exactly alike\" boundary. Therefore, it is impossible to estimate the safety of unconditional diffusion models directly by probability. We verify the safety of unconditional diffusion models against memorization by contradiction based on noise inversion that replicate a target image \ud835\udc65(Equation 6). In practice, it was shown that massive sampling from N (0, \ud835\udc3c) to generate \ud835\udc65for the unconditional model does not work [40]. Noise inversion seems to provide an approach, but we will demonstrate \f(a) Memorized images in LAION (b) Normal images in LAION (c) Normal images in FFHQ Figure 11: Results of noise inversion in unconditional Stable Diffusion and a diffusion model trained on FFHQ. Each block contains three images: left: training image, middle: generated image without regularization, right: generated image with regularization. For unconditional models, the training images, even memorized ones, cannot be replicated with normality regularization, which means that unconditional models have little probability of memorizing their training data. 
(a) Memorized images in LAION (b) Normal images in LAION (c) Normal images in FFHQ Figure 12: Unconditional models\u2019 distribution of \ud835\udc5d-value, mean and variance of inverted noises. Memorized images in text-to-image Stable Diffusion cannot be replicated by its unconditional part. that noises found in this way cannot be drawn from N (0, \ud835\udc3c). Directly minimizing L(\ud835\udc65,\ud835\udf16) leads to over-optimization: Even for memorized image-prompt pairs, the noises \ud835\udf16\u2217 obtained by minimizing L(\ud835\udc65,\ud835\udf16) are far from N (0, \ud835\udc3c), even though there is a wealth of normal noises (noises that are likely drawn from N (0, \ud835\udc3c)) available. It then becomes unclear for our verification whether there exist normal noises that will replicate \ud835\udc65: if such noises exist, we might simply over-optimize and miss them. To avoid this interference factor, we assume that the noise \ud835\udf16 to be optimized is drawn from another Gaussian distribution N (\ud835\udf07, \ud835\udf0e2) with parameters \ud835\udf07 and \ud835\udf0e2.
Motivated by the prior matching in Variational AutoEncoder (VAE) [19], we invert \ud835\udf07 and \ud835\udf0e2 with a regularized objective: \ud835\udf07\u2217, (\ud835\udf0e2)\u2217= arg min \ud835\udf07,\ud835\udf0e2 E \ud835\udf16\u223cN(0,\ud835\udc3c) [L(\ud835\udc65, \ud835\udf07+ \ud835\udf0e\ud835\udf16)] +\ud835\udc37\ud835\udc3e\ud835\udc3f(N (\ud835\udf07, \ud835\udf0e2)\u2225N (0, \ud835\udc3c)), \ud835\udc37\ud835\udc3e\ud835\udc3f(N (\ud835\udf07, \ud835\udf0e2)\u2225N (0, \ud835\udc3c)) = 1 2 \u2211\ufe01 \ud835\udc56 (\ud835\udf072 \ud835\udc56+ \ud835\udf0e2 \ud835\udc56\u2212log\ud835\udf0e2 \ud835\udc56\u22121). (10) The regularization term calculates the distance between the Gaussian distribution from which the noise is drawn and the standard Gaussian distribution. Through this reparameterization trick, we do not directly optimize \ud835\udf16 but the distribution it follows. In this way, the prediction error of the diffusion model L(\ud835\udc65, \ud835\udf07+ \ud835\udf0e\ud835\udf16) and the regularization term become two adversaries. The contradiction between them can be solved iff noises drawn from a distribution close to the standard Gaussian distribution simultaneously have low prediction errors (indicating memorization). This constraint can be satisfied by the memorized image-prompt pairs in conditional text-to-image models, as shown in experiments. However, for unconditional models, it cannot be solved, which demonstrates that unconditional models are safe from memorization. 6.1.2 Experiment Setup. Apart from Stable Diffusion\u2019s unconditional model, we additionally investigate an unconditional diffusion model trained on the human face dataset FFHQ [17, 32] consisting of 70000 images. For Stable Diffusion, we perform the noise inversion for the 78 memorized images and 100 normal images randomly sampled from its training set. The input prompt is fixed to an empty string.
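The reparameterized, KL-regularized noise inversion of Equation 10 can be sketched as follows; the closed-form diagonal-Gaussian KL matches the formula in Equation 10 term by term. The helper name `kl_to_standard_normal` and the dimensions are hypothetical.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ) for diagonal Gaussians, the
    regularizer of Equation 10: 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)

# Reparameterization trick: a noise sample is mu + sigma * eps with
# eps ~ N(0, I), so gradients flow into (mu, sigma), i.e. into the
# distribution the noise follows rather than the sample itself.
rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)
eps = rng.standard_normal(4)
noise = mu + np.exp(0.5 * log_var) * eps

# The KL term vanishes exactly at the standard normal and grows as the
# learned distribution drifts away from it.
assert kl_to_standard_normal(mu, log_var) == 0.0
```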
For the model trained on FFHQ, 100 randomly sampled training images are used for experiments. We perform the Kolmogorov-Smirnov hypothesis test (KS test) on the optimized \ud835\udf16\u2217\u223cN (\ud835\udf07\u2217, (\ud835\udf0e2)\u2217) to decide whether \ud835\udf16\u2217can be drawn from a standard Gaussian distribution. The null hypothesis is set to \"\ud835\udf16\u2217is drawn from a standard Gaussian distribution\" and the \ud835\udc5d-value is set to 0.05 for all experiments. In a Kolmogorov-Smirnov test, if the calculated \ud835\udc5d-value is less than 0.05, the null hypothesis should be rejected and otherwise accepted. For each learned Gaussian distribution N (\ud835\udf07\u2217, (\ud835\udf0e2)\u2217), we randomly sample 1000 samples from it and take the average \ud835\udc5d-value over the 1000 samples. For optimization, Adam optimizer is used with an initial learning rate of 0.1 following cosine \fDistribution Shift Unconditional Conditional Figure 13: The introduction of a prompt shifts the unconditional error distribution. decay without weight decay. We use a batch size of 32 and train for a total of 500 iterations. 6.1.3 Results. We first demonstrate the effectiveness of our regularized noise inversion (Equation 10) to circumvent over-optimization through a study on memorized images in Stable Diffusion. For each image, we adopt their training prompt that will trigger memorization. Figure 9 shows the generation results using optimized noise \ud835\udf16\u2217. Whether regularized or not, memorized images are easy to reproduce. Figure 10 exhibits the \ud835\udc5d-value, mean and variance of the inverted noises by unregularized (Equation 6) and regularized (Equation 10) optimizations. It can be observed that inversion via our regularized objective produces normally distributed noises with high \ud835\udc5d-value of KS test, zero mean and unit variance. 
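The normality check described above can be sketched with a Kolmogorov-Smirnov statistic against N(0, 1). This is a minimal pure-NumPy illustration (a library routine such as scipy.stats.kstest would also report the \ud835\udc5d-value used in the paper); the sample size and the unit shift below are hypothetical.

```python
import math
import numpy as np

def ks_stat_vs_std_normal(x):
    """Kolmogorov-Smirnov statistic of a sample against N(0, 1):
    the largest gap between the empirical CDF and the normal CDF."""
    x = np.sort(x)
    n = len(x)
    cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2))) for v in x])
    ecdf_hi = np.arange(1, n + 1) / n   # empirical CDF just after each point
    ecdf_lo = np.arange(0, n) / n       # empirical CDF just before each point
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

rng = np.random.default_rng(0)
normal_noise = rng.standard_normal(2000)
shifted_noise = normal_noise + 1.0      # clearly not drawn from N(0, 1)

# Genuinely Gaussian noise yields a small statistic; shifted noise a large one.
assert ks_stat_vs_std_normal(normal_noise) < ks_stat_vs_std_normal(shifted_noise)
```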
It effectively circumvents the over-optimization problem, which can be then utilized to measure the safety of unconditional models. For unconditional models, we perform noise inversion using Equation 10, with or without the KL-divergence regularization term. The results can be found in Figures 11 and 12. For unconditional models, it fails to reproduce training images on both models when the normality of noises is constrained. However, without normality regularization, as in Figure 12, the optimized noises present lower \ud835\udc5d-values, which indicates that they cannot be drawn from the standard Gaussian distribution with high probability. The results demonstrate that unconditional models are more safe to protect their training images from replication. Note that compared to Stable Diffusion trained on LAION, the diffusion model trained on FFHQ presents better normality for the inverted noises. This might be attributed to its limited number of training data (70000) embedded into a large latent space R3\u00d764\u00d764. In contrast, Stable Diffusion is trained on 2 billions of data with a slightly larger latent space R4\u00d764\u00d764. The large contrast between the number of training data and the dimensionality of latent space \"leaves more space to memorize one instance\", which can be observed in Figure 12c that noises inverted on FFHQ tend to have larger variance than those on LAION. 6.2 Measurement 6.2.1 Methodology. As discussed in Section 6.1, unconditional diffusion model trained on large-scale data is safe from memorization. Therefore, the unconditional error L(\ud835\udc650,\ud835\udf160) represents a safe distribution when \ud835\udf160 is sampled from the standard Gaussian distribution. 
(a) Memorized Image (b) Normal Image Figure 14: Example of prediction error distributions for a memorized image and a normal image in Stable Diffusion. It can then serve as a guard to measure the safety against memorization of any conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52) when some prompt \ud835\udc52 is introduced. We consider the worst-case conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) where \ud835\udc52\u2217 is obtained through Equation 8. We then measure the extent to which \ud835\udc650 is memorized as the distribution shift of prediction errors from unconditional to the worst-case conditional, as illustrated in Figure 13. Distribution shift. The distribution shift can be calculated by the Wasserstein distance between the unconditional error distribution and the worst-case conditional error distribution. Wasserstein distance measures the minimal cost to convert the unconditional error distribution into the conditional one. Wasserstein distance is suitable for measuring memorization because it takes into account how much the errors are lowered by introducing a prompt. The larger the Wasserstein distance, the more the prediction error has been reduced, and the more the target image is memorized. We denote the measure by M(\ud835\udc650). The distributions of L(\ud835\udc650,\ud835\udf160) and L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) are estimated using the Monte Carlo method. Condition 3: probability. The measurement based on the distribution shift meets the probability condition of memorization. We do not directly calculate the probability of memorization but calculate a correlated measure by referring to the safe unconditional model.
In this way, we avoid determining an absolute threshold to distinguish between replicating and normal generations. According to Chebyshev\u2019s inequality, the probability that the unconditional prediction error deviates from its mean by more than \ud835\udc58\ud835\udf0e is at most 1/\ud835\udc582. Therefore, when a prompt is input instead of an empty string, the larger the distribution of the prediction errors is shifted towards the original rare case, the more probable it is that memorization has been triggered. 6.2.2 Experiment Setup. Based on the prompt inversion results, the extent to which a target image is memorized M(\ud835\udc650) can be estimated by the Wasserstein distance between the unconditional error distribution L(\ud835\udc650,\ud835\udf160) and worst-case conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217). For any image, we invert a sequence of token embeddings \ud835\udc52\u2217 as in Equation 8. All the 75 free tokens are optimized. We calculate M(\ud835\udc650) for the 78 memorized images and 100 randomly sampled normal images. 1000 Gaussian noises are randomly sampled to estimate each error distribution. The probability density function is calculated with 2000 bins over the range [0, 0.4]. \fFigure 15: Memorization measured by Wasserstein distance for memorized images and normal images. Memorized images present a significantly higher level of memorization. Figure 16: Examples of images that are memorized to a relatively smaller extent. Original training images are in the first column. 6.2.3 Results. Figure 14 shows an example of the prediction error distribution for both memorized and normal images. The conditional error distribution of memorized images shows an obvious gap from the unconditional error distribution. However, the conditional error distribution of normal images gets entangled with the unconditional error distribution.
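For one-dimensional error samples, the distribution-shift measure M(\ud835\udc650) reduces to a simple quantile coupling: the 1-Wasserstein distance between two equal-size samples is the mean absolute difference of their sorted values (scipy.stats.wasserstein_distance implements the general case). The error values below are hypothetical stand-ins for unconditional and conditional prediction errors.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D samples: the mean absolute
    difference of the sorted samples (quantile coupling)."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

uncond_errors = np.array([0.28, 0.29, 0.30, 0.31])  # hypothetical safe baseline
cond_errors   = np.array([0.05, 0.06, 0.07, 0.08])  # hypothetical memorized shift
normal_cond   = np.array([0.27, 0.29, 0.30, 0.32])  # entangled with baseline

# A memorized image's conditional errors shift far from the unconditional
# baseline; a normal image's conditional errors barely move.
assert wasserstein_1d(uncond_errors, cond_errors) > wasserstein_1d(uncond_errors, normal_cond)
```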
Figure 15 illustrates the Wasserstein distance distribution of all test images. Memorized images present significantly larger Wasserstein distances compared to normal images. Recall that there are partially memorized images in the test set. We find that these images correspond to lower distances than other completely memorized images, as shown in Figure 16. This demonstrates the effectiveness of our measurement to quantify the extent to which an image is memorized beyond simply distinguishing memorized images from normal ones. 7 RELATED WORK 7.1 Memorization in Image Generation Models Memorization has previously raised concerns in image generation models, e.g., GANs and VAEs, mainly focusing on unconditional generation. There have been studies on training algorithms [34] and evaluation metrics [13] to improve the generalization ability of GANs so they do not simply copy from training data. It has been shown that small data size [10] or overly long training [43] can cause memorization in GANs. Van der Burg et al. [44] measure memorization in VAE as the changed probability when removing one sample from the training set. For diffusion models, Vyas et al. [45] propose a copyright protection method to prevent replication of sensitive training images. The model is trained to match a safe model that does not take sensitive data for training. Carlini et al. [8] and Somepalli et al. [40, 41] demonstrate that memorization also occurs in text-to-image diffusion models. Memorized images are found from numerous generated samples by membership inference attack or searching for the most similar training images using image retrieval models. Webster [46] provides more efficient attacks to extract training images from text-to-image models. Subsequently, Wen et al. [48] focus on the detection of abnormal prompts that will trigger generation of training images.
Compared to these works, we perform a practical analysis on training image memorization with no need to access any prompts. Our analysis is not only able to find memorized images, but also provides a quantitative measurement and allows developers to claim safety on normal images. 7.2 Inversion of Diffusion Models Inversion techniques in diffusion models are widely studied mainly for image editing [11, 26, 52]. Through inversion, the object, style and concept contained in the source images can be compressed into latent noises or input token embeddings. Then the inverted latent noises or input token embeddings are utilized to generate novel images that preserve the desired content. We leverage analogous inversion techniques to analyze training image memorization in diffusion models. Instead of utility, we focus more on the regularity of inverted signals, which is essential to identify memorized images. In this sense, memorized images are a class that is \"naturally\" invertible. 8 DISCUSSION AND" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05852v1.json b/abs_9K/test_abstract_short_2405.05852v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e7df43ba22590e5c9c9ab3ad5fba262177816787 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05852v1.json @@ -0,0 +1,21 @@ +{ + "url": "http://arxiv.org/abs/2405.05852v1", + "title": "Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control", + "abstract": "Embodied AI agents require a fine-grained understanding of the physical world\nmediated through visual and language inputs. Such capabilities are difficult to\nlearn solely from task-specific data. This has led to the emergence of\npre-trained vision-language models as a tool for transferring representations\nlearned from internet-scale data to downstream tasks and new domains. 
However,\ncommonly used contrastively trained representations such as in CLIP have been\nshown to fail at enabling embodied agents to gain a sufficiently fine-grained\nscene understanding -- a capability vital for control. To address this\nshortcoming, we consider representations from pre-trained text-to-image\ndiffusion models, which are explicitly optimized to generate images from text\nprompts and as such, contain text-conditioned representations that reflect\nhighly fine-grained visuo-spatial information. Using pre-trained text-to-image\ndiffusion models, we construct Stable Control Representations which allow\nlearning downstream control policies that generalize to complex, open-ended\nenvironments. We show that policies learned using Stable Control\nRepresentations are competitive with state-of-the-art representation learning\napproaches across a broad range of simulated control settings, encompassing\nchallenging manipulation and navigation tasks. Most notably, we show that\nStable Control Representations enable learning policies that exhibit\nstate-of-the-art performance on OVMM, a difficult open-vocabulary navigation\nbenchmark.", + "authors": "Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, Tim G. J. Rudner", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG", + "cs.RO", + "stat.ML" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Embodied AI agents require a fine-grained understanding of the physical world\nmediated through visual and language inputs. Such capabilities are difficult to\nlearn solely from task-specific data. This has led to the emergence of\npre-trained vision-language models as a tool for transferring representations\nlearned from internet-scale data to downstream tasks and new domains. 
However,\ncommonly used contrastively trained representations such as in CLIP have been\nshown to fail at enabling embodied agents to gain a sufficiently fine-grained\nscene understanding -- a capability vital for control. To address this\nshortcoming, we consider representations from pre-trained text-to-image\ndiffusion models, which are explicitly optimized to generate images from text\nprompts and as such, contain text-conditioned representations that reflect\nhighly fine-grained visuo-spatial information. Using pre-trained text-to-image\ndiffusion models, we construct Stable Control Representations which allow\nlearning downstream control policies that generalize to complex, open-ended\nenvironments. We show that policies learned using Stable Control\nRepresentations are competitive with state-of-the-art representation learning\napproaches across a broad range of simulated control settings, encompassing\nchallenging manipulation and navigation tasks. Most notably, we show that\nStable Control Representations enable learning policies that exhibit\nstate-of-the-art performance on OVMM, a difficult open-vocabulary navigation\nbenchmark.", + "main_content": "Introduction As general-purpose, pre-trained \u201cfoundation\u201d models [Rom+22; Tou+23; Bro+20; Ope23; Liu+23; Ala+22; Che+22] are becoming widely available, a central question in the field of embodied AI has emerged: How can foundation models be used to construct model representations that improve generalization in challenging robotic control tasks [Bro+22; Zit+23; Sha+23]? Robotic control tasks often employ pixel-based visual inputs paired with a language-based goal specification, making vision-language model representations particularly well-suited for this setting. 
However, while vision-language representations obtained via Contrastive Language-Image Pre-training [CLIP; Rad+21]\u2014a state-of-the-art method\u2014have been successfully applied to a broad range of computer vision tasks, the use of CLIP representations has been shown to lead to poor downstream performance for robotic control. This shortcoming has prompted the development of alternative, control-specific representations for embodied AI [Nai+22; Ma+23] but has left other sources of general-purpose pre-trained vision-language representations\u2014such as text-to-image diffusion models\u2014largely unexplored for control applications. *Equal Contribution. [Figure 1 graphic: model architecture diagram and a bar chart titled \u201cOverall Representation Comparison\u201d plotting average normalized success for VAE, CLIP, R3M, VC-1, SCR (Ours), and SCR-FT (Ours).] Figure 1: Left: Our paper proposes Stable Control Representations, which use pre-trained text-to-image diffusion models as a source of language-guided visual representations for downstream policy learning. 
Right: Stable Control Representations enable learning control policies that achieve all-round competitive performance on a wide range of embodied control tasks, including in domains that require open-vocabulary generalization. Empirical results are provided in Section 5. In this paper, we propose Stable Control Representations (SCR): pre-trained vision-language representations from text-to-image diffusion models that can capture both high- and low-level details of a scene [Rom+22; Ho+22]. While diffusion representations have seen success in downstream vision-language tasks, for example, in semantic segmentation [Bar+22; Tia+23; Wan+23], they have\u2014to date\u2014not been used for control. We perform a careful empirical analysis in which we deconstruct pre-trained text-to-image diffusion model representations to understand the impact of different design decisions. In our empirical investigation, we find that diffusion representations can outperform general-purpose models like CLIP [Rad+21] across a wide variety of embodied control tasks despite not being trained for representation learning. This is the case even for purely vision-based tasks and settings that require task understanding through text prompts. A highlight of our results is the finding that diffusion model representations enable better generalization to unseen object categories in a challenging open-vocabulary navigation benchmark [Yen+23] and provide improved interpretability through attention maps [Tan+23]. Our key contributions are as follows: 1. In Section 4, we introduce a multi-step approach for extracting vision-language representations for control from text-to-image diffusion models. We show that these representations are capable of capturing both the abstract high-level and fundamental low-level details of a scene, offering an alternative to models trained specifically for representation learning. 2. 
In Section 5, we evaluate the representation learning capabilities of diffusion models on a broad range of embodied control tasks, ranging from purely vision-based tasks to problems that require an understanding of tasks through text prompts, thereby showcasing the versatility of diffusion model representations. 3. In Section 6, we systematically deconstruct the key features of diffusion model representations for control, elucidating different aspects of the representation design space, such as the input selection, the aggregation of intermediate features, and the impact of fine-tuning on enhancing performance. We demonstrate that diffusion models learn versatile representations for control and can help drive progress in embodied AI. The code for our experiments can be accessed at: https://github.com/ykarmesh/stable-control-representations. 2 Related Work We first review prior work on representation learning and diffusion models for control. Representation Learning with Diffusion Models. Diffusion models have received a lot of recent attention as flexible representation learners for computer vision tasks of varying granularity\u2014ranging from keypoint detection and segmentation [Tia+23; Wan+23] to image classification [YW23; Tra22]. Wang et al. [Wan+23] have shown that intermediate layers of a text-to-image diffusion model encode semantics and depth maps that are recoverable by training probes. These approaches similarly extract representations by considering a moderately noised input, and find that the choice of timestep can vary based on the granularity of prediction required for the task. Yang and Wang [YW23] train a policy to select an optimal diffusion timestep; in contrast, we simply use a fixed timestep per class of task. 
Several works [Tia+23; Wan+23; Tan+23] observe that the cross-attention layers that attend over the text and image embeddings encode much of the spatial layout associated with an image, and therefore focus their methods on tuning, post-processing, or extracting information embedded within these layers. Visual Representation Learning for Control. Over the past decade, pre-trained representation learning approaches have been scaled first for visual discrimination tasks and, more recently, for control tasks. Contrastively pre-trained CLIP [Rad+21] representations were employed for embodied navigation tasks by EmbCLIP [Kha+22]. MAE representations have been used in control tasks by prior works like VC-1 [Maj+23], MVP [Xia+22], and OVRLv2 [Yad+23]. R3M [Nai+22] and Voltron [Kar+23] leverage language supervision to learn visual representations. In contrast, we investigate whether powerful text-to-image diffusion models trained for image generation can provide effective representations for control. Diffusion Models for Control. Diffusion models have seen a wide range of uses in control aside from learning representations. These can broadly be categorized into three areas. First, diffusion models have been used as a class of expressive models for learning action distributions for policies [Chi+23; Pea+23; HE+23]; this can help model multimodality and richer action distributions than Gaussians. Second, off-the-shelf diffusion models have been used to augment limited robot demonstration datasets by specifying randomizations for object categories seen in the data through inpainting [KVJ23; Yu+23; Man+22]. Diffusion models trained from scratch have also been shown to be an effective method for data augmentation [Lu+23; Jac+24]. Third, planning can be cast as sequence modeling through diffusion models [Jan+22; Aja+23; Du+23]. 3 Background We briefly review diffusion models and text-conditional image generation, and then describe the control setting we consider in this work. 
3.1 Diffusion Models Diffusion models [SD+15; HJA20] are a class of generative models that learn to iteratively reverse a forward noising process and generate samples from a target data distribution $p(x_0)$, starting from pure noise. Given $p(x_0)$ and a set of noise levels $\sigma_t$ for $t = 1, \ldots, T$, a denoising function $\epsilon_\theta(x_t, t)$ is trained on the objective $\mathcal{L}_{\mathrm{DM}}(\theta) = \mathbb{E}_{x_0, \epsilon, t}\big[\|\epsilon - \epsilon_\theta(x_t, t)\|_2^2\big] = \mathbb{E}_{x_0, \epsilon, t}\big[\|\epsilon - \epsilon_\theta(x_0 + \sigma_t \cdot \epsilon, t)\|_2^2\big]$, (3.1) where $\epsilon \sim \mathcal{N}(0, 1)$, $t \sim \mathrm{Unif}(1, T)$, and $x_0 \sim p(x_0)$. To generate a sample $x_0$ during inference, we first sample an initial noise vector $x_T \sim \mathcal{N}(0, \sigma_T)$ and then iteratively denoise this sample for $t = T, \ldots, 1$ by sampling from $p(x_{t-1} \mid x_t)$, which is a function of $\epsilon_\theta(x_t, t)$. In some settings, we may want to generate samples with a particular property. For example, we may wish to draw samples from a conditional distribution over data points, $p(x_0 \mid c)$, where $c$ captures some property of the sample, such as a classification label or a text description [Rom+22; Sah+22]. In these settings, we may additionally train with labels to obtain a conditioned denoiser $\epsilon_\theta(x_t, t, c)$ and generate samples using classifier-free guidance [HS21]. 3.2 Latent Diffusion Models Latent diffusion models [Rom+22] reduce the computational cost of applying diffusion models to high-dimensional data by instead diffusing low-dimensional representations of high-dimensional data. Given an encoder $E(\cdot)$ and decoder $D(\cdot)$, (3.1) is modified to operate on latent representations $z_0 := E(x_0)$, yielding $\mathcal{L}_{\mathrm{LDM}}(\theta) = \mathbb{E}_{x_0, c, \epsilon, t}\big[\|\epsilon - \epsilon_\theta(E(x_0) + \sigma_t \cdot \epsilon, t, c)\|_2^2\big]$, (3.2) where $\epsilon \sim \mathcal{N}(0, 1)$, $t \sim \mathrm{Unif}(1, T)$, and $x_0, c \sim p(x_0, c)$. After generating a denoised latent representation $z_0$, it can be decoded as $x_0 = D(z_0)$. 
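The training objective in Eq. (3.1) can be sketched numerically. The denoiser below is a placeholder for the U-Net (an assumption for illustration); a constant-zero predictor stands in so the example runs end to end, and its expected loss is simply $\mathbb{E}[\|\epsilon\|_2^2]$, i.e. about 1 per coordinate.

```python
import numpy as np

# Numerical sketch of the denoising objective in Eq. (3.1): sample a timestep
# and a Gaussian noise vector, noise the data to level sigma_t, and score the
# denoiser's noise prediction. eps_theta is a hypothetical stand-in.
rng = np.random.default_rng(0)

def diffusion_loss(x0, sigmas, eps_theta):
    t = int(rng.integers(len(sigmas)))      # t ~ Unif(1, T)
    eps = rng.standard_normal(x0.shape)     # eps ~ N(0, I)
    x_t = x0 + sigmas[t] * eps              # forward noising to level sigma_t
    return float(np.mean((eps - eps_theta(x_t, t)) ** 2))

sigmas = np.linspace(0.01, 1.0, 10)         # illustrative noise schedule
x0 = rng.standard_normal((4, 8))            # a batch of toy "images"
loss = diffusion_loss(x0, sigmas, lambda x_t, t: np.zeros_like(x_t))
```

A trained denoiser would drive this loss below that of the zero predictor by actually recovering the injected noise from the corrupted input.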
A popular instantiation of a conditioned latent diffusion model is the text-to-image Stable Diffusion model [SD; Rom+22]. The SD model is trained on the LAION-2B dataset [Sch+22] and operates in the latent space of a pre-trained VQ-VAE image encoder [ERO21]. The model architecture is shown at the top of Figure 1 and is based on a U-Net [RFB15], with the corresponding conditioning text prompts encoded using a CLIP language encoder [Rad+21]. 3.3 Policy Learning for Control We model our environments as Markov Decision Processes (MDPs; Sutton and Barto [SB18]), defined as a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action spaces respectively, $P(s' \mid s, a)$ the transition dynamics, $R(s, a)$ the reward function, and $\gamma \in (0, 1)$ the discount factor. Our goal is to optimize a policy $\pi(a \mid s)$ that maximizes the expected discounted return $\mathbb{E}_{\pi, P}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\big]$. In this paper, we consider visual control tasks that may be language-conditioned, that is, states are given by $s = [s_{\mathrm{image}}, s_{\mathrm{text}}]$, where $s_{\mathrm{text}}$ specifies the task. We are interested in pre-trained vision-language representations capable of encoding the state $s$ as $f_\phi(s_{\mathrm{image}}, s_{\mathrm{text}})$. This encoded state is then supplied to a downstream, task-specific policy network, which is trained to predict the action $a_t$. Our evaluation encompasses both supervised learning and reinforcement learning regimes for training the downstream policies. We train agents through behavior cloning on a small set of demonstrations for the few-shot manipulation tasks we study in Section 5.2. For the indoor navigation tasks we study in Sections 5.3 and 5.4, we use a version of the Proximal Policy Optimization [PPO, Sch+17] algorithm for reinforcement learning. 4 Stable Control Representations In this paper, we consider extracting language-guided visual representations from the open-source Stable Diffusion model. We follow a similar protocol as Wang et al. 
[Wan+23], Traub [Tra22], and Yang and Wang [YW23]: Given an image-text prompt, $s = \{s_{\mathrm{image}}, s_{\mathrm{text}}\}$, associated with a particular task, we use the SD VQ-VAE model as the encoder $E(\cdot)$ and partially noise the latents $z_0 := E(s_{\mathrm{image}})$ to some diffusion timestep $t$. We then extract representations from the intermediate outputs of the denoiser $\epsilon_\theta(z_t, t, s_{\mathrm{text}})$. This process is illustrated in Figure 2. We refer to the extracted representations as Stable Control Representations (SCR). We will describe the design space for extracting SCR in the remainder of this section. Figure 2: Extraction of Stable Control Representations from Stable Diffusion. Given an image-text prompt, $s = \{s_{\mathrm{image}}, s_{\mathrm{text}}\}$, we encode and noise the image and feed it into the U-Net together with the language prompt. We may then aggregate features from multiple levels of the downsampling process, as described in Section 4. 4.1 Layer Selection and Aggregation We are interested in evaluating the internal representations from the denoiser network, that is, the U-Net $\epsilon_\theta(\cdot)$. The first design choice we consider is which layers of $\epsilon_\theta$ to aggregate intermediate outputs from. 
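The idea of tapping intermediate outputs of the denoiser can be illustrated with a stand-in network. In practice one would register forward hooks on the Stable Diffusion U-Net's down-sampling and mid blocks (e.g., in PyTorch); `TinyUNet` below is a hypothetical stub that records its block outputs the same way, not the real model.

```python
import numpy as np

# Illustrative stub: record intermediate block outputs during a forward pass,
# mimicking forward hooks on a U-Net's down-sampling and mid blocks.
# TinyUNet is hypothetical; the real extraction runs eps_theta(z_t, t, s_text).
class TinyUNet:
    def __init__(self):
        self.features = {}

    def _down_block(self, name, x):
        y = x[:, ::2, ::2]            # stand-in for a down-sampling block
        self.features[name] = y       # "hook": record the intermediate output
        return y

    def __call__(self, z_t):
        self.features.clear()
        h = self._down_block("down_0", z_t)
        h = self._down_block("down_1", h)
        self.features["mid"] = h      # mid-block output
        return h

unet = TinyUNet()
_ = unet(np.zeros((4, 16, 16)))       # toy latent: (channels, height, width)
```

After the call, `unet.features` holds one feature map per tapped block, which is exactly the raw material the aggregation step then combines.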
The U-Net does not have a representational bottleneck, and different layers potentially encode different levels of detail. Trading off size with fidelity, we concatenate the feature maps output from the mid and down-sampling blocks to construct the representation. This results in a representation size comparable to that of the other pretrained models we study in Section 5. This is shown at the bottom of Figure 2 and we ablate this choice in Section 6.1. Since outputs from different layers may have different spatial dimensions, we bilinearly interpolate them so that they are of a common spatial dimension and can be stacked together. We then pass them through a learnable convolutional layer to reduce the channel dimension before feeding them to downstream policies. The method used to spatially aggregate pre-trained representations can significantly affect their efficacy in downstream tasks, as we will discuss in Section 6.4. We use the best-performing spatial aggregation method for all the baselines that we re-train in Section 5. 4.2 Diffusion Timestep Selection Next, we consider the choice of extraction timestep t for the denoising network (shown on the left of Figure 2). Recall that the images we observe in control tasks are un-noised (i.e., corresponding to x0), whereas the SD U-Net expects noised latents, corresponding to zt for t \u2208[0,1000]. The choice of timestep t influences the fidelity of the encoded latents since a higher value means more noising of the inputs. Yang and Wang [YW23] have observed that there are task-dependent optimal timesteps and proposed adaptive selection of t during training, while Xu et al. [Xu+23] have used t = 0 to extract representations from un-noised inputs to do open-vocabulary segmentation. We hypothesize that control tasks that require a detailed spatial scene understanding benefit from fewer diffusion timesteps, corresponding to a later stage in the denoising process. 
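The premise behind this timestep hypothesis can be sketched with a toy forward-noising step: under a monotone noise schedule, a larger extraction timestep injects more noise into the latent. The linear schedule below is an assumption for illustration only; Stable Diffusion uses its own schedule over $t \in [0, 1000]$.

```python
import numpy as np

# Toy illustration of timestep selection: noising a latent z_0 at a small vs.
# large timestep. The linear sigma(t) schedule is an illustrative assumption,
# not Stable Diffusion's actual schedule.
rng = np.random.default_rng(0)

def noise_latent(z0, t, num_steps=1000, sigma_max=1.0):
    sigma_t = sigma_max * t / num_steps      # monotone toy schedule
    return z0 + sigma_t * rng.standard_normal(z0.shape)

z0 = np.zeros((4, 8, 8))
z_early = noise_latent(z0, t=50)             # mild corruption
z_late = noise_latent(z0, t=900)             # heavy corruption
```

The lightly noised latent stays close to the clean input, which is why tasks needing detailed spatial understanding would favor small $t$.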
We provide evidence consistent with this hypothesis in Section 6.2. To illustrate the effect of the timestep, we display final denoised images for various t values in different domains in Figure 9. 4.3 Prompt Specification Since text-to-image diffusion models allow conditioning on text, we investigate if we can influence the representations to be more task-specific via this conditioning mechanism. For tasks that come with a text specifier, for example, the sentence \u201cgo to object X\u201d, we simply encode this string and pass it to the U-Net. However, some tasks are purely vision-based, and in these settings we explore whether constructing reasonable text prompts affects downstream policy learning when using the U-Net\u2019s language-guided visual representations. We present this analysis in Section 6.3. Figure 3: The Stable Diffusion model allows us to extract word-level cross-attention maps for any given text prompt; the panels show input images from referring-expression and OVMM tasks with maps for words such as \u201cpear\u201d, \u201cbook\u201d, \u201crocket\u201d, and \u201cchair\u201d. We visualize these maps in a robotic manipulation environment and observe that they are accurate at localizing objects in a scene. Since these maps are category-agnostic, downstream policies should become robust to unseen objects at test time. 4.4 Intermediate Attention Map Selection Recent studies [Wan+23; Tan+23] demonstrate that the Stable Diffusion model generates localized attention maps aligned with text during the combined processing of vision and language modalities. Wang et al. [Wan+23] leveraged these word-level attention maps to perform open-domain semantic segmentation. We hypothesize that these maps can also help downstream control policies generalize to an open vocabulary of object categories by providing helpful intermediate outputs that are category-agnostic. Following Tang et al. 
[Tan+23], we extract the cross-attention maps between the visual features and the CLIP text embeddings within the U-Net. An example of the word-level attention maps is visualized in Figure 3. We test our hypothesis on an open-domain navigation task in Section 5.4, where we fuse the cross-attention maps with the extracted feature maps from the U-Net. We refer to this variant as SCR-ATTN. 4.5 Fine-Tuning on General Robotics Datasets Finally, we consider fine-tuning strategies to better align the base Stable Diffusion model towards generating representations for control. This serves to bridge the domain gap between the diffusion model\u2019s training data (e.g., LAION images) and robotics datasets\u2019 visual inputs (e.g., egocentric tabletop views in manipulation tasks or indoor settings for navigation). Crucially, we do not use any task-specific data for fine-tuning. Instead, we use a small subset of the collection of datasets used by prior works on representation learning for embodied AI [Maj+23; Xia+22]: we use subsets of the EpicKitchens [Dam+18], Something-Something-v2 [SS-v2; Goy+17], and Bridge-v2 [Wal+23] datasets. We adopt the same text-conditioned generation objective as that of the base model for the fine-tuning phase. As is standard, we fine-tune the denoiser U-Net \u03f5\u03b8 but not the VAE encoder or decoder. Image-text pairs are uniformly sampled from the video-text pairs present in these datasets. A possible limitation of this strategy is that text-video aligned pairs (a sequence of frames that correspond to a single language instruction) may define a many-to-one relation for image-text pairs. However, as we see in experiments in which we compare to the base Stable Diffusion model in Section 5, this simple approach to robotics alignment is useful in most cases. Further details related to fine-tuning are provided in Appendix B.1. We refer to the representations from this fine-tuned model as SCR-FT. 
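A word-level cross-attention map of the kind used in Section 4.4 can be sketched as softmax attention between image-patch queries and text-token keys: one token's attention column, reshaped to the patch grid, gives a spatial map for that word. All shapes and the random projections below are illustrative assumptions, not the actual U-Net dimensions.

```python
import numpy as np

# Minimal sketch of a word-level cross-attention map: scaled dot-product
# attention over text tokens, then one token's column reshaped to the
# spatial patch grid. Shapes are illustrative.
def cross_attention_map(Q, K, token_idx, grid):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (patches, tokens)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over tokens
    return attn[:, token_idx].reshape(grid, grid)

rng = np.random.default_rng(0)
Q = rng.standard_normal((64, 32))   # 8x8 grid of patch queries
K = rng.standard_normal((5, 32))    # 5 text-token keys
amap = cross_attention_map(Q, K, token_idx=2, grid=8)
```

In SCR-ATTN, maps like `amap` (one per word) are fused with the extracted U-Net feature maps before being passed to the downstream policy.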
5 Empirical Evaluation In this work, we evaluate Stable Control Representations (SCR) on an extensive suite of tasks from 6 benchmarks, covering few-shot imitation learning for manipulation in Section 5.2, reinforcement learning-based indoor navigation in Sections 5.3 and 5.4, and, owing to space limitations, two tasks related to fine-grained visual prediction in Section 5.5. Together, these tasks allow us to comprehensively evaluate whether our extracted representations can encode both high- and low-level semantic understanding of a scene to aid downstream policy learning. We begin this section by listing the common baselines used across tasks, followed by the description of individual task setups and results obtained. 5.1 Baselines We compare SCR and its variants (i.e., SCR-FT and SCR-FT-ATTN) to the following prior work in representation learning for control: 1. R3M [Nai+22] pre-trains a ResNet50 encoder on video-language pairs from the Ego4D dataset using time-contrastive video-language alignment learning. 2. MVP [Xia+22] and VC-1 [Maj+23] both pre-train ViT-B/L models with the masked auto-encoding (MAE) objective on egocentric data from Ego4D, Epic-Kitchens, SS-v2, and ImageNet, with VC-1 additionally pre-training on indoor navigation videos. 3. CLIP [Rad+21] trains text and ViT-based image encoders using contrastive learning on web-scale data. 4. Voltron [Kar+23] is a language-driven representation learning method that involves pre-training a ViT-B using MAE and video-captioning objectives on aligned text-video pairs from SS-v2. 5. SD-VAE [Rom+22] is the base VAE encoder used by SD to encode images into latents. To assess how well the vision-only methods would do on tasks with language specification, we concatenate their visual representations with the CLIP text embeddings of the language prompts. 
While we are limited by the architecture designs of the released models we are studying, to ensure a fairer comparison we try to match parameter counts as much as we can. We use the ViT-Large (307M parameters) versions of CLIP, MVP, and VC-1, since extracting SCR involves a forward pass through 400M parameters. 5.2 Few-shot Imitation Learning We start by evaluating SCR on commonly studied representation learning benchmarks in few-shot imitation learning. Specifically, our investigation incorporates five commonly studied tasks from Meta-World [Yu+19] (same as CORTEXBENCH [Maj+23]), which include bin picking, assembly, pick-place, drawer opening, and hammer usage, as well as five tasks from the Franka-Kitchen environments included in the RoboHive suite [Kum+23], which entail tasks such as turning a knob or opening a door. We adhere to the training and evaluation protocols adopted in their respective prior works to ensure our results are directly comparable (detailed further in Appendix C.1). Results. We report the best results of SCR and baselines in Table 1a. On Meta-World, we see that SCR outperforms most prior work, achieving a 94.9% success rate. In comparison, VC-1, the visual foundation model for embodied AI, and CLIP achieved 92.3% and 90.1%, respectively. On Franka-Kitchen, SCR obtains a 49.9% success rate, which is much higher than CLIP (36.3%) and again outperforms all other baselines except R3M. We note that R3M\u2019s sparse representations excel in few-shot manipulation with limited demos but struggle to transfer beyond this setting [Maj+23; Kar+23]. Table 1: Average Success Rate and standard error evaluated across different representations. (a) Meta-World & Franka-Kitchen. 
Model      Meta-World      Franka-Kitchen
R3M        96.0 \u00b1 1.1      57.6 \u00b1 3.3
CLIP       90.1 \u00b1 3.6      36.3 \u00b1 3.2
VC-1       92.3 \u00b1 2.5      47.5 \u00b1 3.4
Voltron    72.5 \u00b1 5.2      33.5 \u00b1 3.2
SD-VAE     75.5 \u00b1 5.2      43.7 \u00b1 3.1
SCR        94.4 \u00b1 1.9      45.0 \u00b1 3.3
SCR-FT     94.9 \u00b1 2.0      49.9 \u00b1 3.4

(b) ImageNav
Model      Success
R3M        30.6
CLIP-B     52.2
VC-1       70.3
MVP        68.1
SD-VAE     46.6
SCR        73.9
SCR-FT     69.5

(c) OVMM
Model          Success
Oracle         77.6
Detic          36.7
CLIP           38.7 \u00b1 1.7
VC-1           40.6 \u00b1 2.2
SCR            38.7 \u00b1 1.2
SCR-FT         41.9 \u00b1 1.0
SCR-FT-ATTN    43.6 \u00b1 2.1

We see that while the SD-VAE encoder performs competitively on Franka-Kitchen, it achieves a low success rate on Meta-World. This observation lets us separate the improvement of SCR from the baseline gain obtained just by operating in the latent space of this VAE. Additionally, we see that the task-agnostic fine-tuning gives SCR-FT an advantage (4%) over SCR on Franka-Kitchen while making no difference on Meta-World. Note that the other high-performing baselines (R3M and Voltron) have been developed for downstream control usage with training objectives that take temporal information into account, while VC-1 has been trained on a diverse curation of robotics-relevant data. In this context, SCR\u2019s comparable performance shows that generative foundation models hold promise for providing useful representations for control, even with relatively minimal fine-tuning on non-task-specific data. 5.3 Image-Goal Navigation We now assess SCR in more realistic visual environments, surpassing the simple tabletop scenes in manipulation benchmarks. In these complex settings, the representations derived from pre-trained foundational models are particularly effective, benefiting from their large-scale training. We study Image-Goal Navigation (ImageNav), an indoor visual navigation task that evaluates an agent\u2019s ability to navigate to the viewpoint of a provided goal image [Zhu+17]. 
The position reached by the agent must be within a 1-meter distance of the goal image\u2019s camera position. This requires the ability to differentiate between nearby or similar-looking views within a home environment. This task, along with the semantic object navigation task that we study in Section 5.4, allows for a comprehensive evaluation of a representation\u2019s ability to encode both semantic and visual appearance-related features in completely novel evaluation environments. We follow the protocol for the ImageNav task used by Majumdar et al. [Maj+23] and input the pre-trained representations to an LSTM-based policy trained with DD-PPO [Wij+19] for 500 million steps on 16 A40 GPUs (further details in Appendix C.3). Given the large training requirements, we only run SCR-FT and directly compare to the results provided in Majumdar et al. [Maj+23]. Results. We evaluate our agent on 4200 episodes in 14 held-out scenes from the Gibson dataset and report the success rate in Table 1b. We find that SCR outperforms MVP and CLIP (ViT-B), and is almost on par with VC-1 (69.5% vs 70.3%), the SOTA visual representation from prior work. We also see that R3M, the best model for few-shot manipulation from Table 1a, performs very poorly (30.6%) in this domain, showing its limited transferability to navigation tasks. 5.4 Open Vocabulary Mobile Manipulation We now shift our focus to evaluating how Stable Diffusion\u2019s web-scale training can enhance policy learning in open-ended domains. We consider the Open Vocabulary Mobile Manipulation (OVMM) benchmark [Yen+23], which requires an agent to find, pick up, and place objects in unfamiliar environments. Figure 4: Sample scenes from the Habitat environments for the ImageNav (left) and OVMM (center) tasks. Instances from training and validation datasets of the OVMM object set are shown on the right.
One of the primary challenges here is locating previously unseen object categories in novel scenes (illustrated in Figure 4 (left)). To manage this complex sparse-reward task, existing solutions [Yen+23] divide the problem into sub-tasks and design modular pipelines that use open-vocabulary object detectors such as Detic [Zho+22]. We study a modified version of the Gaze sub-task (detailed in Appendix C.2), which focuses on locating a specified object category for an abstracted grasping action. The task\u2019s success is measured by the agent\u2019s ability to precisely focus on the target object category. This category is provided as an input to the policy through its CLIP text encoder embedding. The evaluation environments cover both novel instances of object categories seen during policy learning, as well as entirely unseen categories. We compare to VC-1, the best model from Section 5.3, and to CLIP, since prior work has studied it for open-vocabulary navigation [Kha+22; Maj+22]. We also incorporate a baseline that trains a policy with ground truth object masks, evaluated using either the ground truth or Detic-generated masks (labeled as Oracle/Detic). Results. Table 1c shows that SCR matches the performance of CLIP and that SCR-FT surpasses VC-1 by 1.3%, beating CLIP and SCR by 3.2%. Surprisingly, VC-1\u2019s visual representation does better than CLIP\u2019s image encoder representation, given that the downstream policy has to fuse these with the CLIP text embedding of the target object category. Compared to these baselines, we can see the benefit of providing intermediate outputs in the form of text-aligned attention maps to the downstream policy (+1.7%). These word-level cross-attention maps simultaneously improve policy performance and aid explainability, allowing us to diagnose successes and failures. Samples of attention maps overlaid on evaluation episode images can be found in Appendix C. 
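The "+/- " entries reported throughout Table 1 are success rates with standard errors. Assuming per-episode binary outcomes (an assumption; the paper does not spell out the computation), they can be computed as:

```python
import numpy as np

# Sketch of a "success rate +/- standard error" entry as in Table 1, assuming
# per-episode binary outcomes (1 = success). The standard error of the mean
# is the sample standard deviation divided by sqrt(n), reported in percent.
def success_rate_with_sem(outcomes):
    x = np.asarray(outcomes, dtype=float)
    rate = 100.0 * x.mean()
    sem = 100.0 * x.std(ddof=1) / np.sqrt(len(x))
    return rate, sem

rate, sem = success_rate_with_sem([1, 1, 0, 1, 1, 0, 1, 1])  # toy episode log
```

With many evaluation episodes, the standard error shrinks as $1/\sqrt{n}$, which is why entries backed by thousands of episodes carry tight error bars.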
Interestingly, the foundation model representations (CLIP, VC-1, SCR) perform better than Detic. While object detections serve as a category-agnostic output that downstream pick-and-place policies can work with, noisy detections can often lead to degraded downstream performance, as we see in this case. Nonetheless, there is still a sizeable gap to \u2018Oracle\u2019, which benefits from ground truth object masks. 5.5 Fine-Grained Visual Prediction In Sections 5.2 to 5.4, our analysis focused on the performance of various representations across an array of control tasks. We now turn our attention to two downstream tasks involving fine-grained visual prediction. The first task, Referring Expressions Grounding, is detailed within this section, while the second task, Grasp Affordance Prediction, is discussed in Appendix A.1. These tasks have been previously examined by Karamcheti et al. [Kar+23] as proxy measures to evaluate the efficacy of representations for control applications. The Referring Expressions Grounding task requires the identification and bounding box prediction of an object in an image based on its textual description. Similar to Karamcheti et al. [Kar+23], we use the OCID-Ref Dataset [Wan+21] for our experiments. We show a sample image-text pair from the dataset to showcase the complexity of the task in Figure 5. Figure 5: Sample from the OCID-Ref dataset used for the Referring Expressions task (\u201cThe lemon on the rear left of the instant_noodles.\u201d).
Table 2: Referring Expression Grounding (accuracy at an IoU threshold of 0.25 with the label).
Model | Average | Maximum clutter | Medium clutter | Minimum clutter
CLIP | 68.1 | 60.3 | 76.6 | 67.0
R3M | 63.3 | 55.3 | 68.3 | 63.3
Voltron | 92.5 | 96.9 | 91.8 | 90.2
VC-1 | 94.6 | 93.7 | 96.5 | 93.7
SD-VAE | 94.3 | 93.2 | 96.3 | 93.4
SCR | 92.9 | 91.1 | 95.9 | 91.8
SCR-FT | 91.8 | 90.1 | 94.8 | 90.8
The frozen visual representation is concatenated with a text embedding and passed to a 4-layer MLP, which predicts the bounding box coordinates.
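The IoU-thresholded accuracy used for this task can be sketched in a few lines. This is a minimal illustration; the function and variable names are our own and not taken from any released codebase:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def accuracy_at_iou(predictions, labels, threshold=0.25):
    """Fraction of predicted boxes whose IoU with the label clears the threshold."""
    hits = [iou(p, l) >= threshold for p, l in zip(predictions, labels)]
    return sum(hits) / len(hits)

# A perfect prediction counts as a hit; a disjoint one does not.
print(accuracy_at_iou([(0, 0, 2, 2), (0, 0, 1, 1)],
                      [(0, 0, 2, 2), (5, 5, 6, 6)]))  # 0.5
```

A threshold of 0.25 is deliberately loose: it rewards roughly localizing the referred object rather than pixel-perfect box regression.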
We report the bounding box accuracy at a 25% Intersection-over-Union (IoU) threshold across different scene clutter levels for SCR-variants and baselines in Table 2. Results. We see that SCR is tied with Voltron and that VC-1 and SD-VAE perform the best with a 1.5% lead. The better performance of these vision-encoder-only methods highlights that on this task, it is not a challenge for the downstream decoder to learn to associate the visual embeddings with the (CLIP) text encoding of the language specification. Since the training budget is fixed, we observed that some of the runs could potentially improve over extended training. However, we were primarily interested in this task not just to compare downstream visual prediction performance, but to use it as a testbed for exploring the following two questions: (1) Do the performance differences between the representations we evaluated in Sections 5.2 to 5.4 stem from the absence of fine-grained spatial information encoded within the representations? We refute this claim in Section 6.4, where we present the impact of the representations\u2019 spatial aggregation method on prediction performance. (2) Additionally, we explore to what extent language prompting influences the representations from SCR on language-conditioned tasks in Section 6.3. 6 Deconstructing Stable Control Representations In this section, we deconstruct Stable Control Representations to explain which design choices are most determinative of model robustness and downstream performance. 6.1 Layer Selection We begin our investigation by examining how the performance of SCR is influenced by the selection of layers from which we extract feature maps. We previously chose outputs from the mid and downsampling layers of the U-Net (Figure 2), because their aggregate size closely matches the representation sizes from the ViT-based models (VC-1, MVP, and CLIP). Appendix B.2 details the feature map sizes obtained for all the models we study.
Table 3b lists the success rates achieved on the Franka-Kitchen domain when we use different sets of block outputs in SCR. We see that utilizing outputs from multiple layers is instrumental to SCR\u2019s high performance. This finding underscores a broader principle applicable to the design of representations across different models: leveraging a richer set of features from multi-layer outputs should enhance performance on downstream tasks. However, it is important to acknowledge the practical challenges in applying this strategy to ViT-based models. The high dimensionality of each layer\u2019s patch-wise embeddings (16\u00d716\u00d71024 for ViT-L for images of size 224\u00d7224) may complicate the integration of multi-layer outputs. Table 3: We analyze the impact of varying the denoising timestep, layer selection, and input text prompt on the performance of SCR on the Franka-Kitchen benchmark. We report the mean and standard error over 3 random seeds.
(a) Denoising timestep:
Timestep | Success Rate
0 | 49.9 \u00b1 3.4
10 | 48.2 \u00b1 3.1
100 | 42.0 \u00b1 3.7
110 | 42.0 \u00b1 3.4
200 | 35.1 \u00b1 3.2
(b) Layer selection:
Layers | Success Rate
Down[1-3] + Mid | 49.9 \u00b1 3.4
Down[1-3] | 43.0 \u00b1 3.4
Mid | 41.6 \u00b1 3.3
Mid + Up[0] | 42.1 \u00b1 3.6
Mid + Up[0-1] | 48.1 \u00b1 3.6
(c) Input text prompt:
Prompt Type | Success Rate
None | 49.9 \u00b1 3.4
Relevant | 49.2 \u00b1 3.5
Irrelevant | 48.7 \u00b1 3.3
6.2 Sensitivity to the Noising Timestep Next, we characterize the sensitivity of task performance to the denoising step values chosen during representation extraction on the Franka-Kitchen tasks in Table 3a. We see that the performance across nearby timesteps (0 and 10, or 100 and 110) is similar, and that there is a benefit to doing a coarse grid search up to a reasonable noising level (0 vs 100 vs 200) to get the best value for a given task. 6.3 How is Language Guiding the Representations?
Recall that in the OVMM experiments (Section 5.4), we concatenated the target object\u2019s CLIP text embedding to the visual representations before feeding it to the policy. For SCR and SCR-FT, we also provided the category as the text prompt to the U-Net, and additionally extracted the generated cross-attention maps for SCR-FT-ATTN. In this subsection, we seek to more closely understand how the text prompts impact the representations in SCR. We first consider the Franka-Kitchen setup from Section 5.2, which includes manipulation tasks that do not originally come with a language specification. We experiment with providing variations of task-relevant and irrelevant prompts during representation extraction in SCR. Table 3c shows the downstream policy success rates for irrelevant (\u201can elephant in the jungle\u201d) and relevant (\u201ca Franka robot arm opening a microwave door\u201d) prompts, compared to the default setting of not providing a text prompt. We see that providing a prompt does not help with downstream policy performance and may even degrade performance as the prompt gets more irrelevant to the visual context of the input. We now move to the Referring Expressions Grounding task from Section 5.5, which requires grounding language in vision to do bounding box prediction. To study the role of the U-Net in shaping the visual representations guided by the text, we examine different text integration methods to generate SCR representations and compare them to the Voltron baseline in Table 4. We compared the following approaches for providing the task\u2019s text specification to the task decoder (also depicted in Figure 6): (a) No text input: Exclude the text prompt from both SCR and the task decoder by passing an empty prompt to the U-Net and using only the resulting SCR output for the decoder. (b) Prompt only: Pass the text prompt only to the U-Net.
(c) Concat only: Concatenate the CLIP embedding of the text prompt with the visual representation, feeding an empty prompt to the U-Net. (d) Prompt + Concat: Combine \u201cPrompt only\u201d and \u201cConcat only\u201d. (e) Only text encoding: Remove visual representations completely and rely only on CLIP text embeddings. Investigating the results of (a) and (b) in Table 4, it is evident that incorporating the text prompt into the U-Net significantly enhances accuracy compared to ignoring the text altogether. Figure 6: Illustration of different approaches to providing relevant vision-language inputs to a downstream task-decoder.
Table 4: Ablating text input to SCR on the referring expressions task.
Configuration | Score
(a) No text input | 14.8
(b) Prompt only | 82.7
(c) Concat only | 92.2
(d) Prompt + Concat | 92.9
(e) Only text encoding | 37.5
The difference in scores between (b) and (c) indicates that directly providing text embeddings to the decoder improves performance, suggesting that certain crucial aspects of object localization are not fully captured by the representation alone. Comparing (c) to (d), we see that with concatenated text embeddings, further modulation of the visual representations does not provide significant benefits. Finally, the significant decrease in the score for (e) reveals the extent to which the task relies on text-based guesswork. These findings align with both intuition and recent research on controllable generation with diffusion models [Zha+23], which highlights the challenges associated with using long-form text guidance. Ongoing research efforts seek to address these challenges by training models with more detailed image descriptions or by developing approaches to encode and integrate sub-phrases of long texts. 6.4 The Effect of Spatial Aggregation In this study, we refine the approach for extracting representations by integrating a convolutional layer that downsamples the spatial grid of pre-trained representations. This adjustment, referred to as a \u201ccompression layer\u201d by Yadav et al. [Yad+23], aims to reduce the high channel dimension of pre-trained model outputs without losing spatial details, facilitating more effective input processing by downstream task-specific decoders. We explore the effect of spatial aggregation methods by comparing the convolutional downsampling layer method to the multi-headed attention pooling (MAP) used for CLIP embeddings in Karamcheti et al. [Kar+23]. We find that using a compression layer significantly improves performance on the fine-grained visual prediction tasks described in Section 5.5, as reported in Table 5 (columns 3-4).
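The two aggregation styles contrasted above differ in whether the spatial grid survives. A minimal numpy sketch, in which a 1x1 channel projection stands in for the strided convolution of the actual compression layer (all names and sizes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-trained encoder output: a 16x16 patch grid with 1024 channels.
features = rng.standard_normal((16, 16, 1024))

def map_style_pool(feat):
    """MAP/CLS-style aggregation collapses the spatial grid to one vector."""
    return feat.mean(axis=(0, 1))                 # -> (1024,)

def compression_layer(feat, w):
    """Channel-reducing projection that preserves the spatial layout."""
    return np.einsum("hwc,cd->hwd", feat, w)      # -> (16, 16, 64)

w = rng.standard_normal((1024, 64)) * 0.02       # learned weights in practice
pooled = map_style_pool(features)
compressed = compression_layer(features, w)
print(pooled.shape, compressed.shape)  # (1024,) (16, 16, 64)
```

Pooling discards which patch contributed what, whereas the compression output hands the downstream decoder a smaller but still spatially organized feature map.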
This result challenges the conjecture made in prior work that CLIP representations are limited in their ability to provide accurate low-level spatial information [Kar+23] and emphasizes the critical role of appropriate representation aggregation. Table 5: We ablate the spatial aggregation method for VC-1 and CLIP. On the fine-grained visual prediction tasks, we compare the average precision between using multi-head attention pooling (MAP) and the compression layer. On the Meta-World and Franka-Kitchen tasks, we compare the average success rates (\u00b1 one standard error) between the CLS token and compression layer embeddings.
Model | Aggregation Method | Refer. Exp. Grounding | Grasp Affordance Prediction | Meta-World | Franka-Kitchen
VC-1 | MAP/CLS | 93.2 | 24.7 | 88.8 \u00b1 2.2 | 52.0 \u00b1 3.4
VC-1 | Compression | 94.6 | 83.9 | 92.3 \u00b1 2.5 | 47.5 \u00b1 3.4
CLIP | MAP/CLS | 68.1 | 60.3 | 88.8 \u00b1 3.9 | 35.3 \u00b1 3.4
CLIP | Compression | 94.3 | 72.9 | 90.1 \u00b1 3.6 | 36.3 \u00b1 3.2
Building on this result, we assess whether better spatial aggregation can improve the performance of CLIP representations on downstream control tasks. We present these results in Table 5 (columns 5-6) for VC-1 and CLIP on the MuJoCo tasks. We see that the compression layer often outperforms the use of CLS token embeddings (by 1-2%), but CLIP representations still fail to match the best-performing models. This result provides evidence that the underperformance of CLIP representations on control tasks is unlikely due to a lack of sufficiently fine-grained visual information. Finally, we note that the compression layer aggregation technique was used for all baselines in Tables 1b and 1c to ensure a strong baseline comparison. We recommend that future studies adopt this methodology to enable a fairer comparison of representations. 7 Discussion In Section 6, we deconstructed Stable Control Representations and highlighted how techniques used in our approach can be applied to other foundational control models.
Our analysis in Sections 6.1 and 6.4 revealed that using multi-layer features and appropriate spatial aggregation significantly affects performance, and overlooking these factors can lead to misleading conclusions about the capabilities of previously used representations. Next, our investigation into how language shapes diffusion model representations uncovered nuanced results and showed that the influence of text on the representations does not consistently increase downstream utility. This is particularly evident in tasks where text specification is not required and where training and test environments are congruent, minimizing the need for semantic generalization. In contrast, tasks like referring expressions grounding demonstrate the necessity of direct access to text embeddings for accurate object localization, even when the representations are modulated by text to considerable success. For the OVMM task, we identified a scenario where multimodal alignment is essential and proposed a method to explicitly utilize the latent knowledge of the Stable Diffusion model through text-aligned attention maps, which is not straightforward to do for other multimodal models. Future research could design methods to derive precise text-associated attribution maps for other models. Finally, we contrasted the simplicity of fine-tuning diffusion models with the complexity of the contrastive learning objective required to fine-tune CLIP representations. The former only requires image-text or image-only samples for the conditional and unconditional generation objectives, respectively, whereas the latter requires a sophisticated negative label sampling pipeline along with large batch sizes to prevent the model from collapsing to a degenerate solution [Rad+21]. We demonstrated this phenomenon empirically on the Franka-Kitchen environment by fine-tuning CLIP similarly to SCR-FT in Appendix A.2.
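To make the contrast concrete, here is a minimal numpy sketch of a symmetric CLIP-style contrastive (InfoNCE) objective, in which every other pair in the batch serves as a negative. This is an illustrative simplification of the general objective, not the fine-tuning code used in this work:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image-text pairs sit on the diagonal
    of the similarity matrix; all off-diagonal entries act as negatives,
    which is why large batches matter for this objective."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    idx = np.arange(len(logits))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Aligned pairs yield a much lower loss than mismatched ones.
emb = np.eye(4)
aligned = clip_contrastive_loss(emb, emb)
shuffled = clip_contrastive_loss(emb, np.roll(emb, 1, axis=0))
print(aligned < shuffled)  # True
```

The diffusion fine-tuning objective, by contrast, is a per-sample regression (noise or velocity prediction) with no dependence on the rest of the batch as negatives.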
8" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05945v1.json b/abs_9K/test_abstract_short_2405.05945v1.json new file mode 100644 index 0000000000000000000000000000000000000000..08f83195f0c2a0184eea6f5a25bef3548b003f9b --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05945v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05945v1", + "title": "Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers", + "abstract": "Sora unveils the potential of scaling Diffusion Transformer for generating\nphotorealistic images and videos at arbitrary resolutions, aspect ratios, and\ndurations, yet it still lacks sufficient implementation details. In this\ntechnical report, we introduce the Lumina-T2X family - a series of Flow-based\nLarge Diffusion Transformers (Flag-DiT) equipped with zero-initialized\nattention, as a unified framework designed to transform noise into images,\nvideos, multi-view 3D objects, and audio clips conditioned on text\ninstructions. By tokenizing the latent spatial-temporal space and incorporating\nlearnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X\nseamlessly unifies the representations of different modalities across various\nspatial-temporal resolutions. This unified approach enables training within a\nsingle framework for different modalities and allows for flexible generation of\nmultimodal data at any resolution, aspect ratio, and length during inference.\nAdvanced techniques like RoPE, RMSNorm, and flow matching enhance the\nstability, flexibility, and scalability of Flag-DiT, enabling models of\nLumina-T2X to scale up to 7 billion parameters and extend the context window to\n128K tokens. This is particularly beneficial for creating ultra-high-definition\nimages with our Lumina-T2I model and long 720p videos with our Lumina-T2V\nmodel. 
Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT,\nrequires only 35% of the training computational costs of a\n600-million-parameter naive DiT. Our further comprehensive analysis underscores\nLumina-T2X's preliminary capability in resolution extrapolation,\nhigh-resolution editing, generating consistent 3D views, and synthesizing\nvideos with seamless transitions. We expect that the open-sourcing of\nLumina-T2X will further foster creativity, transparency, and diversity in the\ngenerative AI community.", + "authors": "Peng Gao, Le Zhuo, Ziyi Lin, Chris Liu, Junsong Chen, Ruoyi Du, Enze Xie, Xu Luo, Longtian Qiu, Yuhang Zhang, Chen Lin, Rongjie Huang, Shijie Geng, Renrui Zhang, Junlin Xi, Wenqi Shao, Zhengkai Jiang, Tianshuo Yang, Weicai Ye, He Tong, Jingwen He, Yu Qiao, Hongsheng Li", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Sora unveils the potential of scaling Diffusion Transformer for generating\nphotorealistic images and videos at arbitrary resolutions, aspect ratios, and\ndurations, yet it still lacks sufficient implementation details. In this\ntechnical report, we introduce the Lumina-T2X family - a series of Flow-based\nLarge Diffusion Transformers (Flag-DiT) equipped with zero-initialized\nattention, as a unified framework designed to transform noise into images,\nvideos, multi-view 3D objects, and audio clips conditioned on text\ninstructions. By tokenizing the latent spatial-temporal space and incorporating\nlearnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X\nseamlessly unifies the representations of different modalities across various\nspatial-temporal resolutions. 
This unified approach enables training within a\nsingle framework for different modalities and allows for flexible generation of\nmultimodal data at any resolution, aspect ratio, and length during inference.\nAdvanced techniques like RoPE, RMSNorm, and flow matching enhance the\nstability, flexibility, and scalability of Flag-DiT, enabling models of\nLumina-T2X to scale up to 7 billion parameters and extend the context window to\n128K tokens. This is particularly beneficial for creating ultra-high-definition\nimages with our Lumina-T2I model and long 720p videos with our Lumina-T2V\nmodel. Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT,\nrequires only 35% of the training computational costs of a\n600-million-parameter naive DiT. Our further comprehensive analysis underscores\nLumina-T2X's preliminary capability in resolution extrapolation,\nhigh-resolution editing, generating consistent 3D views, and synthesizing\nvideos with seamless transitions. We expect that the open-sourcing of\nLumina-T2X will further foster creativity, transparency, and diversity in the\ngenerative AI community.", + "main_content": "Introduction Recent advancements in foundational diffusion models, such as Sora [108], Stable Diffusion 3 [44], PixArt-\u03b1 [24], and PixArt-\u03a3 [25], have yielded remarkable success in generating photorealistic images and videos. These models demonstrate a paradigm shift from the classic U-Net architecture [61] to a transformer-based architecture [110] for diffusion backbones. Notably, with this improved architecture, Sora and Stable Diffusion 3 can generate samples at arbitrary resolutions and exhibit strong adherence to scaling laws, achieving significantly better results with increased parameter sizes. However, they only provide limited guidance on the design choices of their models and lack detailed implementation instructions and publicly available pre-trained checkpoints, limiting their utility for community usage and replication. 
Moreover, these methods are tailored to specific tasks, such as image or video generation, and are formulated from varying perspectives, which hinders potential cross-modality adaptation. To bridge these gaps, we present Lumina-T2X, a family of Flow-based Large Diffusion Transformers (Flag-DiT) designed to transform noise into images [114, 123], videos [14, 108], multi-views of 3D objects [131, 130], and audio clips [138] based on textual instructions. The largest model within the Lumina-T2X family comprises a Flag-DiT with 7 billion parameters and a multi-modal large language model, SPHINX [46, 85], with 13 billion parameters as the text encoder, capable of handling 128K tokens. Specifically, the foundational text-to-image model, Lumina-T2I, utilizes the flow matching framework [92, 86, 4] and is trained on a meticulously curated dataset of high-resolution photorealistic image-text pairs, achieving remarkably realistic results with only a small fraction of the usual computational resources. As shown in Figure 1, Lumina-T2I can generate high-quality images at arbitrary resolutions and aspect ratios, and further enables advanced functionalities including resolution extrapolation [43, 55], high-resolution editing [57, 18, 78, 129], compositional generation [12, 162], and style-consistent generation [58, 143], all of which are seamlessly integrated into the framework in a training-free manner. In addition, to empower generation capabilities across various modalities, Lumina-T2X is independently trained from scratch on video-text, multiview-text, and speech-text pairs to synthesize videos, multi-view images of 3D objects, and speech from text instructions. For instance, Lumina-T2V, trained with only limited resources and time, can produce 720p videos of any aspect ratio and duration, significantly narrowing the gap between Sora and open-source models.
The core contributions of Lumina-T2X are summarized as follows: Flow-based Large Diffusion Transformers (Flag-DiT): Lumina-T2X utilizes the Flag-DiT architecture, inspired by the core design principles of Large Language Models (LLMs) [145, 146, 19, 117, 122, 141, 166], such as scalable architecture [19, 150, 56, 163, 136, 36] and context window extension [112, 136, 30, 3] for increasing parameter size and sequence length. The modifications over the original DiT, including RoPE [136], RMSNorm [163], and KQ-Norm [56], significantly enhance training stability and model scalability, supporting up to 7 billion parameters and sequences of 128K tokens. Moreover, Flag-DiT improves upon the original DiT by adopting the flow matching formulation [98, 86], which builds continuous-time diffusion paths via linear interpolation between noise and data. We have thoroughly ablated these architecture improvements on label-conditioned generation on ImageNet [38], demonstrating faster training convergence, stable training dynamics, and a simplified training and inference pipeline. Any Modality, Resolution, and Duration within One Framework: Lumina-T2X tokenizes images, videos, multi-views of 3D objects, and spectrograms into one-dimensional sequences, similar to the way LLMs [117, 26, 19, 116] process natural language. By incorporating learnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X can seamlessly encode any modality, regardless of resolution, aspect ratio, or even temporal duration, into a unified 1-D token sequence. The model then utilizes Flag-DiT with text conditioning to progressively transform noise into clean data across all modalities, resolutions, and durations by explicitly specifying the positions of [nextline] and [nextframe] tokens during inference. Remarkably, this flexibility even allows for resolution extrapolation, enabling the generation of resolutions surpassing those encountered during training.
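The placeholder-based flattening described above can be sketched in a few lines. The helper below is our own illustrative construction, using strings as stand-ins for latent patch tokens:

```python
NEXTLINE, NEXTFRAME = "[nextline]", "[nextframe]"

def flatten_latents(frames):
    """Flatten a (frames x rows x cols) latent grid into a 1-D token
    sequence, inserting [nextline]/[nextframe] placeholders so that the
    original resolution, aspect ratio, and duration stay recoverable."""
    seq = []
    for frame in frames:
        for row in frame:
            seq.extend(row)
            seq.append(NEXTLINE)
        seq.append(NEXTFRAME)
    return seq

# A toy "video": 2 frames of 2x3 latent patches each.
video = [[["p00", "p01", "p02"], ["p10", "p11", "p12"]]] * 2
tokens = flatten_latents(video)
print(len(tokens))  # 18
```

At inference time, choosing where to place the [nextline] and [nextframe] tokens fixes the output resolution and duration, which is what makes generating unseen shapes from the same 1-D model possible.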
For instance, Lumina-T2I trained at a resolution of 1024 \u00d7 1024 pixels can generate images ranging from 768 \u00d7 768 to 1792 \u00d7 1792 pixels by simply adding more [nextline] tokens, which significantly broadens the potential applications of Lumina-T2X. Table 1: We compare the training setups of Lumina-T2I with PixArt-\u03b1. Lumina-T2I is trained purely on 14 million high-quality (HQ) image-text pairs, whereas PixArt-\u03b1 benefits from an additional 11 million high-quality natural image-text pairs. Remarkably, despite having 8.3 times more parameters, Lumina-T2I only incurs 35% of the computational costs compared to PixArt-\u03b1-0.6B.
PixArt-\u03b1-0.6B with T5-3B:
Res. | #images | Batch Size | Learning Rate | GPU days (A100)
256 | 1M ImageNet | 1024 | 2\u00d710^-5 | 44
256 | 10M SAM | 11392 | 2\u00d710^-5 | 336
256 | 14M HQ | 11392 | 2\u00d710^-5 | 208
512 | 14M HQ | 2560 | 2\u00d710^-5 | 160
1024 | 14M HQ | 384 | 2\u00d710^-5 | 80
Lumina-T2I-5B with LLaMa-7B:
Res. | #images | Batch Size | Learning Rate | GPU days (A100)
256 | 14M HQ | 512 | 1\u00d710^-4 | 96
512 | 14M HQ | 256 | 1\u00d710^-4 | 96
1024 | 14M HQ | 128 | 1\u00d710^-4 | 96
Low Training Resources: Our empirical observations indicate that employing larger models, high-resolution images, and longer-duration video clips can significantly accelerate the convergence speed of diffusion transformers. Although increasing the token length prolongs the time of each iteration due to the quadratic complexity of transformers, it substantially reduces the overall training time before convergence by lowering the required number of iterations. Moreover, by utilizing meticulously curated text-image and text-video pairs featuring high aesthetic quality frames and detailed captions [13, 24, 25], our Lumina-T2X model is able to generate high-resolution images and coherent videos with minimal computational demands.
It is worth noting that the default Lumina-T2I configuration, equipped with a 5-billion-parameter Flag-DiT and a 7-billion-parameter LLaMA [145, 146] as its text encoder, requires only 35% of the computational resources of PixArt-\u03b1, which builds upon a 600-million-parameter DiT backbone and a 3-billion-parameter T5 [120] as its text encoder. A detailed comparison of the computational resources of the default Lumina-T2I and PixArt-\u03b1 is provided in Table 1. In this technical report, we first introduce the architecture of Flag-DiT and its overall pipeline. We then introduce the Lumina-T2X system, which applies Flag-DiT to various modalities. Additionally, we discuss advanced inference techniques that unlock the full potential of the pretrained Lumina-T2I. Finally, we showcase the results from models in the Lumina-T2X family, accompanied by in-depth analyses. To support future research in the generative AI community, all training and inference code and pre-trained models of Lumina-T2X will be released. 2 Method In this section, we revisit preliminary research that lays the foundation for Lumina-T2X. Building on these insights, we introduce the core architecture, Flag-DiT, along with the overall pipeline. Next, we delve into diverse configurations and discuss the application of Lumina-T2X across various modalities, including images, videos, multi-view 3D objects, and speech. The discussion then extends to the advanced applications of the pretrained Lumina-T2I on resolution extrapolation, style-consistent generation, high-resolution editing, and compositional generation. 2.1 Revisiting RoPE, DiT, SiT, PixArt-\u03b1 and Sora Before introducing Lumina-T2X, we first revisit several milestone studies on leveraging diffusion transformers for text-to-image and text-to-video generation, as well as seminal research on large language models (LLMs). Rotary Position Embedding (RoPE) RoPE [136] is a type of position embedding that can encode relative positions within self-attention operations.
It can be regarded as a multiplicative bias based on position: given a sequence of query/key vectors, the m-th query and the n-th key after RoPE can be expressed as $\tilde{q}_m = f(q_m, m) = q_m e^{im\Theta}, \quad \tilde{k}_n = f(k_n, n) = k_n e^{in\Theta}$, (1) where $\Theta$ is the frequency matrix. Equipped with RoPE, the calculation of attention scores can be considered as taking the real part of the standard Hermitian inner product: $\mathrm{Re}[f(q_m, m) f^{*}(k_n, n)] = \mathrm{Re}[q_m k_n^{*} e^{i\Theta(m-n)}]$. (2) In this way, the relative position $m - n$ between the m-th and n-th tokens can be explicitly encoded. Compared to absolute positional encoding, RoPE offers translational invariance, which can enhance the context window extrapolation potential of LLMs. Many subsequent techniques further explore and unlock this potential, e.g., position interpolation [30], NTK-aware scaled RoPE [3], YaRN [112], etc. In this work, Flag-DiT applies RoPE to the keys and queries of the diffusion transformer. Notably, this simple technique endows Lumina-T2X with superior resolution extrapolation potential (i.e., generating images at out-of-domain resolutions unseen during training), as demonstrated in Section 3.2, compared to its competitors. DiT, Scalable Interpolant Transformer (SiT) and Flow Matching U-Net has been the de-facto diffusion backbone in previous Denoising Diffusion Probabilistic Models (DDPM) [61]. DiT [110] explores using transformers trained on latent patches as an alternative to U-Net, achieving state-of-the-art FID scores on class-conditional ImageNet benchmarks and demonstrating superior scaling potential in terms of training and inference FLOPs. Furthermore, SiT [98] utilizes the stochastic interpolant framework (or flow matching) to connect different distributions in a more flexible manner than DDPM.
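The relative-position property in Eq. (2) can be verified numerically. The sketch below is a minimal single-vector RoPE implemented via complex rotation; the names and dimensions are illustrative:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding to a vector of even dimension d:
    consecutive pairs (x[2i], x[2i+1]) are rotated by angle pos * theta_i."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)    # per-pair frequencies (Theta)
    z = x[0::2] + 1j * x[1::2]                   # pair components into complex
    z = z * np.exp(1j * pos * theta)             # rotate by the position angle
    out = np.empty_like(x)
    out[0::2], out[1::2] = z.real, z.imag
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)

# The attention score depends only on the relative offset m - n:
s1 = rope(q, 5) @ rope(k, 3)     # m=5,  n=3  -> offset 2
s2 = rope(q, 12) @ rope(k, 10)   # m=12, n=10 -> offset 2
print(np.isclose(s1, s2))  # True
```

Because only the offset enters the score, positions unseen during training still produce well-formed attention patterns, which is the mechanism behind the extrapolation behavior discussed here.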
Extensive ablation studies by SiT reveal that linearly connecting two distributions, predicting velocity fields, and employing a stochastic solver can enhance sample quality with the same DiT architecture. However, both DiT and SiT are limited in model size, up to 600 million parameters, and suffer from training instability when scaling up. Therefore, we borrow design choices from LLMs and validate that simple modifications allow training a 7-billion-parameter diffusion transformer in mixed precision. PixArt-\u03b1 and -\u03a3 DiT explores the potential of transformers for label-conditioned generation. Built on DiT, PixArt-\u03b1 [24] unleashes this potential for generating images based on arbitrary textual instructions. PixArt-\u03b1 significantly reduces training costs compared with SDXL [114] and Raphael [159], while maintaining high sample quality. This is achieved through multi-stage progressive training, efficient text-to-image conditioning with DiT, and the use of carefully curated high-aesthetic datasets. PixArt-\u03a3 extends this approach by increasing the image generation resolution to 4K, facilitated by the collection of 4K training image-text pairs. Lumina-T2I is highly motivated by PixArt-\u03b1 and -\u03a3, yet it incorporates several key differences. Firstly, Lumina-T2I utilizes Flag-DiT with 5B parameters as the backbone, which is 8.3 times larger than the 0.6B-parameter backbone used by PixArt-\u03b1 and -\u03a3. According to studies on class-conditional ImageNet generation in Section 3.1, larger diffusion models tend to converge much faster than their smaller counterparts and excel at capturing details in high-resolution images. Secondly, unlike PixArt-\u03b1 and -\u03a3, which were pretrained on ImageNet [38] and SAM-HD [80] images, Lumina-T2I is trained directly on high-aesthetic synthetic datasets without interference from the domain gap between images from different domains.
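The linear interpolant and velocity field ablated by SiT reduce to a few lines. In this hedged sketch, the ground-truth velocity of the straight path stands in for the learned network:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)   # noise sample
x1 = rng.standard_normal(4)   # data sample

def interpolant(x0, x1, t):
    """Linear path between noise and data: x_t = (1 - t) * x0 + t * x1."""
    return (1.0 - t) * x0 + t * x1

def velocity(x0, x1):
    """Target velocity field along the linear path: dx_t/dt = x1 - x0."""
    return x1 - x0

# Euler integration of the (here: exact) velocity recovers the data endpoint.
x, steps = x0.copy(), 10
for _ in range(steps):
    x = x + velocity(x0, x1) / steps
print(np.allclose(x, x1))  # True
```

In training, a network is regressed onto this velocity target at sampled t; at inference, integrating the learned field transports noise to data along the same straight path.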
Thirdly, while PixArt-α and -Σ excel at generating images at the same resolution as their training stages, Lumina-T2I, through the introduction of RoPE and the [nextline] token, possesses a resolution extrapolation capability, enabling the generation of images at lower or higher resolutions unseen during training, which offers a significant advantage in generating and transferring images across various scales.

Sora  Sora [108] demonstrates remarkable improvements in text-to-video generation, creating 1-minute videos with realistic or imaginative scenes spanning different durations, resolutions, and aspect ratios. In comparison, Lumina-T2V can also generate 720p videos at arbitrary aspect ratios. Although there still exists a noticeable gap in video length and quality between Lumina-T2V and Sora, video samples from Lumina-T2V exhibit considerable improvements over open-source models on scene transitions and alignment with complex text instructions. We have released all code of Lumina-T2V and believe that training with more computational resources, a carefully designed spatial-temporal video encoder, and meticulously curated video-text pairs will further elevate video quality.

2.2 Architecture of Flag-DiT

Flag-DiT serves as the backbone of the Lumina-T2X framework. We will introduce the architecture of Flag-DiT and present the stability, flexibility, and scalability of our framework.
Figure 2: A comparison of Flag-DiT with label and text conditioning. (Left) Flag-DiT with label conditioning. (Right) Text conditioning with a zero-initialized attention mechanism.

Flow-based Large Diffusion Transformers (Flag-DiT)  DiT is rising to be a popular generative modeling approach with great scaling potential. It operates over latent patches extracted from a pretrained VAE [79, 14], then utilizes a transformer [150, 111] as the denoising backbone to predict the mean and variance, following the DDPM formulation [134, 135, 61, 105], from different levels of noised latent patches conditioned on time steps and class labels. However, the largest parameter size of DiT is limited to 600M, which is far less than LLMs (e.g., PaLM-540B [35, 7], Grok-1-300B, LLaMa3-400B [145, 146]). Besides, DiT requires full-precision training, which doubles the GPU memory cost and training time compared with mixed-precision training [99]. Last, the design choices of DiT lack the flexibility to generate an arbitrary number of images (i.e., videos or multi-view images) with various resolutions and aspect ratios using the fixed DDPM formulation. To remedy these problems, Flag-DiT keeps the overall framework of DiT unchanged while introducing the following modifications to improve scalability, stability, and flexibility.

➀ Stability  Flag-DiT builds on top of DiT [111] and incorporates modifications from ViT-22B [36] and LLaMa [145, 146] to improve training stability. Specifically, Flag-DiT substitutes all LayerNorm [9] with RMSNorm [163]. Moreover, it incorporates key-query normalization (KQ-Norm) [36, 56, 96] before the key-query dot-product attention computation.
The introduction of KQ-Norm aims to prevent loss divergence by eliminating extremely large values within attention logits [36]. Such simple modifications prevent divergent loss under mixed-precision training and facilitate optimization with a substantially higher learning rate. The detailed computational flow of Flag-DiT is shown in Figure 2.

➁ Flexibility  DiT only supports fixed-resolution generation of a single image with simple label conditions and a fixed DDPM formulation. To tackle these issues, we first examine why DiT lacks the flexibility to generate samples at arbitrary resolutions and scales. We find that this stems from the design choice that DiT leverages absolute positional embedding (APE) [42, 144] and adds it to latent tokens in the first layer, following vision transformers. However, APE, designed for vision recognition tasks, struggles to generalize to unseen resolutions and scales beyond training. Motivated by recent LLMs exhibiting strong context extrapolation capabilities [112, 136, 30, 3], we replace APE with RoPE [136], which injects relative position information in a layerwise manner following Equations 1 and 2. Since the original DiT can only handle a single image at a fixed size, we further introduce learnable special tokens, including the [nextline] and [nextframe] tokens, to transform training samples with different scales and durations into a unified one-dimensional sequence. Besides, we add [PAD] tokens to bring all 1-D sequences to the same length for better parallelism. These are the key modifications that significantly improve training and inference flexibility, supporting the training and generation of samples with arbitrary modality, resolution, aspect ratio, and duration, and leading to the final design of Lumina-T2X. Next, we switch from the DDPM setting in DiT to the flow matching formulation [98, 92, 86], offering another source of flexibility to Flag-DiT.
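Before turning to the flow-matching formulation, the token-sequence construction described above can be sketched in a few lines. This is a minimal illustration; the string token names and toy patch grids below are placeholders for the learned embeddings and VAE latents used by the real model:

```python
NEXTLINE, NEXTFRAME, PAD = "[nextline]", "[nextframe]", "[PAD]"

def flatten_latent(frames):
    """Flatten a list of frames (each a list of rows of patch tokens) into a
    1-D sequence, inserting [nextline] after each row and [nextframe] after
    each frame."""
    seq = []
    for frame in frames:
        for row in frame:
            seq.extend(row)
            seq.append(NEXTLINE)
        seq.append(NEXTFRAME)
    return seq

def pad_batch(seqs):
    # Right-pad every sequence to the longest length for better parallelism.
    longest = max(len(s) for s in seqs)
    return [s + [PAD] * (longest - len(s)) for s in seqs]

# A 1-frame 2x2 image and a 2-frame 1x2 video become same-length sequences.
image = [[["i00", "i01"], ["i10", "i11"]]]
video = [[["v00", "v01"]], [["v10", "v11"]]]
batch = pad_batch([flatten_latent(image), flatten_latent(video)])
assert len(batch[0]) == len(batch[1]) == 8
```

With the special tokens marking row and frame boundaries, the same 1-D transformer can consume images, videos, or multi-view sets without any shape-specific branches.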
It is well known that the schedule defining how data is corrupted to noise has a great impact on both the training and sampling of standard diffusion models. Thus, plenty of diffusion schedules have been carefully designed and used, including VE [135], VP [61], and EDM [77]. In contrast, flow matching [86, 5] emerges as a simple alternative that linearly interpolates between noise and data in a straight line. More specifically, given data $x \sim p(x)$ and Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, we define an interpolation-based forward process

$$x_t = \alpha_t x + \beta_t \epsilon, \tag{3}$$

where $\alpha_0 = 0$, $\beta_0 = 1$, $\alpha_1 = 1$, and $\beta_1 = 0$, so that the interpolation on $t \in [0, 1]$ runs from $x_0 = \epsilon$ to $x_1 = x$. Like the diffusion schedule, this interpolation schedule offers a flexible choice of $\alpha_t$ and $\beta_t$. For example, we can recover the original diffusion schedules, such as $\alpha_t = \sin(\frac{\pi}{2} t)$, $\beta_t = \cos(\frac{\pi}{2} t)$ for the VP cosine schedule. In our framework, we adopt the linear interpolation schedule between noise and data for its simplicity, i.e.,

$$x_t = t x + (1 - t) \epsilon. \tag{4}$$

This formulation indicates a uniform transformation with constant velocity between data and noise. The corresponding time-dependent velocity field is given by

$$v_t(x_t) = \dot{\alpha}_t x + \dot{\beta}_t \epsilon \tag{5}$$
$$= x - \epsilon, \tag{6}$$

where $\dot{\alpha}$ and $\dot{\beta}$ denote the time derivatives of $\alpha$ and $\beta$. This time-dependent velocity field $v : [0, 1] \times \mathbb{R}^d \to \mathbb{R}^d$ defines an ordinary differential equation named the Flow ODE:

$$dx = v_t(x_t) \, dt. \tag{7}$$

We use $\phi_t(x)$ to represent the solution of the Flow ODE with the initial condition $\phi_0(x) = x$. By solving this Flow ODE from $t = 0$ to $t = 1$, we can transform noise into a data sample using the approximated velocity field $v_\theta(x_t, t)$.
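Equations (4)-(7) can be checked numerically. The sketch below (NumPy, toy 1-D data) verifies that, with the exact constant velocity x − ϵ of the linear schedule, Euler integration of the Flow ODE from t = 0 to t = 1 transports noise back to the data sample:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)    # a "data" sample
eps = rng.standard_normal(8)  # Gaussian noise

def interpolate(t):
    # Eq. (4): x_t = t * x + (1 - t) * eps
    return t * x + (1 - t) * eps

# Eq. (6): for the linear schedule the target velocity is constant: x - eps.
v = x - eps

# Eq. (7): Euler integration of dx = v_t(x_t) dt from t = 0 (pure noise)
# to t = 1 recovers the data sample, since the velocity is constant.
xt, n_steps = eps.copy(), 4
for _ in range(n_steps):
    xt = xt + v / n_steps
assert np.allclose(xt, x)
assert np.allclose(interpolate(0.0), eps) and np.allclose(interpolate(1.0), x)
```

In training, of course, the constant velocity is unknown and is regressed by the network $v_\theta$; the straight-line path is what makes even coarse ODE solvers accurate at sampling time.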
During training, the flow matching objective directly regresses the target velocity:

$$\mathcal{L}_v = \int_0^1 \mathbb{E}\left[\left\| v_\theta(x_t, t) - \dot{\alpha}_t x - \dot{\beta}_t \epsilon \right\|^2\right] dt, \tag{8}$$

which is named the Conditional Flow Matching loss [86] and shares similarity with the noise-prediction or score-prediction losses in diffusion models. Besides simple label conditioning for class-conditioned generation, Flag-DiT can flexibly support arbitrary text instructions with zero-initialized attention [165, 45, 164, 10]. As shown in Figure 2 (Right), Flag-DiT-T2I, a variant of Flag-DiT, leverages the queries of latent image tokens to aggregate information from the keys and values of text embeddings. We then propose a zero-initialized gating mechanism to gradually inject conditional information into the token sequences. The final attention output is formulated as

$$A = \mathrm{softmax}\!\left(\frac{\tilde{I}_q \tilde{I}_k^T}{\sqrt{d_k}}\right) I_v + \tanh(\alpha) \, \mathrm{softmax}\!\left(\frac{\tilde{I}_q T_k^T}{\sqrt{d_k}}\right) T_v, \tag{9}$$
Figure 3: Our Lumina-T2X framework consists of four components: frame-wise encoding, input & target construction, text encoding, and prediction based on Flag-DiT.

where $\tilde{I}_q$ and $\tilde{I}_k$ stand for applying RoPE, defined in Equation 1, to the image queries and keys, $d_k$ is the dimension of queries and keys, and $\alpha$ is the zero-initialized learnable parameter in the gated cross-attention. In our experiments, we discovered that zero-initialized attention induces sparsity gating, which can turn off 90% of the text-embedding conditions across layers and heads. This indicates the potential for designing more efficient T2I models in the future. Equipped with the above improvements, our Flag-DiT supports arbitrary-resolution generation of multiple images with arbitrary conditioning under a unified flow matching paradigm.

➂ Scalability  After addressing the training instability of DiT and adding the flexibility to support arbitrary resolutions conditioned on text instructions, we empirically scale up Flag-DiT with more parameters and more training samples. Specifically, we explore scaling the parameter size from 600M to 7B on the label-conditioned ImageNet generation benchmark.
The detailed configurations of Flag-DiT at different parameter sizes are discussed in Appendix B. Flag-DiT can be stably trained under mixed-precision settings and achieves fast convergence compared with vanilla DiT, as shown in the experiment section. After verifying the scalability of Flag-DiT, we scale up the token length to 4K and expand the dataset from the label-conditioned 1M ImageNet to more challenging 17M high-resolution image-text pairs. We further verified that Flag-DiT can support the generation of long videos of up to 128 frames, equivalent to 128K tokens. As Flag-DiT is a pure transformer-based architecture, it can borrow the well-validated parallel strategies [132, 121, 169, 88, 87, 89, 72] designed for LLMs, including FSDP [169] and sequence parallelism [88, 87, 89, 72], to support larger parameter scales and longer sequences. Therefore, we conclude that Flag-DiT is a scalable generative model with respect to model parameters, sequence length, and dataset size.

2.3 The Overall Pipeline of Lumina-T2X

As illustrated in Figure 3, the pipeline of Lumina-T2X consists of four main components during training, which are described below.

Frame-wise Encoding of Different Modalities  The key ingredient for unifying different modalities within our framework is treating images, videos, multi-view images, and speech spectrograms as frame sequences of length T. We can then utilize modality-specific encoders to transform these inputs into latent frames of shape [H, W, T, C]. Specifically, for images (T = 1), videos (T = numframes), and multi-view images (T = numviews), we use the SD 1.5 VAE to independently encode each frame into latent space and concatenate all latent frames together, while we leave speech spectrograms unchanged via an identity mapping. This establishes a universal data representation covering diverse modalities that Flag-DiT can model effectively.
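The frame-wise encoding step can be sketched as follows. Here `toy_encoder` is only a stand-in for the frozen SD 1.5 VAE (which maps 8×8 pixel blocks to 4-channel latents); the point is just how different modalities reduce to a shared [H, W, T, C] layout:

```python
import numpy as np

def encode_framewise(frames, encode_frame):
    # Encode each frame independently, then stack along a new T axis,
    # yielding a latent of shape [H, W, T, C].
    latents = [encode_frame(f) for f in frames]  # each [H, W, C]
    return np.stack(latents, axis=2)             # -> [H, W, T, C]

def toy_encoder(frame):
    # Stand-in for the VAE: downsample 8x spatially, emit 4 latent channels.
    h, w, _ = frame.shape
    return np.zeros((h // 8, w // 8, 4))

image = [np.zeros((256, 256, 3))]                     # T = 1
video = [np.zeros((256, 256, 3)) for _ in range(16)]  # T = 16

assert encode_framewise(image, toy_encoder).shape == (32, 32, 1, 4)
assert encode_framewise(video, toy_encoder).shape == (32, 32, 16, 4)
```

A spectrogram would skip the encoder entirely (identity mapping) but still be arranged into the same four-axis layout before patchification.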
Text Encoding with Diverse Text Encoders  For text-conditional generation, we encode the text prompts using pre-trained language models. Specifically, we incorporate a variety of text encoders of varying sizes, including CLIP, LLaMA, SPHINX, and Phone encoders, tailored for various needs and modalities, to optimize text conditioning. We provide a series of Lumina-T2X models trained with the different text encoders mentioned above in our model zoo, as shown in Figure 17.

Input & Target Construction  As described in Section 2.2, latent frames are first flattened using 2 × 2 patches into a 1-D sequence, with [nextline] and [nextframe] tokens added as identifiers. Lumina-T2X adopts the linear interpolation schedule in flow matching to construct the input and target following Equations 4 and 6, for its simplicity and flexibility. Inspired by the observation that intermediate timesteps are critical for both diffusion models [77] and flow-based models [44], we adopt a time resampling strategy that samples timesteps from a log-normal distribution during training. Specifically, we first sample a timestep from a normal distribution N(0, 1) and map it to [0, 1] using the logistic function in order to emphasize the learning of intermediate timesteps.

Network Architecture & Loss  We use Flag-DiT as our denoising backbone. The detailed architecture of each Flag-DiT block is depicted in Figure 2. Given the noisy input, the Flag-DiT blocks inject the diffusion timestep, added with the global text embedding, via the modulation mechanism, and further integrate text conditioning via zero-initialized attention using Equation 9, as described in Section 2.2. We add RMSNorm at the start of each attention and MLP block to prevent the absolute values from growing uncontrollably and causing numerical instability. Finally, we compute the regression loss between the predicted velocity $\hat{v}_\theta$ and the ground-truth velocity $\dot{\alpha}_t x + \dot{\beta}_t \epsilon$ using the Conditional Flow Matching loss in Equation 8.
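The time resampling strategy described above is small enough to state exactly: draw z ~ N(0, 1) and squash it with the logistic function, so the sampled timesteps concentrate around t = 0.5. A minimal NumPy sketch:

```python
import numpy as np

def sample_timesteps(n, rng):
    # Draw from N(0, 1), then squash to (0, 1) with the logistic function;
    # the resulting density peaks around t = 0.5 (the intermediate timesteps).
    z = rng.standard_normal(n)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
t = sample_timesteps(100_000, rng)
assert t.min() > 0.0 and t.max() < 1.0
# More mass near the middle than uniform sampling would give:
mid_fraction = np.mean((t > 0.25) & (t < 0.75))
assert mid_fraction > 0.6  # uniform sampling would give exactly 0.5
```

Compared with uniform sampling, roughly 73% of samples land in the central half of the schedule, which matches the stated goal of emphasizing the harder intermediate timesteps.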
2.4 Lumina-T2X System

In this section, we introduce the family of Lumina-T2X models, including Lumina-T2I, Lumina-T2V, Lumina-T2MV, and Lumina-T2Speech. For each modality, Lumina-T2X is independently trained with diverse configurations optimized for varying scenarios, such as different text encoders, VAE latent spaces, and parameter sizes. The detailed configurations are provided in Appendix B. Lumina-T2I is the key component of our Lumina-T2X system, where we utilize the T2I task as a testbed for validating the effectiveness of each component, as discussed in Section 3.2. Notably, our most advanced Lumina-T2I model, with a 5B Flag-DiT, a 7B LLaMa text encoder, and the SDXL latent space, demonstrates superior visual quality and accurate text-to-image alignment. We can then extend the explored architecture, hyper-parameters, and other training details to video, multi-view, and speech generation. Since videos and multi-views of 3D objects usually contain up to 1 million tokens, Lumina-T2V and Lumina-T2MV adopt a 2B Flag-DiT, a CLIP-L/G text encoder, and the SD-1.5 latent space. Although this configuration slightly reduces visual quality, it provides an effective balance for processing long sequences and a joint latent space for images and videos. Motivated by previous approaches [62, 24], Lumina-T2I, Lumina-T2V, and Lumina-T2MV employ a multi-stage training approach, starting from low-resolution, short-duration data and ending with high-resolution, long-duration data. Such a progressive training strategy significantly improves the convergence speed of Lumina-T2X. For Lumina-T2Speech, since the feature space of the spectrogram shows a completely different distribution than images, we directly tokenize the spectrogram without using a VAE encoder and train a randomly initialized Flag-DiT conditioned on a phoneme encoder for T2Speech generation.
2.5 Advanced Applications of Lumina-T2I

Beyond its basic text-to-image generation capabilities, Lumina-T2I supports more complex visual creations and produces innovative visual effects as a foundational model. This includes resolution extrapolation, style-consistent generation, high-resolution image editing, and compositional generation, all in a tuning-free manner. Unlike previous methods that solve these tasks with varied approaches, Lumina-T2I can uniformly tackle these problems through token operations, as illustrated in Figure 4.

Figure 4: Lumina-T2I supports text-to-image generation, resolution extrapolation, style-consistent generation, compositional generation, and high-resolution editing in a unified and training-free framework.

Tuning-Free Resolution Extrapolation  Due to the exponential growth in computational demand and data scarcity, existing T2I models are generally limited to 1K resolution. Thus, there is significant demand for low-cost high-resolution extrapolation approaches [55, 43, 33]. The translational invariance of RoPE enhances Lumina-T2X's potential for resolution extrapolation, allowing it to generate images at out-of-domain resolutions.
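This translational invariance is easy to verify numerically. Treating each feature pair as a complex number per Equations (1) and (2), the attention score between rotated queries and keys depends only on the offset m − n, which is why positions beyond the training range still yield meaningful relative offsets. A minimal NumPy sketch with a toy frequency vector:

```python
import numpy as np

def rope(x, pos, theta):
    # Eq. (1): rotate each complex feature pair of x by angle pos * theta_i.
    return x * np.exp(1j * pos * theta)

rng = np.random.default_rng(0)
d = 4  # number of complex feature pairs
theta = 10000.0 ** (-np.arange(d) / d)  # toy frequency vector

q = rng.standard_normal(d) + 1j * rng.standard_normal(d)
k = rng.standard_normal(d) + 1j * rng.standard_normal(d)

def score(m, n):
    # Eq. (2): real part of the Hermitian inner product of rotated q and k.
    return float(np.real(np.sum(rope(q, m, theta) * np.conj(rope(k, n, theta)))))

# The score depends only on the relative offset m - n.
assert np.isclose(score(5, 3), score(12, 10))
# With zero offset, RoPE leaves the score unchanged.
assert np.isclose(score(7, 7), float(np.real(np.sum(q * np.conj(k)))))
```

An absolute positional embedding, by contrast, assigns each unseen position an untrained vector, which is the failure mode the APE-to-RoPE swap avoids.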
Inspired by practices in prior work, we adopt three techniques that help unleash Lumina-T2X's potential for test-time resolution extrapolation: (1) NTK-aware scaled RoPE [3], which rescales the rotary base of RoPE to achieve gradual position interpolation of the low-frequency components; (2) Time Shifting [44], which reschedules the timesteps to ensure consistent SNR across the denoising processes of different resolutions; and (3) Proportional Attention [75], which rescales the attention scores to ensure stable attention entropy across various sequence lengths. Visualizations of resolution extrapolation can be found in Figure 7, and details about the aforementioned techniques in our implementation can be found in Appendix A.1. In addition to generating images at larger sizes, we observe that such resolution extrapolation can even improve the quality of the generated images, serving as a free lunch (see Section 3.2).

Style-Consistent Generation  The transformer-based diffusion architecture makes Lumina-T2I naturally suitable for self-attention manipulation applications such as style-consistent generation. A representative approach is shared attention [58], which enables generating style-aligned batches without any model-specific tuning. Specifically, it uses the first image in a batch as the anchor/reference image, allowing the queries from the other images in the batch to access the keys and values of the first image during the self-attention operation. This kind of information leakage effectively promotes a consistent style across the images in a batch. Typically, this is achieved by concatenating the keys and values of the first image with those of the other images before self-attention. However, in diffusion transformers, it is important to note that keys from two images contain duplicated positional embeddings, which can disrupt the model's awareness of spatial structures.
Therefore, we need to ensure that key/value sharing occurs before RoPE, which can be regarded as appending a reference image sequence to the target image sequence.

Compositional Generation  Compositional, or multi-concept, text-to-image generation [74, 12, 162], which requires the model to generate multiple subjects in different regions of a single image, is seamlessly supported by our transformer-based framework. Users can define N different prompts and N bounding boxes as masks for the corresponding prompts. Our key insight is to restrict the cross-attention operation of each prompt to the corresponding region during sampling. More specifically, at each timestep, we crop the noisy data $x_t$ using each mask and reshape the resulting sub-regions into a sub-region batch $\{x_t^1, x_t^2, \ldots, x_t^N\}$ corresponding to the prompt batch $\{y^1, y^2, \ldots, y^N\}$. We then compute cross-attention between this sub-region batch and the prompt batch and map the outputs back onto the complete data sample. We apply this operation only to cross-attention layers to ensure that text information is injected into the different regions, while keeping the self-attention layers unchanged so that the final image remains coherent and harmonious. We additionally set the global text condition to the embedding of the complete prompt, i.e., the concatenation of all prompts, to enhance global coherence.

High-Resolution Editing  Beyond high-resolution generation, Lumina-T2I can also perform image editing [57, 18], especially for high-resolution images. Considering the distinct features of different editing types, we first classify image editing into two major categories, namely style editing and subject editing.
For style editing, we aim to change or enhance the overall visual style, such as color, environment, and texture, without modifying the main object of the image, while subject editing aims to modify the content of the main object, such as addition, replacement, and removal, without affecting the overall visual style. We then leverage a simple yet effective method to achieve such image editing within the Lumina-T2I framework. Specifically, given an input image, we first encode it into latent space using the VAE encoder and interpolate the image latent with noise to obtain the intermediate noisy latent at time λ. Then, we solve the Flow ODE from λ to 1.0 with the desired editing prompts as text conditions. Due to the powerful generation capability of our model, it can faithfully perform the intended edit while preserving the original details in high resolution. However, in style editing, we find that the mean and variance of the latent are highly correlated with image styles. Therefore, the above method still suffers from style leakage, since the interpolated noisy data retains the style of the original image in its mean and variance. To eliminate the influence of the original image's style, we perform channel-wise normalization on input images, transforming them to zero mean and unit variance.

3 Experiments

3.1 Validating Flag-DiT on ImageNet

Training Setups  We perform experiments on label-conditioned 256×256 and 512×512 ImageNet [38] generation to validate the advantages of Flag-DiT over DiT [111]. Large-DiT is a specialized version of Flag-DiT that retains the DDPM formulation [61, 105] to enable a fair comparison with the original DiT. We exactly follow the setups of DiT but with the following modifications: mixed-precision training, a larger learning rate, and a suite of architecture modifications (e.g., QK-Norm, RoPE, and RMSNorm).
By default, we report FID-50K [109, 39] using 250 DDPM sampling steps for Large-DiT and the adaptive Dopri-5 solver for Flag-DiT. We additionally report sFID [124], Inception Score [104], and Precision/Recall [83] for an extensive evaluation.

Comparison with SOTA Approaches  As shown in Table 2, Large-DiT-7B significantly surpasses all approaches in FID and IS without using Classifier-Free Guidance (CFG) [60], reducing the FID score from 8.60 to 6.09. This indicates that increasing the parameters of diffusion models can significantly improve sample quality without relying on extra tricks such as CFG. When CFG is employed, both Large-DiT-3B and Flag-DiT-3B achieve slightly better FID scores but much improved IS scores compared to DiT-600M and SiT-600M, while requiring only 24% and 14% of the training iterations, respectively. For 512×512 label-conditioned ImageNet generation, Large-DiT with 3B parameters significantly surpasses other SOTA approaches, reducing FID from 3.04 to 2.52 and increasing IS from 240 to 303. This validates that an increased parameter scale can better capture complex high-resolution details. From these comparisons with SOTA approaches on label-conditioned ImageNet generation, we conclude that Large-DiT and Flag-DiT excel at generative modeling with fast convergence, stable scalability, and strong high-resolution modeling ability. This directly motivates Lumina-T2X to employ Flag-DiT with large parameter counts to model more complex generative tasks for any modality, resolution, and duration.

Comparison between Flag-DiT, Large-DiT, and SiT  We compare the performance of Flag-DiT, Large-DiT, and SiT on ImageNet conditional generation, fixing the parameter size at 600M for a fair comparison. As demonstrated in Figure 5(a), Flag-DiT consistently outperforms Large-DiT across all epochs in FID evaluation. This indicates that the flow matching formulation can improve image generation compared to the standard diffusion setting.
Moreover, Flag-DiT's lower FID scores compared to SiT suggest that the meta-architecture modifications, including RMSNorm, RoPE, and KQ-Norm, not only stabilize training but also boost performance.

| ImageNet 256×256 | Images (M) | FID ↓ | sFID ↓ | IS ↑ | P ↑ | R ↑ |
|---|---|---|---|---|---|---|
| BigGAN-deep [17] | – | 6.95 | 7.36 | 171.40 | 0.87 | 0.28 |
| MaskGIT [21] | 355 | 6.18 | – | 182.1 | 0.80 | 0.51 |
| StyleGAN-XL [125] | – | 2.30 | 4.02 | 265.12 | 0.78 | 0.53 |
| ADM [39] | 507 | 10.94 | 6.02 | 100.98 | 0.69 | 0.63 |
| ADM-U [39] | 507 | 7.49 | 5.13 | 127.49 | 0.72 | 0.63 |
| LDM-8 [123] | 307 | 15.51 | – | 79.03 | 0.65 | 0.63 |
| LDM-4 [123] | 213 | 10.56 | – | 103.49 | 0.71 | 0.62 |
| DiffuSSM-XL [160] | 660 | 9.07 | 5.52 | 118.32 | 0.69 | 0.64 |
| DiT-XL/2 [111] | 1792 | 9.62 | 6.85 | 121.50 | 0.67 | 0.67 |
| SiT-XL/2 [98] | 1792 | 8.60 | – | – | – | – |
| Large-DiT-7B | 256 | 6.09 | 5.59 | 153.32 | 0.70 | 0.68 |
| *Classifier-free Guidance* | | | | | | |
| ADM-G [39] | 507 | 4.59 | 5.25 | 186.70 | 0.82 | 0.52 |
| ADM-G, ADM-U [39] | 507 | 3.60 | – | 247.67 | 0.87 | 0.48 |
| LDM-8-G [123] | 307 | 7.76 | – | 209.52 | 0.84 | 0.35 |
| LDM-4-G [123] | 213 | 3.95 | – | 178.22 | 0.81 | 0.55 |
| U-ViT-H/2-G [11] | 512 | 2.29 | – | 247.67 | 0.87 | 0.48 |
| DiT-XL/2-G [111] | 1792 | 2.27 | 4.60 | 278.24 | 0.83 | 0.57 |
| DiffuSSM-XL-G [160] | 660 | 2.28 | 4.49 | 259.13 | 0.86 | 0.56 |
| SiT-XL/2-G [98] | 1792 | 2.06 | 4.50 | 270.27 | 0.82 | 0.59 |
| Large-DiT-3B-G | 435 | 2.10 | 4.52 | 304.36 | 0.82 | 0.60 |
| Flag-DiT-3B-G | 256 | 1.96 | 4.43 | 284.80 | 0.82 | 0.61 |
| *ImageNet 512×512* | | | | | | |
| ADM [39] | 1385 | 23.24 | 10.19 | 58.06 | 0.73 | 0.60 |
| ADM-U [39] | 1385 | 9.96 | 5.62 | 121.78 | 0.75 | 0.64 |
| ADM-G [39] | 1385 | 7.72 | 6.57 | 172.71 | 0.87 | 0.42 |
| ADM-G, ADM-U [39] | 1385 | 3.85 | 5.86 | 221.72 | 0.84 | 0.53 |
| U-ViT/2-G [11] | 512 | 4.05 | 8.44 | 261.13 | 0.84 | 0.48 |
| DiT-XL/2-G [111] | 768 | 3.04 | 5.02 | 240.82 | 0.84 | 0.54 |
| DiffuSSM-XL-G [160] | 302 | 3.41 | 5.84 | 255.06 | 0.85 | 0.49 |
| Large-DiT-3B-G | 472 | 2.52 | 5.01 | 303.70 | 0.82 | 0.57 |

Table 2: Comparison between Large-DiT and Flag-DiT with other models on ImageNet 256×256 and 512×512 label-conditional generation. P, R, and -G denote Precision, Recall, and results with classifier-free guidance, respectively.
We also include the total number of images seen during training to offer further insight into the convergence speed of the different generative models.

Faster Training Speed with Mixed-Precision Training  Flag-DiT not only improves performance but also enhances training efficiency and stability. Unlike DiT, which diverges under mixed-precision training, Flag-DiT can be trained stably in mixed precision, leading to faster training speeds than DiT at the same parameter size. We measure the throughput of the 600M and 3B Flag-DiT and DiT on one A100 node with a batch size of 256. As shown in Table 4, Flag-DiT processes 40% more images per second.

Faster Convergence with LogNorm Sampling  During training, Flag-DiT-600M uniformly samples time steps from 0 to 1. Previous works [77, 44] have pointed out that learning the score function in diffusion models, or the velocity field in flow matching, is more challenging in the middle of the schedule. To address this, we replace uniform sampling with log-normal sampling, which places greater emphasis on the central time steps, thereby accelerating convergence. We refer to the Flag-DiT-600M model using log-normal sampling as Flag-DiT-600M-LogNorm. As demonstrated in Figure 5(b), Flag-DiT-600M-LogNorm not only achieves faster loss convergence but also significantly improves the FID score.
Figure 5: Training dynamics of different configurations, to explore the effects of (a) the flow matching formulation and architecture modifications, (b) using LogNorm sampling, (c) scaling up model size, and (d) using ImageNet initialization.

Scaling Effects of Large-DiT  DiT demonstrates that the quality of generated images improves with an increase in parameters. However, the largest DiT model tested is limited to 600M parameters, significantly fewer than those used in large language models. The previous experiments have validated the stability, effectiveness, and rapid convergence of Large-DiT. Building on this foundation, we scale the parameters of Large-DiT from 600M to 7B while maintaining the same hyperparameters. As depicted in Figure 5(c), this substantial increase in parameters significantly enhances the convergence speed of Large-DiT, indicating that larger models are more compute-efficient to train.

Influence of ImageNet Initialization  PixArt-α [24, 25] utilizes an ImageNet-pretrained DiT, which learns pixel dependency, as initialization for the subsequent T2I model. To validate the influence of ImageNet initialization, we compare the velocity prediction loss of Lumina-T2I with a 600M-parameter model using ImageNet initialization versus training from scratch. As illustrated in Figure 5(d), training from scratch consistently results in lower loss and faster convergence. Moreover, starting from scratch allows a more flexible choice of configurations and architectures, without the constraints of a pretrained network. This observation also leads to the simple and fast training recipes shown in Table 1.

3.2 Results for Lumina-T2I

Basic Setups  The Lumina-T2I series is a key component of Lumina-T2X, providing a foundational framework for the design of Lumina-T2V, Lumina-T2MV, and Lumina-T2Speech.
By default, all images in this technical report are generated using a 5B Flag-DiT coupled with a 7B LLaMa text encoder [145, 146]. The Lumina-T2I model zoo also supports various text encoder sizes, DiT parameters, input and target constructions, and latent spaces, as shown in Appendix B. Lumina-T2I models
A photograph showcases the beauty of desert flowers and mirrors illuminated by the soft morning light. The image, extremely photorealistic and meticulously detailed, depicts a lonely desert atmosphere with stars shining overhead. A red-haired male elf hunter with a shy expression is standing in a mystical forest, surrounded by fairy tale-like elements and vibrant spectral colors. Figure 6: Lumina-T2I is capable of generating images with arbitrary aspect ratios, delivering superior visual quality and fidelity while adhering closely to given text instructions. 14 \fTraining\u00a0 Resolution 1K \u00a0Resolution Extrapolation to 2K 16642 16642 16642 16642 14082 17922 17922 17922 17922 20482 20482 20482 20482 14082 14082 14082 12802 12802 12802 12802 10242 10242 10242 10242 5122 5122 5122 5122 down to 512px Figure 7: Resolution extrapolation samples of Lumina-T2I. Without any additional training, LuminaT2I is capable of directly generating images with various resolutions from 5122 to 17922. are progressively trained on images with resolutions of 256, 512, and 1024. Detailed information on batch size, learning rate, and computational costs for each stage is provided in Table 1. Fundamental Text-to-Image Generation Ability We showcase the fundamental text-to-image generation capability in Figure 6. The large capacity of the diffusion backbone and text encoder allows for the generation of photorealistic, high-resolution images with accurate text comprehension, utilizing just 288 A100 GPU days. By introducing the [nextline] token during the unified spatialtemporal encoding stage, Lumina-T2I can flexibly generate images from text instructions of various sizes. This flexibility is achieved by explicitly indicating the placement of [nextline] tokens during the inference stage. Free Lunch with Resolution-Extrapolation Resolution extrapolation brings not only larger-scale images but also higher image quality along with enhanced details. 
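The [nextline] mechanism described above can be sketched in a few lines; the token ids and the NEXTLINE sentinel value below are hypothetical placeholders, not the actual Lumina-T2X vocabulary:

```python
# Illustrative sketch of [nextline]-based flattening (hypothetical token ids):
# a 2D grid of patch tokens becomes a 1D sequence, and the row width is
# recoverable at inference from where the NEXTLINE sentinel appears.
NEXTLINE = -1  # hypothetical id reserved for the [nextline] token

def flatten_with_nextline(grid):
    """Row-major flatten of an H x W token grid, appending NEXTLINE after each row."""
    seq = []
    for row in grid:
        seq.extend(row)
        seq.append(NEXTLINE)
    return seq

def infer_shape(seq):
    """Recover (H, W) of the grid from the NEXTLINE placement."""
    rows, current = [], 0
    for tok in seq:
        if tok == NEXTLINE:
            rows.append(current)
            current = 0
        else:
            current += 1
    return len(rows), rows[0] if rows else 0

seq = flatten_with_nextline([[1, 2, 3], [4, 5, 6]])  # a 2 x 3 grid
print(seq)               # [1, 2, 3, -1, 4, 5, 6, -1]
print(infer_shape(seq))  # (2, 3)
```

Because the aspect ratio travels with the sequence itself, the same 1D transformer can be prompted to emit images of different shapes simply by placing the sentinel at different intervals.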
As shown in Figure 7, we observe that the quality of generated images and text-to-image alignment can be significantly enhanced as we perform resolution extrapolation from 1K to 1.5K. Besides, Lumina-T2I is also capable of performing extrapolation to generate images with lower resolutions, such as 512 resolution, offering additional flexibility. Conversely, PixArt-\u03b1 [24], which uses standard positional embeddings instead of RoPE [136], does not show comparable generalization capabilities at test resolutions. Further enhancing the resolution from 1.5K to 2K can gradually lead to the failure of image generation due to the large domain gap between training and inference. The improvement of image quality and text-to-image alignment is a free lunch of Lumina-T2I, as it improves image generation without incurring any training costs. However, as expected, the free lunch is not without its shortcomings. The discrepancy between the training and inference domains can introduce minor artifacts. We believe these artifacts can be alleviated by collecting high-quality images larger than 1K resolution and performing few-shot parameter-efficient fine-tuning. Figure 8: Style-consistent image generation samples produced by Lumina-T2I. Given a shared style description, Lumina-T2I can generate a batch of images with diverse style-consistent contents. Style-Consistent Generation Batch generation of style-consistent content holds immense value for practical application scenarios [58, 143]. Here, we demonstrate that through simple key/value information leakage, Lumina-T2I can generate impressive style-aligned batches.
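One way to realize such key/value leakage is to let each image's queries attend over its own keys and values concatenated with those of a shared reference image. The sketch below is illustrative only (plain-Python single-head attention on toy lists), not the Lumina-T2I implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(Q, K, V):
    """Plain single-head attention: each row of Q attends over the rows of K/V."""
    d = len(K[0])
    out = []
    for q in Q:
        w = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K])
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

def shared_attend(Q, K, V, K_ref, V_ref):
    """Attention sharing: queries also see the reference image's keys/values,
    which pulls every batch element toward the reference's statistics (style)."""
    return attend(Q, K + K_ref, V + V_ref)
```

With only its own single key/value pair, a query reproduces that value exactly; appending the reference's keys/values mixes the reference in, which is the source of the cross-batch consistency.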
As shown in Figure 8, leveraging a naive attention-sharing operation, we can observe strong consistency within the generated batches. Thanks to the full-attention model architecture, we can obtain results comparable to those in [58] without using any tricks such as Adaptive Instance Normalization (AdaIN) [68]. Furthermore, we believe that, as previous arts [58, 143] illustrate, through appropriate inversion techniques, we can achieve style/concept personalization at zero cost, which is a promising direction for future exploration. Compositional Generation As illustrated in Figure 9, we present demos of compositional generation [162, 12] using the method described in Section 2.5. We can define an arbitrary number of prompts and assign each prompt an arbitrary region. Lumina-T2I successfully generates high-quality images in various resolutions that align with complex input prompts while retaining overall visual coherence. This demonstrates that the design choice of our Lumina-T2I offers a flexible and effective method that excels in generating complex high-resolution multi-concept images. Figure 9: Compositional generation samples of Lumina-T2I. Our Lumina-T2I framework can generate high-quality images with intricate compositions based on a combination of prompts and designated regions. High-Resolution Editing Following the methods outlined in Section 2.5, we perform style and subject editing on high-resolution images [57, 18, 78, 129]. As depicted in Figure 10, Lumina-T2I can seamlessly modify global styles or add subjects without the need for additional training. Furthermore, we analyze various factors such as the starting time and latent feature normalization in image editing, as shown in Figure 11. By varying the starting time from 0 to 1, we find that a starting time near 0 leads to complete spatial misalignment, while a starting time near 1 results in unchanged content. Setting the starting time to 0.2 provides a good balance between adhering to the editing instructions and preserving the structure of the original image. Compared with the generated image without normalization, it is clear that channel-wise normalization can effectively remove the original style of the input image while preserving its main content. By normalizing the latent features of the original image, our approach to image editing can better handle the editing instructions. Comparison with PixArt-\u03b1 Compared to PixArt-\u03b1 [24], Lumina-T2I can generate images at resolutions ranging from 512\u00b2 pixels to 1792\u00b2 pixels.
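Since the paper's Section 2.5 is not reproduced in this excerpt, the following is only a generic sketch of region-assigned prompting: per-prompt predictions are merged through binary region masks at each step, so each region follows its own prompt while sharing one canvas:

```python
def merge_region_predictions(preds, masks):
    """Merge per-prompt predictions into one canvas.

    preds: list of H x W grids, one prediction per prompt
    masks: list of H x W 0/1 grids assumed to partition the canvas,
           one region per prompt (e.g. "Left" / "Right")
    """
    H, W = len(masks[0]), len(masks[0][0])
    out = [[0.0] * W for _ in range(H)]
    for pred, mask in zip(preds, masks):
        for i in range(H):
            for j in range(W):
                if mask[i][j]:
                    out[i][j] = pred[i][j]
    return out

# Two prompts, left half vs. right half of a 2 x 2 canvas:
left = [[1, 0], [1, 0]]
right = [[0, 1], [0, 1]]
canvas = merge_region_predictions(
    [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]], [left, right])
print(canvas)  # [[1.0, 2.0], [1.0, 2.0]]
```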
As demonstrated in Figure 12, PixArt-\u03b1 struggles to produce high-quality images at both lower and higher resolutions than the size of images used during training. Lumina-T2I utilizes RoPE, the [nextline] token, as well as layer-wise relative position injection, enabling it to effectively handle a broader spectrum of resolutions. In contrast, PixArt-\u03b1 relies on absolute position embedding and limits positional information to the initial layer, leading to a degradation in performance when generating images at out-of-distribution scales. Apart from resolution extrapolation, Lumina-T2I also adopts a simplified training pipeline, as shown in Table 1. Ablation studies conducted on ImageNet indicate that training with natural image domains such as ImageNet results in higher training losses in subsequent stages. This suggests that synthetic images from JourneyDB and natural images collected online (e.g., LAION [126, 127], COYO [20], SAM [80], and ImageNet [38]) belong to distinct distributions. Motivated by this observation, Lumina-T2I trains directly on high-resolution synthetic domains to reduce computational costs and avoid suboptimal initialization. Additionally, inspired by the fast convergence of the FID score observed when training on ImageNet, Lumina-T2I adopts a 5 billion Flag-DiT, which has 8.3 times more parameters than PixArt-\u03b1, yet incurs only 35% of the training costs (288 A100 GPU days compared to 828 A100 GPU days). Figure 10: Demonstrations of style editing and subject editing over high-resolution images in a training-free manner. Figure 11: Qualitative effects of the starting time and latent feature normalization in style editing. A starting time near 0.2 yields a good balance between preserving the original content and incorporating the desired target style, while removing normalization greatly hinders the model\u2019s ability to effectively transform image styles. Analysis of Gate Distribution in Zero-Initialized Attention Cross-attention [139, 14] is the de facto standard for text conditioning. Unlike previous methods, Lumina-T2I employs a zero-initialized attention mechanism [45, 165], which incorporates a zero-initialized gating mechanism to adaptively control the influence of text conditioning across various heads and layers. Surprisingly, we observe that zero-initialized attention can induce extremely high levels of sparsity in text conditioning. As shown in Figure 13(a), we visualize the gating values across heads and layers, revealing that most gating values are close to zero, with only a small fraction exhibiting significant importance. Interestingly, the most crucial text-conditioning heads are predominantly found in the middle layers, suggesting that these layers play a key role in text conditioning. To consolidate this observation, we truncated gates below a certain threshold and found that 80% of the gates can be deactivated without affecting the quality of image generation, as demonstrated in Figure 13(b). This observation suggests the possibility of truncating most cross-attention operations during sampling, which can greatly reduce inference time.
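A minimal sketch of the zero-initialized gating idea, with the inference-time truncation described above; the scalar-per-block gate and the function names are simplifications for illustration (Lumina-T2I learns one gate per head and layer):

```python
import math

def gated_residual(x, branch, gate_param, threshold=0.0):
    """out = x + tanh(g) * branch(x).

    With g initialized to 0, tanh(g) = 0 and the conditioning branch is inert
    at the start of training. At inference, gates whose |tanh(g)| falls below
    `threshold` can be truncated, skipping the branch computation entirely
    (the text reports ~80% of gates can be deactivated without quality loss).
    """
    g = math.tanh(gate_param)
    if abs(g) < threshold:
        return list(x)  # branch (e.g. cross-attention) never executes
    b = branch(x)
    return [xi + g * bi for xi, bi in zip(x, b)]

text_branch = lambda v: [10.0 for _ in v]  # stand-in for a cross-attention output
print(gated_residual([1.0, 2.0], text_branch, gate_param=0.0))      # [1.0, 2.0]
print(gated_residual([1.0, 2.0], text_branch, 0.1, threshold=0.5))  # [1.0, 2.0] (truncated)
```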
Figure 12: Qualitative comparison between Lumina-T2I and PixArt-\u03b1 in generating images at multiple resolutions. The samples from Lumina-T2I demonstrate better alignment with the given text and superior visual quality across all resolutions compared to those from PixArt-\u03b1. Figure 13: Gated cross-attention in Lumina-T2I. (a) Absolute tanh values of all gates across all layers and heads. (b) Qualitative results of generated images under different gate thresholds (active gates: 100% at threshold 0.0, 3.61% at 0.2, 2.05% at 0.4, 0.97% at 0.6, 0.78% at 0.8, and 0.0% at 1.0). 3.3 Results for Lumina-T2V Basic Setups Lumina-T2V shares the same architecture with Lumina-T2I except for the introduction of a [nextframe] token, which provides explicit information about temporal duration. By default, Lumina-T2V uses CLIP-L/G [118] as the text encoder and employs a Flag-DiT with 2 billion parameters as the diffusion backbone.
Departing from previous approaches [51, 156, 73, 22, 23, 16, 172, 62, 15, 65, 29, 153, 167, 158, 52, 157] that rely on T2I checkpoints for T2V initialization and adopt decoupled spatial-temporal attention, Lumina-T2V takes a different route by initializing the Flag-DiT weights randomly and leveraging a full-attention mechanism that allows for interaction among all spatial-temporal tokens. Although this choice significantly slows down the training and overall inference speed, we believe that such an approach holds greater potential, particularly when ample computational resources are available. Lumina-T2V is independently trained on a subset of the Panda-70M dataset [31] and the collected Pexel dataset, comprising 15 million and 40,000 videos, respectively. Similar to Lumina-T2I, Lumina-T2V employs a multi-stage training strategy that starts with shorter, low-resolution videos and subsequently advances to longer, higher-resolution videos. Specifically, in the initial stage, Lumina-T2V is trained on videos of a fixed size (such as 512 pixels in both height and width, and 32 frames in length for the Pexel dataset), which collectively comprise approximately 32,000 tokens. During the second stage, it learns to handle videos of varying resolutions and durations, while imposing a limit of 128,000 tokens to maintain computational feasibility. Figure 14: Training loss curve comparison between (a) 2B Flag-DiT trained on 8 GPUs and 128 GPUs, (b) different sizes of Large-DiTs. Figure 15: Short video generation samples of Lumina-T2V. Although the length and resolution of the generated videos are limited, these samples exhibit scene transition, indicating a promising way for long video generation.
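The token budgets quoted above can be sanity-checked with a rough count. The 16-pixel patch size and the exact accounting of the [nextline]/[nextframe] indicator tokens are assumptions for illustration; the text states only that a 512x512, 32-frame clip is about 32,000 tokens:

```python
def video_token_count(h, w, frames, patch=16, with_specials=False):
    """Rough spatial-temporal token budget for a video clip.

    patch=16 is an assumed patch size, not stated in this excerpt. When
    with_specials is True, one hypothetical [nextline] token per row and one
    [nextframe] token per frame are added to the count.
    """
    rows, cols = h // patch, w // patch
    per_frame = rows * cols
    if with_specials:
        per_frame += rows + 1
    return frames * per_frame

print(video_token_count(512, 512, 32))  # 32768, in line with the ~32,000 tokens cited
```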
Observations of Lumina-T2V We observe that Lumina-T2V with a large batch size can converge, while a small batch size struggles to converge. As shown in Figure 14(a), increasing the batch size from 32 to 1024 leads to loss convergence. On the other hand, similar to the observation in the ImageNet experiments, increasing model parameters leads to faster convergence in video generation. As shown in Figure 14(b), as the parameter size increases from 600M to 5B, we consistently observe lower loss for the same number of training iterations. Samples for Video Generation As shown in Figure 15, the first stage of Lumina-T2V is able to generate short videos with scene dynamics such as scene transitions, although the generated videos are limited in terms of resolution and duration, with a maximum of 32K total tokens. After the second stage of training on longer-duration and higher-resolution videos, Lumina-T2V can generate long videos with up to 128K tokens in various resolutions and durations. The generated videos, as illustrated in Figure 16, exhibit temporal consistency and richer scene dynamics, indicating a promising scaling trend when using more computational resources and data. Figure 16: Long video generation samples of Lumina-T2V. Lumina-T2V enables the generation of long videos with temporal consistency and rich scene dynamics. 3.4 Results for Lumina-T2MV Please refer to Appendix C.2. 3.5 Results for Lumina-T2Speech Please refer to Appendix C.3. 4 Related Work AI-Generated Contents (AIGCs) Generating high-dimensional perceptual data content (e.g., images, videos, audio) has long been a challenge in the field of artificial intelligence. In the era of deep learning, Generative Adversarial Networks (GANs) [48, 173, 70, 155, 17, 76] stand as a pioneering method in this field due to their efficient sampling capabilities, yet they face issues of
training instability and mode collapse. Meanwhile, Variational Autoencoders (VAEs) [79, 82, 6, 147, 128] and flow-based models [40, 41] demonstrate better training stability and interpretability but lag behind GANs in terms of image quality. Following this, autoregressive models (ARMs) [149, 148, 34, 26] have shown exceptional performance but come with higher computational demands, and the sequential sampling mechanism is more suited to 1-D data. Nowadays, Diffusion Models (DMs) [133], learning to invert diffusion paths from real data towards random noise, have gradually become the de facto approach of generative AI across multiple domains, with numerous practical applications [106, 8, 49, 107, 1, 114, 44, 2]. The success of diffusion models over the past four years can be attributed to the progress in several areas, including reformulating diffusion models to predict noise instead of pixels [61], improvements in sampling methods for better efficiency [134, 94, 95, 77], the introduction of classifier-free guidance that enables direct conversion of text to images [59], and cascaded/latent space models that reduce the computational cost of high-resolution generation [63, 123, 142]. Apart from generating high-quality images following text instructions, various applications, including high-resolution generation [55, 43, 69, 170, 33, 25], compositional generation [74, 12, 162], style-consistent generation [58, 143], image editing [57, 18, 78, 102], and controllable generation [164, 103, 168, 101], have been proposed to further extend the applicability of pretrained T2I models. Additionally, pre-trained T2I models are also applied with a decoupled temporal attention to generate videos [51, 156, 73, 22, 23, 16, 172, 62, 15, 65, 29, 153, 167, 158, 52, 157] and multi-views of 3D objects [131, 84, 154, 174, 32, 151, 54, 93, 140, 91, 130]. A similar framework, with suitable adjustments, has also been applied to audio generation [67, 90, 47, 161].
Although this paradigm has achieved notable success at the current model scale [114, 113, 171], subsequent works have proven the better potential of diffusion models based on vision transformers (so-called Diffusion Transformer, DiT) [111]. Afterwards, SiT [98] and SD3 [44] further demonstrate that an interpolation or flow-matching framework [92, 86, 4, 5] can better enhance the stability and scalability of DiT \u2014 pointing the way for diffusion models to scale up to the next level. Very recently, Sora [108] has demonstrated the potential for scaling DiT with its powerful joint image and video generation capabilities. However, the detailed implementations have yet to be released. Therefore, inspired by Sora, we introduce Lumina-T2X to push the boundaries of open-source generative models by scaling the flow-based Diffusion Transformer to generate contents across any modalities, resolutions, and durations. 21 \f5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05949v1.json b/abs_9K/test_abstract_short_2405.05949v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1daeeb7532f1c6f3c90b9f4996224c1b95f236d3 --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05949v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05949v1", + "title": "CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts", + "abstract": "Recent advancements in Multimodal Large Language Models (LLMs) have focused\nprimarily on scaling by increasing text-image pair data and enhancing LLMs to\nimprove performance on multimodal tasks. However, these scaling approaches are\ncomputationally expensive and overlook the significance of improving model\ncapabilities from the vision side. Inspired by the successful applications of\nMixture-of-Experts (MoE) in LLMs, which improves model scalability during\ntraining while keeping inference costs similar to those of smaller models, we\npropose CuMo. 
CuMo incorporates Co-upcycled Top-K sparsely-gated\nMixture-of-experts blocks into both the vision encoder and the MLP connector,\nthereby enhancing the multimodal LLMs with minimal additional activated\nparameters during inference. CuMo first pre-trains the MLP blocks and then\ninitializes each expert in the MoE block from the pre-trained MLP block during\nthe visual instruction tuning stage. Auxiliary losses are used to ensure a\nbalanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs\nacross various VQA and visual-instruction-following benchmarks using models\nwithin each model size group, all while training exclusively on open-sourced\ndatasets. The code and model weights for CuMo are open-sourced at\nhttps://github.com/SHI-Labs/CuMo.", + "authors": "Jiachen Li, Xinyao Wang, Sijie Zhu, Chia-Wen Kuo, Lu Xu, Fan Chen, Jitesh Jain, Humphrey Shi, Longyin Wen", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Recent advancements in Multimodal Large Language Models (LLMs) have focused\nprimarily on scaling by increasing text-image pair data and enhancing LLMs to\nimprove performance on multimodal tasks. However, these scaling approaches are\ncomputationally expensive and overlook the significance of improving model\ncapabilities from the vision side. Inspired by the successful applications of\nMixture-of-Experts (MoE) in LLMs, which improves model scalability during\ntraining while keeping inference costs similar to those of smaller models, we\npropose CuMo. CuMo incorporates Co-upcycled Top-K sparsely-gated\nMixture-of-experts blocks into both the vision encoder and the MLP connector,\nthereby enhancing the multimodal LLMs with minimal additional activated\nparameters during inference. 
CuMo first pre-trains the MLP blocks and then\ninitializes each expert in the MoE block from the pre-trained MLP block during\nthe visual instruction tuning stage. Auxiliary losses are used to ensure a\nbalanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs\nacross various VQA and visual-instruction-following benchmarks using models\nwithin each model size group, all while training exclusively on open-sourced\ndatasets. The code and model weights for CuMo are open-sourced at\nhttps://github.com/SHI-Labs/CuMo.", + "main_content": "Introduction The advent of GPT-4V [56] has sparked excitement within open-source communities to transform large language models (LLM) into multimodal LLMs. Recent multimodal LLMs [3, 13, 47] typically integrate pre-trained vision encoders and LLMs with visual instruction tuning data to fine-tune the pre-trained LLMs, enhancing their visual understanding capabilities. * Work done during an internship at ByteDance Inc., San Jose, CA. Correspondence to X. Wang (xinyao.wang@bytedance.com) and H. Shi. Figure 1. Comparisons of CuMo Mistral-7B with state-of-the-art 7B multimodal LLMs. CuMo outperforms strong open-sourced models such as Mini-Gemini and LLaVA-NeXT, as well as the private MM1 model. To further scale up multimodal LLMs, previous efforts [8, 42, 44, 46, 48, 54] primarily focus on training the model with a more extensive collection of text-image paired data and employing stronger LLMs, significantly increasing training efforts.
On the vision side, recent work concentrates on leveraging multiple vision encoders [20, 45] to enrich visual content, employing larger vision encoders [10], and using advanced vision-language connectors [6] to improve performance on multimodal tasks. However, these techniques result in an increased number of additional parameters and generate additional visual tokens for LLMs to process, making it inefficient to scale. In terms of efficiently scaling up models, Mixture-of-Experts (MoE) has become the de facto framework in modern large-scale neural networks, particularly in natural language processing (NLP). Figure 2. Architecture of CuMo. CuMo incorporates sparse Top-K MoE blocks into the CLIP vision encoder and vision-language MLP connector, thereby improving the multimodal LLM capabilities from the vision side. Skip connections are omitted for simplicity. Further implementation details are provided in Section 3.2. Most large language models (LLM) are built upon the transformer [68] architecture, wherein sparse MoE is used to replace the dense MLP block with the Top-K sparsely-gated MoE block [60]. Recent state-of-the-art open-sourced [30, 65] and private [58] LLMs have predominantly adopted the sparse MoE architecture. These models are scaled up using the MoE design during training while maintaining relatively lower inference costs, as only selected MLP experts are activated during the feed-forward process. Nevertheless, the development and optimization of MoE-based models have been largely tailored to LLMs, and the exploration of scaling multimodal LLMs with MoE, especially on the vision side, remains largely unexplored.
Motivated by these observations, we introduce CuMo, which integrates Top-K sparsely-gated MoE blocks into the vision encoder and the MLP connector of multimodal LLMs, as depicted in Figure 2. We also explore the associated training recipe and methodology for CuMo. Firstly, we pre-train the MLP connector and perform pre-finetuning to warm up the whole model without introducing the MoE architecture, which stabilizes the following visual instruction tuning stage with newly incorporated sparse MoE blocks. Then, we replace each MLP block with a sparse MoE block in the MLP connector and the vision encoder through co-upcycling. Each expert within the sparse MoE block is initialized from the corresponding MLP block after the pre-training and pre-finetuning stages. Additionally, each MoE block contains a Top-K router trained from scratch to select experts during the visual instruction tuning stage, with auxiliary losses on the router to maintain a balanced loading of experts. We conduct further comparisons between co-upcycled LLMs and pre-trained MoE-based LLMs. The results show that the pre-trained MoE-based LLMs significantly outperform the co-upcycled LLMs. As a result, the upcycling of LLMs is not included in CuMo. Our models are trained fully on open-sourced datasets that are converted to visual instruction following formats. Experimental results demonstrate that CuMo outperforms other state-of-the-art multimodal LLMs on various VQA and multimodal instruction-following benchmarks within the same model size group, as illustrated in Figure 1. Our contributions can be summarized as follows: \u2022 We introduce CuMo, which integrates co-upcycled sparsely-gated MoE layers into both the MLP connector and the vision encoder, enhancing the multimodal LLM with only a slight increase in activated parameters.
\u2022 We outline the training methodology for CuMo, including a three-stage training process with auxiliary losses to stabilize training and ensure a balanced loading of experts. \u2022 We train CuMo exclusively on open-sourced datasets and pre-trained models. It outperforms state-of-the-art open-sourced and private multimodal LLMs across multiple competitive benchmarks within each model size group. 2. Related Works 2.1. Multimodal LLM While the ultimate goal for multimodal models may be generative across various modalities [4, 63, 70], modern multimodal LLMs primarily focus on integrating additional modalities, such as vision, into LLMs. InstructBLIP [13] adopts Q-Former [38] to sample from visual tokens for the LLM to feed-forward and follow the instructions. Flamingo [1] and IDEFICS [25, 34] use a shared decoder for visual-language understanding. Qwen-VL [3] uses three-stage training to convert QwenLM to Qwen-VL. The LLaVA series [46\u201348] adopts visual instruction tuning that uses instruction-following data to convert an LLM into a multimodal LLM. ShareGPT4V [8] collects detailed image caption data from GPT4V to augment the LLaVA models. HoneyBee [6] investigates different designs of the MLP connector for better alignment. VILA [44] unfreezes the LLM during pre-training with interleaved image-text data. MoE-LLaVA [43] adopts the MoE design in small LLMs and reaches comparable performance to LLaVA with large LLMs. VCoder [28] adopts various vision adapters to enhance visual perception abilities. SPHINX [20, 45] adopts multiple visual encoders to enrich the visual features with scaled data and models. InternLM-Xcomposer [14, 73] is trained with interleaved text-image composition data and achieves state-of-the-art performance. InternVL [10] scales up the vision encoder to a 6B ViT model. MM1 [54] summarizes the essential steps towards building a strong multimodal LLM from a pre-trained LLM. Mini-Gemini [42] further incorporates guided generation into the pipeline. 2.2.
Mixture-of-Experts Mixture-of-Experts [26] is proposed to utilize a set of expert networks to address specific tasks by employing a gating network to determine the selection of these experts. Recently, it has gained popularity in the design of large language models [17]. The mainstream practice [60] is to replace the dense MLP layers with Top-K sparsely-gated mixture-of-experts (MoE) layers in the transformer [68]. MoE in Language Subsequent works [18, 35] have further scaled up MoE-based large language models with improved stability and load balancing of experts. The design of gating networks often involves selecting the top-k experts for each token [35, 60]. Various routing strategies have been explored, such as choosing top-k tokens by experts [75] and one-to-one matching between experts and tokens [36]. Besides routing strategies, maintaining the load balance of experts is crucial for training MoE models. ST-MoE [77] adopts a load balancing loss and a router-z loss to ensure a balanced distribution of the experts. Upcycling [33] proposes training sparse experts from dense checkpoints to stabilize training and lower the cost. Recent large language models like Gemini-Pro [58] and DBRX [65] are also based on the MoE design. MoE in Vision The success of MoE extends to the vision community, particularly following the popularity of vision transformers [5, 15, 22, 23, 27, 39, 76]. Figure 3. Initialization of MoE blocks via Co-Upcycling. Each MLP expert within the MoE block during the visual instruction tuning stage is initialized from the corresponding pre-trained MLP. V-MoE [59] reaches comparable performance to dense ViT while only requiring half of the compute. LIMoE [55] replaces dense MLP layers with MoE layers in CLIP and observes improvements in zero-shot image classification.
Residual MoE [69] incorporates a residual design into the MoE transformer and saves over 30% training cost. AdaMV-MoE [9] proposes an adaptive MoE framework for multi-task learning. 3. Method In this section, we first review the sparse MoE block structure and the upcycling strategy utilized in previous studies. Subsequently, we describe how these sparsely-gated MoE blocks are integrated into each module of multimodal LLMs using co-upcycling strategies. Then, we introduce the three-stage training process and auxiliary loss functions employed to stabilize training and balance the loads of experts. 3.1. Revisit Sparse MoE Sparse MoE Structure Previous mainstream practice [60] is to replace the dense MLP blocks with sparsely-gated mixture-of-experts blocks. Given input X \u2208 R^{N\u00d7C_in} and an MLP block, X_out = MLP(X) \u2208 R^{N\u00d7C_out} (1) To scale up the model with multiple MLP blocks in parallel, a sparse MoE block includes a router network to select Top-K experts out of S total experts. This router network has a linear layer to compute the normalized weight matrix based on the inputs X for voting, resulting in W = Softmax(Linear(X)) \u2208 R^{N\u00d7S} (2) The Top-K experts are selected for each token based on W, and the re-normalized weights W_K \u2208 R^{N\u00d7K} are computed [Figure 4. Training Stages of CuMo. The first stage involves pre-training the MLP for better alignment. Subsequently, the pre-finetuning stage trains all parameters as a warm-up before the next stage. Finally, the MLP experts within each MoE block are initialized from the weights of the corresponding MLP block, followed by training all parameters in the visual instruction tuning stage.]
using W_K = Softmax(TopK(W)) \u2208 R^{N\u00d7K} (3) Each selected expert is represented by an MLP block, and the final output is obtained through a re-weighted sum X_out = \u2211_{i=1}^{K} W_K^i \u25e6 MLP_i(X) \u2208 R^{N\u00d7C_out} (4) The output X_out maintains the same dimension as the output of a single dense MLP block. Sparse Upcycling Training MoE-based designs from scratch can be unstable and costly. Sparse Upcycling [33] addresses this challenge by initializing the experts in each MoE block from the corresponding MLP block in pre-trained dense checkpoints. This initialization approach provides a better starting point for training MoE-based models and reduces training costs compared to training from scratch. 3.2. CuMo Architecture Sparse MoE in MLP Connector The MLP connector converts visual tokens into word embedding space, aligning dimensions between visual and text tokens. An effective architecture for the vision-language connector is an MLP block [46] that contains two linear layers. We start from a single MLP block and replace it with a Top-K sparse MoE block, incorporating a Top-K router and a set of experts for projecting visual tokens into word embedding space. Sparse MoE in Vision Encoder Vision encoders extract image features as sequences of visual tokens for reasoning in LLMs. CLIP [57] is one of the most popular pre-trained vision encoders for multimodal LLMs since it is pre-trained on large-scale image-text pairs, which makes it suitable for processing images for multimodal usage. The visual encoding part of CLIP is a ViT [15] model, which has consecutive MLP blocks in the transformer encoder. We substitute each MLP block with a Top-K sparse MoE block, retaining skip connections alongside MoE block outputs. Sparse MoE in LLM In terms of using MoE in the LLM, we compare the co-upcycled LLM with a pre-trained MoE-based LLM. We start from Mistral-7B, and the upcycled Mistral-7B-MoE slightly outperforms Mistral-7B on certain benchmarks.
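The routing and upcycling mechanics of Eqs. (1)-(4) can be sketched in a few lines of NumPy. This is an illustrative toy, not the CuMo code: all function names and shapes are ours. Experts start as copies of a dense MLP (sparse upcycling), the router softmax scores all S experts, and the Top-K outputs per token are combined with re-normalized weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mlp(x, w):
    # Two-layer MLP, Eq. (1): (N, C_in) -> (N, C_out).
    return np.maximum(x @ w["w1"], 0.0) @ w["w2"]

def upcycle(dense, num_experts):
    # Sparse upcycling: every expert starts as a copy of the dense MLP weights.
    return [{k: v.copy() for k, v in dense.items()} for _ in range(num_experts)]

def moe_forward(x, router_w, experts, top_k):
    # Eq. (2): router linear layer + softmax over all S experts.
    w = softmax(x @ router_w)
    # Top-K expert indices per token (descending routing weight).
    idx = np.argsort(-w, axis=1)[:, :top_k]
    out = np.zeros((x.shape[0], experts[0]["w2"].shape[1]))
    for n in range(x.shape[0]):
        sel = w[n, idx[n]]
        # Eq. (3): re-normalizing the selected softmax weights equals
        # a softmax over just the Top-K logits.
        wk = sel / sel.sum()
        for weight, e in zip(wk, idx[n]):
            out[n] += weight * mlp(x[n:n + 1], experts[e])[0]  # Eq. (4)
    return out
```

A useful sanity check: immediately after upcycling, the MoE block reproduces the dense MLP exactly, because all experts are identical and the re-normalized weights sum to one.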
However, considering the constrained knowledge base of upcycled experts from Mistral-7B, we compare it with the pre-trained Mixtral 8x7B, whose pre-trained experts have a diverse knowledge base. Experimental results reveal that the pre-trained Mixtral 8x7B significantly outperforms Mistral-7B-MoE. As a result, the LLM is not co-upcycled with the CLIP and MLP connectors, since doing so brings only marginal improvements at the cost of a large number of additional parameters. 3.3. Training Recipe Co-Upcycling MoE blocks We initially trained the added MoE blocks from scratch, but the model struggled to converge. Attempts to address this issue with lower learning rates perform worse compared to the baseline. As a result, we adopt a co-upcycling approach, initializing each module that integrates sparsely-gated MoE blocks with pre-trained MLPs to replace the corresponding MLP blocks, as shown in Figure 3. This strategy consistently improves training stability and model performance. Three-Stage Training To further enhance training stability, we adopt a three-stage training strategy for CuMo models, as illustrated in Figure 4. In the first stage, we only pre-train the MLP connector, given that the vision encoder and LLM have already undergone pre-training on large-scale data. During the second pre-finetuning stage, we train all parameters using high-quality caption data to warm up the entire model before introducing MoE blocks in the subsequent stage. The third stage involves visual instruction finetuning, where the multimodal LLM is scaled up with upcycled MoE blocks and trained on visual instruction tuning 4 \fSQA Text MMB MM VQA LLaVA SEED MMMU Math Method LLM Act.
IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG val Vista 7B to 13B Models InstructBLIP [13] Vicuna-7B 7.9B 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 Qwen-VL-Chat [3] Qwen-7B 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 35.9 LLaVA-v1.5 [46] Vicuna-7B 7.1B 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaMA-VID [41] Vicuna-7B 68.3 64.3 86.0 1521.4 65.1 79.3 59.9 VILA [44] Vicuna-7B 7.1B 68.2 64.4 62.3 85.5 1533.0 68.9 61.7 34.9 79.9 69.7 61.1 SPHINX-Intern2 [20] InternLM2-7B 70.4 58.1 56.2 86.9 1260.4 57.9 36.5 75.5 57.6 68.8 35.5 LLaVA-NeXT [48] Mistral-7B 7.6B 72.8 65.7 64.8 86.7 1498 68.7 61.2 47.3 82.2 83.2 72.2 35.3 37.7 LLaVA-NeXT [48] Vicuna-7B 7.1B 70.1 64.9 64.2 86.5 1519 67.4 60.6 43.9 81.8 81.6 70.2 35.8 34.6 LLaVA-LLaMA3 [12] LLaMA3-8B-IT 8.4B 72.9 59.0 62.6 86.4 1469 72.3 66.4 70.1 36.8 Mini-Gemini [42] Vicuna-7B 7.3B 65.2 1523 69.3 40.8 36.1 31.4 MM1 [54] MM1-7B 72.6 72.8 86.6 1529.3 79.0 42.1 82.8 81.5 69.9 37.0 35.9 InstructBLIP [13] Vicuna-13B 14.2B 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 LLaVA-v1.5 [46] Vicuna-13B 13.4B 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 36.4 27.6 VILA [44] Vicuna-13B 13.4B 73.7 66.6 63.3 84.2 1570.1 70.3 64.3 38.8 80.8 73.0 62.8 LLaMA-VID [41] Vicuna-13B 70.0 65.0 86.0 1542.3 66.6 80.0 62.3 SPHINX-Plus [20] LLaMA2-13B 74.2 65.7 89.1 1457.7 71.0 47.9 71.7 74.8 36.8 Mini-Gemini[42] Vicuna-13B 13.6B 65.9 1565 68.5 46.0 38.1 37.0 InternVL-Chat [10] Vicuna-13B 19B 61.5 66.6 87.6 1586.4 81.2 LLaVA-NeXT [48] Vicuna-13B 13.4B 73.6 67.1 65.4 86.2 1575 70 64.4 48.4 82.8 87.3 71.9 36.2 35.3 CuMo Mistral-7B 7.8B 73.9 67.0 64.9 86.7 1548.6 73.0 66.6 51.0\u2020 82.2 85.7\u2020 72.1 39.1 35.1\u2020 7B MoE Models SPHINX-MoE [20] Mixtral-8\u00d77B 74.5 68.0 63.8 89.6 1485.3 71.3 40.9 81.1 70.2 73.0 31.1 42.7 MM1 [54] MM1-7B-MoE 75.3 72.8 87.6 1629.0 79.7 47.0 83.4 82.0 70.4 40.9 40.9 Mini-Gemini [42] Mixtral-8\u00d77B 13.5B 69.2 1639 75.6 45.8 41.8 41.8 CuMo Mixtral-8\u00d77B 13.5B 77.9 66.0 63.8 85.7 1639.5 75.3 68.0 
48.7\u2020 81.8 84.7\u2020 73.2 45.0 38.2\u2020 Private Models GPT4V [56] 78.0 77.0 74.4 60.2 56.8 49.9 Gemini 1.5 Pro [58] 73.5 73.6 74.3 64.3 73.2 58.5 52.1 Claude 3 Opus [2] 63.3 59.2 58.1 59.4 50.5 Qwen-VL-Max [64] 79.5 1790.1 77.6 75.1 66.6 51.4 51.0 Table 1. Comparisons between CuMo and other state-of-the-art multimodal LLMs on competitive benchmarks. These models are grouped by the size of the base LLM. The benchmarks are double-rowed due to limited space: SQA-IMG [50]; TextVQA [62]; GQA [24]; POPE [40]; MME [19]; MMBench [49]; MMVet [71]; VQAv2 [21]; LLaVA-Wild [47]; SEED-IMG [37]; MMMU [72]; MathVista [51]. Act.: Activated Parameters. Numbers\u2020 are averaged by three inference runs of querying the GPT API. data. Loss Function To maintain a load balance between experts in each MoE block, we adopt auxiliary losses based on the language modeling cross-entropy loss. The auxiliary losses comprise a load balancing loss and a router z-loss [77]. Hence, the total loss is L = L_ce + \u03b1_b L_b + \u03b1_z L_z (5) Here, L_ce represents the language modeling loss, which computes the cross-entropy of next-token predictions. \u03b1_b and \u03b1_z denote coefficients for the load balancing loss L_b and the router z-loss L_z, set to 0.1 and 0.01, respectively, across all experiments. These auxiliary losses, abbreviated as bzloss in Section 4, are individually applied to the MLP connector, vision encoder, and LLM for simplicity. 4. Experiments We train the CuMo models on a mixture of open-sourced datasets, which are converted into the visual instruction tuning format. Then, we conduct comprehensive evaluations of the performance of CuMo models across various competitive VQA-based and instruction-following-based benchmarks. Additionally, we perform ablation studies on each module with upcycled MoE blocks, along with qualitative analysis of the results. 4.1. Implementation Details Training Datasets During pre-training, we only utilize LLaVA-558K [47] to train the MLP connector for better alignment.
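The total loss in Eq. (5) can be sketched as follows. The load-balancing and z-loss formulas below follow the common Switch/ST-MoE formulations (S * sum_i f_i * P_i, and the mean squared log-sum-exp of the router logits); the exact definitions used by the paper are in [77], so treat this NumPy sketch as an assumption-labeled approximation, not the authors' implementation.

```python
import numpy as np

def load_balance_loss(router_probs, expert_mask):
    # Switch-style balance loss: S * sum_i f_i * P_i, where f_i is the
    # fraction of tokens dispatched to expert i and P_i is the mean routing
    # probability of expert i. Minimized (== 1.0) under uniform routing.
    s = router_probs.shape[1]
    f = expert_mask.mean(axis=0)
    p = router_probs.mean(axis=0)
    return s * float(np.sum(f * p))

def router_z_loss(logits):
    # ST-MoE router z-loss: penalizes large router logits for stability.
    z = np.log(np.sum(np.exp(logits), axis=1))  # log-sum-exp per token
    return float(np.mean(z ** 2))

def total_loss(l_ce, logits, router_probs, expert_mask,
               alpha_b=0.1, alpha_z=0.01):
    # Eq. (5): L = L_ce + alpha_b * L_b + alpha_z * L_z,
    # with the coefficients 0.1 and 0.01 stated in the text.
    return (l_ce
            + alpha_b * load_balance_loss(router_probs, expert_mask)
            + alpha_z * router_z_loss(logits))
```

With perfectly uniform routing, the balance loss evaluates to exactly 1.0, which is its minimum.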
In the subsequent pre-finetuning stage, detailed image caption data from ALLaVA [7] is employed to warm up all parameters of the multimodal LLM. For the final visual instruction tuning stage, a mixture of datasets including LLaVA-665K [46], ShareGPT4V [8], LAION-GPT-V [16], DocVQA [66], ChartQA [52], AI2D [31], InfoVQA [53], SynDog-EN [32], ALLaVA [7], and LIMA [74] is utilized to train the CuMo models with upcycled MoE blocks. The total data size for visual instruction tuning is approximately 1.65 million, and all training data are publicly accessible. 5 \fSQA Text MMBench MM VQA LLaVA SEED Method LLM PT IT IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG InstructBLIP [13] Vicuna-7B 129M 1.2M 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 InstructBLIP [13] Vicuna-13B 129M 1.2M 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 IDEFICS-9B [25] LLaMA-7B 353M 1M 25.9 38.4 48.2 25.2 50.9 IDEFICS-80B [25] LLaMA-65B 353M 1M 30.9 45.2 54.5 38.1 60.0 Qwen-VL [3] Qwen-7B 1.4B 50M 67.1 63.8 59.3 38.2 7.4 78.8 56.3 Qwen-VL-Chat [3] Qwen-7B 1.4B 50M 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 LLaVA-v1.5 [46] Vicuna-7B 558K 665K 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaVA-v1.5 [46] Vicuna-13B 558K 665K 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 CuMo Mistral-7B 558K 665K 71.7 59.3 63.2 87.1 1428.6 69.6 62.6 34.3 80.6 68.8 69.6 Table 2. Comparisons between CuMo Mistral-7B and other multimodal LMM models with limited training data. Method SQA VQAT MMVet SEED Baseline on Mistral-7B 72.8 57.6 32.1 66.4 + Top 2-in-4 & Scratch 68.1 55.6 29.3 65.1 \u21ccTop 2-in-4 & Upcycle 73.7 57.2 32.3 67.1 + bzloss 73.5 57.4 33.1 67.4 \u21ccTop 2-in-8 & Upcycle 73.4 57.6 32.4 67.2 Table 3. Ablation study on the MLP-MoE module. Each row represents a different configuration, with changes or additions marked using \u21ccand + symbols, respectively. Settings highlighted with a light blue background are those adapted for the MLP-MoE module in Table 1. 
Method SQA VQAT MMVet SEED MLP-MoE 73.5 57.4 33.1 67.4 + Unfreeze CLIP 72.0 58.9 34.7 69.0 + Top 2-in-4 & bzloss 72.8 59.7 35.4 69.8 \u21ccTop 2-in-8 & bzloss 71.0 59.0 33.6 69.2 Table 4. Ablation study on the CLIP-MoE module. All MoE blocks in CLIP are initialized with upcycling. Method SQA VQAT MMVet SEED MLP-MoE & CLIP-MoE 71.7 59.3 34.3 69.6 + Mistral 4\u00d77B & Upcycle 72.8 57.0 35.2 69.9 \u21ccMistral 8\u00d77B & Upcycle 73.2 56.4 35.7 70.5 \u21ccMixtral 8\u00d77B 74.2 60.6 40.0 72.6 Table 5. Ablation study on the LLM-MoE module. Mixtral 8\u00d77B outperforms upcycled Mistral MoE models significantly. The detailed breakdown of the training dataset is listed in Appendix A. Evaluation Benchmarks Evaluation of CuMo models primarily focuses on academic VQA-based datasets such as VQAv2 [21], GQA [24], Science-QA [50], and TextVQA [62], as well as instruction-following-based LMM benchmarks including POPE [40], MME [19], MMBench [49], SEED-Bench [37], LLaVA-Wild [47], and MM-Vet [71]. Additionally, the challenging MMMU [72] and MathVista [51] datasets are evaluated to assess the visual reasoning abilities of the multimodal LLMs. Training Settings We employ the pre-trained CLIP ViTL [57] as the vision encoder, a two-layer MLP as the visionlanguage connector, and Mistral-7B [29] as the LLM to establish the baseline model following LLaVA v1.5 [46]. We only use LLaVA-558K [46] as pre-training data and LLaVA-665K [46] as visual instruction tuning data to train the baseline model and make ablation studies for comparisons. The learning rate is set to 1e-3 for pre-training the MLP connector and reduced to 2e-5 for visual instruction tuning of both the MLP connector and CLIP. To further stabilize the visual instruction tuning process after scaling up with additional data, the learning rate is lowered to 2e-6 for all parameters of the CuMo models in the final results. More hyperparameters of the training process is listed in Appendix B. 
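The three-stage recipe and learning rates described above can be summarized in a small configuration sketch. All field names are hypothetical; only the stage ordering, the trainable modules, and the stated 1e-3 and 2e-6 learning rates come from the text (2e-5 is the ablation-setting rate, and no rate is stated for the pre-finetuning warm-up, so it is left unspecified).

```python
# Hypothetical config sketch of the three-stage CuMo training recipe.
STAGES = [
    # Stage 1: only the MLP connector is trained, at lr 1e-3.
    {"name": "pre_training", "trainable": ["mlp_connector"],
     "lr": 1e-3, "moe": False},
    # Stage 2: all parameters warm up on caption data; lr not stated in text.
    {"name": "pre_finetuning", "trainable": ["clip", "mlp_connector", "llm"],
     "lr": None, "moe": False},
    # Stage 3: MoE blocks are introduced via co-upcycling; 2e-6 is the
    # final-run rate (2e-5 was used in the ablation setting).
    {"name": "visual_instruction_tuning",
     "trainable": ["clip", "mlp_connector", "llm"],
     "lr": 2e-6, "moe": True},
]

def moe_stages():
    # Stages that actually train sparsely-gated MoE blocks.
    return [s["name"] for s in STAGES if s["moe"]]
```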
Evaluation Settings During evaluation, we adhere to the settings outlined in the LLaVA series [46], employing a greedy decoding strategy for all benchmarks. The data and questions are converted into visual instructions to prompt the multimodal LLMs. For benchmarks that utilize GPT API for evaluation, we adopt gpt-4-0613 for LLaVAWild [47] and gpt-3.5-turbo for MathVista [51]. 4.2. Main Results Comparison with SoTA Multimodal LLMs In Table 1, we present a comparison of CuMo models with other stateof-the-art instruction-following-based multimodal LLMs. We categorize the models based on the size of the base LLMs, including 7B models, 13B models, and 7B MoE models. CuMo Mistral-7B outperforms other 7B-based state-of-the-art multimodal LLMs across multiple benchmarks. Moreover, the performance of the CuMo Mistral7B model is comparable to many 13B-based multimodal LLMs. In the case of Mixtral-8\u00d77B models, CuMo achieves results on par with SPHINX-MoE, MM1, and Mini-Gemini. LLaMA-based LLMs [11, 67] are not utilized in our experiments due to license constraints. Comparison under limited training data To further evaluate the effectiveness of the co-upcycled MoE blocks, we 6 \f1\u00d7 2\u00d7 3\u00d7 SQA VQAT MMVet SEED \u2713 71.7 59.3 34.3 69.6 \u2713 \u2713 71.7 60.6 35.0 69.7 \u2713 \u2713 72.9 61.0 37.0 69.7 \u2713 \u2713 \u2713 72.2 60.5 36.9 70.1 Table 6. Ablation study on multi-resolution image features. The combination of 3\u00d7 and 1\u00d7 is adopted for the final models in Table 1. Method SQA VQAT MMVet SEED No PFT 71.7 59.3 34.3 69.6 + ShareGPT4V 72.4 61.7 36.5 70.0 \u21ccALLaVA 73.0 62.8 37.2 70.9 Table 7. Ablation study on the pre-finetuning stage. ALLaVA is chosen for pre-finetuning due to its provision of high-quality image caption data. train the vanilla CuMo mistral-7B under limited training data in Table 2. It shows that CuMo outperforms other 7B models and reaches comparable performance to LLaVAv1.5 Vicuna-13B under the same training data. 4.3. 
Ablation Study Upcycle MLP connector to MLP-MoE We initiate the ablation study by replacing the MLP connector with an upcycled MLP-MoE, as depicted in Table 3. We start with a Top 2-in-4 router and train the MoE blocks from scratch, which leads to a clear performance drop on all benchmarks. Then, we adopt the upcycling strategy to initialize the MLP experts. We observe marginal improvements over the baseline, considering each expert comprises only two linear layers. Subsequently, the incorporation of bzloss to ensure a balanced loading of experts in the MLP-MoE yields noticeable enhancements on MMVet. However, employing a Top 2-in-8 router with upcycling and bzloss results in a slight performance decline, possibly because the visual instruction tuning data is too limited to train eight robust and well-balanced experts. Empower CLIP with CLIP-MoE In Table 4, initially unfreezing CLIP based on MLP-MoE leads to noticeable improvements on the TextVQA and MMVet benchmarks. However, training the added Top 2-in-4 MoE blocks in CLIP from scratch proves unsuccessful, as the model fails to converge even with reduced learning rates. Consequently, adopting upcycled MoE blocks during the visual instruction tuning stage yields further enhancements on the TextVQA, MMVet, and SEED benchmarks. Upcycle LLM vs Pre-trained LLM-MoE Upon replacing all MLP blocks with sparsely-gated MoE blocks in the visual part, we further investigate the utilization of the MoE architecture in the LLM. [Figure 5. Expert distributions of MoE blocks in CLIP. We select layers from CLIP and summarize the activated experts during the feed-forward process on the MME test set.] Starting from the Mistral-7B model, we first lower the learning rate to 2e-6 to set the baseline for the following experiments, since a learning rate of 2e-5 induces training instabilities.
Then, we upcycle each MLP block with a sparsely-gated MoE block, initializing the weight of each expert from the pre-trained MLP block. As demonstrated in Table 5, the upcycled Mistral4\u00d77B and 8\u00d77B outperform the Mistral-7B model slightly except for TextVQA. However, considering that the upcycled experts significantly increase parameters without introducing new knowledge, we replace the upcycled Mistral 8\u00d77B with Mixtral 8\u00d77B [30]. In Mixtral 8\u00d77B, all expert layers are pre-trained on large-scale language data, providing superior initialization compared to upcycling. The results indicate that CuMo Mixtral-8x7B outperforms its upcycled counterparts significantly and is employed in the final models with bzloss to maintain a balanced loading of experts. Multi-Resolution Visual Features Incorporating multiresolution inputs is crucial for enhancing the understanding of image content in multimodal LLMs. Following the approach outlined in S2[61], we introduce multi-resolution inputs to CLIP and concatenate the feature maps channelwise to maintain the total number of visual tokens consistent with low-resolution inputs. As illustrated in Table 6, an empirical combination of 3\u00d7 and 1\u00d7 yields optimal performance and we adopt this configuration for the final CuMo models. Pre-FineTuning Stage Previous ablation studies were conducted directly after the pre-training of the MLP connector, leading to observed training instabilities during visual instruction tuning. To address this, we introduce a pre-finetuning stage using high-quality image caption data, wherein all parameters are unfrozen. In Table 7, we leverage caption data from ALLaVA for this stage. Results indicate that ALLaVA data proves to be a superior option, 7 \fcan you describe what groceries are presented in this fridge CuMo-Mistral-7B The groceries presented in the fridge include a variety of fresh produce such as bell peppers, lettuce, broccoli, and cucumbers. 
There are also eggs, milk, and cheese, which are dairy products. Additionally, there are cherry tomatoes and a jar of red sauce, which could be used for cooking or as a condiment. The presence of these items suggests that the fridge is well-stocked with a balance of fresh vegetables, dairy, and condiments, indicating a household that values a healthy and varied diet. LLaVA-Yi-34B The refrigerator in the image is well-stocked with a variety of groceries. Here's a list of what I can see: on the top shelf, there are several items including what appears to be a head of lettuce, a bunch of green onions, a container of strawberries, and a few blue plastic containers that could be used for food storage or serving; the middle shelf has a few bottles, which might contain milk or juice, and a carton of eggs.; the bottom shelf is filled with fresh produce. There are bell peppers in different colors (red, yellow, and green), a bunch of spring onions, and a head of lettuce; the drawer at the bottom contains more fresh produce, including what looks like a bunch of green onions and possibly some other leafy greens. The refrigerator is well-organized and appears to be stocked with fresh and healthy food options. MiniGemini-Yi-34B The refrigerator is filled with a variety of groceries, including fruits, vegetables, dairy products, and beverages. There are several bottles, likely containing milk or other dairy products, and a carton of eggs. The fridge also contains a selection of fresh produce, such as broccoli, peppers, and lettuce. Additionally, there are some fruits, including apples and oranges. The presence of these items suggests that the person who owns the fridge prefers a balanced diet with a mix of healthy foods and beverages. how many people are in this image CuMo-Mistral-7B In the image, there are a total of 14 characters. This includes 12 characters seated around the table and 2 characlters standing on the table. 
The characters are a mix of human and skeletal figures, with the skeleton figures being part of the Day of the Dead celebration. LLaVA-Yi-34B The image you've provided is from the animated movie \"Coco.\" It features a variety of characters, both human and fantastical, which are not real people but animated characters. The central figure is a young boy named Miguel, who is holding a guitar. Surrounding him are other characters, including his family members and other characters from the film. The characters are depicted in a stylized, animated manner, and the image is a still from the movie. MiniGemini-Yi-34B In the image, there are a total of 12 people. This includes the boy in the foreground, the two skeletons, and the nine characters seated around the table. It's important to note that the skeletons are representations of people and are included in the count. can you introduce this movie based on this poster CuMo-Mistral-7B The poster is for the movie \"The Avengers,\" which is part of the Marvel Cinematic Universe. It features a team of superheroes, each with unique abilities and costumes, coming together to face a significant threat. The central figure, with the iconic red and gold suit, is likely the leader of the group. The presence of the Hulk and Thor suggests that the film includes a mix of science fiction and fantasy elements. The cityscape and the explosion in the background imply that the movie's plot involves a large-scale conflict that impacts the urban environment. The release date and the mention of \"3D and IMAX 3D\" indicate that the movie was available in various formats, catering to different viewing preferences. The poster is designed to convey the epic scale and the high stakes of the film, as well as the diverse team of heroes that viewers can expect to see on screen. LLaVA-Yi-34B This poster is for the movie \"Avengers: Age of Ultron,\" which is a 2015 American superhero film based on the Marvel Comics superhero team the Avengers. 
The film is the sequel to \"The Avengers\" (2012) and the 11th film in the Marvel Cinematic Universe (MCU). It was directed by Joss Whedon and produced by Marvel Studios. The poster features several of the main characters from the film, including Iron Man, Thor, Captain America, the Hulk, Black Widow, and Hawkeye, all in their superhero costumes. The setting appears to be a cityscape with a sense of destruction, suggesting a battle or conflict. The title \"Avengers: Age of Ultron\" is prominently displayed at the top of the poster, and the Marvel logo is visible at the bottom. The film's plot revolves around the Avengers attempting to stop the villain Ultron from enacting his plans for human extinction. The movie is known for its action-packed sequences, special effects, and ensemble cast of superheroes. It was a critical and commercial success, contributing to the ongoing popularity of the Marvel Cinematic Universe. MiniGemini-Yi-34B The movie poster is for \"The Avengers,\" a 2012 American superhero film produced by Marvel Studios and distributed by Walt Disney Pictures. It is the sixth film in the Marvel Cinematic Universe (MCU). The film features an ensemble cast including Robert Downey Jr., Chris Evans, Mark Ruffalo, Chris Hemsworth, Scarlett Johansson, Jeremy Renner, Tom Hiddleston, and Samuel L. Jackson. The plot revolves around Nick Fury, director of the peacekeeping organization S.H.I.E.L.D., recruiting a team of superheroes to help save the world from Loki, the adoptive brother of Thor, who is leading an alien invasion. The poster is designed to showcase the main characters and the action-packed nature of the film, with the characters in their iconic costumes and the cityscape in the background suggesting the scale of the conflict. The release date is prominently displayed, indicating the film's availability in theaters. 
The poster also highlights the film's availability in various formats, including 3D, which was a significant selling point at the time of its release. Figure 6. Dialogues between the user and multimodal LLMs on challenging images. We highlight the correct answers and hallucinations from the responses of the multimodal LLMs. providing fewer but higher-quality captions for training, ultimately leading to improved performance. 4.4. Qualitative Analysis Expert Distribution As shown in Figure 5, we visualize the expert distributions in the MoE block from selected layers at CLIP-MoE. The dataset analyzed is the test set of the MME 8 \fbenchmark. The distribution indicates that the selected experts during inference are evenly spread across layers, providing further evidence of the effectiveness of the auxiliary losses in maintaining load balance. Dialogue Comparisons Presented in Figure 6, we contrast the responses from CuMo-Mistral-7B, LLaVA-Yi-34B, and MiniGemini-Yi-34B. It demonstrates that CuMo-Mistral7B can effectively follow instructions and predominantly provide correct answers to challenging questions derived from complex scenes. However, CuMo also exhibits instances of hallucinations, such as responding with \u201c2 characters standing on the table,\u201d highlighting the need for further investigation to mitigate hallucinations in CuMo. 5." 
+} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05953v1.json b/abs_9K/test_abstract_short_2405.05953v1.json new file mode 100644 index 0000000000000000000000000000000000000000..bcd17292110397c21a1dcf4bbcd6eaec2effd2df --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05953v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2405.05953v1", + "title": "Frame Interpolation with Consecutive Brownian Bridge Diffusion", + "abstract": "Recent work in Video Frame Interpolation (VFI) tries to formulate VFI as a\ndiffusion-based conditional image generation problem, synthesizing the\nintermediate frame given a random noise and neighboring frames. Due to the\nrelatively high resolution of videos, Latent Diffusion Models (LDMs) are\nemployed as the conditional generation model, where the autoencoder compresses\nimages into latent representations for diffusion and then reconstructs images\nfrom these latent representations. Such a formulation poses a crucial\nchallenge: VFI expects that the output is deterministically equal to the ground\ntruth intermediate frame, but LDMs randomly generate a diverse set of different\nimages when the model runs multiple times. The reason for the diverse\ngeneration is that the cumulative variance (variance accumulated at each step\nof generation) of generated latent representations in LDMs is large. This makes\nthe sampling trajectory random, resulting in diverse rather than deterministic\ngenerations. To address this problem, we propose our unique solution: Frame\nInterpolation with Consecutive Brownian Bridge Diffusion. Specifically, we\npropose consecutive Brownian Bridge diffusion that takes a deterministic\ninitial value as input, resulting in a much smaller cumulative variance of\ngenerated latent representations. 
Our experiments suggest that our method can\nimprove together with the improvement of the autoencoder and achieve\nstate-of-the-art performance in VFI, leaving strong potential for further\nenhancement.", + "authors": "Zonglin Lyu, Ming Li, Jianbo Jiao, Chen Chen", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Recent work in Video Frame Interpolation (VFI) tries to formulate VFI as a\ndiffusion-based conditional image generation problem, synthesizing the\nintermediate frame given a random noise and neighboring frames. Due to the\nrelatively high resolution of videos, Latent Diffusion Models (LDMs) are\nemployed as the conditional generation model, where the autoencoder compresses\nimages into latent representations for diffusion and then reconstructs images\nfrom these latent representations. Such a formulation poses a crucial\nchallenge: VFI expects that the output is deterministically equal to the ground\ntruth intermediate frame, but LDMs randomly generate a diverse set of different\nimages when the model runs multiple times. The reason for the diverse\ngeneration is that the cumulative variance (variance accumulated at each step\nof generation) of generated latent representations in LDMs is large. This makes\nthe sampling trajectory random, resulting in diverse rather than deterministic\ngenerations. To address this problem, we propose our unique solution: Frame\nInterpolation with Consecutive Brownian Bridge Diffusion. Specifically, we\npropose consecutive Brownian Bridge diffusion that takes a deterministic\ninitial value as input, resulting in a much smaller cumulative variance of\ngenerated latent representations. 
Our experiments suggest that our method can\nimprove together with the improvement of the autoencoder and achieve\nstate-of-the-art performance in VFI, leaving strong potential for further\nenhancement.", "main_content": "INTRODUCTION Video Frame Interpolation (VFI) aims to generate high frame-per-second (fps) videos from low fps videos by estimating the intermediate frame given its neighboring frames. High-quality frame interpolation contributes to other practical applications such as novel view synthesis [14], video compression [58], and high-fps cartoon synthesis [47]. Current works in VFI can be divided into two categories in terms of methodology: flow-based methods [1, 7, 12, 18, 20, 24, 29, 32, 34, 39, 42, 47, 60] and kernel-based methods [4, 5, 9, 27, 36, 37, 46]. Flow-based methods compute flows in the neighboring frames and forward warp neighboring images and features [18, 24, 34, 35, 47] or estimate flows from the intermediate frame to neighboring frames and backward warp neighboring frames and features [1, 7, 12, 20, 29, 32, 39, 42, 60]. Instead of relying on optical flows, kernel-based methods predict convolution kernels for pixels in the neighboring frames. Recent advances in flow estimation [19, 21\u201323, 51, 52, 57] make it more popular to adopt flow-based methods in VFI. arXiv:2405.05953v1 [cs.CV] 9 May 2024 \fConference\u201917, July 2017, Washington, DC, USA Zonglin Lyu, Ming Li, Jianbo Jiao, and Chen Chen Beyond these two categories of methods, MCVD [55] and LDMVFI [11] start formulating VFI as a diffusion-based image generation problem. LDMVFI considers VFI as a conditional generation task with Latent Diffusion Models (LDMs) [43], where LDMs contain an autoencoder that compresses images into latent representations and reconstructs images from latent representations. Diffusion models [17] run in the latent space of the autoencoder.
Though diffusion models achieve excellent performance in image generation, there remain challenges in applying them to VFI. (1) The formulation of diffusion models results in a large cumulative variance (the variance accumulated during sampling) of generated latent representations. The sampling process starts with standard Gaussian noise and adds small Gaussian noise to the denoised output at each step based on a pre-defined distribution. After the sampling process, images are generated, but these noises also add up to a large cumulative variance. Though such a variance is beneficial to diversity (i.e. repeated sampling results in different outputs), VFI requires that repeated sampling returns identical results, which is the ground truth intermediate frame. Therefore, a small cumulative variance is preferred in VFI. The relation of the cumulative variance and diversity is supported by the fact that DDIM [48] tends to generate relatively deterministic images than DDPM [17]. DDIM removes small noises at each sampling step, so the cumulative variance in DDIM is lower. LDMVFI [11] uses conditional generation as guidance, but this does not change the nature of large cumulative variance. In Section 3.4, we show that our method has a much lower cumulative variance than conditional generation. (2) Videos usually have high resolution, which can be up to 4K [41], resulting in practical constraints to apply diffusion models [17] in pixel spaces. It is natural to apply Latent Diffusion Models (LDMs) [43] to sample latent representations and reconstruct them back to images. LDMs apply VQModels in VQGAN [13] to compress images into latent representations and reconstruct images from latent representations. However, it does not take advantage of neighboring frames, which can be a good guide to reconstruction. 
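The cumulative-variance contrast in challenge (1) can be illustrated with the classical (single) Brownian bridge, the process underlying the proposed method: a trajectory pinned to deterministic values at both endpoints has marginal variance t(T - t)/T, which vanishes as t approaches either endpoint, whereas DDPM-style ancestral sampling injects noise whose total variance does not collapse at the end. The NumPy simulation below is a generic textbook Brownian bridge, not the paper's consecutive formulation.

```python
import numpy as np

def bridge_variance(t, T=1.0):
    # Marginal variance of a standard Brownian bridge pinned at both ends:
    # Var[x_t | x_0, x_T] = t * (T - t) / T, zero at t = 0 and t = T.
    return t * (T - t) / T

def simulate_bridge(x0, xT, steps=1000, trials=2000, seed=0):
    # Euler simulation of the bridge SDE dx = (xT - x)/(T - t) dt + dW.
    rng = np.random.default_rng(seed)
    T = 1.0
    dt = T / steps
    x = np.full(trials, float(x0))
    t = 0.0
    for _ in range(steps - 1):  # stop one step before T so (T - t) > 0
        x += (xT - x) / (T - t) * dt + np.sqrt(dt) * rng.standard_normal(trials)
        t += dt
    return x
```

Despite the noise injected at every step, the drift toward the pinned endpoint makes the sample variance collapse near t = T, which is the property that makes repeated sampling nearly deterministic.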
LDMVFI designs reconstruction models that leverage neighboring frames, but it tends to reconstruct overlaid images when there is relatively large motion between neighboring frames, possibly due to the cross-attention with features of the neighboring frames, as shown in Figure 1. To tackle these challenges, we propose a consecutive Brownian Bridge diffusion model (in latent space) that transits among three deterministic endpoints for VFI. This method yields a much smaller cumulative variance, achieving a better estimate of the ground truth. We separate LDM-based VFI methods into two parts: autoencoder and ground truth estimation (with diffusion). This differs from the original LDMs [43], where the latent representation generated by diffusion does not aim to estimate any particular ground truth. It also differs from LDMVFI [11], which does not consider the performance of the autoencoder separately from the interpolation method. With such a two-stage separation, we can evaluate the two parts separately to identify specific directions for improvement. Moreover, we leverage flow estimation and refinement methods from recent literature [32] to improve the autoencoder: the feature pyramids of the neighboring frames are warped based on estimated optical flows, alleviating the issue of reconstructing overlaid images. In experiments, our method improves by a large margin when the autoencoder is improved and achieves state-of-the-art performance. Our contributions can be summarized in three parts: • We propose a new consecutive Brownian Bridge diffusion model for VFI and justify its advantages over traditional diffusion models: lower cumulative variance and better ground truth estimation capability. Additionally, we provide a cleaner formulation of Brownian Bridges and propose loss weights for different times in the Brownian Bridge. • We formulate diffusion-based VFI as two stages: autoencoder and ground truth estimation.
This is a novel interpretation of LDM-based VFI that provides specific directions for improvement. • Through extensive experiments, we validate the effectiveness of our method. Our method estimates the ground truth better than traditional diffusion with conditional generation. Moreover, its performance improves as the autoencoder improves, achieving state-of-the-art results with a simple yet effective autoencoder and indicating strong potential in VFI. 2 RELATED WORKS 2.1 Video Frame Interpolation Video frame interpolation can be roughly divided into two categories by methodology: flow-based methods [1, 7, 12, 18, 20, 24, 29, 32, 34, 39, 42, 47, 60] and kernel-based methods [4, 5, 9, 27, 36, 37, 46]. Flow-based methods assume certain motion models: a few works assume non-linear motion [7, 12], while the others assume linear motion. Under such assumptions, flow-based methods estimate flows in two ways. Some estimate flows from the intermediate frame to the neighboring frames (or the reverse) and apply backward warping to the neighboring frames and their features [1, 7, 12, 20, 29, 32, 39, 42, 60]. Others compute flows between the neighboring frames and apply forward splatting [18, 24, 34, 35, 47]. Beyond this basic framework, refinements such as recurrent processing of inputs at different resolution levels [24], cross-frame attention [60], and 4D correlations [29] have been proposed to improve performance. Kernel-based methods, introduced by [36], predict the convolution kernels applied to the neighboring frames to generate the intermediate frame, but they have difficulty dealing with large displacements. Follow-up works [5, 9, 27] alleviate this issue by introducing deformable convolution. LDMVFI [11] recently introduced a method based on Latent Diffusion Models (LDMs) [43], formulating VFI as a conditional generation task.
LDMVFI uses an autoencoder, introduced by LDMs, to compress images into latent representations so that the diffusion process can run efficiently, and then reconstructs images from the latent space. Instead of directly predicting image pixels during reconstruction, it takes upsampled latent representations in the autoencoder as inputs to predict convolution kernels, as in kernel-based methods, to complete the VFI task. 2.2 Diffusion Models The diffusion model was introduced for the image generation task by DDPM [17] and achieves excellent performance in high-fidelity, high-diversity image generation. The whole diffusion model can be split into a forward diffusion process and a backward sampling process. The forward diffusion process is defined as a Markov chain with steps $t = 1, \ldots, T$, and the backward sampling process aims to estimate the distribution of the reversed Markov chain. The variance of the reversed Markov chain has a closed-form solution, and the expected value of the reversed Markov chain is estimated with a deep neural network. Despite strong performance in image generation, DDPM [17] requires $T = 1000$ iterative steps to generate images, resulting in inefficient generation. Sampling steps cannot be skipped without largely degrading performance because, due to the Markov property, the conditional distribution at step $t-2$ must be computed from the conditional distributions at steps $t-1$ and $t$. To enable efficient, high-quality generation, DDIM [48] proposes a non-Markovian formulation of diffusion models in which the conditional distribution at time $t-k$ ($k > 0$) can be computed directly from the conditional distribution at time $t$; skipping steps therefore does not largely degrade performance.
Score-based SDEs [3, 49, 63] are an alternative formulation of diffusion models that writes the diffusion process as a Stochastic Differential Equation [38], where the reversed process has a closed-form continuous-time formulation and can be solved with Euler's method in a few steps [49]. In addition, the Probability Flow ODE is proposed as a deterministic process that shares the same marginal distribution as the reversed SDE [49]. Following score-based SDEs, several works propose efficient methods to estimate the solution of the Probability Flow ODE [30, 31]. Rather than modifying the diffusion process itself, DeepCache [33] proposes a feature caching and sharing mechanism in the denoising UNet, enabling parallel and skipped computation for further efficiency. To deal with high-resolution images, the Latent Diffusion Model [43] introduces an autoencoder with a Vector Quantization layer (VQ layer) that compresses and reconstructs images, and the diffusion model runs on the compressed representations. With such an autoencoder, high-resolution images can be generated efficiently. Beyond accelerating generation, diffusion models have also been applied to conditional generation tasks [3, 6, 28, 43, 45, 61, 63] such as generation conditioned on poses or skeletons, image inpainting, etc. 3 METHODOLOGY In this section, we first review preliminaries on the Diffusion Model (DDPM) [17] and the Brownian Bridge Diffusion Model (BBDM) [28], and give an overview of our two-stage formulation: autoencoder and ground truth estimation (with consecutive Brownian Bridge diffusion). We then discuss the details of our autoencoder. Finally, we present our solution to the frame interpolation task: consecutive Brownian Bridge diffusion. 3.1 Preliminaries Diffusion Model.
The forward diffusion process of the Diffusion Model [17] is defined as: $q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{1-\beta_t}\,\mathbf{x}_{t-1}, \beta_t \mathbf{I})$. (1) When $t = 1$, $\mathbf{x}_{t-1} = \mathbf{x}_0$ is sampled from the data (images). By iterating Eq. (1), we get the conditional marginal distribution of $\mathbf{x}_t$ [17]: $q(\mathbf{x}_t \mid \mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t; \sqrt{\alpha_t}\,\mathbf{x}_0, (1-\alpha_t)\mathbf{I})$, (2) where $\alpha_t = \prod_{s=1}^{t}(1-\beta_s)$. The sampling process can be derived with Bayes' theorem [17]: $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = q(\mathbf{x}_{t-1} \mid \mathbf{x}_0, \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \tilde{\mu}_t, \tilde{\beta}_t \mathbf{I})$, (3) where $\tilde{\mu}_t = \frac{\sqrt{\alpha_{t-1}}\,\beta_t}{1-\alpha_t}\mathbf{x}_0 + \frac{\sqrt{1-\beta_t}\,(1-\alpha_{t-1})}{1-\alpha_t}\mathbf{x}_t$, (4) and $\tilde{\beta}_t = \frac{1-\alpha_{t-1}}{1-\alpha_t}\beta_t$. (5) Eq. (4) can be rewritten with Eq. (2) via reparameterization: $\tilde{\mu}_t = \frac{1}{\sqrt{1-\beta_t}}\left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\,\epsilon\right)$, where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$. (6) By Eq. (4) and (6), we only need to estimate $\epsilon$ to estimate $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)$. Therefore, the training objective is: $\mathbb{E}_{\mathbf{x}_0, \epsilon}\left[\lVert \epsilon_\theta(\mathbf{x}_t, t) - \epsilon \rVert_2^2\right]$.
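As a quick numerical sanity check on the relation between Eq. (1) and Eq. (2) (our own illustration; the linear beta schedule is an assumption, not stated here), iterating the one-step kernel reproduces the closed-form mean scale $\sqrt{\alpha_t}$ and variance $1 - \alpha_t$:

```python
import math

# Iterate the one-step forward kernel of Eq. (1) on a unit-scale, noise-free x_0
# and check the accumulated moments against the closed form of Eq. (2).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]  # assumed schedule

scale, var, alpha = 1.0, 0.0, 1.0
for b in betas:
    scale *= math.sqrt(1.0 - b)   # the mean shrinks by sqrt(1 - beta_t)
    var = (1.0 - b) * var + b     # old noise shrinks, fresh beta_t noise is added
    alpha *= 1.0 - b              # alpha_t = prod_{s<=t} (1 - beta_s)

assert abs(scale - math.sqrt(alpha)) < 1e-9   # matches sqrt(alpha_t)
assert abs(var - (1.0 - alpha)) < 1e-9        # matches 1 - alpha_t
```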
(7) It suffices to train a neural network $\epsilon_\theta(\mathbf{x}_t, t)$ to predict $\epsilon$. Brownian Bridge Diffusion Model. A Brownian Bridge [44] is a stochastic process that transits between two fixed endpoints, formulated as $X_t = W_t \mid (W_{t_1}, W_{t_2})$, where $W_t$ is a standard Wiener process with distribution $\mathcal{N}(0, t)$. We can write a Brownian Bridge as $X_t = W_t \mid (W_0, W_T)$ to define a diffusion process. When $W_0 = a$ and $W_T = b$, it follows a normal distribution: $X_t \sim \mathcal{N}\left(\left(1-\frac{t}{T}\right)a + \frac{t}{T}b,\; \frac{tT - t^2}{T}\right)$. (8) BBDM [28] develops an image-to-image translation method based on the Brownian Bridge process by treating $a$ and $b$ as two images. The forward diffusion process is defined as: $q(\mathbf{x}_t \mid \mathbf{x}_0, \mathbf{y}) = \mathcal{N}\left(\mathbf{x}_t; (1-m_t)\mathbf{x}_0 + m_t \mathbf{y}, \delta_t \mathbf{I}\right)$, (9) where $m_t = \frac{t}{T}$ and $\delta_t = 2s(m_t - m_t^2)$. (10) $\mathbf{x}_0$ and $\mathbf{y}$ are two images, and $s$ is a constant that controls the maximum variance of the Brownian Bridge. The sampling process is derived from Bayes' theorem [28]: $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{y}) = q(\mathbf{x}_{t-1} \mid \mathbf{x}_0, \mathbf{x}_t, \mathbf{y}) = \frac{q(\mathbf{x}_t \mid \mathbf{x}_{t-1}, \mathbf{y})\, q(\mathbf{x}_{t-1} \mid \mathbf{x}_0, \mathbf{y})}{q(\mathbf{x}_t \mid \mathbf{x}_0, \mathbf{y})} = \mathcal{N}(\tilde{\mu}_t, \tilde{\delta}_t \mathbf{I}).
(11) where $\tilde{\mu}_t = c_{xt}\mathbf{x}_t + c_{yt}\mathbf{y} + c_{\epsilon t}\left(m_t(\mathbf{y} - \mathbf{x}_0) + \sqrt{\delta_t}\,\epsilon\right)$, $c_{xt} = \frac{\delta_{t-1}}{\delta_t}\,\frac{1-m_t}{1-m_{t-1}} + \frac{\delta_{t|t-1}}{\delta_t}(1-m_t)$, $c_{yt} = m_{t-1} - m_t\,\frac{1-m_t}{1-m_{t-1}}\,\frac{\delta_{t-1}}{\delta_t}$, $c_{\epsilon t} = (1-m_{t-1})\,\frac{\delta_{t|t-1}}{\delta_t}$, and $\delta_{t|t-1} = \delta_t - \delta_{t-1}\,\frac{(1-m_t)^2}{(1-m_{t-1})^2}$. It suffices to train a deep neural network $\epsilon_\theta$ to estimate the term $c_{\epsilon t}\left(m_t(\mathbf{y} - \mathbf{x}_0) + \sqrt{\delta_t}\,\epsilon\right)$, and therefore the training objective is $\mathbb{E}_{\mathbf{x}_0, \mathbf{y}, \epsilon}\left[c_{\epsilon t}\lVert m_t(\mathbf{y} - \mathbf{x}_0) + \sqrt{\delta_t}\,\epsilon - \epsilon_\theta(\mathbf{x}_t, t) \rVert_2^2\right]$.
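The pinned-endpoint behavior behind Eq. (8) is easy to check numerically. A minimal sketch (our illustration, scalar endpoints $a$, $b$) of the bridge's marginal moments:

```python
def brownian_bridge_moments(t, T, a, b):
    """Mean and variance of W_t | (W_0 = a, W_T = b), cf. Eq. (8)."""
    mean = (1.0 - t / T) * a + (t / T) * b
    var = t * (T - t) / T          # = (tT - t^2) / T
    return mean, var

# Both endpoints are deterministic (variance 0); variance peaks at the midpoint.
assert brownian_bridge_moments(0.0, 2.0, -1.0, 3.0) == (-1.0, 0.0)
assert brownian_bridge_moments(2.0, 2.0, -1.0, 3.0) == (3.0, 0.0)
assert brownian_bridge_moments(1.0, 2.0, -1.0, 3.0) == (1.0, 0.5)
```

The midpoint variance $T/4$ is exactly the "maximum variance" that BBDM's constant $s$ (and later this paper's choice $T = 2s$) controls.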
[Figure 2 comprises three panels: (a) Autoencoder (stage 1), (b) Ground Truth Estimation (stage 2), and (c) Inference.] Figure 2: The illustration of our two-stage method. The encoder is shared for all frames. (a) The autoencoder stage. In this stage, the previous frame $I_0$, intermediate frame $I_n$, and next frame $I_1$ are encoded by the encoder to $\mathbf{y}$, $\mathbf{x}$, $\mathbf{z}$ respectively. Then $\mathbf{x}$ is fed to the decoder, together with the encoder features of $I_0$, $I_1$ at different down-sampling factors. The decoder predicts the intermediate frame as $\hat{I}_n$. The encoder and decoder are trained in this stage. (b) The ground truth estimation stage.
In this stage, $\mathbf{y}$, $\mathbf{x}$, $\mathbf{z}$ are fed to the consecutive Brownian Bridge diffusion as three endpoints, where we sample two states that move time step $s$ away from $\mathbf{x}$ in both directions. The UNet predicts the difference between the current state and $\mathbf{x}$. The autoencoder is well-trained and frozen in this stage. (c) Inference. $\hat{\mathbf{x}}$ is sampled from $\mathbf{y}$, $\mathbf{z}$ to estimate $\mathbf{x}$ (details in Section 3.4). The decoder receives $\hat{\mathbf{x}}$ and encoder features of $I_0$, $I_1$ at different down-sampling factors to interpolate the intermediate frame. 3.2 Formulation of Diffusion-based VFI The goal of video frame interpolation is to estimate the intermediate frame $I_n$ given the previous frame $I_0$ and the next frame $I_1$; $n$ is set to 0.5 to interpolate the frame in the middle of $I_0$ and $I_1$. In latent diffusion models [43], an autoencoder encodes images to latent representations and decodes images from latent representations. The diffusion model starts from standard Gaussian noise, denoises it via the sampling process, and the denoised latent representation is decoded back into an image. Since the initial noise is random, repeated sampling under the same conditions (such as poses) yields diverse decoded images. Instead of diversity, VFI seeks a deterministic ground truth: the intermediate frame. This ground truth frame is encoded into a ground truth latent representation by the encoder, and only this latent representation needs to be estimated, since the decoder will decode it back into the frame. Therefore, LDM-based VFI can be split into two stages: autoencoder and ground truth estimation. The two stages are defined as: (1) Autoencoder. The primary function of the autoencoder is similar to image compression: compressing images into latent representations so that the diffusion model can run efficiently.
We denote $\mathbf{x}$, $\mathbf{y}$, $\mathbf{z}$ as the encoded latent representations of $I_n$, $I_0$, $I_1$. In this stage, the goal is to compress $I_n$ into $\mathbf{x}$ with an encoder and then reconstruct $I_n$ from $\mathbf{x}$ with a decoder. $\mathbf{x}$ is provided to the decoder together with the neighboring frames $I_0$, $I_1$ and their encoder features at different down-sampling factors. An overview of this stage is shown in Figure 2 (a). However, to interpolate the intermediate frame, $\mathbf{x}$ is unknown, so we need to estimate this ground truth. (2) Ground truth estimation. In this stage, the goal is to accurately estimate $\mathbf{x}$ with a diffusion model. The diffusion model converts $\mathbf{x}$ to $\mathbf{y}$, $\mathbf{z}$ via the diffusion process, and we train a UNet to predict the difference between the current diffusion state and $\mathbf{x}$, as shown in Figure 2 (b). The sampling process of the diffusion model then converts $\mathbf{y}$, $\mathbf{z}$ back to $\mathbf{x}$ using the UNet output. The autoencoder is modeled with a VQModel [43] in Section 3.3, and ground truth estimation is accomplished by our proposed (latent) consecutive Brownian Bridge diffusion in Section 3.4. During inference, the two stages are combined as shown in Figure 2 (c), where we decode the diffusion-generated latent representation $\hat{\mathbf{x}}$. Via this formulation, we obtain more specific directions for improving VFI quality. If images decoded from $\mathbf{x}$ (Figure 2 (a)) have visual quality similar to images decoded from $\hat{\mathbf{x}}$ (Figure 2 (c)), then the diffusion model achieves strong performance in ground truth estimation, so effort is best spent developing a better autoencoder. Conversely, the performance of ground truth estimation can potentially be improved by redesigning the diffusion model. 3.3 Autoencoder Diffusion models running in pixel space are extremely inefficient for video interpolation because videos can be up to 4K in real life [41].
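Before detailing each stage, the two-stage split above can be sketched end-to-end. In this sketch, `encode`, `decode`, and `sample` are hypothetical stand-ins for the trained encoder, decoder, and consecutive-Brownian-Bridge sampler, wired together only to show the data flow:

```python
def interpolate(I0, I1, encode, decode, sample):
    """Two-stage inference: encode the neighbors, estimate the ground-truth
    latent with the diffusion sampler, then decode with neighbor features."""
    y, feats_y = encode(I0)   # stage-1 encoder (shared for all frames)
    z, feats_z = encode(I1)
    x_hat = sample(y, z)      # stage 2: ground truth estimation
    return decode(x_hat, feats_y, feats_z)  # stage-1 decoder

# Toy scalar stand-ins: the "latent" is the value itself and the
# sampler simply averages the two endpoints.
mid = interpolate(
    0.0, 2.0,
    encode=lambda img: (img, None),
    decode=lambda x, fy, fz: x,
    sample=lambda y, z: (y + z) / 2.0,
)
assert mid == 1.0
```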
Therefore, we encode images into a latent space with an encoder $\mathcal{E}$ and decode images from the latent space with a decoder $\mathcal{D}$. Features of $I_0$, $I_1$ are included because detailed information may be lost when images are encoded into latent representations [11]. We incorporate feature pyramids of the neighboring frames into the decoding stage as guidance because neighboring frames share a large amount of detail. [Figure 3 depicts the encoder (Down Sample Blocks with ResNet blocks and self-attention), the decoder (Up Sample Blocks with self/cross-attention and transposed convolutions), the VQ layer, the flow estimation and backward warping modules, and the $H$ and $\delta$ heads.] Figure 3: Architecture of the autoencoder. The encoder is in green dashed boxes, and the decoder contains all remaining parts. The output of the consecutive Brownian Bridge diffusion is fed to the VQ layer. The features of $I_0$, $I_1$ at different down-sampling rates are sent to the cross-attention modules in the Up Sample Blocks of the decoder. Given $I_n$, $I_0$, $I_1$, the encoder $\mathcal{E}$ outputs the encoded latent representations $\mathbf{x}$, $\mathbf{y}$, $\mathbf{z}$ for the diffusion model, together with feature pyramids of $I_0$, $I_1$ at different down-sampling rates, denoted $\{f_y^k\}$, $\{f_z^k\}$, where $k$ is the down-sampling factor. When $k = 1$, $\{f_y^k\}$ and $\{f_z^k\}$ represent the original images.
The decoder $\mathcal{D}$ takes the sampled latent representation $\hat{\mathbf{x}}$ (the output of the diffusion model that estimates $\mathbf{x}$) and the feature pyramids $\{f_y^k\}$, $\{f_z^k\}$ to reconstruct $I_n$. In equations: $\mathbf{x}, \mathbf{y}, \{f_y^k\}, \mathbf{z}, \{f_z^k\} = \mathcal{E}(I_n, I_0, I_1)$, and $\hat{I}_n = \mathcal{D}\left(\mathbf{x}, \{f_y^k\}, \{f_z^k\}\right)$. (12) Our encoder shares an identical structure with that of LDMVFI [11], and we slightly modify the decoder to better fit the VFI task. Decoding with Warped Features. LDMVFI [11] applies cross-attention [54] between the up-sampled $\hat{\mathbf{x}}$ and $f_y^k$, $f_z^k$, but keeping the features of the neighboring frames unwarped may preserve their original information (i.e., the motion in the previous and next frames). This is problematic since motion changes may be drastic across frames. Therefore, we estimate optical flows from $I_n$ to $I_0$, $I_1$ with a flow estimation module and apply backward warping to the feature pyramids. Suppose $\hat{\mathbf{x}}$ is generated by our consecutive Brownian Bridge diffusion and is up-sampled to $h^k$, where $k$ denotes the down-sampling factor relative to the original image.
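Backward warping itself is conceptually simple: each output location samples the source at its flow-displaced coordinate, with interpolation. A minimal 1-D sketch (our illustration; real feature maps are warped per-pixel in 2-D with bilinear sampling):

```python
def backward_warp_1d(signal, flow):
    """out[i] = signal[i + flow[i]], linearly interpolated, edge-clamped."""
    n = len(signal)
    out = []
    for i, f in enumerate(flow):
        p = min(max(i + f, 0.0), n - 1.0)   # clamp the sampling position
        lo = int(p)
        hi = min(lo + 1, n - 1)
        w = p - lo
        out.append((1.0 - w) * signal[lo] + w * signal[hi])
    return out

sig = [0.0, 1.0, 2.0, 3.0]
# A uniform flow of +1 samples one step to the right (last sample is clamped).
assert backward_warp_1d(sig, [1.0, 1.0, 1.0, 1.0]) == [1.0, 2.0, 3.0, 3.0]
```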
Then, for $k > 1$, we apply $CA\left(h^k, \mathrm{Cat}(\mathrm{warp}(f_y^k), \mathrm{warp}(f_z^k))\right)$ to fuse the latent representation $h^k$ with the feature pyramids $f_y^k$ and $f_z^k$, where $CA(\cdot, \cdot)$, $\mathrm{Cat}(\cdot, \cdot)$, and $\mathrm{warp}(\cdot)$ denote cross-attention, channel-wise concatenation, and backward warping with the estimated optical flows, respectively. Finally, we apply convolution layers to $h^1$ to predict a soft mask $H$ and a residual $\delta$. The interpolation output is $\hat{I}_n = H * \mathrm{warp}(I_0) + (1 - H) * \mathrm{warp}(I_1) + \delta$, where $*$ denotes the Hadamard product and $\hat{I}_n$ is the reconstructed image. The detailed architecture is illustrated in Figure 3. The VQ layer is connected to the encoder during training, but at inference it is disconnected from the encoder and receives the sampled latent representation from the diffusion model. 3.4 Consecutive Brownian Bridge Diffusion The Brownian Bridge diffusion model (BBDM) [28] is designed for translation between image pairs, connecting two deterministic endpoints, which seems a good fit for estimating the ground truth intermediate frame. However, it does not fit the VFI task: in VFI, images are provided as triplets because we aim to reconstruct the intermediate frame given its neighboring frames, resulting in three points that need to be connected.
If we construct a Brownian Bridge between the intermediate frame and the next frame, the previous frame is ignored, and vice versa. This is problematic because we cannot define "intermediate" if we lose one of its neighbors. Therefore, we need a process that transits among three images. Given the two neighboring images $I_0$, $I_1$, we aim to construct a Brownian Bridge process with endpoints $I_0$, $I_1$ and additionally condition its middle state on the intermediate frame $I_n$ ($n = 0.5$ for $2\times$ interpolation). To achieve this, the process starts at $t = 0$ with value $\mathbf{y}$, passes through $t = T$ with value $\mathbf{x}$, and ends at $t = 2T$ with value $\mathbf{z}$. To be consistent with the notation of diffusion models, $\mathbf{x}$, $\mathbf{y}$, $\mathbf{z}$ denote the latent representations of $I_n$, $I_0$, $I_1$ respectively. The process is thus defined as $X_t = W_t \mid (W_0 = \mathbf{y}, W_T = \mathbf{x}, W_{2T} = \mathbf{z})$. The sampling process starts from times $0$ and $2T$ and moves toward time $T$. Such a process in fact consists of two Brownian Bridges, where the first ends at $\mathbf{x}$ and the second starts at $\mathbf{x}$. We can easily verify that for $0 < t < h$: $W_s \mid (W_0, W_t, W_h) = W_s \mid (W_0, W_t)$ if $s < t$, and $W_s \mid (W_0, W_t, W_h) = W_s \mid (W_t, W_h)$ if $s > t$. (13) According to Eq.
(13), we can derive the distribution of our consecutive Brownian Bridge diffusion (details in Appendix A.1): $q(\mathbf{x}_t \mid \mathbf{y}, \mathbf{x}, \mathbf{z}) = \mathcal{N}\left(\frac{s}{T}\mathbf{x} + \left(1-\frac{s}{T}\right)\mathbf{y},\; \frac{s(T-s)}{T}\mathbf{I}\right)$ with $s = T - t$ for $t < T$, and $q(\mathbf{x}_t \mid \mathbf{y}, \mathbf{x}, \mathbf{z}) = \mathcal{N}\left(\frac{s}{T}\mathbf{x} + \left(1-\frac{s}{T}\right)\mathbf{z},\; \frac{s(T-s)}{T}\mathbf{I}\right)$ with $s = t - T$ for $t > T$. (14) Algorithm 1 Training 1: repeat 2: sample a triplet $\mathbf{x}, \mathbf{y}, \mathbf{z}$ from the dataset 3: $s \leftarrow \mathrm{Uniform}(0, T)$ 4: $w_s \leftarrow \min\{\frac{1}{\delta_s}, \gamma\}$ ($\gamma$ is a pre-defined constant) 5: $\epsilon \leftarrow \mathcal{N}(0, \mathbf{I})$ 6: $\mathbf{x}_{s_1} \leftarrow \frac{s}{T}\mathbf{x} + (1-\frac{s}{T})\mathbf{y} + \sqrt{\frac{s(T-s)}{T}}\,\epsilon$ 7: $\mathbf{x}_{s_2} \leftarrow \frac{s}{T}\mathbf{x} + (1-\frac{s}{T})\mathbf{z} + \sqrt{\frac{s(T-s)}{T}}\,\epsilon$ 8: $r \leftarrow \mathrm{Uniform}(0, 1)$ 9: if $r < 0.5$ then take a gradient step on 10: $\nabla_\theta \lVert \epsilon_\theta(\mathbf{x}_{s_1}, T-s, \mathbf{y}, \mathbf{z}) - (\mathbf{x}_{s_1} - \mathbf{x}) \rVert_2^2$ 11: else take a gradient step on 12: $\nabla_\theta \lVert \epsilon_\theta(\mathbf{x}_{s_2}, T+s, \mathbf{y}, \mathbf{z}) - (\mathbf{x}_{s_2} - \mathbf{x}) \rVert_2^2$ 13: end if 14: until convergence Algorithm 2 Sampling 1: $t_1, t_2 \leftarrow T$, $\Delta t \leftarrow T / \text{sampling steps}$,
$\mathbf{x}_{T_1} = \mathbf{y}$, $\mathbf{x}_{T_2} = \mathbf{z}$ 2: repeat 3: $s_1, s_2 \leftarrow t_1 - \Delta t,\; t_2 - \Delta t$ 4: $\epsilon \leftarrow \mathcal{N}(0, \mathbf{I})$ 5: $\mathbf{x}_{s_1} \leftarrow \mathbf{x}_{t_1} - \frac{\Delta t}{t_1}\,\epsilon_\theta(\mathbf{x}_{t_1}, T - t_1, \mathbf{y}, \mathbf{z}) + \sqrt{\frac{s_1 \Delta t}{t_1}}\,\epsilon$ 6: $\mathbf{x}_{s_2} \leftarrow \mathbf{x}_{t_2} - \frac{\Delta t}{t_2}\,\epsilon_\theta(\mathbf{x}_{t_2}, T + t_2, \mathbf{y}, \mathbf{z}) + \sqrt{\frac{s_2 \Delta t}{t_2}}\,\epsilon$ 7: $t_1, t_2 \leftarrow s_1, s_2$ 8: until $t_1, t_2 = 0$ Cleaner Formulation. Eq. (11) is in a discrete setup (i.e., time $= 0, 1, \ldots, T$), and its sampling process is derived via Bayes' theorem, resulting in a complicated formulation. To preserve the maximum variance, it suffices to set $T = 2s$ in Eq. (8) and discretize $T$ for training and sampling. Our forward diffusion is defined as Eq. (14). To sample time $s$ from time $t$ ($s < t$), we rewrite Eq. (11) according to Eq. (13): $p_\theta(\mathbf{x}_s \mid \mathbf{x}_t, \mathbf{y}) = q(\mathbf{x}_s \mid \mathbf{x}, \mathbf{x}_t, \mathbf{y}) = q(\mathbf{x}_s \mid \mathbf{x}, \mathbf{x}_t) = \mathcal{N}\left(\mathbf{x}_s;\; \frac{s}{t}\mathbf{x}_t + \left(1 - \frac{s}{t}\right)\mathbf{x},\; \frac{s(t-s)}{t}\mathbf{I}\right) = \mathcal{N}\left(\mathbf{x}_s;\; \mathbf{x}_t - \frac{t-s}{t}(\mathbf{x}_t - \mathbf{x}),\; \frac{s(t-s)}{t}\mathbf{I}\right)$. (15) Note that Eq. (11) differs slightly from ours in that it uses $\mathbf{x}_0$ to denote $\mathbf{x}$, whereas we use $\mathbf{x}$ directly.
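To see why the sampler of Eq. (15) lands exactly on $\mathbf{x}$ when the network is accurate, here is a scalar sketch (our illustration) of one branch of the sampling loop, with an oracle predictor $\epsilon(\mathbf{x}_t) = \mathbf{x}_t - \mathbf{x}$ standing in for the trained UNet; `noise_scale=0` keeps only the deterministic part of each step:

```python
import random

def sample_branch(y, x_true, T=2.0, steps=50, noise_scale=0.0):
    """One branch of the consecutive Brownian Bridge sampler, with an
    oracle predictor eps(x_t) = x_t - x_true in place of the UNet."""
    dt = T / steps
    t, x_t = T, y                     # start at the endpoint (y or z)
    while t > 1e-9:
        s = max(t - dt, 0.0)
        eps_hat = x_t - x_true        # oracle "UNet" output
        x_t = (x_t - (dt / t) * eps_hat
               + noise_scale * (s * dt / t) ** 0.5 * random.gauss(0.0, 1.0))
        t = s
    return x_t

# With a perfect predictor and no injected noise, sampling recovers x exactly:
assert abs(sample_branch(5.0, -1.0) - (-1.0)) < 1e-9
```

Each deterministic step contracts the state toward $\mathbf{x}$ by the factor $s/t$, so the final step (with $s = 0$) pins the state to the prediction.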
Since we have a closed-form solution for $p_\theta(\mathbf{x}_s \mid \mathbf{x}_t, \mathbf{y})$ for any $0 < s < t < T$, our method does not need DDIM [48] sampling for acceleration. Training and Sampling. According to Eq. (15), it suffices to have a neural network $\epsilon_\theta$ estimating $\mathbf{x}_t - \mathbf{x}$. Moreover, based on Eq. (14), we can sample $s$ from $\mathrm{Uniform}(0, T)$ and compute $t = T \pm s$, covering both $t > T$ and $t < T$. With one sample of $s$, we obtain two samples, one on each side of our consecutive Brownian Bridge diffusion, symmetric about $T$. $\mathbf{y}$, $\mathbf{z}$ are added to the denoising UNet as extra conditions. Therefore, the training objective becomes: $\mathbb{E}_{\{\mathbf{y},\mathbf{x},\mathbf{z}\},\epsilon}\left[\lVert \epsilon_\theta(\mathbf{x}_{s_1}, T-s, \mathbf{y}, \mathbf{z}) - (\mathbf{x}_{s_1} - \mathbf{x}) \rVert_2^2\right] + \mathbb{E}_{\{\mathbf{y},\mathbf{x},\mathbf{z}\},\epsilon}\left[\lVert \epsilon_\theta(\mathbf{x}_{s_2}, T+s, \mathbf{y}, \mathbf{z}) - (\mathbf{x}_{s_2} - \mathbf{x}) \rVert_2^2\right], (16) where $\mathbf{x}_{s_1} = \frac{s}{T}\mathbf{x} + (1-\frac{s}{T})\mathbf{y} + \sqrt{\frac{s(T-s)}{T}}\,\epsilon$, $\mathbf{x}_{s_2} = \frac{s}{T}\mathbf{x} + (1-\frac{s}{T})\mathbf{z} + \sqrt{\frac{s(T-s)}{T}}\,\epsilon$, and $\epsilon \sim \mathcal{N}(0, \mathbf{I})$. (17) Optimizing Eq. (16) requires two forward calls of the denoising UNet, so for computational efficiency we randomly select one of the two terms to optimize at each training step. Moreover, [15] proposes a $\min$-$\mathrm{SNR}$-$\gamma$ weighting over time steps during training based on the signal-to-noise ratio, defined as $\min\{\mathrm{SNR}(t), \gamma\}$.
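Instantiating this clipped weighting with the bridge variance $\delta_t = \frac{t(T-t)}{T}$ from Eq. (14) gives a tiny, self-contained sketch ($\gamma = 5$ is an arbitrary illustrative choice, not the paper's setting):

```python
T, gamma = 2.0, 5.0

def loss_weight(t):
    """min-SNR-style weight min(1/delta_t, gamma), using the
    Brownian Bridge variance delta_t = t(T - t)/T from Eq. (14)."""
    delta_t = t * (T - t) / T
    return min(1.0 / delta_t, gamma) if delta_t > 0 else gamma

assert loss_weight(1.0) == 2.0     # midpoint: delta = 0.5, weight = 2
assert loss_weight(0.01) == gamma  # near an endpoint the SNR blows up: clipped
```

The clipping matters precisely because the bridge variance vanishes at the endpoints, where an unclipped $1/\delta_t$ weight would dominate the loss.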
In DDPM [17], $\mathrm{SNR}(t) = \frac{\alpha_t}{1-\alpha_t}$ because the mean and standard deviation are scaled by $\sqrt{\alpha_t}$ and $\sqrt{1-\alpha_t}$ respectively in the diffusion process. In our formulation, however, the consecutive frames $I_0$, $I_1$ share almost identical means, and so do their encoded latent representations; the mean is therefore never scaled down. We define the SNR as $\frac{1}{\delta_t}$, where $\delta_t$ is the variance of the diffusion process at time $t$. With the $\min$-$\mathrm{SNR}$-$\gamma$ weighting, the loss weight is defined as $w_t = \min\{\frac{1}{\delta_t}, \gamma\}$. The training algorithm is shown in Algorithm 1. To sample from the neighboring frames, we can sample from either of the two endpoints $\mathbf{y}$, $\mathbf{z}$ with Eq. (14) and (15), as shown in Algorithm 2. After sampling, we replace $\mathbf{x}$ in Eq. (12) with the sampled latent representation to decode the interpolated frame. Cumulative Variance. As claimed, the diffusion model [17] with conditional generation has a large cumulative variance, while ours is much smaller. The cumulative variance for traditional conditional generation is larger than $1 + \sum_t \hat{\beta}_t$, which corresponds to 11.036 in our experiments. In our method, the cumulative variance is smaller than $T = 2$, resulting in a more deterministic estimate of the ground truth latent representations. The detailed justification is in Appendix A.1. 4 EXPERIMENTS 4.1 Implementations Autoencoder. The down-sampling factor of our autoencoder is set to $f = 16$, following the setup of LDMVFI [11].
The flow estimation and refinement modules are initialized from a pretrained VFIformer [32] and frozen for better efficiency. The codebook size and embedding dimension of the VQ layer are set to 16384 and 3, respectively. The number of channels in the compact latent space (encoder output) is set to 8. Self-attention [54] is applied to the 32× down-sampled latent representation (in both encoder and decoder), and cross-attention [54] with warped features is applied at the 2×, 4×, 8×, 16×, and 32× down-sampling factors in the decoder. Following LDMVFI, max-attention [53] is used in all attention layers for better efficiency. The model is trained with the Adam optimizer [25] at a learning rate of 10^-5 for 100 epochs with a batch size of 16. The autoencoder is still slowly converging after 100 epochs, but we stop training there to evaluate it. Consecutive Brownian Bridge Diffusion. We set T = 2 (corresponding to a maximum variance of 1/2) and discretize 1000 steps for Frame Interpolation with Consecutive Brownian Bridge Diffusion Conference'17, July 2017, Washington, DC, USA Table 1: Quantitative results (LPIPS/FloLPIPS/FID, the lower the better) on test datasets. † means we evaluate our consecutive Brownian Bridge diffusion (trained on Vimeo 90K triplets [59]) with the autoencoder provided by LDMVFI [11]. The best performances are boldfaced, and the second best performances are underlined.
Methods Middlebury UCF-101 DAVIS SNU-FILM easy medium hard extreme LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID ABME'21 [40] 0.027/0.040/11.393 0.058/0.069/37.066 0.151/0.209/16.931 0.022/0.034/6.363 0.042/0.076/15.159 0.092/0.168/34.236 0.182/0.300/63.561 MCVD'22 [55] 0.123/0.138/41.053 0.155/0.169/102.054 0.247/0.293/28.002 0.199/0.230/32.246 0.213/0.243/37.474 0.250/0.292/51.529 0.320/0.385/83.156 VFIformer'22 [32] 0.015/0.024/9.439 0.033/0.040/22.513 0.127/0.184/14.407 0.018/0.029/5.918 0.033/0.053/11.271 0.061/0.100/22.775 0.119/0.185/40.586 IFRNet'22 [26] 0.015/0.030/10.029 0.029/0.034/20.589 0.106/0.156/12.422 0.021/0.031/6.863 0.034/0.050/12.197 0.059/0.093/23.254 0.116/0.182/42.824 AMT'23 [29] 0.015/0.023/7.895 0.032/0.039/21.915 0.109/0.145/13.018 0.022/0.034/6.139 0.035/0.055/11.039 0.060/0.092/20.810 0.112/0.177/40.075 UPR-Net'23 [24] 0.015/0.024/7.935 0.032/0.039/21.970 0.134/0.172/15.002 0.018/0.029/5.669 0.034/0.052/10.983 0.062/0.097/22.127 0.112/0.176/40.098 EMA-VFI'23 [60] 0.015/0.025/8.358 0.032/0.038/21.395 0.132/0.166/15.186 0.019/0.038/5.882 0.033/0.053/11.051 0.060/0.091/20.679 0.114/0.170/39.051 LDMVFI'24 [11] 0.019/0.044/16.167 0.026/0.035/26.301 0.107/0.153/12.554 0.014/0.024/5.752 0.028/0.053/12.485 0.060/0.114/26.520 0.123/0.204/47.042 Ours† 0.012/0.011/14.447 0.030/0.029/15.335 0.097/0.145/12.623 0.011/0.011/5.737 0.028/0.028/12.569 0.051/0.053/25.567 0.099/0.103/46.088 Ours 0.005/0.007/7.470 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.589 0.087/0.079/36.631 training and 50 steps for sampling. The denoising UNet takes the concatenation of x_t, y, z as input and is trained with the Adam optimizer [25] at a learning rate of 10^-4 for 30 epochs with a batch size of 64.
γ is set to 5 in the min-SNR-γ weighting. 4.2 Datasets and Evaluation Metrics Training Sets. To ensure a fair comparison with most recent works [1, 7, 12, 18, 20, 24, 32, 34, 42, 47], we train our models on the Vimeo 90K triplets dataset [59], which contains 51,312 triplets. We apply random flipping, random cropping to 256 × 256, temporal order reversal, and random rotation by multiples of 90 degrees as data augmentation. Test Sets. We select UCF-101 [50], DAVIS [41], SNU-FILM [8], and Middlebury [2] to evaluate our method. UCF-101 and Middlebury consist of relatively low-resolution videos (less than 1K), whereas DAVIS and SNU-FILM consist of relatively high-resolution videos (up to 4K). SNU-FILM consists of four categories with increasing levels of difficulty (i.e., larger motion changes): easy, medium, hard, and extreme. Evaluation Metrics. Recent works [10, 11, 62] reveal that PSNR and SSIM [56] are sometimes unreliable because they correlate relatively weakly with human visual judgment. In contrast, deep-learning-based metrics such as FID [16], LPIPS [62], and FloLPIPS [10] are shown in [11, 62] to correlate better with human visual judgment. Moreover, in our experiments we also observe such inconsistencies between PSNR/SSIM and visual quality, which will be discussed in Section 4.3. Therefore, we select FID, LPIPS, and FloLPIPS as our main evaluation metrics. LPIPS and FID measure distances in the space of deep learning features, while FloLPIPS builds on LPIPS but additionally takes the motion in the frames into consideration. Results of our method under PSNR and SSIM are included in Appendix C.1. 4.3 Experimental Results Quantitative Results. Our method is compared with recent open-source state-of-the-art VFI methods, including ABME [40], MCVD [55], VFIformer [32], IFRNet [26], AMT [29], UPR-Net [24], EMA-VFI [60], and LDMVFI [11].
The evaluation is reported as LPIPS/FloLPIPS/FID (the lower the better) in Table 1. We evaluate VFIformer, IFRNet, AMT, UPR-Net, and EMA-VFI with their published trained weights; the remaining results are taken from the appendix of LDMVFI [11]. For models released in multiple sizes, we always choose the largest version. With the same autoencoder as LDMVFI [11], our method (denoted as ours†) generally outperforms LDMVFI, indicating the effectiveness of our consecutive Brownian Bridge diffusion. Moreover, with our improved autoencoder, our method (denoted as ours) generally achieves state-of-the-art performance. Notably, we achieve much better FloLPIPS than the other SOTA methods, indicating that our interpolated results have stronger motion consistency. On a few datasets our method does not achieve the best FID or LPIPS, because our autoencoder had not fully converged. Figure 4: The reconstruction quality of our autoencoder and LDMVFI's autoencoder (decoding with the ground truth latent representation x). Images are cropped within green boxes for detailed comparison. Red circles highlight details where we have better reconstruction quality. LDMVFI usually outputs overlaid images while ours does not. Qualitative Results. In Table 1, our consecutive Brownian Bridge diffusion with the autoencoder of LDMVFI [11] (denoted as ours†) generally achieves better quantitative results than LDMVFI, showing that our method is effective. We include qualitative visualizations in Figure 5 to support this result. Moreover, as mentioned in Section 1, we find that the autoencoder in [11] usually reconstructs overlaid Conference'17, July 2017, Washington, DC, USA Zonglin Lyu1, Ming Li2, Jianbo Jiao3, and Chen Chen2 Table 2: Ablation studies of the autoencoder and ground truth estimation. + GT means we input the ground truth x to the decoder part of the autoencoder.
+ BB indicates our consecutive Brownian Bridge diffusion trained with the autoencoder of LDMVFI. With our consecutive Brownian Bridge diffusion, the interpolated frame has almost the same performance as the one decoded from the ground truth latent representation, indicating strong ground truth estimation capability. Our autoencoder also outperforms that of LDMVFI [11]. Methods Middlebury UCF-101 DAVIS SNU-FILM easy medium hard extreme LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LDMVFI'24 [11] 0.019/0.044/16.167 0.026/0.035/26.301 0.107/0.153/12.554 0.014/0.024/5.752 0.028/0.053/12.485 0.060/0.114/26.520 0.123/0.204/47.042 LDMVFI'24 [11] + BB 0.012/0.011/14.447 0.030/0.029/15.335 0.097/0.145/12.623 0.011/0.011/5.737 0.028/0.028/12.569 0.051/0.053/25.567 0.099/0.103/46.088 LDMVFI'24 [11] + GT 0.012/0.011/14.492 0.030/0.029/15.338 0.097/0.145/12.670 0.011/0.011/5.738 0.028/0.028/12.574 0.051/0.053/25.655 0.099/0.103/46.080 Ours 0.005/0.007/7.470 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.589 0.087/0.079/36.631 Ours + GT 0.005/0.007/7.468 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.591 0.087/0.079/36.633 Figure 5: Visual comparison of the interpolated results of LDMVFI [11] vs. our method with the same autoencoder as LDMVFI (LDMVFI vs. ours† in Table 1). With the same autoencoder, our method still achieves better visual quality than LDMVFI, demonstrating the superiority of our proposed consecutive Brownian Bridge diffusion. images, and therefore we propose a new method of reconstruction. We provide examples visualizing the reconstruction results of our autoencoder and LDMVFI's autoencoder for comparison, shown in Figure 4. All examples are from SNU-FILM extreme [8], which contains relatively large motion between neighboring frames.
We have provided visual comparisons of our method and recent SOTA methods in Figure 1. Our method achieves better visual quality, with clearer details such as dog fur, folds in clothing, and fence netting. However, UPR-Net [24] achieves better PSNR/SSIM in all the cropped regions (5-10% better) than ours, which is highly inconsistent with the visual quality. More qualitative results are provided in Appendix C.2. 4.4 Ablation Studies As discussed in Section 3.2, latent-diffusion-based VFI can be broken down into two stages, so we conduct an ablation study on the ground truth estimation capability of our consecutive Brownian Bridge diffusion. We compare the LPIPS/FloLPIPS/FID of images decoded from the diffusion-generated latent representation x̂ and from the ground truth x, i.e., the encoding of I_n. The results are shown in Table 2. It is important to note that, fixing the input to the ground truth, our autoencoder achieves stronger performance than the autoencoder of LDMVFI [11], indicating the effectiveness of our autoencoder. Also, fixing the autoencoder, our consecutive Brownian Bridge diffusion achieves almost identical performance to the ground truth, indicating its strong capability of ground truth estimation. In contrast, the conditional generation model in LDMVFI [11] usually underperforms its autoencoder with ground truth inputs. Therefore, our method is stronger in both the autoencoder and ground truth estimation stages. Further ablation studies are provided in Appendix C.3.
5" +} \ No newline at end of file diff --git a/abs_9K/test_abstract_short_2405.05959v1.json b/abs_9K/test_abstract_short_2405.05959v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e8a93b989519721382034b3513d9df7e9222ea5d --- /dev/null +++ b/abs_9K/test_abstract_short_2405.05959v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2405.05959v1", + "title": "Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask", + "abstract": "Time Series Representation Learning (TSRL) focuses on generating informative\nrepresentations for various Time Series (TS) modeling tasks. Traditional\nSelf-Supervised Learning (SSL) methods in TSRL fall into four main categories:\nreconstructive, adversarial, contrastive, and predictive, each with a common\nchallenge of sensitivity to noise and intricate data nuances. Recently,\ndiffusion-based methods have shown advanced generative capabilities. However,\nthey primarily target specific application scenarios like imputation and\nforecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our\nwork, Time Series Diffusion Embedding (TSDE), bridges this gap as the first\ndiffusion-based SSL TSRL approach. TSDE segments TS data into observed and\nmasked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It\napplies a trainable embedding function, featuring dual-orthogonal Transformer\nencoders with a crossover mechanism, to the observed part. We train a reverse\ndiffusion process conditioned on the embeddings, designed to predict noise\nadded to the masked part. Extensive experiments demonstrate TSDE's superiority\nin imputation, interpolation, forecasting, anomaly detection, classification,\nand clustering. 
We also conduct an ablation study, present embedding\nvisualizations, and compare inference speed, further substantiating TSDE's\nefficiency and validity in learning representations of TS data.", + "authors": "Zineb Senane, Lele Cao, Valentin Leonhard Buchner, Yusuke Tashiro, Lei You, Pawel Herman, Mats Nordahl, Ruibo Tu, Vilhelm von Ehrenheim", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "G.3; I.6.5; I.2.4" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Time Series Representation Learning (TSRL) focuses on generating informative\nrepresentations for various Time Series (TS) modeling tasks. Traditional\nSelf-Supervised Learning (SSL) methods in TSRL fall into four main categories:\nreconstructive, adversarial, contrastive, and predictive, each with a common\nchallenge of sensitivity to noise and intricate data nuances. Recently,\ndiffusion-based methods have shown advanced generative capabilities. However,\nthey primarily target specific application scenarios like imputation and\nforecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our\nwork, Time Series Diffusion Embedding (TSDE), bridges this gap as the first\ndiffusion-based SSL TSRL approach. TSDE segments TS data into observed and\nmasked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It\napplies a trainable embedding function, featuring dual-orthogonal Transformer\nencoders with a crossover mechanism, to the observed part. We train a reverse\ndiffusion process conditioned on the embeddings, designed to predict noise\nadded to the masked part. Extensive experiments demonstrate TSDE's superiority\nin imputation, interpolation, forecasting, anomaly detection, classification,\nand clustering. 
We also conduct an ablation study, present embedding\nvisualizations, and compare inference speed, further substantiating TSDE's\nefficiency and validity in learning representations of TS data.", + "main_content": "INTRODUCTION Time Series (TS) data is a sequence of data points collected at regular time intervals. It is prevalent in various real-world applications, such as understanding human behavioral patterns [9], conducting in-depth financial market analyses [5], predicting meteorological \u2217Zineb Senane and Lele Cao contributed equally. For correspondence, please reach out to either of them. The source code and models for reproduction purposes are available at https://github.com/EQTPartners/tsde 1 EQT Group, Stockholm, Sweden 2 KTH Royal Institute of Technology, Stockholm, Sweden 3 T\u00e9l\u00e9com Paris, Palaiseau, France 4 Eurecom, Biot, France 5 Mitsubishi UFJ Trust Investment Technology Institute, Tokyo, Japan 6 Technical University of Denmark, Ballerup, Denmark 7 QA.Tech, Stockholm, Sweden phenomena [34], and enhancing healthcare diagnostics [46]. In this work, we focus on Multivariate TS (MTS) data, which refers to a TS with multiple variables or features recorded at each time point, where these variables may have inter-dependencies. This is in contrast to Univariate TS (UTS), which only involves a single variable. It should be noted that Multiple TS (Multi-TS) differs from MTS as it pertains to the simultaneous monitoring of several UTSs, each operating independently without any interrelation among them. While this paper primarily concentrates on MTS data, our methodology and insights are also applicable to UTS and Multi-TS, ensuring the versatility and broad applicability of our approach. To effectively extract and interpret valuable information from intricate raw MTS data, the field of Time Series Representation Learning (TSRL) has become increasingly pivotal. 
TSRL focuses on learning latent representations that encapsulate critical information within the time series, thereby uncovering the intrinsic dynamics of the associated systems or phenomena [52]. Furthermore, the learned representations are crucial for a variety of downstream applications, such as time series imputation, interpolation, forecasting, classification, clustering and anomaly detection. TSRL can be conducted in a supervised manner; however, the need for extensive and accurate labeling of vast time series data presents a significant bottleneck, often resulting in inefficiencies and potential inaccuracies. Consequently, our focus lies in unsupervised learning techniques, which excel in extracting high-quality MTS representations without the constraints of manual labeling. Self-Supervised Learning (SSL), a subset of unsupervised learning, has emerged as a highly effective methodology for TSRL. SSL utilizes innovative pretext tasks1 to generate supervision signals from unlabeled TS data, thereby facilitating the model\u2019s ability to autonomously learn valuable representations without relying on external labels. The four main designs of SSL pretext tasks \u2013 reconstructive, adversarial, contrastive, and predictive [18, 42, 52, 101] \u2013 will be elaborated in Section 2. These designs have demonstrated notable success in addressing TSRL across a diverse range of applications, yet they often struggle with capturing the full complexity of MTS data, particularly in modeling intricate long-term dependencies and handling high-dimensional, noisy datasets. 1A pretext task in SSL is a self-generated learning challenge designed to facilitate the extraction of informative representations for downstream tasks, encompassing various methods such as transformation prediction, masked prediction, instance discrimination, and clustering, tailored to the specific data modality involved [21, 33, 101]. arXiv:2405.05959v1 [cs.LG] 9 May 2024 \fSenane and Cao, et al. 
Due to their advanced generative capabilities, diffusion models [28, 71, 73\u201376] have emerged as a promising solution for TS modeling, adept at handling the complexities and long-term dependencies often found in MTS data. While these methods have shown success in specific tasks like forecasting [63] and imputation [80], their adoption in SSL TSRL remains largely unexplored, leaving a gap in the related research literature. Our work, TSDE (Time Series Diffusion Embedding), pioneers in this area by integrating conditional diffusion processes with crossover Transformer encoders and introducing a Imputation-Interpolation-Forecasting (IIF) mask strategy. This unique combination allows TSDE to generate versatile representations that are applicable to a wide range of tasks, including imputation, interpolation, forecasting, classification, anomaly detection, and clustering. Our main contributions are: \u2022 We propose a novel SSL TSRL framework named TSDE, which optimizes a denoising (reverse diffusion) process, conditioned on a learnable MTS embedding function. \u2022 We develop dual-orthogonal Transformer encoders integrated with a crossover mechanism, which learns MTS embeddings by capturing temporal dynamics and feature-specific dependencies. \u2022 We design a novel SSL pretext task, the IIF masking strategy, which creates pseudo observation masks designed to simulate the typical imputation, interpolation, and forecasting tasks. \u2022 We experimentally show that TSDE achieves superior performance over existing methods across a wide range of MTS tasks, thereby validating the universality of the learned embeddings. 2 RELATED WORK This research addresses the problem of TSRL using a SSL approach. Inspired by the taxonomies adopted by [18, 42, 52, 101], we structure our review of SSL-based TSRL around four primary methodologies: reconstructive, adversarial, contrastive, and predictive methods. 
Reconstructive methods focus on minimizing the discrepancy between the original and reconstructed MTS data, mostly using an encoder-decoder Neural Network (NN) architecture to emphasize salient features and filter out noise, thereby training the NN to learn meaningful representations [27]. Recent mainstream methods in this category predominantly employ CNN (Convolutional NN) [72, 100], RNN (Recurrent NN) [47, 66], or Transformer [15, 103] as their architectural backbone. In this category, deep clustering stands out by simultaneously optimizing clustering and reconstruction objectives. It has been implemented with various clustering algorithms, including k-means [7, 89], Gaussian Mixture Model (GMM) [4, 32], and spectral clustering [79]. Reconstructive methods might face limitations in addressing long-term dependencies and adequately representing complex features such as seasonality, trends, and noise in extensive, high-dimensional datasets. Adversarial methods utilize Generative Adversarial Networks (GANs) to learn TS representations by differentiating between real and generated data [50, 59]. These methods often integrate advanced NN architectures or autoregressive models to effectively capture temporal dependencies and generate realistic TS data. For instance, TimeGAN [94] combines GANs with autoregressive models for temporal dynamics replication, while RGAN [22] uses an RNN to enhance the realism of generated TS. Furthermore, approaches like TimeVAE [16] and DIVERSIFY [44] innovate in data generation, the former tailoring outputs to user-specified distributions and the latter employing adversarial strategies to maximize distributional diversity in generated TS data. However, the intricate training process of GANs, their potential for mode collapse, and their reliance on high-quality datasets are notable drawbacks of adversarial methods, which can generate inconsistent or abnormal samples [101].
Contrastive methods distinguish themselves by optimizing self-discrimination tasks, contrasting positive samples with similar characteristics against negative samples with different ones [107]. These methods learn representations by generating augmented views of TS data and leveraging the inherent similarities and variations within the data [101]. They include instance-level models [11, 12, 78, 92] that treat each sample independently, using data augmentations to form positive and negative pairs. Prototype-level models [8, 37, 51, 99], on the other hand, break this independence by clustering semantically similar samples, thereby capturing higherlevel semantic information. Additionally, temporal-level models [19, 78, 91, 96] address TS-specific challenges by focusing on scaleinvariant representations at individual timestamps, enhancing the understanding of complex temporal dynamics. However, a common disadvantage across these contrastive methods is their potential to overlook higher-level semantic information, especially when not integrating explicit semantic labels, leading to the generation of potentially misleading negative samples. Predictive methods excel in capturing shared information from TS data by maximizing mutual information from various data slices or augmented views. These methods, like TST [98], wave2vec [68], CaSS [14] and SAITS [17], focus on predicting future, missing, or contextual information, thereby bypassing the need for full input reconstruction. Most recent advancements in this category, such as TEMPO [3] and TimeGPT [25], leverage LLM (Large Language Model) architectures to effectively decompose and predict complex TS components. TimeGPT, in particular, stands out as a foundation model specifically for TS forecasting, yet it only treats MTS as MultiTS. Lag-Llama [62], another notable predictive model, demonstrates strong univariate probabilistic forecasting, trained on a vast corpus of TS data. 
However, the challenge in these methods is their focus on local information, which can limit their capacity to capture longterm dependencies and make them susceptible to noise and outliers, thus affecting their generalization ability. Diffusion-based methods in TS modeling have recently gained traction, leveraging the unique abilities of diffusion models to model the data distribution through a process of injecting and reversing noise [101]. These models, like TimeGrad [63] and CSDI [80], have been effectively applied to tasks such as forecasting and imputation, employing innovative techniques like RNN-conditioned diffusion and multiple Transformer encoders. Recent developments like SSSD [2] have further evolved the field by integrating structured state space models [26] with diffusion processes. These advancements have showcased the flexibility and potential of diffusion models in handling diverse TS data, with applications ranging from electrical load forecasting with DiffLoad [84] to predicting spatio-temporal graph evolutions using DiffSTG [85]. Despite these significant advancements, a notable gap remains in the application of diffusion models for TSRL. While a recent study [13] demonstrates the efficacy of diffusion models as robust visual representation extractors, their specific adaptation and optimization for TSRL have not \fSelf-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask been explored. Our work aims to fill this gap with the innovative TSDE framework, synergistically integrating conditional diffusion processes and crossover Transformer encoders, coupled with an innovative IIF mask strategy, to effectively tackle a wide range of downstream tasks. 3 THE APPROACH The task entails learning general-purpose embeddings for MTS that has \ud835\udc3efeatures/variables and \ud835\udc3ftime steps. 
Formally, given a multivariate time series x = {x_{1:K,1:L}} ∈ R^{K×L}, with entries x_{k,l} for feature k = 1, ..., K and time step l = 1, ..., L, (1) we aim to learn a φ-parameterized embedding function f_φ(·) that maps the input MTS x to a latent representation Z = {z_{1:K,1:L}} = f_φ(x) ∈ R^{K×L×C}, (2) where each element z_{k,l} ∈ R^C is the embedding vector of the k-th feature at the l-th step, with C denoting the dimensionality of the embedding space. We propose to learn f_φ by leveraging a conditional diffusion process trained in a self-supervised fashion. 3.1 Unconditional Diffusion Process The unconditional diffusion process assumes a sequence of latent variables x_t (t ∈ Z ∩ [1, T]) in the same space as x. For unification, we will denote x as x_0 henceforth. The objective is to approximate the ground-truth MTS distribution q(x_0) by learning a θ-parameterized model distribution p_θ(x_0). The entire process comprises a forward and a reverse process. 3.1.1 Forward process.
In this process, Gaussian noise is gradually injected into x_0 over T steps until x_T is close enough to a standard Gaussian distribution, which can be expressed as a Markov chain: q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t−1}), (3) where q(x_t | x_{t−1}) is a diffusion transition kernel defined as q(x_t | x_{t−1}) := N(x_t; sqrt(1 − β_t) x_{t−1}, β_t I), (4) i.e., a conditional Gaussian distribution with mean sqrt(1 − β_t) x_{t−1} and covariance matrix β_t I, where β_t ∈ (0, 1) indicates the noise level at each diffusion step t. Because of the properties of Gaussian kernels, we can sample any x_t directly from x_0 via q(x_t | x_0) := N(x_t; sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I), where ᾱ_t := ∏_{i=1}^{t} (1 − β_i), (5) so that x_t = sqrt(ᾱ_t) x_0 + sqrt(1 − ᾱ_t) ε with ε ∼ N(0, I). 3.1.2 Reverse process.
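Eq. (5) is what makes training efficient: any noisy x_t can be drawn from x_0 in one shot rather than by iterating Eq. (4). A minimal NumPy sketch (the linear β schedule at the end is an illustrative assumption; the text above does not fix a schedule):

```python
import numpy as np

def forward_sample(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0) in closed form, per Eq. (5).

    betas: array of shape (T,) with the per-step noise levels β_1..β_T.
    t: 1-based diffusion step.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha_bar = np.prod(1.0 - betas[:t])          # ᾱ_t = Π_{i<=t} (1 - β_i)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

# illustrative linear schedule (an assumption, not from the paper)
betas = np.linspace(1e-4, 0.5, 50)
```

As t grows, ᾱ_t shrinks toward 0 and x_t approaches pure Gaussian noise, which is exactly the premise the reverse process starts from.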
This process, modeled by a NN parameterized by θ, recovers x_0 by progressively denoising x_T: p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t), (6) where p_θ(x_{t−1} | x_t) is the reverse transition kernel of the form p_θ(x_{t−1} | x_t) := N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)). (7) To approximate the reverse transition kernel, Ho et al. [29] propose the following reparametrization of the mean and variance: μ_θ(x_t, t) := (1 − β_t)^{−1/2} (x_t − β_t (1 − ᾱ_t)^{−1/2} ε_θ(x_t, t)), (8) Σ_θ(x_t, t) := σ_θ(x_t, t) I = σ_t^2 I, (9) where σ_t^2 = β_t (1 − ᾱ_{t−1}) / (1 − ᾱ_t) when t > 1, and σ_t^2 = β_1 otherwise; ε_θ is a trainable network predicting the noise added to the input x_t at diffusion step t. In particular, ᾱ_T ≈ 0 such that q(x_T) ≈ N(x_T; 0, I); thus the starting point of the backward chain is Gaussian noise. 3.2 Imputation-Interpolation-Forecasting Mask The reverse process of the unconditional diffusion model facilitates the generation of MTS from noise. However, our objective is to create general-purpose embeddings for unlabeled MTS, which can be leveraged in many popular downstream tasks such as imputation, interpolation, and forecasting.
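One ancestral sampling step of Eqs. (7)-(9) can be sketched as follows (NumPy; `eps_model` stands in for the trained noise predictor ε_θ and is an assumed callable, not an API from the paper):

```python
import numpy as np

def reverse_step(xt, t, betas, eps_model, rng=None):
    """Sample x_{t-1} ~ p_θ(x_{t-1} | x_t), per Eqs. (7)-(9).

    t is 1-based; eps_model(xt, t) must return an array shaped like xt.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta_t = betas[t - 1]
    alpha_bar_t = np.prod(1.0 - betas[:t])
    # posterior mean, Eq. (8)
    mean = (xt - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_model(xt, t)) \
           / np.sqrt(1.0 - beta_t)
    # posterior variance, Eq. (9): σ_t² = β_t(1-ᾱ_{t-1})/(1-ᾱ_t) if t > 1, else β_1
    if t > 1:
        alpha_bar_prev = np.prod(1.0 - betas[:t - 1])
        var = beta_t * (1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t)
    else:
        var = betas[0]
    return mean + np.sqrt(var) * rng.standard_normal(xt.shape)
```

Generation then iterates this step from x_T ~ N(0, I) down to t = 1, exactly the backward chain of Eq. (6).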
Consequently, we propose an Imputation-Interpolation-Forecasting (IIF) mask strategy, producing a pseudo observation mask m^IIF = {m^IIF_{1:K,1:L}} ∈ {0, 1}^{K×L}, where m^IIF_{k,l} = 1 if x_{k,l} in Equation (1) is observable, and m^IIF_{k,l} = 0 otherwise. Algorithm 1 details the implementation and combination of the imputation, interpolation, and forecasting masks2. During training, given any original MTS x_0, we extract the observed (x^obs_0) and masked (x^msk_0) segments by x^obs_0 := x_0 ⊙ m^IIF and x^msk_0 := x_0 ⊙ (m − m^IIF), (10) where ⊙ denotes the element-wise product, and m = {m_{1:K,1:L}} ∈ {0, 1}^{K×L} is a mask whose zeros indicate originally missing values in x_0. We now reformulate our self-supervised learning objective: generate the masked version of the MTS, x^msk_0, from a corrupted input x^msk_t through a diffusion process conditioned on the embedding of the observed MTS x^obs_0, i.e., f_φ(x^obs_0). Both the diffusion process (parameterized by θ) and the embedding function (parameterized by φ) are approximated with a trainable NN. 3.3 Conditional Reverse Diffusion Process Our conditional diffusion process estimates the ground-truth conditional probability q(x^msk_0 | f_φ(x^obs_0)) by re-formulating (6) as p_θ(x^msk_{0:T} | f_φ(x^obs_0)) := p(x^msk_T) ∏_{t=1}^{T} p_θ(x^msk_{t−1} | x^msk_t, f_φ(x^obs_0)). (11) Similar to (7), the reverse kernel p_θ(x^msk_{t−1} | x^msk_t, f_φ(x^obs_0)) := N(x^msk_{t−1}; μ_θ(x^msk_t, t, f_φ(x^obs_0)), Σ_θ(x^msk_t, t, f_φ(x^obs_0))). (12) According to DDPM [29], the variance Σ_θ(x^msk_t, t, f_φ(x^obs_0)) can be formulated in the same way as (9), i.e., σ_θ(x^msk_t, t, f_φ(x^obs_0)) I = σ_t^2 I. Similar to Equation (8), the conditional mean μ_θ(x^msk_t, t, f_φ(x^obs_0)) := (1 − β_t)^{−1/2} (x^msk_t − β_t (1 − ᾱ_t)^{−1/2} ε_θ(x^msk_t, t | f_φ(x^obs_0))). (13) 2The imputation mask simulates random missing values; the interpolation mask mimics MTS interpolation tasks by masking all values at a randomly selected timestamp; and the forecasting mask assumes all values after a specified timestamp are unknown.
Figure 1: The TSDE architecture comprises an embedding function (left) and a conditional reverse diffusion block (right); the temporal and spatial encoders are implemented as one-layer Transformers.

Algorithm 1: Imputation-Interpolation-Forecasting Mask
Input: Mask $m = \{m_{1:K,1:L}\} \in \{0,1\}^{K \times L}$ indicating the missing values in $x_0$
Output: A pseudo observation mask $m^{\mathrm{IIF}} \in \{0,1\}^{K \times L}$
1: $r \leftarrow$ random value from the range $[0.1, 0.9]$; // imputation mask ratio
2: $N \leftarrow \sum_{k=1}^{K} \sum_{l=1}^{L} m_{k,l}$; // total number of observed values
3: $m^{\mathrm{IIF}} \leftarrow m$, then randomly set $\lfloor N \times r \rfloor$ ones to 0; // apply imputation mask
4: Sample a probability $p$ uniformly from the range $[0, 1]$;
5: if $1/3 < p < 2/3$ then
6:   $l' \leftarrow$ uniformly sample a time step from $\mathbb{Z} \cap [1, L]$;
7:   $m^{\mathrm{IIF}}[:, l'] \leftarrow 0$; // mix with interpolation mask
8: else if $p \ge 2/3$ then
9:   $l' \leftarrow$ uniformly sample a time window length from $\mathbb{Z} \cap [1, \lceil L/3 \rceil]$;
10:  $m^{\mathrm{IIF}}[:, -l':] \leftarrow 0$; // mix with forecasting mask
11: return $m^{\mathrm{IIF}}$;

3.4 Training Loss and Procedure

It has been shown in [29] that the reverse process of unconditional diffusion can be trained by minimizing the following loss:

$\mathcal{L}(\theta) := \mathbb{E}_{x_0 \sim q(x_0),\, \epsilon \sim \mathcal{N}(0,I),\, t} \,\| \epsilon - \epsilon_\theta(x_t, t) \|_2^2$. (14)

Inspired by [80], we replace the noise-prediction NN $\epsilon_\theta(x_t, t)$ with the conditioned version $\epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}}))$ in (14), obtaining

$\mathcal{L}(\theta, \phi) := \mathbb{E}_{x_0 \sim q(x_0),\, \epsilon \sim \mathcal{N}(0,I),\, t} \,\| \epsilon - \epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}})) \|_2^2$. (15)

Since training focuses solely on predicting the noise at the non-missing yet masked locations, we actually minimize

$\tilde{\mathcal{L}}(\theta, \phi) := \mathbb{E}_{x_0 \sim q(x_0),\, \epsilon \sim \mathcal{N}(0,I),\, t} \,\| (\epsilon - \epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}}))) \odot (m - m^{\mathrm{IIF}}) \|_2^2$. (16)

The self-supervised, mini-batch training procedure detailed in Algorithm 2 essentially solves $\min_{\theta,\phi} \tilde{\mathcal{L}}(\theta, \phi)$. In each iteration $i$ of the training process, a random diffusion step $t$ is chosen, at which point the denoising operation is applied.

Algorithm 2: TSDE Training Procedure
Input: Ground-truth MTS data distribution $q(x_0)$, noise scheduler $\{\bar{\alpha}_t\}$, the denoising and embedding functions (approx.
by NN): $\epsilon_\theta(\cdot)$ and $f_\phi(\cdot)$
Output: The trained NN parameters $\theta$ and $\phi$
Parameter: The total number of training iterations $N_{\mathrm{train}}$ and the learning rate $\tau$
1: for $i = 1$ to $N_{\mathrm{train}}$ do
2:   Sample a diffusion step $t \sim \mathrm{Uniform}(\{1, \ldots, T\})$ and an MTS $x_0 \sim q(x_0)$;
3:   Obtain the IIF mask $m^{\mathrm{IIF}}$ by following Algorithm 1;
4:   Obtain the observed ($x_0^{\mathrm{obs}}$) and masked ($x_0^{\mathrm{msk}}$) parts using Equation (10);
5:   Sample a noise matrix $\epsilon \sim \mathcal{N}(0, I)$ with the same shape as $x_0^{\mathrm{msk}}$;
6:   Compute $x_t^{\mathrm{msk}} \leftarrow \sqrt{\bar{\alpha}_t}\, x_0^{\mathrm{msk}} + \sqrt{1 - \bar{\alpha}_t}\, \epsilon$;
7:   Compute the loss $\tilde{\mathcal{L}} := \| (\epsilon - \epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}}))) \odot (m - m^{\mathrm{IIF}}) \|_2^2$, cf. (16);
8:   $\theta := \theta - \tau\, \partial\tilde{\mathcal{L}}/\partial\theta$ and $\phi := \phi - \tau\, \partial\tilde{\mathcal{L}}/\partial\phi$;
9: return $\theta$ and $\phi$;

3.5 Embedding Function

The left part of Figure 1 illustrates the architectural design of the embedding function $f_\phi(x_0^{\mathrm{obs}})$. The figure highlights that the function not only processes the input $x_0^{\mathrm{obs}}$ but also incorporates additional side information (namely, the time embedding $s^{\mathrm{time}}(l)$, the feature embedding $s^{\mathrm{feat}}(k)$, and the mask $m^{\mathrm{IIF}}$) into its computations. Consequently, the notation $f_\phi(x_0^{\mathrm{obs}})$ is used as shorthand for the more extensive formulation $f_\phi(x_0^{\mathrm{obs}}, s^{\mathrm{time}}, s^{\mathrm{feat}}, m^{\mathrm{IIF}})$, which accounts for all the inputs processed by the function. To obtain the 128-dimensional $s^{\mathrm{time}}(l)$, we largely follow [80, 108]:

$s^{\mathrm{time}}(l) = \big( \sin(l/\tau^{0/64}), \ldots, \sin(l/\tau^{63/64}), \cos(l/\tau^{0/64}), \ldots, \cos(l/\tau^{63/64}) \big)$, (17)

where $\tau = 10{,}000$ and $l \in \mathbb{Z} \cap [1, L]$. For $s^{\mathrm{feat}}(k)$, a 16-dimensional feature embedding is obtained using the categorical feature embedding layer available in PyTorch. The observable segment $x_0^{\mathrm{obs}}$ undergoes a nonlinear transformation and is then concatenated with the time and feature embeddings, resulting in $\tilde{x}_0^{\mathrm{obs}} \in \mathbb{R}^{K \times L \times 160}$:

$\tilde{x}_0^{\mathrm{obs}} = \mathrm{Concat}(\mathrm{ReLU}(\mathrm{Conv}(x_0^{\mathrm{obs}})), s^{\mathrm{time}}, s^{\mathrm{feat}})$, (18)

Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask

where $\mathrm{Concat}(\cdot)$, $\mathrm{ReLU}(\cdot)$, and $\mathrm{Conv}(\cdot)$ denote concatenation, ReLU activation, and the 1×1 convolution operation [39], respectively. To accurately capture the inherent temporal dependencies and feature correlations in MTS data, and thereby to enable clearer data interpretation and a customizable, modular design, we devise separate temporal and feature embedding functions, $g_\gamma(\tilde{x}_0^{\mathrm{obs}})$ and $h_\delta(\tilde{x}_0^{\mathrm{obs}})$, parameterized by $\gamma$ and $\delta$ respectively. Inspired by [80], both the temporal encoder $g_\gamma(\cdot)$ and the feature encoder $h_\delta(\cdot)$ are implemented as one-layer Transformer encoders that take an input tensor shaped $K \times L \times 160$, as shown in Figure 1.
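The 128-dimensional sinusoidal time embedding of Equation (17) is straightforward to reproduce; below is a minimal pure-Python sketch, where the vector layout (64 sine terms followed by 64 cosine terms) follows the equation.

```python
import math

# Sketch of Eq. (17): s_time(l) with tau = 10,000 and 64 sin + 64 cos terms.
def s_time(l, half=64):
    tau = 10_000
    args = [l / tau ** (j / half) for j in range(half)]  # l / tau^(j/64)
    return [math.sin(a) for a in args] + [math.cos(a) for a in args]

emb = s_time(5)
print(len(emb))  # 128
```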
Specifically, the temporal encoder operates on tensors shaped $1 \times L \times 160$, representing one feature across all timestamps, while the feature encoder handles tensors shaped $K \times 1 \times 160$, representing the feature vector at a single timestamp. To integrate the temporal and feature embeddings in varying orders without adding trainable parameters to the model, we devise a crossover mechanism, depicted by the red and blue arrows in Figure 1. It generates $g_\gamma(h_\delta(\tilde{x}_0^{\mathrm{obs}}))$ and $h_\delta(g_\gamma(\tilde{x}_0^{\mathrm{obs}}))$, which are subsequently transformed and concatenated along with $m^{\mathrm{IIF}}$, yielding the final embedding

$Z = f_\phi(x_0^{\mathrm{obs}}) := \mathrm{SiLU}\big( \mathrm{Concat}\big( \mathrm{Conv}(g_\gamma(h_\delta(\tilde{x}_0^{\mathrm{obs}}))),\, \mathrm{Conv}(h_\delta(g_\gamma(\tilde{x}_0^{\mathrm{obs}}))),\, m^{\mathrm{IIF}} \big) \big)$, (19)

where $\mathrm{SiLU}(\cdot)$ is the Sigmoid-weighted Linear Unit (SiLU) activation function [20]. Once the model is trained, the embedding for any MTS $x_0$ is computed following Equations (18) and (19), with $x_0^{\mathrm{obs}}$ and $m^{\mathrm{IIF}}$ substituted by $x_0$ and $m$, respectively.

3.6 The Overall Architecture

Figure 1 provides a comprehensive depiction of the components of the TSDE architecture. The process begins by applying the IIF mask $m^{\mathrm{IIF}}$ to partition the input MTS into observable ($x_0^{\mathrm{obs}}$) and masked ($x_0^{\mathrm{msk}}$) segments. The architecture primarily consists of two key elements: (1) an embedding function $f_\phi(x_0^{\mathrm{obs}})$, introduced in detail in Section 3.5; and (2) a conditional reverse diffusion module, illustrated on the right side of Figure 1.
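The IIF-mask partitioning that starts this pipeline (Algorithm 1) can be sketched in pure Python. The 1/3-1/3-1/3 mixing of imputation, interpolation, and forecasting masks follows the algorithm; representing masks as nested lists and 0-indexed timestamps are illustration choices, not the paper's implementation.

```python
import random

# Sketch of Algorithm 1 (IIF mask) on a K x L 0/1 observation mask m.
def iif_mask(m, rng=random):
    K, L = len(m), len(m[0])
    m_iif = [row[:] for row in m]
    r = rng.uniform(0.1, 0.9)                        # imputation mask ratio
    ones = [(k, l) for k in range(K) for l in range(L) if m_iif[k][l] == 1]
    for k, l in rng.sample(ones, int(len(ones) * r)):
        m_iif[k][l] = 0                              # imputation: random missing
    p = rng.random()
    if 1/3 < p < 2/3:                                # interpolation: mask one timestamp
        l = rng.randrange(L)                         # (0-indexed here)
        for k in range(K):
            m_iif[k][l] = 0
    elif p >= 2/3:                                   # forecasting: mask a tail window
        w = rng.randrange(1, L // 3 + 1)
        for k in range(K):
            for l in range(L - w, L):
                m_iif[k][l] = 0
    return m_iif

random.seed(0)
m = [[1] * 8 for _ in range(3)]
out = iif_mask(m)
print(sum(sum(row) for row in out) < 24)  # True: some entries were masked
```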
The conditional reverse diffusion module, introduced in Section 3.3, functions as a noise predictor, effectively implementing $\epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}}))$. During the $i$-th training step, as outlined in Algorithm 2, the sampled diffusion step $t$ is first transformed into a 128-dimensional vector

$s^{\mathrm{diff}}(t) := \big( \sin(10^{0 \cdot 4/63} t), \ldots, \sin(10^{63 \cdot 4/63} t), \cos(10^{0 \cdot 4/63} t), \ldots, \cos(10^{63 \cdot 4/63} t) \big)$. (20)

Subsequently, the MTS embedding $Z$, along with $s^{\mathrm{diff}}(t)$ and $x_t^{\mathrm{msk}}$, is fed into a block of $N$ residual layers. The outputs of these layers are aggregated by summation, passed through further transformations, and combined with $x_t^{\mathrm{msk}}$. This yields $\epsilon_\theta(x_t^{\mathrm{msk}}, t \mid f_\phi(x_0^{\mathrm{obs}})) \odot (m - m^{\mathrm{IIF}})$, which is then used to compute the loss $\tilde{\mathcal{L}}(\theta, \phi)$ formulated in Equation (16).

3.7 Downstream Tasks and Model Efficiency

The trained model can be utilized in two scenarios. (1) The embedding function, as a standalone component, can generate comprehensive MTS representations suitable for various downstream applications, including anomaly detection, clustering, and classification, as demonstrated in Sections 4.2, 4.3, and 4.4, respectively. (2) Combined with the trained conditional reverse diffusion process, the model can predict missing values (for imputation and interpolation) as well as future values (for forecasting) in MTS data. In the second scenario, a notable increase in speed can be achieved compared to existing diffusion-based methods such as those in [63, 80].
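The diffusion-step embedding of Equation (20) mirrors the time embedding, with frequencies $10^{j \cdot 4/63}$; a minimal sketch:

```python
import math

# Sketch of Eq. (20): 128-dim embedding of the diffusion step t.
def s_diff(t):
    args = [10 ** (j * 4 / 63) * t for j in range(64)]
    return [math.sin(a) for a in args] + [math.cos(a) for a in args]

print(len(s_diff(7)))  # 128
```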
This efficiency, confirmed in Section 4.1.5, stems from simplifying the conditional reverse diffusion module (the right block of Figure 1, i.e., $\epsilon_\theta$) to use only Conv1×1 operators. This streamlining significantly accelerates the $T=50$-step reverse diffusion process.

4 EXPERIMENTS

Our evaluation of the TSDE framework includes thorough experiments across six tasks (imputation, interpolation, forecasting, anomaly detection, classification, and clustering), accompanied by additional analyses of inference efficiency, an ablation study, and embedding visualization. For experiment details, dataset specifications, hyperparameters, and metric formulas, refer to Appendix A.

4.1 Imputation, Interpolation and Forecasting

4.1.1 Imputation. We carry out imputation experiments on PhysioNet³ [70] and PM2.5⁴ [93]. TSDE is benchmarked against several state-of-the-art TS imputation models. These include BRITS [6], a deterministic method using a bi-directional RNN to capture correlations; V-RIN [53], which employs variational-recurrent networks with feature and temporal correlations for uncertainty-based imputation; GP-VAE [24], which integrates Gaussian Processes with VAEs; and CSDI [80], the top-performing model among diffusion-based TS imputation models. Model performance is evaluated using CRPS (continuous ranked probability score), which assesses the fit of predicted outcomes to the original data distributions, and two deterministic metrics: MAE (mean absolute error) and RMSE (root mean square error). The deterministic metrics are calculated using the median across all samples, and the CRPS value is reported as the normalized average score over the distributions of all missing values (approximated with 100 samples). The imputation results, detailed in the upper part of Table 1, highlight TSDE's superior performance, outperforming all baselines on almost all metrics.
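For reference, CRPS for a sample-based predictive distribution can be estimated with the standard empirical form $\mathrm{CRPS} \approx \mathbb{E}|X - y| - \tfrac{1}{2}\,\mathbb{E}|X - X'|$ over predictive samples $X$; this is a generic sketch of the metric, not the paper's evaluation code.

```python
# Empirical CRPS for one scalar target y, given predictive samples.
def crps(samples, y):
    n = len(samples)
    term1 = sum(abs(x - y) for x in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
    return term1 - 0.5 * term2

# A predictive distribution concentrated exactly at the target scores 0.
print(crps([1.0, 1.0, 1.0], 1.0))  # 0.0
```

Lower is better; the score rewards both calibration and sharpness of the predictive samples.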
Notably, the pretraining-only variant ("TSDE") excels on the PhysioNet dataset, underpinning its robustness and strong generalization capability even without any imputation-specific finetuning. For the PM2.5 dataset, finetuning TSDE ("TSDE+ft") yields improved outcomes, likely attributable to its capability to adapt to the dataset's structured missing-value patterns. Overall, TSDE's improvement in CRPS of 4.2%-6.5% over CSDI, a leading diffusion-based TS imputation model, signifies a notable advancement in the field. For a qualitative illustration of imputation results, refer to Figure 2(a).

4.1.2 Interpolation. For interpolation analysis, we utilized the same PhysioNet dataset [70], adopting the processing methods from [65, 69, 80].

³ PhysioNet, a healthcare dataset with 4,000 records of 35 variables over 48 hours, is processed and hourly sampled as in [67, 80], leading to a ~80% missing rate. For testing, we randomly mask 10%, 50%, and 90% of observed values to create ground-truth scenarios. On this dataset, we pretrain TSDE for 2,000 epochs, followed by a 200-epoch finetuning with an imputation mask.
⁴ PM2.5, an air quality dataset, features hourly readings from 36 Beijing stations over 12 months with artificially generated missing patterns. Adapting [80], each series spans 36 consecutive timestamps. On this dataset, we pretrain for 1,500 epochs and finetune for 100 epochs using a history mask, as detailed in Algorithm 5.

Table 1: Probabilistic MTS imputation and interpolation benchmarking results, featuring TSDE's pretraining-only and task-specific finetuned (TSDE+ft) models against established baselines. We present the mean and standard deviation (SD) from three iterations, with baseline results primarily derived or reproduced according to [80].
Models | PhysioNet 10% masking (CRPS, MAE, RMSE) | PhysioNet 50% masking (CRPS, MAE, RMSE) | PhysioNet 90% masking (CRPS, MAE, RMSE) | PM2.5 (CRPS, MAE, RMSE)

Imputation
BRITS [6] | - 0.284(0.001) 0.619(0.022) | - 0.368(0.002) 0.693(0.023) | - 0.517(0.002) 0.836(0.015) | - 14.11(0.26) 24.47(0.73)
V-RIN [53] | 0.808(0.008) 0.271(0.001) 0.628(0.025) | 0.831(0.005) 0.365(0.002) 0.693(0.022) | 0.922(0.003) 0.606(0.006) 0.928(0.013) | 0.526(0.025) 25.4(0.062) 40.11(1.14)
GP-VAE [24] | 0.558(0.001)* 0.449(0.002)* 0.739(0.001)* | 0.642(0.003)* 0.566(0.004)* 0.898(0.005)* | 0.748(0.002)* 0.690(0.002)* 1.008(0.002)* | 0.397(0.009) - -
unc. CSDI [80] | 0.360(0.007) 0.326(0.008) 0.621(0.020) | 0.458(0.008) 0.417(0.010) 0.734(0.024) | 0.671(0.007) 0.625(0.010) 0.940(0.018) | 0.135(0.001) 12.13(0.07) 22.58(0.23)
CSDI [80] | 0.238(0.001) 0.217(0.001) 0.498(0.020) | 0.330(0.002) 0.301(0.002) 0.614(0.017) | 0.522(0.002) 0.481(0.003) 0.803(0.012) | 0.108(0.001) 9.60(0.04) 19.30(0.13)
TSDE | 0.226(0.002) 0.208(0.001) 0.446(0.003) | 0.316(0.000) 0.290(0.000) 0.641(0.007) | 0.488(0.001) 0.450(0.001) 0.801(0.001) | 0.13(0.001) 11.41(0.60) 27.02(2.91)
TSDE+ft | 0.230(0.001) 0.211(0.001) 0.4718(0.013) | 0.318(0.001) 0.292(0.001) 0.644(0.001) | 0.490(0.001) 0.452(0.001) 0.803(0.001) | 0.107(0.000) 9.71(0.04) 18.76(0.02)

Interpolation (PhysioNet only)
Latent ODE [65] | 0.700(0.002) 0.522(0.002) 0.799(0.012) | 0.676(0.003) 0.506(0.003) 0.783(0.012) | 0.761(0.010) 0.578(0.009) 0.865(0.017)
mTANs [69] | 0.526(0.004) 0.389(0.003) 0.749(0.037) | 0.567(0.003) 0.422(0.003) 0.721(0.014) | 0.689(0.015) 0.533(0.005) 0.836(0.018)
CSDI [80] | 0.380(0.002) 0.362(0.001) 0.722(0.043) | 0.418(0.001) 0.394(0.002) 0.700(0.013) | 0.556(0.003) 0.518(0.003) 0.839(0.009)
TSDE | 0.365(0.001) 0.331(0.001) 0.597(0.002) | 0.403(0.001) 0.371(0.001) 0.657(0.001) | 0.517(0.001) 0.476(0.001) 0.775(0.001)
TSDE+ft | 0.374(0.001) 0.338(0.001) 0.610(0.003) | 0.421(0.001) 0.385(0.001) 0.677(0.003) | 0.570(0.004) 0.522(0.006) 0.821(0.006)

* Results reproduced using the GP-VAE original implementation available at https://github.com/ratschlab/GP-VAE. We report the mean and standard deviation of three runs.

Figure 2: Comparison of predicted and ground-truth values for (a) imputation on PhysioNet (10% missing), (b) interpolation on Electricity, and (c) forecasting on Electricity. The line is the median of the predictions, and the red shade indicates the 5%-95% quantile for missing/future values. See Appendix E for more results.

Table 2: Inference time comparison for forecasting tasks between TSDE and CSDI.

Datasets | CSDI* (sec.) | TSDE (sec.)
Electricity | 1,997 | 163
Solar | 608 | 62
Taxi | 27,533 | 1,730
Traffic | 7,569 | 422
Wiki | 9,138 | 391

* For a fair comparison, the linear Transformer encoders in CSDI [80] are replaced with the TransformerEncoder [57] implementation in PyTorch.

Ground-truth scenarios were created by masking all values at randomly selected timestamps, sampled at rates of 10%, 50%, and 90%. TSDE is pretrained for 2,000 epochs and then finetuned with an interpolation-only mask for another 200 epochs. In our benchmarking, TSDE is compared against three TS interpolation methods: (1) Latent ODE [65], an RNN-based model leveraging ODEs (ordinary differential equations) to handle dynamic, continuous, and irregular TS; (2) mTANs [69], which utilizes time embeddings and attention mechanisms and is noted for its strong performance on irregular TS interpolation; and (3) CSDI [80], which has also reported competitive interpolation results. The results in the lower section of Table 1 demonstrate TSDE's exceptional interpolation performance, outperforming CSDI by 3.6%-7.0% in CRPS, 5.8%-8.6% in MAE, and 6.1%-17.3% in RMSE.
These findings highlight TSDE's adeptness at managing irregular timestamp gaps, a likely reason why finetuning does not improve on the pretraining-only TSDE's performance. Comparatively, while CSDI operates on a similar diffusion-model backbone, TSDE's edge lies in its unique embedding-learning ability via IIF masking, which adeptly captures intricate TS characteristics and dynamics for improved results. A qualitative illustration of interpolation results can be found in Figure 2(b).

4.1.3 Forecasting. Our forecasting experiments employ five real-world datasets: (1) Electricity, tracking hourly consumption across 370 customers; (2) Solar, detailing photovoltaic production at 137 Alabama stations; (3) Taxi, recording half-hourly traffic from 1,214 New York locations; (4) Traffic, covering hourly occupancy rates of 963 San Francisco car lanes; and (5) Wiki, monitoring daily views of 2,000 Wikipedia pages. Adapting the practices from [55, 67, 80], each dataset is converted into a series of multivariate sequences, with $L_1$ historical timestamps followed by $L_2$ timestamps to forecast. Training data apply a rolling-window approach with a stride of 1, while validation and testing data employ a stride of $L_2$, ensuring distinct, non-overlapping series for evaluation. The specific $L_1$ and $L_2$ values are outlined in Table 8. As evaluation metrics, we use CRPS and MSE, supplemented by CRPS-Sum, as introduced in [67]. CRPS-Sum is computed by summing across different features, capturing the joint impact of the feature distributions. As for benchmarking baselines, we include several state-of-the-art probabilistic MTS forecasting models: GP-copula [67], TransMAF [64] and TLAE [55]. Additionally, among diffusion-based methods, we include CSDI [80] and TimeGrad [63].
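The rolling-window slicing described above (stride 1 for training, stride $L_2$ for validation/testing) can be sketched as follows; the toy lengths are illustrative, not the paper's Table 8 values.

```python
# Each window spans L1 history timestamps plus L2 forecast timestamps.
# Returns (start, split, end) index triples into the time axis.
def windows(n_timestamps, L1, L2, stride):
    span = L1 + L2
    return [(s, s + L1, s + span) for s in range(0, n_timestamps - span + 1, stride)]

train = windows(100, L1=24, L2=8, stride=1)   # overlapping training windows
test = windows(100, L1=24, L2=8, stride=8)    # non-overlapping forecast targets
print(len(train), len(test))  # 69 9
```

With stride $L_2$, consecutive test windows shift by exactly the forecast horizon, so their forecast segments never overlap.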
The forecasting results, detailed in Table 3, showcase TSDE's robust performance, especially when finetuned with a forecasting mask. Its effectiveness is notable when compared with CSDI, the most closely related method, which shares a diffusion backbone. TSDE particularly excels on the Electricity, Taxi, and Wiki datasets, especially as evaluated by the CRPS metric. However, it is important to note a discrepancy in the Solar dataset performance between TSDE/CSDI and the other baselines, likely due to a data-split issue: the actual test set, per the source code, is identical to the training set, which contradicts the details reported in the corresponding paper. For a qualitative illustration, refer to Figure 2(c).

Table 3: Probabilistic MTS forecasting results embodying both TSDE (pretraining-only) and finetuned (TSDE+ft) variants. Baseline results are either sourced or reproduced from [55, 64, 67]. For TSDE-related experiments, we report the mean and SD across three iterations.

Models | Electricity | Solar | Taxi | Traffic | Wiki

CRPS
GP-copula | 0.056(0.002) | 0.371(0.022) | 0.360(0.201) | 0.133(0.001) | 0.236(0.000)
TransMAF | 0.052(0.000) | 0.368(0.001) | 0.377(0.002) | 0.134(0.001) | 0.274(0.007)
TLAE | 0.058(0.003) | 0.335(0.044) | 0.369(0.011) | 0.097(0.002) | 0.298(0.002)
CSDI | 0.043(0.001)* | 0.396(0.021)*† | 0.277(0.006)* | 0.076(0.000)* | 0.232(0.006)*
TSDE | 0.043(0.000) | 0.400(0.025)† | 0.277(0.001) | 0.091(0.001) | 0.222(0.003)
TSDE+ft | 0.042(0.000) | 0.375(0.013)† | 0.282(0.001) | 0.081(0.001) | 0.226(0.003)

CRPS-sum
GP-copula | 0.024(0.002) | 0.337(0.024) | 0.208(0.183) | 0.078(0.002) | 0.086(0.004)
TransMAF | 0.021(0.000) | 0.301(0.014) | 0.179(0.002) | 0.056(0.001) | 0.063(0.003)
TimeGrad | 0.021(0.001) | 0.287(0.020) | 0.114(0.020) | 0.044(0.006) | 0.049(0.002)
TLAE | 0.040(0.003) | 0.124(0.057) | 0.130(0.010) | 0.069(0.002) | 0.241(0.001)
CSDI | 0.019(0.001)* | 0.345(0.029)*† | 0.138(0.008)* | 0.020(0.000)* | 0.084(0.013)*
TSDE | 0.020(0.001) | 0.453(0.026)† | 0.136(0.003) | 0.038(0.003) | 0.064(0.002)
TSDE+ft | 0.017(0.001) | 0.345(0.012)† | 0.153(0.006) | 0.025(0.001) | 0.059(0.003)

MSE
GP-copula | 2.4e5(5.5e4) | 9.8e2(5.2e1) | 3.1e1(1.4e0) | 6.9e-4(2.2e-5) | 4.0e7(1.6e9)
TransMAF | 2.0e5 | 9.3e2 | 4.5e1 | 5.0e-4 | 3.1e7
TLAE | 2.0e5(1.6e4) | 6.8e2(1.3e2) | 2.6e1(1.4e0) | 4.0e-4(5.0e-6) | 3.8e7(7.2e4)
CSDI | 1.23e5(9.7e3)* | 1.12e3(1.2e2)*† | 1.82e1(7.8e-1)* | 3.64e-4(0.0e0)* | 4.43e7(1.0e7)*
TSDE | 1.20e5(3.5e3) | 1.07e3(9.8e1)† | 1.89e1(3.7e-1) | 4.34e-4(0.0e0) | 3.59e7(7.2e4)
TSDE+ft | 1.16e5(6.0e3) | 9.25e2(4.9e1)† | 1.92e1(2.4e-1) | 3.88e-4(0.0e0) | 3.62e7(1.8e5)

* We replace the linear Transformers [83] in CSDI with the PyTorch TransformerEncoder [57].
† We take the training MTS dataset and split it into training, validation, and testing sets.

4.1.4 Ablation Study. In an ablation study of TSDE across imputation, interpolation, and forecasting, evaluated on the PhysioNet (10% missing ratio) and Electricity datasets, two configurations were tested: one without the crossover mechanism, and another without the IIF mask (replaced by an imputation mask, detailed in Algorithm 4).
Table 4 underscores the positive contribution of the crossover mechanism across all three tasks. The impact of IIF masking, while less pronounced for imputation and interpolation, becomes noticeable in the forecasting task. This can be attributed to the random PhysioNet missing values, which are distributed fundamentally differently from a typical forecasting scenario. Thus, the IIF strategy is important for TSDE to generalize across various settings. The contrast between "TSDE" and "TSDE+ft" in Tables 1 and 3 serves as an ablation study of finetuning; it reveals that pretrained TSDE can achieve competitive results without the necessity of finetuning.

Table 4: Ablation study on the PhysioNet (imputation and interpolation) and Electricity (forecasting) datasets.

Ablation Configuration | Imputation (MAE/CRPS) | Interpolation (MAE/CRPS) | Forecasting (CRPS-sum/CRPS)
w/o crossover | 0.252(0.001)/0.274(0.001) | 0.339(0.000)/0.373(0.000) | 0.021(0.001)/0.046(0.001)
w/o IIF mask | 0.207(0.001)/0.225(0.001) | 0.330(0.001)/0.364(0.001) | 0.028(0.004)/0.053(0.003)
TSDE | 0.208(0.001)/0.226(0.002) | 0.331(0.001)/0.365(0.001) | 0.020(0.001)/0.043(0.000)

4.1.5 Inference Efficiency. Similar to CSDI [80], TSDE performs inference by gradually denoising from the last diffusion step $T=50$ to the initial step $t=1$, to approximate the true data distribution of missing or future values for imputation/interpolation/forecasting tasks. Typically, this iterative process can become computationally expensive. TSDE achieves a substantial acceleration of this process, as illustrated in Table 2, where TSDE is ten times faster than CSDI under the same experimental setup. This is primarily owing to its globally shared, efficient dual-orthogonal Transformer encoders with a crossover mechanism, which require only approximately a quarter of the parameters used by CSDI for MTS encoding.
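The efficiency argument rests on Conv1×1 being a per-position linear map, applied independently at each of the $K \times L$ grid positions; a pure-Python sketch, with nested lists standing in for tensors:

```python
# Conv1x1 over a K x L x C_in grid: the same C_out x C_in linear map is
# applied at every (k, l) position, with no interaction across positions.
def conv1x1(x, W):
    return [[[sum(W[o][i] * x[k][l][i] for i in range(len(W[0])))
              for o in range(len(W))]
             for l in range(len(x[0]))]
            for k in range(len(x))]

x = [[[1.0, 2.0]] * 4 for _ in range(3)]   # K=3, L=4, C_in=2
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # C_out=3: copy, copy, sum
y = conv1x1(x, W)
print(len(y), len(y[0]), len(y[0][0]))     # 3 4 3
```

Because each position is independent, the cost is linear in $K \times L$ with a small constant, unlike attention, whose cost grows quadratically in the attended dimension.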
4.2 Anomaly Detection

For anomaly detection, we adopt an unsupervised approach using reconstruction error as the anomaly criterion, aligning with [87, 106]. We evaluate TSDE on five benchmark datasets: SMD [77], MSL [31], SMAP [31], SWaT [48], and PSM [1]. Once TSDE is pretrained, a projection layer, designed to reconstruct the MTS from TSDE embeddings, is finetuned by minimizing the MSE reconstruction loss. Our anomaly detection experiments align with TimesNet [106], utilizing the preprocessed datasets from [90]. Following their method, we segment the datasets into non-overlapping MTS instances of 100 timestamps each, labeling timestamps as anomalous based on an MSE threshold. This threshold is set according to the anomaly proportion in the validation dataset, ensuring consistency with the baselines' anomaly ratios for a fair comparison. In this task, TSDE is benchmarked against an extensive set of baselines featuring diverse backbones, including (a) frozen pretrained LLM-based models: GPT4TS [106]; (b) task-agnostic foundation models: TimesNet [87]; (c) MLP (multi-layer perceptron) based models: LightTS [102] and DLinear [97]; and finally (d) Transformer-based models: Transformer [82], Reformer [36], Informer [104], Autoformer [88], Pyraformer [41], LogSparse Transformer [38], FEDformer [105], Non-stationary Transformer [43], ETSformer [86], PatchTST [56] and Anomaly Transformer [90]. The results in Table 5 reveal that TSDE's anomaly detection performance surpasses nearly all baselines, with less than a 1% F1-score difference from GPT4TS. Notably, while TSDE does not outperform GPT4TS, it is important to consider that GPT4TS benefits from a pretrained LLM (GPT-2) with about 1.5 billion parameters. TSDE, in contrast, relies on just two single-layer Transformer encoders (<0.3 million parameters), demonstrating its competitive edge despite having significantly fewer model parameters.
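The thresholding rule described above (flag a timestamp whose reconstruction error lands in the top fraction given by the validation anomaly ratio) can be sketched as follows; the error values are illustrative, not from the benchmarks.

```python
# Flag the top `anomaly_ratio` fraction of reconstruction errors as anomalies.
def label_anomalies(errors, anomaly_ratio):
    n_flag = max(1, int(len(errors) * anomaly_ratio))
    thresh = sorted(errors)[-n_flag]          # smallest error still flagged
    return [e >= thresh for e in errors]

errs = [0.1, 0.2, 0.15, 5.0, 0.12, 4.2, 0.11, 0.09, 0.1, 0.13]
print(sum(label_anomalies(errs, 0.2)))  # 2: the two largest errors are flagged
```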
4.3 Classification

To further inspect the discriminative power of the pretrained TSDE embeddings, we utilize the labeled PhysioNet dataset to evaluate TSDE's performance on a binary classification downstream task. This dataset, marked by in-hospital mortality labels for each patient, features MTS with over 80% missing values. To address this, we pretrain TSDE for 2,000 epochs to impute the raw MTS. Subsequently, we train a simple MLP for 40 epochs to perform mortality classification. Given the imbalanced nature of the PhysioNet labels, we assess our model's efficacy with AUROC, as in [6, 24]. We benchmark TSDE against a diverse range of established MTS classification methods, categorized into 3 groups with a total of 12 methods: (1) heuristic methods: mean/forward imputation [24, 40]; (2) GP/VAE based models: GP [61], VAE [35], HI-VAE [54], GP-VAE [24]; and (3) RNN based models: GRUI-GAN [45], GRU-D [10], M-RNN [95] and the BRITS variants [6].

As shown in Table 6, TSDE matches the state-of-the-art BRITS baseline and surpasses all other baselines. It is worth noting that BRITS achieves that performance by employing a sophisticated multi-task learning mechanism tailored for classification tasks. In contrast, our method achieves top-tier results by simply finetuning a simple MLP. TSDE's remarkable performance, especially in challenging classification scenarios with significant class imbalance (~10% positive classes), highlights its ability to learn generic embeddings well-suited for downstream MTS classification tasks.

Senane and Cao, et al.

Table 5: Anomaly detection: baseline results are cited from Table 27 of [106]; higher scores indicate better performance; the best and second best results are in bold and underlined, respectively.

| Models | SMD (P / R / F1) | MSL (P / R / F1) | SMAP (P / R / F1) | SWaT (P / R / F1) | PSM (P / R / F1) | Avg. F1 |
|---|---|---|---|---|---|---|
| Transformer | 83.6 / 76.1 / 79.6 | 71.6 / 87.4 / 78.7 | 89.4 / 57.1 / 69.7 | 68.8 / 96.5 / 80.4 | 62.7 / 96.6 / 76.1 | 76.9 |
| LogSparseT. | 83.5 / 70.1 / 76.2 | 73.0 / 87.4 / 79.6 | 89.1 / 57.6 / 70.0 | 68.7 / 97.3 / 80.5 | 63.1 / 98.0 / 76.7 | 76.6 |
| Reformer | 82.6 / 69.2 / 75.3 | 85.5 / 83.3 / 84.4 | 90.9 / 57.4 / 70.4 | 72.5 / 96.5 / 82.8 | 59.9 / 95.4 / 73.6 | 77.3 |
| Informer | 86.6 / 77.2 / 81.6 | 81.8 / 86.5 / 84.1 | 90.1 / 57.1 / 69.9 | 70.3 / 96.7 / 81.4 | 64.3 / 96.3 / 77.1 | 78.8 |
| AnomalyT.† | 88.9 / 82.2 / 85.5 | 79.6 / 87.4 / 83.3 | 91.8 / 58.1 / 71.2 | 72.5 / 97.3 / 83.1 | 68.3 / 94.7 / 79.4 | 80.5 |
| Pyraformer | 85.6 / 80.6 / 83.0 | 83.8 / 85.9 / 84.9 | 92.5 / 57.7 / 71.1 | 87.9 / 96.0 / 91.8 | 71.7 / 96.0 / 82.1 | 82.6 |
| Autoformer | 88.1 / 82.3 / 85.1 | 77.3 / 80.9 / 79.0 | 90.4 / 58.6 / 71.1 | 89.8 / 95.8 / 92.7 | 99.1 / 88.1 / 93.3 | 84.3 |
| NonStation. | 88.3 / 81.2 / 84.6 | 68.5 / 89.1 / 77.5 | 89.4 / 59.0 / 71.1 | 68.0 / 96.7 / 79.9 | 97.8 / 96.8 / 97.3 | 82.1 |
| DLinear | 83.6 / 71.5 / 77.1 | 84.3 / 85.4 / 84.9 | 92.3 / 55.4 / 69.3 | 80.9 / 95.3 / 87.5 | 98.3 / 89.3 / 93.5 | 82.5 |
| LightTS | 87.1 / 78.4 / 82.5 | 82.4 / 75.8 / 78.9 | 92.6 / 55.3 / 69.2 | 92.0 / 94.7 / 93.3 | 98.4 / 96.0 / 97.1 | 84.2 |
| FEDformer | 87.9 / 82.4 / 85.1 | 77.1 / 80.1 / 78.6 | 90.5 / 58.1 / 70.8 | 90.2 / 96.4 / 93.2 | 97.3 / 97.2 / 97.2 | 85.0 |
| ETSformer | 87.4 / 79.2 / 83.1 | 85.1 / 84.9 / 85.0 | 92.2 / 55.7 / 69.5 | 90.0 / 80.4 / 84.9 | 99.3 / 85.3 / 91.8 | 82.9 |
| PatchTS. | 87.3 / 82.1 / 84.6 | 88.3 / 71.0 / 78.7 | 90.6 / 55.5 / 68.8 | 91.1 / 80.9 / 85.7 | 98.8 / 93.5 / 96.1 | 82.8 |
| TimesNet* | 87.9 / 81.5 / 84.6 | 89.5 / 75.4 / 81.8 | 90.1 / 56.4 / 69.4 | 90.7 / 95.4 / 93.0 | 98.5 / 96.2 / 97.3 | 85.2 |
| GPT4TS‡ | 88.9 / 85.0 / 86.9 | 82.0 / 82.9 / 82.4 | 90.6 / 60.9 / 72.9 | 92.2 / 96.3 / 94.2 | 98.6 / 95.7 / 97.1 | 86.7 |
| TSDE‡ | 87.5 / 82.2 / 84.8 | 90.1 / 84.5 / 87.2 | 91.4 / 56.9 / 70.1 | 98.2 / 92.9 / 92.5 | 98.6 / 90.7 / 94.5 | 85.8 |

* Reproduced with https://github.com/thuml/Time-Series-Library.
† Reconstruction error is used as the joint criterion for a fair comparison.
‡ GPT4TS leverages a pretrained LLM (GPT-2) with 1.5B parameters, while TSDE merely uses two single-layer Transformer encoders.

Table 6: Classification performance on PhysioNet measured with AUROC. The baseline results are sourced from Table 2 of [6] and Table 3 of [24].

| Models | AUROC |
|---|---|
| Mean imp. [24, 40] | 0.70 ± 0.000 |
| Forward imp. [24, 40] | 0.71 ± 0.000 |
| GP [61] | 0.70 ± 0.007 |
| VAE [35, 40] | 0.68 ± 0.002 |
| HI-VAE [54] | 0.69 ± 0.010 |
| GRUI-GAN [45] | 0.70 ± 0.009 |
| GP-VAE [24] | 0.73 ± 0.006 |
| GRU-D [10] | 0.83 ± 0.002 |
| M-RNN [95] | 0.82 ± 0.003 |
| BRITS-LR [6]† | 0.74 ± 0.008 |
| BRITS-RF [6]* | 0.81 ± (N/A) |
| BRITS [6] | 0.85 ± 0.002 |
| TSDE | 0.85 ± 0.001 |

† Logistic Regression (LR) trained on imputed PhysioNet data.
* Random Forest (RF) trained on imputed PhysioNet data.

4.4 Clustering

MTS data often lack annotations, making supervised learning inapplicable. In such scenarios, unsupervised clustering is a valuable method for uncovering intrinsic patterns and classes. We utilize the same pretrained TSDE model as in our classification experiments (trained on PhysioNet with a 10% missing ratio) to evaluate the clustering performance of TSDE embeddings. Initially, we generate MTS embeddings using TSDE's pretrained embedding function. For simplicity and visual clarity, these embeddings are projected into a 2D space using UMAP (uniform manifold approximation and projection) [49]. Subsequently, DBSCAN (density-based spatial clustering of applications with noise) [23] is applied to these 2D projections to obtain clusters.

Figure 3: Clustering of (a) raw MTS (ARI=0.010, RI=0.502), (b) TSDE embedding of raw MTS (ARI=0.426, RI=0.737), and (c) TSDE embedding of TSDE-imputed MTS (ARI=0.302, RI=0.684). Marker shapes denote ground-truth binary labels; colors indicate DBSCAN [23] clusters after UMAP [49] dimension reduction.

As shown in Figure 3, clustering quality is assessed across three data types: (a) raw MTS, (b) TSDE embeddings of raw MTS, and (c) TSDE embeddings of TSDE-imputed MTS. The ground-truth binary labels are indicated using two distinct marker shapes: circles and triangles. When using the raw MTS, as seen in Figure 3(a), the clusters are unfavourably intertwined, with data points from both classes intermingling.
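The ARI and RI scores reported for Figure 3 can be computed by pair counting over a contingency table. Below is a dependency-light numpy sketch; the helper names are ours and are not taken from the paper's released code, which may rely on scikit-learn instead:

```python
import numpy as np

def comb2(x):
    """Number of unordered pairs C(x, 2), elementwise."""
    x = np.asarray(x, dtype=np.float64)
    return x * (x - 1) / 2.0

def rand_scores(labels_true, labels_pred):
    """Return (RI, ARI) for two flat label assignments via pair counting."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = labels_true.size
    # Contingency table: cell (i, j) counts points in true class i and cluster j.
    _, t_idx = np.unique(labels_true, return_inverse=True)
    _, p_idx = np.unique(labels_pred, return_inverse=True)
    cont = np.zeros((t_idx.max() + 1, p_idx.max() + 1))
    np.add.at(cont, (t_idx, p_idx), 1)

    sum_cells = comb2(cont).sum()              # pairs together in both
    sum_rows = comb2(cont.sum(axis=1)).sum()   # pairs together in ground truth
    sum_cols = comb2(cont.sum(axis=0)).sum()   # pairs together in prediction
    total = comb2(n)                           # all pairs

    # Rand index: fraction of pairs on which the two assignments agree
    # (together in both, or apart in both).
    ri = (total + 2 * sum_cells - sum_rows - sum_cols) / total
    # Adjusted Rand index: RI corrected for chance agreement.
    expected = sum_rows * sum_cols / total
    ari = (sum_cells - expected) / ((sum_rows + sum_cols) / 2 - expected)
    return ri, ari

ri, ari = rand_scores([0, 0, 1, 1], [0, 0, 1, 1])
print(ri, ari)  # a perfect clustering scores 1.0 on both
```

Note the ARI denominator vanishes in degenerate cases (e.g. every point its own cluster in both assignments), which library implementations handle explicitly; this sketch omits that guard.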
However, the TSDE embeddings, whether derived from raw or imputed MTS, exhibit substantially improved cluster differentiation. These embeddings enable closer alignment with the ground-truth classes, implying the capability of TSDE to capture data nuances. Furthermore, the negligible performance disparity between Figures 3(b) and 3(c) suggests that TSDE embeddings can be used directly for MTS clustering without the need for imputation. This consistency is likely because our encoders proficiently encapsulate missing-data traits, seamlessly integrating these subtleties into the embeddings. To provide a quantitative assessment of the clustering, given the presence of labels, we calculate the RI (rand index) [60] and ARI (adjusted RI) [30]. These metrics are reported on top of each setup in Figure 3. Notably, the RI and ARI values align with the qualitative observations discussed earlier, further substantiating our findings.

4.5 Embedding Visualization

Figure 4: TSDE embedding visualization of (a) Trend, (b) Seasonal, and (c) Noise components from synthetic MTS.

To substantiate the representational efficacy of TSDE embeddings, we undertake a visualization experiment on synthetic MTS data, as showcased in Figure 4. The data comprises three distinct UTS: (a) a consistently ascending trend, (b) a cyclical seasonal signal, and (c) a white noise component. Each UTS embedding has two dimensions (L×33); for a lucid depiction, we cluster the second dimension by treating it as 33 samples, each of length L, and visualize the centroids of these clusters. Intriguingly, the embeddings, which were pretrained on the entire synthetic MTS, vividly encapsulate the joint encoding effects of all series. The trend's embedding delineates the series' progression, evident from the gradual color saturation changes, embodying the steady evolution.

Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask

The seasonal signal's embedding mirrors its inherent cyclicity, with color oscillations reflecting its periodic nature. Finally, the noise component's embeddings exhibit sporadic color band patterns (with subtle traces of seasonal patterns), capturing the inherent randomness.

5" +}
\ No newline at end of file