diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_content_list.json b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c17441db7db0dd4f0a25eb4171cd2de69580d84e --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82559a96bfd14a1aaf343f16df4a824da0361013f694760e3ea72f72319c1f91 +size 99835 diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_model.json b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_model.json new file mode 100644 index 0000000000000000000000000000000000000000..65ba48ff3f51c0683c436d4f8350d123b408e052 --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8875de9ca309f9c657ebadfd7b83737c928065405fc0e84757d75e33c688cab1 +size 126466 diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_origin.pdf b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..72185dce644e22e3fd640552d902336f4e4bf267 --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/5dd3cec6-4a66-4bc2-b00a-1ee9ce496816_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a95cf2ba50637b8c24cec13f8fb5a93efcffc507f4f4b1d60fa752cca9081416 +size 485920 diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/full.md b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3ea08da24fa174609189eb14f3d542610b32bfb6 --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/full.md @@ -0,0 +1,485 @@ +# $360^{\circ}$ REA: Towards A Reusable Experience Accumulation with $360^{\circ}$ Assessment for Multi-Agent System + +Shen Gao $^{1}$ , Hao Li $^{2}$ , Chengrui Huang $^{1}$ , Quan Tu $^{3}$ , Zhiliang Tian $^{4*}$ , Minlie Huang $^{5}$ , Shuo Shang $^{1}$ + +1 University of Electronic Science and Technology of China, 2 Shandong University, 3 Renmin University of China, + +$^{4}$ National University of Defense Technology, $^{5}$ Tsinghua University shengao@pku.edu.cn, lihaohan@mail.sdu.edu.cn, + +{yunrongyuxi, Jedi.shang}@gmail.com, quantu@ruc.edu.cn, tianzhiliang@nudt.edu.cn, aihuang@tsinghua.edu.cn + +# Abstract + +Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on the same LLM, only conducting self-evaluation or removing underperforming agents does not substantively enhance the capability of the agents. We argue that a comprehensive evaluation and accumulating experience from evaluation feedback is an effective approach to improving system performance. In this paper, we propose Reusable Experience Accumulation with $360^{\circ}$ Assessment $(360^{\circ}\mathrm{REA})$ , a hierarchical multi-agent framework inspired by corporate organizational practices. 
The framework employs a novel $360^{\circ}$ performance assessment method that provides fine-grained, multi-perspective performance evaluation. To enhance the capability of agents in addressing complex tasks, we introduce a dual-level experience pool in which agents accumulate experience from this fine-grained assessment. Extensive experiments on complex task datasets demonstrate the effectiveness of $360^{\circ}\mathrm{REA}^{1}$ .

# 1 Introduction

A variety of tasks, spanning from reasoning (Besta et al., 2023; Wang et al., 2023b; Wei et al., 2023; Yao et al., 2023b,a; Zhou et al., 2023) to code generation (Anishka et al., 2023; Chen et al., 2022, 2021; Dong et al., 2023; Hong et al., 2023; Wu et al., 2023), as well as video gaming (Wang et al., 2023a; Zhu et al., 2023) and autopilot (Dong et al., 2023; Lykourentzou et al., 2021; Nascimento et al., 2023), have seen promising advancements through the utilization of Large Language Model (LLM) agents. Recent works have explored the effectiveness of combining multiple LLM agents to collaborate on complex tasks (Chen et al., 2023c; Liu et al., 2023a; Packer et al., 2023; Ren et al., 2024; Wang et al., 2024). These methods can generally be classified into two main categories. One category involves pre-defining the roles and specific tasks of each agent within a multi-agent system, such as software development (Hong et al., 2023; Qian et al., 2023). The other category allows multiple agents to decide their tasks and roles autonomously, e.g., travel planning (Zheng et al., 2023). These methods usually focus on how to allocate tasks among agents (Cheng et al., 2023; Wang et al., 2023d) or how to score agents and subsequently remove those with low scores (Liu et al., 2023b).
While some methods (Xi et al., 2023; Shinn et al., 2023; Wang et al., 2023c) aim to correct the result of the current task by employing self-reflection in the agent, these approaches are limited to improving performance on the current specific task. The capability of the agent to address similar tasks is not enhanced.

In a company, the ultimate goal of management is not merely to score employee performance in order to promote or fire employees. The best practice is to use performance assessment to help employees purposefully reflect on their work, thereby enhancing their capability to fulfill their roles better. 360-degree assessment (Bracken et al., 2016; Alimo-Metcalfe, 1998; Peiperl, 2001) is a comprehensive process that collects assessments of an employee's behavior from various perspectives, including colleagues, supervisors, and sometimes external sources such as customers. Business organizations are increasingly incorporating 360-degree assessment into performance evaluations and employment decisions, such as pay and promotions (Bytedance, 2024). Motivated by this, the design principle of multi-agent systems should be to assist agents in accumulating experience based on their assessment results, enabling them to perform better in subsequent tasks. This principle emphasizes the importance of helping agents improve rather than solely evaluating them or removing underperforming agents from the system. Therefore, designing a mechanism for agent assessment and capability learning poses a challenge in the design of multi-agent systems.

In this paper, we propose Reusable Experience Accumulation with $360^{\circ}$ Assessment $(360^{\circ}\mathrm{REA})$ , which is a hierarchical multi-agent interaction framework. The objective is to enhance performance in handling complex tasks by emulating organizational structures, employee performance assessments, and the accumulation of experience within a company.
We employ a hierarchical structure for organizing agents, where the leader agent allocates tasks and roles to crew agents, and the crew agents collaborate to accomplish the given task. To evaluate the performance of each crew agent, instead of using only self-reflection, we introduce a novel $360^{\circ}$ performance assessment method that employs a multi-dimensional evaluation along crew-crew and leader-crew dimensions. It provides agents with more valuable assessments by evaluating from both the peer level and the supervisory level.

As mentioned above, the accumulation of experience is essential to achieving better agent performance (Madaan et al., 2023; Miao et al., 2023; Shinn et al., 2023; Wang et al., 2023c). In this paper, we propose a dual-level experience pool to help agents collect useful experiences when tackling complex tasks. Specifically, within the local experience pool, agents utilize fine-grained insights from the $360^{\circ}$ performance assessment to reflect on their current outputs and form experiences related to the sub-tasks. To effectively address a complex task, optimizing individual sub-tasks separately is sub-optimal; a comprehensive understanding and analysis of the complex task is essential. To construct high-level experience, leader agents use the evaluation of the final result to comprehensively summarize the task, forming a global experience pool. Finally, the team accumulates sufficient experience in handling such tasks, leading to improved performance on similar tasks. Extensive experiments conducted on two widely used complex task datasets demonstrate the effectiveness of our proposed $360^{\circ}$ REA.
[Table 1 compares Prompting, SPP, DyLAN, OKR, and $360^{\circ}$ REA along seven dimensions: Hierarchical Agents, Dynamic Task Allocation, Peer Assessment, Self Assessment, Supervisory Assessment, Reusable Experience, and Agent Discussion.]
+

Table 1: Comparison of existing multi-agent collaboration frameworks.

Our contributions in this work are as follows:

- We propose $360^{\circ}$ REA, which is a multi-agent interaction framework inspired by contemporary corporate organizational governance methods.
- We propose the $360^{\circ}$ performance assessment to evaluate the performance of agents from multiple dimensions.
- We introduce a dual-level experience pool to help agents collect useful experiences based on the evaluation results.
- Experimental results on benchmark datasets illustrate the advantage of $360^{\circ}\mathrm{REA}$ on several complex tasks.

# 2 Related Work

Large language models (LLMs) such as ChatGPT have demonstrated outstanding performance across various natural language tasks (OpenAI, 2023). In recent years, several methods have emerged to enhance the ability of LLMs to tackle complex tasks, such as Chain of Thought (Wei et al., 2023), Tree of Thoughts (Yao et al., 2023a), and Graph of Thoughts (Besta et al., 2023). These methods aim to improve reasoning by simulating human thinking processes within LLMs. ReAct (Yao et al., 2023b) enables LLMs to generate interpretable action plans before taking actions, thus facilitating the completion of more complex tasks. Reflexion (Shinn et al., 2023) enhances task performance by encouraging the LLM to engage in self-reflection. However, some complex real-world tasks typically require collaboration among multiple people to accomplish. Thus, relying solely on prompting methods often yields suboptimal performance (Xi et al., 2023; Wei et al., 2023; Yin et al., 2023).

Recent work has begun exploring frameworks involving multiple LLM-based agents, which significantly outperform direct prompting methods in natural language understanding and reasoning tasks (Chen et al., 2023c; Hong et al., 2023; Chen et al., 2023b).
Some works conduct several rounds of debate among multiple LLM-based agents to enhance their factual accuracy and reasoning abilities (Liang et al., 2023). Other works propose dividing tasks into subtasks and employing multiple LLM-based agents to address complex tasks through a divide-and-conquer approach (Chen et al., 2023c; Hong et al., 2023; Chen et al., 2023a). Furthermore, some works score the performance of multiple LLM-based agents and terminate those with poor performance to better accomplish tasks (Liu et al., 2023b).

However, existing methods for evaluating agent performance often rely on the self-reflections of agents, neglecting comprehensive considerations within the entire multi-agent framework. Moreover, solely evaluating and terminating low-scoring agents does not substantially enhance their task-solving capabilities.

# 3 $360^{\circ}$ REA Methodology

# 3.1 Overview

In this section, we detail the Reusable Experience Accumulation with $360^{\circ}$ Assessment $(360^{\circ}\mathrm{REA})$ . An overview of $360^{\circ}\mathrm{REA}$ is shown in Figure 1. We detail $360^{\circ}\mathrm{REA}$ in three parts: (i) We first illustrate the main workflow of $360^{\circ}\mathrm{REA}$ , which takes the user query $x$ as input and generates the response $y$ . (ii) Next, we introduce the $360^{\circ}$ performance assessment, which prompts the agents to review one another and update their responses according to the reviews. (iii) After each agent finishes generating its response, each agent summarizes its experience to construct a dual-level pool that contains reusable task-solving experiences.

# 3.2 Main Workflow of $360^{\circ}$ REA

In this paper, we propose $360^{\circ}\mathrm{REA}$ , a general multi-agent learning framework that can be applied to hierarchical LLM-based multi-agent frameworks (Zheng et al., 2023).
This LLM-based multi-agent framework consists of a leader agent $A^l$ and $N$ crew agents $\{A_1^c,A_2^c,\dots ,A_N^c\}$ that solve the task defined by the user input query $x$ . Given an input query $x$ , the leader agent $A^l$ first divides the task into several sub-tasks and generates an instruction for each crew agent:

$$
\left\{I _ {1} ^ {c}, I _ {2} ^ {c}, \dots , I _ {N} ^ {c} \right\} = A ^ {l} (x), \tag {1}
$$

where $I_{i}^{c}$ is the task instruction for the $i$ -th crew agent.

To equip the crew agents with task-solving capabilities, we propose the dual-level experience pool (introduced in § 3.4), which contains two sub-experience pools: the global experience pool $E_{g}$ and the local experience pool $E_{l,i}$ . The experiences in the dual-level experience pool describe (1) the general skill of tackling the task and (2) the specific skill of the sub-task for agent $A_{i}^{c}$ , respectively. The crew agents then generate their responses according to the instruction and useful context to solve the sub-task:

$$
H _ {i} ^ {t} = A _ {i} ^ {c} (I _ {i} ^ {c}, E _ {g}, E _ {l, i}, R _ {i} ^ {t}), \tag {2}
$$

where $H_{i}^{t}$ is the response of the $i$ -th agent in the $t$ -th turn. In this paper, we propose a multi-turn response generation paradigm for the crew agents. After each turn in which an agent generates the response $H_{i}^{t}$ , we employ the $360^{\circ}$ performance assessment (introduced in § 3.3) to evaluate the performance of agent $A_{i}^{c}$ from multiple perspectives and generate a review set $R_{i}^{t}$ . In Equation 2, we incorporate the reviews from multiple sources $R_{i}^{t}$ when generating the response of the agent. Through this mechanism, agent $A_{i}^{c}$ can iteratively revise its response over multiple turns.

After $T$ turns of revision, we obtain the final responses $\{H_1^T,H_2^T,\dots ,H_N^T\}$ from each crew agent.
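The workflow of Equations (1)–(2) can be sketched as follows. This is a minimal illustration, not the authors' released code: `call_llm`, the prompt wording, and the experience-pool representation are hypothetical stand-ins, and the $360^{\circ}$ assessment of Section 3.3 is abstracted into a single review call.

```python
# Sketch of the 360°REA main workflow (Eqs. 1-2).
# `call_llm` and all prompt templates are hypothetical stand-ins.
from typing import Callable, List


def run_crew_turns(
    call_llm: Callable[[str], str],  # any LLM completion function
    query: str,
    global_exp: List[str],           # global experience pool E_g
    local_exps: List[List[str]],     # one local pool E_{l,i} per crew agent
    n_turns: int = 2,                # T turns of revision (the paper uses 2)
) -> List[str]:
    # Eq. 1: the leader decomposes the query into one instruction per crew agent.
    plan = call_llm(f"Split this task into sub-task instructions, one per line:\n{query}")
    instructions = [line for line in plan.splitlines() if line.strip()]

    responses = ["" for _ in instructions]
    reviews = ["" for _ in instructions]  # R_i^t, empty before the first turn
    for _ in range(n_turns):
        # Eq. 2: each crew agent answers its instruction, conditioned on
        # global/local experience and the reviews from the previous turn.
        for i, inst in enumerate(instructions):
            prompt = (
                f"Instruction: {inst}\n"
                f"Global experience: {'; '.join(global_exp)}\n"
                f"Your experience: {'; '.join(local_exps[i])}\n"
                f"Reviews of your last draft: {reviews[i]}\n"
                "Write (or revise) your response."
            )
            responses[i] = call_llm(prompt)
        # Reviews for the next turn come from the 360° assessment (Sec. 3.3),
        # abstracted here as a single call per agent.
        reviews = [call_llm(f"Review this response:\n{r}") for r in responses]
    return responses
```

In the full framework, the review step would be replaced by the three-level assessment described in Section 3.3.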
To form the final answer $y$ to the user query $x$ , the leader agent summarizes all the responses by incorporating the global experiences $E_{g}$ and its review $R_l^t$ :

$$
y = A ^ {l} \left(\left\{H _ {1} ^ {T}, H _ {2} ^ {T}, \dots , H _ {N} ^ {T} \right\}, E _ {g}, R _ {l} ^ {t}\right). \tag {3}
$$

Next, we employ an LLM-based evaluator agent $P$ to evaluate the quality of the response $y$ :

$$
m = P (x, y), \tag {4}
$$

where $m$ is an overall review that not only contains a judgment of the quality but also explains why $P$ gives this evaluation result.

![](images/52c195c8df5fabd4cdcb14954c06316bdda7f7332034a9608bc07287ec45c802.jpg)
Figure 1: Our proposed $360^{\circ}$ REA. There are two main parts in $360^{\circ}$ REA: the $360^{\circ}$ performance assessment and the dual-level experience pool. The $360^{\circ}$ performance assessment helps an agent obtain evaluations from multiple aspects. These evaluations facilitate agents in refining their results and accumulating reusable experiences for accomplishing tasks better. We store low-level, sub-task-specific experiences and higher-level experiences in the local and global experience pools, respectively.

You will be given a multi-day travel plan. The task is {total_task}

Your task is to evaluate the travel plan on specific criteria. Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.

# Evaluation Criteria:

1. Plan Customization (1-20): [Evaluation Criteria of P.Cu.]
2. Plan Novelty (1-20): [Evaluation Criteria of P.N.]
3. Plan Correctness (1-20): [Evaluation Criteria of P.Co.]

# Evaluation Steps:

1. Review the travel plan in detail.
2. Compare the plan against each criterion.
3. Assign a score for each criterion on a scale of 1 to 20, where 1 is the lowest and 20 is the highest, based on the Evaluation Criteria.
+

# Travel Plan:

{Travel_Plan}

# 3.3 $360^{\circ}$ Performance Assessment

In this paper, inspired by the performance assessment of employees in large companies, we propose an agent feedback method, the $360^{\circ}$ performance assessment. This method comprehensively evaluates the shortcomings of agents in completing tasks through three types of evaluation: self-level, peer-level, and supervisory-level. Different from agent evaluation methods based only on agent self-reflection (Madaan et al., 2023), the $360^{\circ}$ performance assessment boosts mutual collaboration among agents in multi-agent systems. It enables the evaluation of agent capabilities from the perspectives of different agents (a.k.a., sub-tasks), thereby encouraging each agent to consider and solve the problem from a global perspective.

After each agent responds to its own sub-task, we conduct a $360^{\circ}$ performance assessment to assess whether its response effectively completes the sub-task and whether it is helpful to other agents in solving their tasks. Subsequently, this review is provided to the agent, which then revises its response to the sub-task based on the reviews from other agents. Through several turns of such iterative evaluation, agents can continuously refine their responses by incorporating feedback from others, thereby improving the quality of their responses. Since the $360^{\circ}$ performance assessment is a multi-turn generation and assessment paradigm, in this section we illustrate the details of the method using the agent actions in the $t$ -th turn as an example.

Self-level Assessment.
Similar to previous work (Madaan et al., 2023), we also employ self-reflection on the responses of the agents:

$$
R _ {s, i} ^ {t} = A _ {i} ^ {c} \left(H _ {i} ^ {t}, I _ {i} ^ {c}\right), \tag {5}
$$

where $R_{s,i}^{t}$ is the self-reflection of the $i$ -th agent $A_{i}^{c}$ according to its response $H_{i}^{t}$ and task instruction $I_{i}^{c}$ .

Peer-level Assessment. In LLM-based multi-agent systems, cooperation among crew agents is prevalent; thus, facilitating mutual assistance among crew agents can enhance the overall system performance. Therefore, we introduce peer-level assessment among crew agents, which allows the performance of an agent to be assessed from the perspectives of other tasks, thereby obtaining a more comprehensive evaluation. Additionally, it boosts collaboration among the agents. Specifically, a crew agent reviews the responses of the other crew agents and generates feedback for them:

$$
R _ {i, j} ^ {t} = A _ {i} ^ {c} \left(H _ {i} ^ {t}, H _ {j} ^ {t}, I _ {j} ^ {c}\right), \tag {6}
$$

where $R_{i,j}^{t}$ is the feedback, written by the $i$ -th agent $A_{i}^{c}$ , on the response $H_{j}^{t}$ generated by the $j$ -th agent.

Supervisory-level Assessment. On the other hand, assessment from the leader agent $A^l$ can evaluate how well each agent completes its sub-task and assist the agents in better problem-solving from a comprehensive perspective. Therefore, we employ the leader agent $A^l$ to evaluate the performance of each crew agent according to its response $H_i^t$ and instruction $I_i^c$ :

$$
R _ {l, i} ^ {t} = A ^ {l} \left(H _ {i} ^ {t}, I _ {i} ^ {c}\right), \tag {7}
$$

where $R_{l,i}^{t}$ is the review for the $i$ -th agent written by the leader agent $A^l$ .
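A minimal sketch of how the three assessment levels of Equations (5)–(7) combine into the review set handed back to each crew agent. The helper `assess` is a hypothetical stand-in for an LLM call that writes a textual review; the paper does not prescribe this interface.

```python
# Assemble the review set of Eqs. 5-8: self, peer, and supervisory reviews.
# `assess` is a hypothetical LLM-judging helper, not the authors' API.
from typing import Callable, Dict, List


def collect_reviews(
    assess: Callable[[str, str], str],  # (reviewer role, material) -> review text
    responses: List[str],               # H_i^t for each crew agent
    instructions: List[str],            # I_i^c for each crew agent
) -> List[Dict[str, object]]:
    review_sets = []
    for i, (resp, inst) in enumerate(zip(responses, instructions)):
        # Eq. 5: self-level assessment of the agent's own response.
        self_review = assess(f"crew-{i}", f"{resp}\n{inst}")
        # Eq. 6: peer-level assessment from every other crew agent.
        peer_reviews = [
            assess(f"crew-{j}", f"{responses[j]}\n{resp}\n{inst}")
            for j in range(len(responses)) if j != i
        ]
        # Eq. 7: supervisory-level assessment from the leader agent.
        leader_review = assess("leader", f"{resp}\n{inst}")
        # Eq. 8: the full review set R_i^t returned to agent i.
        review_sets.append(
            {"self": self_review, "peers": peer_reviews, "leader": leader_review}
        )
    return review_sets
```

Each agent then conditions its next-turn generation on its own entry of `review_sets`, which plays the role of $R_i^t$ in Equation (2).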
+

After conducting the $360^{\circ}$ performance assessment from three different sources, in the $t$ -th turn the crew agent $A_{i}^{c}$ receives a review set $R_{i}^{t}$ that contains three types of reviews:

$$
R _ {i} ^ {t} = \left\{R _ {*, i} ^ {t}, R _ {l, i} ^ {t}, R _ {s, i} ^ {t} \right\}, \tag {8}
$$

where $R_{*,i}^{t}$ denotes the set of peer reviews $\{R_{j,i}^{t}\}_{j\neq i}$ . The leader agent only has the self-level assessment review $R_{l}^{t}$ .

# 3.4 Dual-level Experience Pool

For company managers, performance evaluation of employees is not the ultimate goal. Instead, it is essential to enable employees to accumulate experience from the assessment and perform better in subsequent tasks. Inspired by this, we propose the dual-level experience pool, which constructs two experience pools, local and global, to help each agent and the entire multi-agent system accumulate reusable problem-solving experience during the task completion process.

# 3.4.1 Local-level Experience Construction

Local experience pertains to each agent itself in completing the current sub-task; it is summarized from the previous turns of $360^{\circ}$ performance assessment in the current task, thereby helping the agent achieve better results in the next turn. After obtaining the reviews $R_{i}^{t}$ , the agent $A_{i}^{c}$ distills its local experience $e_{l}^{\prime}$ :

$$
e _ {l} ^ {\prime} = A _ {i} ^ {c} \left(H _ {i} ^ {t}, R _ {i} ^ {t}\right), \tag {9}
$$

where the crew agent $A_{i}^{c}$ is prompted by the following instruction:

Based on feedback from others, past experiences, and from the perspective of your own role, summarize some experiences that you may use in the future. Your answer should be as concise as possible. Do not output any content other than the required output.

Your role: {role}.
+

Feedback from others: {peer_feedback} $R_{i}^{t}$

Previous experience: $\{\mathrm{pre\_exp}\} E_{l,i}$

Structured output requirements:

Role: [Your Role]

Experience: [Short experience description]

Then we append $e_l^\prime$ to the local experience pool $E_{l,i}$ of the $i$ -th crew agent $A_{i}^{c}$ .

# 3.4.2 Global-level Experience Construction

Global experience refers to reusable experiences for the entire multi-agent system. We build these experiences from the final response of each crew agent and the feedback $m$ of the evaluator $P$ after completing the task:

$$
e _ {g} ^ {\prime} = A ^ {l} \left(m, \left\{H _ {1} ^ {T}, H _ {2} ^ {T}, \dots , H _ {N} ^ {T} \right\}\right), \tag {10}
$$

where the leader agent $A^l$ is prompted by the following instruction:

Obtain new experiences based on previous experiences and current scores. The new experience should be summarized while presenting the experience gained from this task. Here are the final results: {Final_Res} $H$

Here is the evaluation of the task: {evaluation} $m$ As a leader, you need to learn from this task, and the format should be "Where did I do well this time... Why didn't I do well this time... Next time I should...". Note that these experiences are for this type of task. The answer should be as concise as possible.

Finally, we grow the global experience pool $E_g$ by appending $e_g'$ to it.

# 4 Experimental Setup

# 4.1 Tasks and Datasets

Following existing works (Wang et al., 2023d; Xie et al., 2024), we verify the performance of $360^{\circ}$ REA on two tasks: creative writing and travel plan making. (i) Creative Writing. We use the Trivia Creative Writing (Wang et al., 2023d) dataset, which contains 100 samples. Each sample requires generating a story that covers the given topic and the input questions. (ii) Travel Plan Making.
We use the Trip Planning (Xie et al., 2024) dataset, which comprises travel plans for 50 world-famous attractions, with specific requirements generated based on the characteristics of the destinations. It includes the following fields: destination, basic requirements, duration of travel, season, type of travelers, number of travelers, and budget.

# 4.2 Evaluation Metrics

Since the objectives of the travel planning and creative writing tasks are not the same, we employ different evaluation perspectives in our experiments to assess the quality of the model-generated results.

For the story generation task, we employ the matched answer rate Match% $(M\%)$ used in Wang et al. (2023d), which measures the percentage of ground-truth target answers covered by the generated story for each question. We also propose two semantic evaluation metrics, which prompt the LLM to evaluate whether the generated results satisfy specific aspects: (1) Emotional Engagement (E.E.), whether the generated story evokes the reader's emotion and empathy; (2) Insightfulness (Ins), whether the story carries an insightful plot and has a profound impact on readers.

For the travel planning task, we employ three semantic metrics evaluating three aspects: (1) Plan Correctness $(P.Co.)$ , whether the plan covers all required details and is reasonable; (2) Plan Novelty $(P.N.)$ , whether the plan is novel and creative; (3) Plan Customization $(P.Cu.)$ , whether the plan is customized to the personal interests and preferences of the traveler.

For the logic grid puzzle task, we use one evaluation metric: Accuracy (Acc), which measures the percentage of correctly predicted house numbers compared to the ground truth targets.

# 4.3 Baselines

We compare our method with agent-based methods, including: (i) GPT-4 (OpenAI, 2023) is a closed-source LLM from OpenAI.
We conduct task solving via in-context learning, using a single instruction with the same input data as our $360^{\circ}$ REA. (ii) Mistral (Jiang et al., 2023) is an open-source LLM; we use the mixture-of-experts version named Mistral-medium with 46 billion parameters and use the same prompt as for GPT-4. (iii) OKR (Zheng et al., 2023) is a hierarchical multi-agent collaboration framework. It proposes hierarchical OKR generation to decompose objectives into multiple sub-objectives and assigns new agents based on key results and agent responsibilities. (iv) AgentVerse (Chen et al., 2023c) is a multi-agent framework that enhances collaborative problem-solving by employing specialized agents working together through structured interactions. (v) SPP (Wang et al., 2023d) proposes Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.

We implement OKR, AgentVerse, and SPP with GPT-4 as the backbone LLM. We also employ two variants of $360^{\circ}$ REA: $360^{\circ}$ REA-G and $360^{\circ}$ REA-M, with GPT-4 and Mistral as the backbone, respectively.

# 4.4 Implementation Details

We conduct 2 turns of $360^{\circ}$ assessment. We use 1 leader agent and let the number of crew agents range from 3 to 5, instructing the LLM to follow this range by stating it in the prompt. The number of experiences selected by the leader from the global-level experience pool is 10. We use the 1106-preview version of GPT-4 with a temperature of 1.0. Our anonymized source code is publicly released2.

# 5 Experimental Results

# 5.1 Overall Performance

Table 2 shows the performance of our proposed $360^{\circ}$ REA and the baselines in terms of three metrics on each of the two benchmark datasets. We find that $360^{\circ}$ REA shows superior performance in terms of all metrics compared to the state-of-the-art method SPP, which is also a multi-agent hierarchical
The first three metrics (M%, E.E., Ins) are for Creative Writing; the last three (P.Co., P.N., P.Cu.) are for Travel Plan Making.

| Model | M% (↑) | E.E. (↑) | Ins (↑) | P.Co. (↑) | P.N. (↑) | P.Cu. (↑) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4 (OpenAI, 2023) | 77.1 | 63.2 | 66.3 | 83.1 | 70.2 | 93.3 |
| Mistral (Jiang et al., 2023) | 62.1 | 55.4 | 60.5 | 82.3 | 66.3 | 92.4 |
| OKR (Zheng et al., 2023) | 54.5 | 58.3 | 62.8 | 86.4 | 75.7 | 93.0 |
| AgentVerse (Chen et al., 2023c) | 75.8 | 58.6 | 54.7 | 85.6 | 75.9 | 94.1 |
| SPP (Wang et al., 2023d) | 83.8 | 54.2 | 57.0 | 88.8 | 74.4 | 92.6 |
| 360°REA-G | 87.0 | 66.1 | 72.8 | 89.6 | 80.2 | 94.2 |
| 360°REA-M | 71.2 | 59.6 | 65.1 | 84.9 | 70.4 | 94.0 |
| 360°REA-G w/o ExpPool | 83.1 | 63.1 | 66.2 | 88.0 | 76.8 | 93.7 |
| 360°REA-G w/o 360°F | 84.1 | 63.9 | 67.1 | 88.6 | 76.6 | 93.9 |
+

collaboration framework with agent discussion. This phenomenon indicates that our proposed $360^{\circ}$ assessment and dual-level experience pool help the agents generate responses of higher quality. We also find that each variant of $360^{\circ}$ REA (i.e., $360^{\circ}$ REA-G and $360^{\circ}$ REA-M) achieves better performance than simply prompting its LLM backbone, which demonstrates that our proposed framework can adapt to any LLM backbone.

# 5.2 Human Evaluation

As using an LLM as the evaluator is sometimes not robust, we also employ human evaluation to further validate the quality of the model-generated answers. For the creative writing task, we evaluate the generated responses in terms of: (1) Coherence, the score indicating the story's coherence; (2) Creative, the score indicating the story's creativeness. For travel plan making, we evaluate the responses in two aspects: (1) Correct, the score showing whether the plan is correct and complete, and (2) Customized, the score measuring whether the plan is customized. We hire 3 educated annotators with Bachelor's degrees to conduct double-blind annotation on randomly shuffled results. The annotation scores range from 1 to 100. From Table 3, we observe that our $360^{\circ}\mathrm{REA}$ excels in all aspects compared to the strong multi-agent baseline SPP and GPT-4, which are the strongest baselines in the two tasks (Table 2). We also conclude that the automatic metrics in Table 2 are consistent with the human evaluation.

Table 2: The overall performance of ${360}^{ \circ }$ REA compared with baselines and ablation studies on the two tasks.
The first two metrics (Coherence, Creative) are for Creative Writing; the last two (Correct, Customized) are for Travel Plan Making.

| Model | Coherence (↑) | Creative (↑) | Correct (↑) | Customized (↑) |
| --- | --- | --- | --- | --- |
| GPT-4 | 76.2 | 76.1 | 72.6 | 74.3 |
| SPP | 73.8 | 74.0 | 74.5 | 77.5 |
| 360°REA-G | 79.1 | 79.2 | 79.8 | 84.4 |
+

# 5.3 Ablation Study

To verify the effectiveness of each module of $360^{\circ}$ REA, we conduct ablation studies on $360^{\circ}$ REA-G with several ablation variants: (i) $360^{\circ}$ REA-G w/o ExpPool: We remove both the local and global experience pools from $360^{\circ}$ REA and directly use the assessments from other agents as the prompt of each agent. (ii) $360^{\circ}$ REA-G w/o $360^{\circ}$ F: We directly use the output of each crew agent without conducting any level of assessment.

As the results in Table 2 show, the $360^{\circ}$ REA-G w/o ExpPool variant achieves lower scores than our full model on most metrics, indicating the necessity of using the experience pool to recall reusable experience. Moreover, $360^{\circ}$ REA-G w/o $360^{\circ}$ F consistently underperforms $360^{\circ}$ REA-G, showing the effectiveness of conducting all levels of assessment among the agents.

Table 3: Results of human evaluation on two tasks.
| Model | M% | E.E. | Ins |
| --- | --- | --- | --- |
| 360°REA-G | 87.0 | 66.1 | 72.8 |
| 360°REA-G w/o GlobalExp | 84.0 | 64.0 | 68.5 |
| 360°REA-G w/o LocalExp | 83.7 | 65.2 | 69.9 |
+

Table 4: Comparison of different usages of the experience pools.
| Model | M% | E.E. | Ins |
| --- | --- | --- | --- |
| 360°REA-G | 87.0 | 66.1 | 72.8 |
| 360°REA-G w/o Supervisory | 85.1 | 65.9 | 71.9 |
| 360°REA-G w/o Peer | 84.8 | 64.0 | 67.5 |
| 360°REA-G w/o Self | 86.2 | 64.6 | 68.7 |
+

Table 5: Performance of ablation models on different assessment levels.

# Task Definition

Write a short and coherent story about Legend of Zelda that incorporates the answers to the following 5 questions: Which British monarch famously said 'I don't wish to open windows into men's souls'? Which British singer played Jareth the Goblin King in the 1986 film 'Labyrinth'? The 1987 film 'Cry Freedom' is a biographical drama about which South African civil rights leader? Which British actress played Valerie in the 1997 film 'Nil By Mouth'? What was the name of the Brazilian player who won the Women's Singles Finals at Wimbledon in 1959, 1960 and 1964?

# $360^{\circ}$ REA Output

In the age of legends and mystical chants, the kingdom of Hyrule stood on the precipice of an age-old challenge, the Enchanted Labyrinth. Queen Zelda, a wise and fair ruler whose principles mirrored those of Queen Elizabeth I...

Under a sky lit with stardust, Link, the hero garbed in green, stood before the labyrinth's entrance, his heart bracing for the trials ahead. A figure emerged from the shadows, a labyrinth guardian as enigmatic and charismatic as Jareth the Goblin King, whose likeness to the British singer David Bowie was unmistakable, ...

Deeper within the winding passages, Link crossed paths with a spirit named Steve Biko, ...

Throughout the labyrinth, the presence of Valerie, inspired by Kathy Burke's portrayal ...

In the labyrinth's final sanctum, a guardian awaited, her grace and skill rivaling that of Maria Bueno. ...

The kingdom rejoiced as Link returned from the labyrinth...

Thus, the saga of the Enchanted Labyrinth of Hyrule would forever resonate through the ages—a testament to the power of knowledge, the resonance of culture, and the eternal dance of history intertwining with the threads of the present.

# SPP Output

In the mystical land of Hyrule, a legend unfolded beneath the watchful eyes of the Triforce.
Once there was a queen, wise and just, known as Elizabeth, ... Out from this labyrinth emerged an enigmatic figure, Jareth, with a voice as captivating as the British singer of old, David Bowie ... + +But destiny had chosen a hero, young Link, to rise against this new foe. ... + +In his company was the valiant Valerie, who resembled the strong-willed British actress seen in 'Nil By Mouth'. Together they braved the challenges of Jareth's labyrinth, finding strength in unity. + +Throughout their quest, they were inspired by the tales of Maria... + +After a long and arduous journey, Link and his companions faced Jareth and freed Hyrule from the labyrinth's spell. They returned to their queen, ... + +Table 6: Examples of the generated responses of the two methods on the creative writing task. + +# 5.4 Case Study + +Table 6 shows an example of creative writing generated by SPP and $360^{\circ}$ REA-G. The bold text in the responses indicates the correct answers to the input questions. $360^{\circ}$ REA-G's result contains the answers to all five questions, whereas SPP only correctly answered two of them. The text in green and red indicates the strengths and weaknesses of the generated stories, respectively. The green parts show that $360^{\circ}$ REA-G's output has a rich plot and clearly presents the cause and ending of the story. In contrast, the red parts show that SPP's plot lacks an introduction and that its use of "Valerie" is abrupt and unnatural. From this case, we observe that $360^{\circ}$ REA-G produces more reasonable, logical, and fluent stories than SPP. We also show an example with the corresponding experiences in Appendix A. + +# 6 Discussion + +# 6.1 Analysis on Different Level Experiences + +In this paper, we propose a novel dual-level experience pool. In this section, we employ two ablation models to explore which level of the experience pool is more effective and conduct experiments on the creative writing dataset.
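To ground the ablation that follows, the dual-level design can be sketched in a few lines of code. All class and method names below are our own illustration of the idea (a shared global pool of high-level lessons plus per-agent local pools of specific skills), not the paper's implementation:

```python
class DualLevelExperiencePool:
    """Illustrative two-level store: the global pool keeps high-level,
    team-wide task-solving lessons, while each local pool keeps one
    agent's fine-grained task-completion skills."""

    def __init__(self):
        self.global_pool = []   # shared, high-level lessons
        self.local_pools = {}   # agent name -> list of specific skills

    def add_global(self, lesson):
        self.global_pool.append(lesson)

    def add_local(self, agent, skill):
        self.local_pools.setdefault(agent, []).append(skill)

    def retrieve(self, agent, k=3):
        # Fetch the k most recent lessons at both levels; in an agent
        # framework these would be injected into the agent's prompt.
        return {
            "global": self.global_pool[-k:],
            "local": self.local_pools.get(agent, [])[-k:],
        }

# Ablating one level corresponds to dropping one of the two keys
# from the retrieved context before prompt injection.
pool = DualLevelExperiencePool()
pool.add_global("State the cause and ending of the story explicitly.")
pool.add_local("writer", "Embed each required answer in a named character.")
context = pool.retrieve("writer")
```

Under this sketch, the "w/o global" and "w/o local" ablations differ only in which half of `context` reaches the agent.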
The results in Table 4 indicate that local experience is the most helpful. One possible reason for this phenomenon is that local experience contains lower-level experiences, including more specific task-completion skills. From Table 4, we also observe that, compared to $360^{\circ}$ REA, the model without global experience suffers a performance decrease of $5.9\%$ in terms of Ins, indicating that the higher-level task-solving experience in the global experience pool also plays a crucial role in improving model performance. + +# 6.2 Analysis on Different Level Assessment + +We also investigate the effectiveness of different levels of assessment on the creative writing task. From Table 5, we find that all the ablation models achieve worse performance than $360^{\circ}$ REA-G, which demonstrates the effectiveness of each level of assessment in $360^{\circ}$ REA. We also find that the variant model without the peer-level assessment performs worst among all the ablation models. This phenomenon indicates that peer-level assessment provides the greatest improvement in overall performance, confirming our hypothesis that mutual evaluation among crew members helps each agent not only complete its own task but also assist other agents as much as possible. + +# 7 Conclusion + +In this paper, we propose a hierarchical multi-agent framework, Reusable Experience Accumulation with $360^{\circ}$ Assessment $(360^{\circ}\mathrm{REA})$ , which represents a significant step towards enhancing the capabilities of LLM-based agents in tackling complex tasks. Unlike existing approaches that primarily focus on team optimization or self-reflection, $360^{\circ}\mathrm{REA}$ incorporates a comprehensive agent performance evaluation strategy through the novel $360^{\circ}$ performance assessment method. This method provides multi-perspective and fine-grained feedback to agents, through which agents receive valuable insights at the self, peer, and supervisory levels.
Furthermore, to substantively improve agent task-solving capabilities, we introduce the dual-level experience pool for the accumulation of experiences based on evaluation feedback. Our experiments on complex task datasets demonstrate the effectiveness of $360^{\circ}\mathrm{REA}$ in achieving superior performance compared to state-of-the-art baselines. By emphasizing the importance of comprehensive evaluation and experience accumulation, $360^{\circ}\mathrm{REA}$ establishes a promising direction for advancing the field of multi-agent systems built on LLMs, offering a practical and impactful framework for addressing intricate tasks. + +# Limitations + +In this paper, we only use text as input. However, multi-modal data (e.g., images and videos) is commonly used in many tasks. Since existing multi-modal LLMs are capable of understanding both textual and multi-modal information, our multi-agent framework can be adapted to leverage multi-modal inputs in many complex tasks. In the future, we plan to incorporate multi-modal information into our framework. + +# Ethics Statement + +The multi-agent collaboration framework proposed in this paper is built upon LLMs, which are trained with reinforcement learning from human feedback (RLHF) (Lambert et al., 2022). Since these LLMs have already been aligned with human values and preferences during the RLHF stage, the likelihood of generating unethical content is low. However, existing LLMs still cannot completely avoid generating harmful content. Therefore, the answers generated by the multi-agent collaboration framework still require further human expert verification before use. + +# Acknowledgements + +This work was supported by the Natural Science Foundation of Shandong Province (ZR2023QF159), the National Natural Science Foundation of China (U2001212, 62032001, 61932004, 62306330, and WDZC20235250103), and the Young Elite Scientist Sponsorship Program by CAST (2023QNRC001) under Grant No. YESS20230367.
+ +# References + +Beverly Alimo-Metcalfe. 1998. 360 degree feedback and leadership development. International Journal of Selection and Assessment, 6(1):35-44.
+Anishka, Atharva Mehta, Nipun Gupta, Dhruv Kumar, and Pankaj Jalote. 2023. Can ChatGPT Play the Role of a Teaching Assistant in an Introductory Programming Course?
+Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. 2023. Graph of Thoughts: Solving Elaborate Problems with Large Language Models.
+David Warner Bracken, Dale S. Rose, and Allan H. Church. 2016. The evolution and devolution of $360^{\circ}$ feedback. Industrial and Organizational Psychology, 9:761-794.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners.
+ByteDance. 2024. 360 performance evaluation in Lark app.
+Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. CodeT: Code Generation with Generated Tests.
+Guangyao Chen, Siwei Dong, Yu Shu, Ge Zhang, Jaward Sesay, Borje F. Karlsson, Jie Fu, and Yemin Shi. 2023a. AutoAgents: A Framework for Automatic Agent Generation.
+Justin Chih-Yao Chen, Swarnadeep Saha, and Mohit Bansal. 2023b. ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs.
+Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidi Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating Large Language Models Trained on Code. +Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2023c. AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors. +Yi Cheng, Wenge Liu, Jian Wang, Chak Tou Leong, Yi Ouyang, Wenjie Li, Xian Wu, and Yefeng Zheng. 2023. COOPER: Coordinating Specialized Agents towards a Complex Dialogue Goal. + +Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2023. Self-collaboration Code Generation via ChatGPT. +Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, and Chenglin Wu. 2023. MetaGPT: Meta Programming for Multi-Agent Collaborative Framework. +Albert Q. 
Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B.
+Nathan Lambert, Louis Castricato, Leandro von Werra, and Alex Havrilla. 2022. Illustrating reinforcement learning from human feedback (RLHF). Hugging Face Blog. https://huggingface.co/blog/rlhf.
+Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate.
+Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, and Silvio Savarese. 2023a. BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents.
+Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. 2023b. Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization.
+Ioanna Lykourentzou, Federica Lucia Vinella, Faez Ahmed, Costas Papastathis, Konstantinos Papangelis, Vassilis-Javed Khan, and Judith Masthoff. 2021. Self-Organizing Teams in Online Work Settings.
+Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-Refine: Iterative Refinement with Self-Feedback.
+Ning Miao, Yee Whye Teh, and Tom Rainforth. 2023. SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning.
+Nathalia Nascimento, Paulo Alencar, and Donald Cowan. 2023. GPT-in-the-Loop: Adaptive Decision-Making for Multiagent Systems.
+OpenAI. 2023. GPT-4 Technical Report.
+ +Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Sarah Wooders, and Joseph E. Gonzalez. 2023. MemGPT: Towards LLMs as Operating Systems.
+Maury A. Peiperl. 2001. Getting 360 degrees feedback right. Harvard Business Review, 79(1):142-147.
+Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. Communicative Agents for Software Development.
+Zhiyao Ren, Yibing Zhan, Baosheng Yu, Liang Ding, and Dacheng Tao. 2024. Healthcare copilot: Eliciting the power of general LLMs for medical consultation. arXiv preprint arXiv:2402.13408.
+Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language Agents with Verbal Reinforcement Learning.
+Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a. Voyager: An Open-Ended Embodied Agent with Large Language Models.
+Haiyang Wang, Zhiliang Tian, Xin Song, Yue Zhang, Yuchen Pan, Hongkui Tu, Minlie Huang, and Bin Zhou. 2024. Intent-aware and hate-mitigating counterspeech generation via dual-discriminator guided LLMs. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 9131-9142. ELRA and ICCL.
+Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-Consistency Improves Chain of Thought Reasoning in Language Models.
+Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. Self-Instruct: Aligning Language Models with Self-Generated Instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics.
+Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023d. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration.
+Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
+Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W. White, Doug Burger, and Chi Wang. 2023. AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation.
+Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.
+Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. 2024. TravelPlanner: A Benchmark for Real-World Planning with Language Agents.
+Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023a. Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
+Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023b. ReAct: Synergizing Reasoning and Acting in Language Models.
+Zhangyue Yin, Qiushi Sun, Cheng Chang, Qipeng Guo, Junqi Dai, Xuanjing Huang, and Xipeng Qiu. 2023. Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication.
+Yi Zheng, Chongyang Ma, Kanle Shi, and Haibin Huang. 2023. Agents meet OKR: An Object and Key Results Driven Agent System with Hierarchical Self-Collaboration and Self-Evaluation.
+Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. 2023. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.
+Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, and Jifeng Dai. 2023. Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. + +# A Example of Generated Response + +In this section, we show an example of the generated response when training our $360^{\circ}\mathrm{REA}$ in the early training stage (with few experiences in the global experience pool). We show the output of several modules in $360^{\circ}\mathrm{REA}$ , including the review $m$ of the evaluator (shown in Equation 4) and the global experience $e_g'$ (shown in Equation 10) learned from this example. + +In the review $m$ of the evaluator, we can see that our generated travel plan still has some flaws, with a lower score in terms of feasibility, and the evaluator has provided the corresponding reasons. The text in blue indicates the flaws mentioned in the evaluation. + +In the global experience $e_g'$ , the text in blue indicates that $360^\circ \mathrm{REA}$ summarizes some detailed skills from the review $m$ . The text in green indicates the experiences that are summarized from input experiences when generating this response (i.e., experiences obtained from previous samples). It demonstrates that our $360^\circ \mathrm{REA}$ can not only summarize new experiences from the current task-solving process, but also keep previous experiences in mind and form new experiences from them. + +# Task description: + +destination:Barcelona, Spain days:6 + +description:A 6-day adventure in Barcelona, experiencing its unique architecture by Gaudi, vibrant food markets, and beautiful beaches.
+ +season:Summer month:June + +interests:Architecture, Food Markets, Beaches members:adults:4, children:0 preferences:Cultural Explorers Budget Range:Mid-range + +# The generated travel plan by $360^{\circ}$ REA: + +# Day 1: Arrival and Architectural Introduction + +- Arrive in Barcelona and check in at Hotel Midmost.
+- Afternoon: Private walking tour with a local architect to explore lesser-known Modernista sites.
+- Evening: Guided visit to Recinte Modernista de Sant Pau. + +# Day 2: Iconic Gaudi and Culinary Delights + +- Morning: Guided tour of La Sagrada Familia.
+- Afternoon: La Boqueria market tour followed by a cooking class with a local gastronomy expert.
+- Evening: Free time for relaxation or independent exploration. + +# Day 3: Park Güell Wonders and Coastal Activities + +- Morning: Park Güell visit including a creative workshop on Gaudi's nature inspirations.
+- Afternoon: Barcelona Beach with sunbed reservation and option for a kite-surfing lesson or beach volleyball.
+- Evening: Attend a local jazz festival by the sea or a Festa Major street festival. + +# Day 4: Traditional Markets and Twilight Art + +- Morning: Local market tour at Mercat de la Llibertat.
+- Afternoon: Casa Vicens visit and leisurely stroll through La Ribera district to see Basilica de Santa Maria del Mar.
+- Evening: Visit to lesser-known architectural wonders with a rooftop concert featuring Spanish guitar at Torre Bellesguard. + +# Day 5: Artistic Immersion and Seaside Farewell + +- Morning: Beach clean-up activity followed by a picnic or, if weather permits, a visit to Torre Bellesguard including a rooftop concert.
+- Afternoon: Artistic workshop inspired by Gaudi.
+- Evening: Passeig de Gracia tour with Casa Batlló & La Pedrera and reserved seating at a seaside restaurant for a farewell dinner. + +# Day 6: Leisure and Departure + +- Morning: Guided beach yoga session or another family-friendly activity such as a visit to Hospital de Sant Pau's pavilions.
+ +- Afternoon: Free for last-minute explorations or shopping.
+- Airport transfer via pre-booked minivan service. + +# Emergency Strategy and Contingencies: + +- Daily travel insurance briefings, weather advisories, and emergency point of contact sharing.
+- VIP access to indoor venues like the Picasso Museum in case of hot weather or heavy rain.
+- Communication protocols set to inform the group of any changes. + +# Logistics and Budget Management: + +- Hola BCN Travel Card and bicycle rentals for daily commutes.
+- Real-time budget tracking with contingency fund for spontaneous decisions.
+- Comprehensive local interactions from curated market tours to hands-on workshops, ensuring rich cultural exposure. + +# Concluding Remarks: + +This plan strikes a balance between gourmet experiences, art and architecture, and coastal leisure, aptly tailored for adults, and takes into account Barcelona's summer climate. Each activity is selected to enhance the group's travel experience, seasoned with local insights and flexibility for a safe and memorable journey in Barcelona. + +# Review from evaluator: + +Alignment with Interests and Preferences: 20 + +The travel plan meticulously aligns with the group's interests in architecture, food markets, and beaches, as well as their preference for cultural exploration. Activities such as private architectural tours, cooking classes, and creative workshops directly cater to these interests. The inclusion of iconic Gaudi sites, local markets, and beach activities ensures a tailored experience for the group's cultural and leisure preferences. + +Variety and Balance: 19 + +This itinerary offers a commendable balance of activities, from architectural tours and artistic workshops to leisure time at the beach and culinary experiences. The diversity of experiences ensures engagement throughout the trip, catering to both cultural interests and relaxation needs.
The inclusion of evening activities like rooftop concerts and a farewell dinner at a seaside restaurant adds a delightful variety to the travel experience. + +Feasibility and Logistics: 17 + +Overall, the plan appears feasible with a coherent schedule and consideration for logistics, including the use of the Hola BCN Travel Card and bicycle rentals for daily commutes. However, the packed itinerary might challenge the group's energy levels, especially with multiple activities scheduled on the same day. Ensuring sufficient rest and travel time between activities would enhance the feasibility of this ambitious schedule. + +Local Insights and Unique Experiences: 20 + +The plan excels in offering unique, local experiences, from a market tour followed by a cooking class with a local expert to creative workshops inspired by Gaudi. These activities not only provide an immersive cultural experience but also allow for meaningful interaction with the local community. The thoughtful inclusion of lesser-known sites and local festivals further enriches the travel experience with authentic insights into Barcelona's culture and lifestyle. + +Overall Evaluation: + +This travel plan is excellently crafted, demonstrating a deep understanding of the group's preferences and a strong alignment with their cultural and leisure interests. While there is a minor consideration for the feasibility of the packed schedule, the variety of activities and the emphasis on local, unique experiences promise a memorable and enriching visit to Barcelona. The plan's thoughtful contingencies and logistics management also underscore its practicality and attentiveness to the group's comfort and experience. + +# Generated Global Experience: + +Where did I do well this time: I excelled in aligning the travel plan closely with the group's specified interests and preferences, especially in architecture, food markets, and beaches.
My success in integrating unique and authentic local experiences, such as creative workshops and cooking classes with local experts, significantly enriched the cultural exploration aspect of the trip. + +Why didn't I do well this time: I underestimated the importance of ensuring adequate rest periods and manageable schedules within the plan. The feasibility and logistics score reflected this oversight, indicating that the itinerary might be too packed, potentially leading to fatigue and limiting the group's ability to fully enjoy each activity. + +Next time I should: Focus on creating a more balanced itinerary that not only aligns with the travelers' interests but also accounts for practical aspects such as rest, travel times between activities, and the overall pace of the trip. This means potentially reducing the number of activities per day or incorporating more flexible, free time slots to ensure a comfortable and enjoyable experience for all travelers. Additionally, I should consider more deeply the seasonal appropriateness of each activity, ensuring that the group is comfortable and able to enjoy the trip regardless of weather conditions. 
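The review above scores four criteria out of 20 each; in the full framework, such reviews arrive from three assessment levels (self, peer, and leader). A minimal sketch of how these levels might be combined into one feedback score follows. The weights and function names are hypothetical illustrations, not taken from the paper:

```python
def mean(values):
    return sum(values) / len(values)

def overall(criterion_scores):
    # Average a single review's criterion scores (e.g., each out of 20).
    return mean(list(criterion_scores.values()))

def aggregate_360(self_review, peer_reviews, leader_review,
                  weights=(0.2, 0.4, 0.4)):
    """Weighted combination of self-, peer-, and leader-level reviews.
    peer_reviews is a list with one criterion-score dict per peer."""
    w_self, w_peer, w_leader = weights
    peer_score = mean([overall(r) for r in peer_reviews])
    return (w_self * overall(self_review)
            + w_peer * peer_score
            + w_leader * overall(leader_review))

# Example using the criterion scores from the evaluator review above.
review = {"alignment": 20, "variety": 19, "feasibility": 17, "local_insights": 20}
score = aggregate_360(review, [review, review], review)  # all levels agree here
```

Removing one level (as in the "w/o Peer" and "w/o Self" ablations) amounts to zeroing its weight and renormalizing the rest.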
\ No newline at end of file diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/images.zip b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b942160b62ae6f6b36534efea35317a9fabf8167 --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68aa2d2cac441d902e6e02a64581572692266b1189fd7df5f35d3230c8554fbb +size 273292 diff --git a/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/layout.json b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1054e4dc57e5449d63fb1c054c2e36eb52ccddda --- /dev/null +++ b/360reatowardsareusableexperienceaccumulationwith360assessmentformultiagentsystem/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ee6884e67e7ee2e1d18ef0e1e63163ac6db3e19117bb8829bb3e2ee00c79af9 +size 560869 diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_content_list.json b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5ed4b9f3e868228911f72a99230e1571c15babbd --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4de29782c46bd6b52d3ef9ccbbd845e7182c92519fc314d78016e91f3ca0d9c7 +size 89372 diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_model.json 
b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8c39658054683a18acd966545d8bfbc06ffc6c98 --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1a9a64a64bcfb83c8e0005c22a05806e8222003ca70bb1774ac5c4993ac7e39 +size 100709 diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_origin.pdf b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3973e9f14e55b32ab115fe775543f776d1af93b8 --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/efe25afa-6805-4ae2-a54c-9fc61444412f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:015b2a3666155fb73e1f04b60f5334ea2af0c98560990352437b97cc54030fc9 +size 4065448 diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/full.md b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5351b5656b96886d8ce7371049771e936a61d812 --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/full.md @@ -0,0 +1,305 @@ +# 3MVRD: Multimodal Multi-task Multi-teacher Visually-Rich Form Document Understanding + +Yihao Ding $^{1,2}$ , Lorenzo Vaiani $^{3}$ , Soyeon Caren Han $^{1,2*}$ , Jean Lee $^{1}$ , Paolo Garza $^{3}$ , Josiah Poon $^{1}$ , Luca Cagliero $^{3}$ + +1The University of Sydney, 2The University of Melbourne, 3Politecnico di Torino + +{yihao.ding,caren.han,jean.lee,josiah.poon}@sydney.edu.au, + 
+caren.han@unimelb.edu.au, {lorenzo.vaiani,paolo.garza,luca.cagliero}@polito.it + +# Abstract + +This paper presents a groundbreaking multimodal, multi-task, multi-teacher joint-grained knowledge distillation model for visually-rich form document understanding. The model is designed to leverage insights from both fine-grained and coarse-grained levels by facilitating a nuanced correlation between token and entity representations, addressing the complexities inherent in form documents. Additionally, we introduce new intra-grained and cross-grained loss functions to further refine the diverse multi-teacher knowledge distillation transfer process, bridging distribution gaps and yielding a harmonised understanding of form documents. Through a comprehensive evaluation across publicly available form document understanding datasets, our proposed model consistently outperforms existing baselines, showcasing its efficacy in handling the intricate structures and content of visually complex form documents1. + +# 1 Introduction + +Understanding and extracting structural information from Visually-Rich Documents (VRDs), such as academic papers (Zhong et al., 2019; Ding et al., 2023b), receipts (Park et al., 2019), and forms (Jaume et al., 2019; Ding et al., 2023a), holds immense value for Natural Language Processing (NLP) tasks, particularly in information extraction and retrieval. While significant progress has been made in solving various VRD benchmark challenges, including layout analysis and table structure recognition, the task of form document understanding remains notably challenging. The complexity of form document understanding arises from two main factors: 1) the involvement of two distinct authors in a form and 2) the integration of diverse visual cues. Firstly, forms involve two primary authors: form designers and form users. Form designers create a structured form to collect necessary information as a user interface.
Unfortunately, the form layouts, designed to collect varied information, often lead to complex logical relationships, causing confusion for form users and heightening the challenges in form document understanding. Secondly, the diverse authors of forms may produce a combination of different document natures, such as digital, printed, or handwritten forms. Users may submit forms in various formats, introducing noise such as low resolution, uneven scanning, and unclear handwriting. Traditional document understanding models do not account for the diverse carriers of document versions and their associated noise, exacerbating the challenges in understanding form structures and their components. Most VRD understanding models inherently hold implicit multimodal document structure analysis (Vision and Text understanding) knowledge either at the fine-grained (Huang et al., 2022; Wang et al., 2022) or coarse-grained (Tan and Bansal, 2019; Li et al., 2019) level. Fine-grained-only models mainly focus on learning detailed logical layout arrangements and cannot handle complex relationships among multimodal components, while coarse-grained models tend to omit significant words or phrases. Hence, we introduce a novel joint-grained document understanding approach with multimodal multi-teacher knowledge distillation. It leverages knowledge from various task-based teachers throughout the training process, aiming to create more inclusive and representative multi- and joint-grained document representations. + +Our contributions are summarised as follows: 1) We present a groundbreaking multimodal, multi-task, multi-teacher joint-grained knowledge distillation model designed explicitly to understand visually-rich form documents. 2) Our model outperforms existing baselines on publicly available form document datasets. 3) This research is the first to adopt multi-task knowledge distillation that incorporates multimodal form document components. + +
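The multi-teacher transfer described above can be illustrated with a standard distillation objective: the student distribution is pulled towards each teacher's temperature-softened distribution via a (possibly weighted) KL term. This is a generic sketch of multi-teacher distillation under our own naming, not the paper's exact intra-grained and cross-grained losses:

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable temperature-scaled softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions (q assumed strictly positive).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def multi_teacher_kd_loss(student_logits, teacher_logits_list,
                          temperature=2.0, weights=None):
    """Weighted sum of KL(teacher || student) over all teachers."""
    n = len(teacher_logits_list)
    weights = weights or [1.0 / n] * n
    student_p = softmax(student_logits, temperature)
    return sum(w * kl_divergence(softmax(t, temperature), student_p)
               for w, t in zip(weights, teacher_logits_list))

# A student matching its teachers incurs zero loss; a mismatch is penalised.
zero = multi_teacher_kd_loss([1.0, 2.0, 3.0], [[1.0, 2.0, 3.0]])
gap = multi_teacher_kd_loss([1.0, 2.0, 3.0], [[3.0, 2.0, 1.0]])
```

In the joint-grained setting, an intra-grained term would compare distributions at matching granularity (token-to-token), while a cross-grained term would relate token-level student outputs to entity-level teacher outputs.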
| Model | Modalities | Pre-training Datasets | Pre-training Tasks | Downstream Tasks | Granularity |
| --- | --- | --- | --- | --- | --- |
| Donut (2022) | V | IIT-CDIP | NTP | DC, VQA, KIE | Token |
| Pix2struct (2023b) | V | C4 corpus | NTP | VQA | Token |
| LiLT (2022) | T, S | IIT-CDIP | MVLM, KPL, CAI | DC, KIE | Token |
| BROS (2022) | T, S | IIT-CDIP | MLM, A-MLM | KIE | Token |
| LayoutLMv3 (2022) | T, S, V | IIT-CDIP | MLM, MIM, WPA | DC, VQA, KIE | Token |
| DocFormerv2 (2023) | T, S, V | IDL | TTL, TTG, MLM | DC, VQA, KIE | Token |
| Fast-StrucText (2023) | T, S, V | IIT-CDIP | MVLM, GTR, SOP, TIA | KIE | Token |
| FormNetV2 (2023a) | T, S, V | IIT-CDIP | MLM, GCL | KIE | Token |
| 3MVRD (Ours) | T, S, V | FUNSD, FormNLU | Multi-teacher Knowledge Distillation | KIE | Token, Entity |
+ +Table 1: Comparison with state-of-the-art models for receipt and form understanding. In the Modalities column, $T$ represents Textual information, $V$ represents Visual information, and $S$ represents Spatial information. + +# 2 Related Works + +Visually Rich Document (VRD) understanding entails comprehending the structure and content of documents by capturing the underlying relations between textual and visual modalities. Several downstream tasks, such as Layout Analysing (Luo et al., 2022), Key Information Extraction (KIE) (Wang et al., 2021), Document Classification (DC) (Xu et al., 2020), and Visual Question Answering (VQA) (Ding et al., 2022), have contributed to raising the attention of the multimodal learning community as shown by Table 1. In this work, we cope with form documents, whose structure and content are particularly challenging to understand (Srivastava et al., 2020). Form documents possess intricate structures involving collaboration between form designers, who craft clear structures for data collection, and form users, who interact with the forms based on their comprehension, with varying clarity and ease of understanding. + +Vision-only approaches: They exclusively rely on the visual representation (denoted by $V$ modality in Table 1) of the document components thus circumventing the limitations of state-of-the-art text recognition tools (e.g., Donut (Kim et al., 2022) and Pix2struct (Lee et al., 2023b)). Their document representations are commonly pre-trained using a Next Token Prediction (NTP) strategy, offering alternative solutions to traditional techniques based on Natural Language Processing. + +Multimodal approaches: They leverage both the recognised text and the spatial relations (denoted by $T$ and $S$ ) between document components (e.g., LiLT (Wang et al., 2022) and BROS (Hong et al., 2022)). The main goal is to complement raw content understanding with layout information. 
Expanding upon this multimodal framework, models such as LayoutLMv3 (Huang et al., 2022), DocFormerv2 (Appalaraju et al., 2023), Fast-StrucText (Zhai et al., 2023), and FormNetV2 (Lee et al., 2023a) integrate the visual modality with text and layout information. These approaches are capable of capturing nuances in document content that prior works missed. To leverage multimodal relations, these models are typically pre-trained in a multi-task fashion, exploiting a curated set of token- or word-based pre-training tasks, such as masking or alignment.

Our approach aligns with the multimodal model paradigm, distinguishing itself by eschewing generic pre-training tasks reliant on masking, alignment, or NTP. Instead, it leverages the direct extraction of knowledge from multiple teachers, each trained on downstream datasets, encompassing both entity and token levels of analysis with the proposed intra-grained and cross-grained losses. This enriches the depth of understanding of visual documents, capturing intricate relationships and semantic structures beyond individual tokens.

# 3 Methodology

As previously noted, our paper focuses on interpreting visually rich documents, particularly form documents created and used collaboratively by multiple parties. To accomplish this objective, we introduce and employ two tiers of multimodal information, the fine-grained and coarse-grained levels, which play a crucial role in understanding the structure and content of an input form page. Note that existing pre-trained visual-language models, whether document-specific or generic, possess implicit knowledge of either fine-grained or coarse-grained aspects. Hence, we propose an approach that harnesses knowledge from diverse pre-trained models throughout training. This strategy aims to generate

![](images/6f0ae234419c2b857c71ae40753c6b1c2c5ed057ea1c9e6d8f6792ccbcf6116e.jpg)
Figure 1: Multimodal Multi-task Multi-teacher Visually-Rich Form Document Understanding (3MVRD).
Each section is colour-coded. Green: Section 3.2.1, Blue: Section 3.2.2, Orange: Section 3.3.

more comprehensive and representative multi- and joint-grained document representations, ultimately enhancing the effectiveness of downstream tasks related to document understanding.

# 3.1 Preliminary Definitions

Before going through our proposed approach in detail, we provide formal definitions of the terminology employed throughout this paper. Establishing clear and precise definitions contributes to a comprehensive understanding of the concepts and terms integral to our research.

1) Fine-grained Document Understanding (Huang et al., 2022; Wang et al., 2022; Hong et al., 2022) is a pivotal aspect of document analysis, involving frameworks that offer detailed insights to comprehend document content, particularly when addressing token-level tasks such as span-based information extraction and question answering. Regarding input features, existing pre-trained models at the fine-grained level harness multimodal features, such as positional information and image-patch embeddings, to enhance the fine-grained token representations. The pre-training phase incorporates several learning techniques, including Masked Visual-Language Modelling, Text-Image Matching, and Multi-label Document Classification, strategically designed to acquire inter- or cross-modality correlations and contextual knowledge. However, it is essential to acknowledge the limitations of fine-grained frameworks, as their primary focus lies in learning the logical and layout arrangement of input documents. These frameworks may encounter challenges in handling complex multimodal components.

2) Coarse-grained Document Understanding (Tan and Bansal, 2019; Li et al., 2019) is a vital component of document analysis, with frameworks adept at grasping the logical relations and layout structures within input documents.
Particularly well-suited for tasks like document component entity parsing, coarse-grained models excel in capturing high-level document understanding. Despite the dominant trend of fine-grained document understanding models, some research (Tan and Bansal, 2019; Li et al., 2019) recognises that knowledge from general-domain Visual-Language Pre-trained Models (VLPMs) can be leveraged to form a foundational document understanding. However, coarse-grained document understanding models have significant limitations, including their tendency to overlook detailed information, leading to the omission of significant words or phrases. Preliminary entity-level annotations are often necessary, and the current backbone models are pre-trained on the general domain, highlighting the need for document-domain frameworks specifically pre-trained at the coarse-grained level.

# 3.2 Multimodal Multi-task Multi-teacher Joint-grained Document Understanding

We therefore introduce a joint-grained document understanding framework $\mathcal{F}_{jg}$, designed to harness pre-trained knowledge from both the fine-grained and coarse-grained levels. Our approach integrates insights from multiple pre-trained backbones, facilitating a unified understanding of document content encompassing both detailed nuances and high-level structures. It aims to synergise the strengths of fine-grained and coarse-grained models, enhancing the overall effectiveness of form understanding tasks.

# 3.2.1 Multimodal Multi-task Multi-Teacher

To facilitate this joint-grained framework, we employ multimodal multi-teachers from two tasks: the fine-grained and the coarse-grained task. While the fine-grained teacher $\mathcal{F}_{fg}$ is characterised by checkpoints explicitly fine-tuned for token classification, the coarse-grained teacher $\mathcal{F}_{cg}$ uses checkpoints fine-tuned for document component entity classification.
The details of the fine-grained and coarse-grained teacher models are articulated in Section 4.3, and the ablation study of these teacher models is in Section 5.2. $\mathcal{F}_{fg}$ and $\mathcal{F}_{cg}$ receive encoded inputs at the token and entity level, respectively, to acquire the corresponding last-layer hidden states and logits for the downstream procedures. For example, after feeding the sequence of tokens $\widetilde{\mathbf{t}} = \{\widetilde{t}_1,\widetilde{t}_2,\dots,\widetilde{t}_k\}$ and the sequence of multimodal entity embeddings $\widetilde{\mathbf{E}} = \{\widetilde{E}_1,\widetilde{E}_2,\dots,\widetilde{E}_n\}$ into $\mathcal{F}_{fg_1}$ and $\mathcal{F}_{cg_1}$, respectively, we acquire the hidden states $\mathbf{t}^1 = \{t_1^1,t_2^1,\dots,t_k^1\}$ and $\mathbf{E}^1 = \{E_1^1,E_2^1,\dots,E_n^1\}$, as well as the classification logits $\mathbf{p_t^1} = \{p_{t_1^1},p_{t_2^1},\dots,p_{t_k^1}\}$ and $\mathbf{p_E^1} = \{p_{E_1^1},p_{E_2^1},\dots,p_{E_n^1}\}$. Supposing $\mathbb{T} = \{\mathbf{t}^1,\mathbf{t}^2,\dots\}$ and $\mathbb{E} = \{\mathbf{E}^1,\mathbf{E}^2,\dots\}$ are the hidden states from multiple teachers, the combined representations are fed into the corresponding projection layers $\mathcal{L}_{fg}$ and $\mathcal{L}_{cg}$ to obtain the multi-teacher representations $\widehat{\mathbf{t}} = \{\widehat{t}_1,\widehat{t}_2,\dots,\widehat{t}_k\}$ and $\widehat{\mathbf{E}} = \{\widehat{E}_1,\widehat{E}_2,\dots,\widehat{E}_n\}$ for each grain.

# 3.2.2 Joint-Grained Learning

Our joint-grained learning framework comprises a joint-grained encoder and joint-grained decoders.

The joint-grained encoder $\mathcal{E}$, implemented as a transformer encoder, is designed to learn the contextual correlation between the fine-grained $\widehat{\mathbf{t}}$ and coarse-grained $\widehat{\mathbf{E}}$ representations. This enables the model to capture nuanced details at the token level while simultaneously grasping the high-level structures represented by entities within the document.
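To make the pipeline concrete, the multi-teacher projection of Section 3.2.1 and the joint-grained encoder $\mathcal{E}$ can be sketched in PyTorch as follows. The module name, dimensionalities, head count, and two-layer depth are illustrative assumptions, not details reported in the paper.

```python
import torch
import torch.nn as nn

class JointGrainedFusion(nn.Module):
    """Illustrative sketch: fuse last-layer hidden states from several frozen
    teachers per grain, then run a Transformer encoder over the concatenated
    token + entity sequence so the two grains can attend to each other."""

    def __init__(self, fg_dims, cg_dims, d_model=768):
        super().__init__()
        # Projection layers over the concatenated teacher hidden states.
        self.proj_fg = nn.Linear(sum(fg_dims), d_model)
        self.proj_cg = nn.Linear(sum(cg_dims), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, fg_states, cg_states):
        # fg_states: list of (batch, k, dim) tensors, one per fine-grained teacher.
        # cg_states: list of (batch, n, dim) tensors, one per coarse-grained teacher.
        t_hat = self.proj_fg(torch.cat(fg_states, dim=-1))
        E_hat = self.proj_cg(torch.cat(cg_states, dim=-1))
        joint = self.encoder(torch.cat([t_hat, E_hat], dim=1))
        k = t_hat.size(1)
        # Split the joint sequence back into contextualised token / entity reps.
        return joint[:, :k], joint[:, k:]

# Two hypothetical fine-grained teachers (e.g. LayoutLMv3, LiLT) and two
# coarse-grained teachers (e.g. VisualBERT, LXMERT); k = 16 tokens, n = 4 entities.
model = JointGrainedFusion([768, 768], [768, 768])
t_ctx, E_ctx = model([torch.randn(2, 16, 768)] * 2, [torch.randn(2, 4, 768)] * 2)
```

Concatenation followed by a linear projection is only one plausible reading of the projection layers $\mathcal{L}_{fg}$ and $\mathcal{L}_{cg}$; the paper does not specify the fusion operator.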
The joint-grained decoders $\mathcal{D}$ play a crucial role in processing the augmented joint-grained representations. For the fine-grained decoder $\mathcal{D}_{fg}$, the input comprises the fine-grained token representations $\widehat{\mathbf{t}}$, with the entity representations $\widehat{\mathbf{E}}$ serving as memory. This configuration allows the decoder to focus on refining and generating augmented token representations $\mathbf{t}$ based on the contextual information provided by both token and entity representations. In contrast, for the coarse-grained decoder $\mathcal{D}_{cg}$, the input is the entity representations $\widehat{\mathbf{E}}$, while the memory consists of the token representations $\widehat{\mathbf{t}}$. This enables the coarse-grained decoder to emphasise broader structures and relationships at the entity level, leveraging the memory of fine-grained token information to generate a more comprehensive entity representation $\mathbf{E}$. Overall, the proposed joint-grained architecture facilitates a comprehensive understanding of document content by incorporating both fine-grained and coarse-grained perspectives.

The pre-training of different teacher models involves diverse techniques and features, so a simplistic approach of merely concatenating or pooling hidden states may not fully leverage the individual strengths of each model. Traditional self-/cross-attention-based transformer encoders or decoders might encounter challenges in integrating knowledge from various grains, potentially introducing noise into grain-specific weights. To address this concern, we propose using multiple types of losses to thoroughly exploit the implicit knowledge within the diverse teachers (pre-trained models).
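A minimal sketch of the memory wiring of the decoders $\mathcal{D}_{fg}$ and $\mathcal{D}_{cg}$ described above, assuming standard PyTorch Transformer decoders; the depth, width, and head count are illustrative assumptions rather than reported hyperparameters.

```python
import torch
import torch.nn as nn

# Sketch of the joint-grained decoders: each grain is refined while
# cross-attending to the other grain as memory.
layer = nn.TransformerDecoderLayer(d_model=768, nhead=8, batch_first=True)
D_fg = nn.TransformerDecoder(layer, num_layers=2)   # fine-grained decoder
D_cg = nn.TransformerDecoder(layer, num_layers=2)   # coarse-grained decoder

t_hat = torch.randn(2, 16, 768)   # k = 16 multi-teacher token representations
E_hat = torch.randn(2, 4, 768)    # n = 4 multi-teacher entity representations

# D_fg refines tokens with entities as memory; D_cg does the reverse.
t_aug = D_fg(tgt=t_hat, memory=E_hat)   # augmented token representations
E_aug = D_cg(tgt=E_hat, memory=t_hat)   # augmented entity representations
```

Swapping the `tgt` and `memory` roles is the only difference between the two decoders, mirroring the description above.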
# 3.3 Multimodal Multi-task Multi-Teacher Knowledge Distillation

This section introduces the multi-loss strategy for enhancing intra-grained and cross-grained knowledge exchange, ensuring a more nuanced and effective integration of insights from fine-grained and coarse-grained representations. The accompanying multi-loss ablation study (Section 5.3) aims to optimise the synergies between multiple teacher models, thereby contributing to a more robust and comprehensive joint-grained learning process.

# 3.3.1 Task-oriented Cross Entropy Loss

The Task-oriented Cross Entropy (CE) loss is pivotal in facilitating a task-based knowledge distillation strategy. It is computed by comparing the predictions of the student model with the ground truth for each specific task. Adopting the CE loss provides the student model with direct supervisory signals, thereby aiding and guiding its learning process. Note that we adopt two task-oriented CE losses within our proposed approach, one from the token classification task and the other from the entity classification task. The output hidden states from $\mathcal{D}_{fg}$ and $\mathcal{D}_{cg}$ are fed into classifiers to get the output logits $\mathbf{p_t} = \{p_{t_1},p_{t_2},\dots,p_{t_k}\}$ and $\mathbf{p_E} = \{p_{E_1},p_{E_2},\dots,p_{E_n}\}$.
Supposing the label sets for the token-level and entity-level tasks are $\mathbf{Y_t} = \{y_{t_1},y_{t_2},\dots,y_{t_k}\}$ and $\mathbf{Y_E} = \{y_{E_1},y_{E_2},\dots,y_{E_n}\}$, the fine-grained and coarse-grained Task-oriented Cross Entropy losses $l_{t}$ and $l_{E}$ are calculated as:

$$
l_t = \mathrm{CrossEntropy}(\mathbf{p_t}, \mathbf{Y_t}) \tag{1}
$$

$$
l_E = \mathrm{CrossEntropy}(\mathbf{p_E}, \mathbf{Y_E}) \tag{2}
$$

# 3.3.2 Intra-Grained Loss Functions

Since various pre-trained models provide different specific knowledge for understanding a form comprehensively, effectively distilling valuable information from the selected fine-tuned checkpoints may generate more representative token representations. To address this, we introduce two target-oriented loss functions tailored to distil knowledge from teachers at different levels. These aim to project the label-based distributions from the fine-grained teacher logits $\mathbf{p}_{\mathbb{T}} = \{\mathbf{p}_{\mathbf{t}^1},\mathbf{p}_{\mathbf{t}^2},\ldots \}$ or coarse-grained teacher logits $\mathbf{p}_{\mathbb{E}} = \{\mathbf{p}_{\mathbf{E}^1},\mathbf{p}_{\mathbf{E}^2},\ldots \}$ onto the corresponding student logits $\mathbf{p_t}$ and $\mathbf{p_E}$, enabling efficient learning of the label distributions.

Similarity Loss: This is introduced as an effective method to distil knowledge into the student output logits $\mathbf{p_t}$ and $\mathbf{p_E}$ from the selected fine-grained or coarse-grained teacher checkpoints in $\mathbf{p}_{\mathbb{T}}$ and $\mathbf{p}_{\mathbb{E}}$. It aims to mitigate the logit differences between the student classifier and the chosen teachers using cosine similarity (CosSim), promoting a more aligned understanding of the label-based distribution.
Supposing we have $n_t$ and $n_e$ teachers for the fine-grained and coarse-grained tasks, respectively, the fine-grained similarity loss $l_{sim_t}$ and coarse-grained similarity loss $l_{sim_e}$ can be calculated as:

$$
l_{sim_t} = -\sum_{i=1}^{n_t}\sum_{j=1}^{k} \mathrm{CosSim}\left(p_{t_j^i}, p_{t_j}\right) \tag{3}
$$

$$
l_{sim_e} = -\sum_{i=1}^{n_e}\sum_{j=1}^{n} \mathrm{CosSim}\left(p_{E_j^i}, p_{E_j}\right) \tag{4}
$$

Distilling Loss: Inspired by (Phuong and Lampert, 2019), we adopt an extreme logit learning model for the distilling loss. This loss implements knowledge distillation using the Mean Squared Error (MSE) between the student's logits $\mathbf{p_t}$ and $\mathbf{p_E}$ and each teacher's logits from $\mathbf{p}_{\mathbb{T}}$ and $\mathbf{p}_{\mathbb{E}}$. This method is employed to further refine the knowledge transfer process, promoting a more accurate alignment between the student and teacher models.

$$
l_{distil_t} = \frac{1}{k}\sum_{j=1}^{k} \mathrm{MSE}\left(p_{t_j^i}, p_{t_j}\right) \tag{5}
$$

$$
l_{distil_e} = \frac{1}{n}\sum_{j=1}^{n} \mathrm{MSE}\left(p_{E_j^i}, p_{E_j}\right) \tag{6}
$$

The introduction of these intra-grained loss functions, including the similarity loss and the distilling loss, contributes to mitigating distribution gaps and fostering a synchronised understanding of the form across various levels of granularity.

# 3.3.3 Cross-Grained Loss Functions

In addition, we incorporate cross-grained loss functions. While fine-grained and coarse-grained information inherently align, the joint-grained framework employs self-attention and cross-attention to approximate the correlation between token and entity representations.
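For illustration, the intra-grained similarity and distilling objectives of Section 3.3.2 (Eqs. 3-6) can be sketched as below; the helper function, its shapes, and its reductions are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def intra_grained_losses(teacher_logits, student_logits):
    """Sketch of the intra-grained similarity (cosine) and distilling (MSE)
    losses. teacher_logits: list of (seq_len, num_labels) tensors, one per
    teacher; student_logits: (seq_len, num_labels) student classifier output."""
    l_sim = torch.tensor(0.0)
    l_distil = torch.tensor(0.0)
    for p_teacher in teacher_logits:
        # Eqs. 3/4: negated cosine similarity, summed over positions and teachers.
        l_sim = l_sim - F.cosine_similarity(p_teacher, student_logits, dim=-1).sum()
        # Eqs. 5/6: mean squared error between teacher and student logits.
        l_distil = l_distil + F.mse_loss(p_teacher, student_logits)
    return l_sim, l_distil

# Two hypothetical fine-grained teachers, k = 16 tokens, 4 labels.
teachers = [torch.randn(16, 4), torch.randn(16, 4)]
student = torch.randn(16, 4)
l_sim, l_distil = intra_grained_losses(teachers, student)
```

When the student matches a teacher exactly, the distilling term vanishes and the similarity term reaches its minimum of minus the sequence length, which is a quick sanity check for an implementation along these lines.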
$\mathbb{T}$ and $\mathbb{E}$ are the sets of teacher hidden states, where each $\mathbf{t}^{i}\in \mathbb{T}$ and $\mathbf{E}^{i}\in \mathbb{E}$ are represented as $\mathbf{t}^{i} = \{t_1^i,t_2^i,\dots,t_k^i\}$ and $\mathbf{E}^{i} = \{E_{1}^{i},E_{2}^{i},\dots,E_{n}^{i}\}$, and $\mathbf{t}$ and $\mathbf{E}$ are the hidden states from the student decoders.

Cross-grained Triplet Loss: Inherent in each grained feature are parent-child relations between tokens and their aligned semantic form entities. Introducing a triplet loss aids the framework in automatically selecting more representative feature representations by measuring the feature distance from one grain to the aligned representation of the other grain. This effectively enhances joint-grained knowledge transfer, optimising the overall understanding of the form. To acquire the loss $l_{triplet_{fg}}$, which adaptively selects fine-grained teachers based on the coarse-grained distribution, we define the anchor as each entity $E_{j}\in \mathbf{E}$ that has paired token representations $t_i^1\in \mathbf{t}^1$ and $t_i^2\in \mathbf{t}^2$ (if the number of teachers is greater than two, we randomly select two of them). The L2-norm distance is used to measure the distance between the fine-grained teacher representations $(t_i^1,t_i^2)$ and the anchor $E_{j}$; the more similar representation is treated as the positive sample $(t_i^{pos})$ and the other as the negative $(t_i^{neg})$. For the coarse-grained triplet loss $l_{triplet_{cg}}$, the same measurement is adopted for coarse-grained teacher positive $(E_j^{pos})$ and negative $(E_j^{neg})$ selection for an anchor $t_i$.
Supposing $E_j$ is the entity aligned with the $i$-th token, $l_{triplet_{fg}}$ and $l_{triplet_{cg}}$ are defined as:

$$
l_{triplet_{fg}} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{Triplet}\left(E_{j}, t_{i}^{pos}, t_{i}^{neg}\right) \tag{7}
$$

$$
l_{triplet_{cg}} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{Triplet}\left(t_{i}, E_{j}^{pos}, E_{j}^{neg}\right) \tag{8}
$$

As one entity is typically paired with more than one token, when calculating $l_{triplet_{cg}}$, we consider all $k$ entity-token pairs.

Cross-grained Alignment Loss: In addition to the triplet loss, which is designed to filter out less representative teachers, we introduce another auxiliary task. This task focuses on predicting the relations between tokens and entities, providing an additional layer of refinement to the joint-grained framework. The cross-grained alignment loss further contributes to the comprehensive learning and alignment of token and entity representations, reinforcing the joint-grained understanding of the form document. For an input form document page containing $k$ tokens and $n$ entities, we have a target tensor $\mathbf{Y}_{align} \in \mathbb{R}^{k\times n}$. We use the alignment logits $\mathbf{p}_{align} = \mathbf{t} \cdot \mathbf{E}^{\top}$ to represent the predicted token-entity alignments. The cross-grained alignment loss $l_{align}$ can be calculated as:

$$
l_{align} = \mathrm{CrossEntropy}\left(\mathbf{p}_{align}, \mathbf{Y}_{align}\right) \tag{9}
$$

# 4 Evaluation Setup

# 4.1 Datasets

FUNSD (Jaume et al., 2019) comprises 199 noisy scanned documents from various domains, including marketing, advertising, and science reports related to US tobacco firms.
It is split into train and test sets (149/50 documents), and each document is presented in either printed or handwritten format at low resolution. Our evaluation focuses on the semantic entity labeling task, which identifies four predefined labels (i.e., question, answer, header, and other) based on the input text content.

FormNLU (Ding et al., 2023a) consists of 867 financial form documents collected from Australian Stock Exchange (ASX) filings. It includes three form types: digital $(D)$, printed $(P)$, and handwritten $(H)$. It is split into five sets, train-$D$ (535), val-$D$ (76), test-$D$ (146), test-$P$ (50), and test-$H$ (50 documents), and supports two tasks: layout analysis and key information extraction. Our evaluation focuses on layout analysis, which identifies seven labels (i.e., title, section, form key, form value, table key, table value, and others), detecting each document entity, especially in the complex multimodal $P$ and $H$ form documents.

# 4.2 Baselines and Metrics

For the token-level information extraction baselines, we use three Document Understanding (DU) models: LayoutLMv3 (Huang et al., 2022), LiLT (Wang et al., 2022), and BROS (Hong et al., 2022). LayoutLMv3 employs word-image patch alignment, utilising a document image along with its corresponding text and layout position information. In contrast, LiLT and BROS focus only on text and layout information without incorporating images. LiLT uses a bi-directional attention mechanism across token embeddings and layout embeddings, whereas BROS uses relative spatial encoding between text blocks. For the entity-level information extraction baselines, we use two vision-language (VL) models: LXMERT (Tan and Bansal, 2019) and VisualBERT (Li et al., 2019). Compared to the DU models, these VL models use both image and text input without layout information.
LXMERT focuses on cross-modality learning between word-level sentence embeddings and object-level image embeddings, while VisualBERT simply inputs image regions and text, relying on implicit alignments within the network. For evaluation metrics, inspired by (Jaume et al., 2019) and (Ding et al., 2023a), we primarily use the F1-score to report both overall performance and detailed breakdowns, in line with the other baselines.

# 4.3 Implementation Details

In the token-level experiments, we fine-tuned LayoutLMv3-base using its text tokeniser and image feature extractor, and we fine-tuned LiLT combined with the RoBERTa base. In the entity-level experiments, we employ pre-trained BERT (768-d) to encode textual content, while ResNet-101 (2048-d) is used to extract region-of-interest (RoI) features that capture the visual aspect. These extracted features serve as input for fine-tuning LXMERT and VisualBERT. All fine-tuned models serve as teacher models. Our hyperparameter testing involves a maximum of 50 epochs with learning rates of 1e-5 and 2e-5. All experiments are conducted on a Tesla V100-SXM2 GPU with 16 GB of graphics memory and 51 GB of RAM, using CUDA 11.2.

# 5 Results

# 5.1 Overall Performance

Extensive experiments are conducted to highlight the effectiveness of the proposed Multimodal Multi-task Multi-Teacher framework, including the joint-grained learning, multi-teacher, and multi-loss architecture. Table 2 shows representative
| Model | Config & Loss | FUNSD | FormNLU (P) | FormNLU (H) |
| --- | --- | --- | --- | --- |
| BROS | Single Teacher | 82.44 | 92.45 | 93.68 |
| LiLT | Single Teacher | 87.54 | 96.50 | 91.35 |
| LayoutLMv3 | Single Teacher | 90.61 | 95.99 | 97.39 |
| JG-$\mathcal{E}$ | Joint Cross Entropy | 90.45 | 94.91 | 96.55 |
| JG-$\mathcal{D}$ | Joint Cross Entropy | 90.48 | 95.68 | 97.62 |
| JG-$\mathcal{E}$&$\mathcal{D}$ | Joint Cross Entropy | 90.57 | 95.93 | 97.62 |
| MT-JG-$\mathcal{E}$&$\mathcal{D}$ (Ours) | Joint Cross Entropy | 90.53 | 97.21 | 97.75 |
| | + Sim | 91.05 | 98.25 | 98.09 |
| | + Distil | 90.90 | 98.12 | 97.72 |
| | + Triplet | 90.28 | 97.58 | 97.28 |
| | + Align | 90.55 | 97.24 | 97.42 |
| | + Sim + Distil + Triplet + Align | 90.92 | 98.69 | 98.39 |
model configurations on various adopted modules. LayoutLMv3 performs notably better than BROS and LiLT, except on the FormNLU printed test set, outperforming the second-best baseline by around $3\%$ on FUNSD and $4\%$ on the FormNLU handwritten set. This superiority can be attributed to LayoutLMv3's utilisation of patched visual cues together with textual and layout features, resulting in more comprehensive multimodal representations, so we consider LayoutLMv3 a robust choice for the fine-grained baseline in further testing. To find the most suitable Joint-Grained learning (JG) configuration, we compare the results of the single-teacher joint-grained frameworks: Encoder only $(\mathcal{E})$, Decoder only $(\mathcal{D})$, and Encoder with Decoder $(\mathcal{E}\&\mathcal{D})$. Table 2 illustrates that $\mathcal{E}\&\mathcal{D}$ achieves the highest performance among the three. Upon integrating multiple teachers from each grain (MT-JG-$\mathcal{E}\&\mathcal{D}$), competitive performance is observed against the baselines on both the FormNLU printed set $(P)$ (from $96.5\%$ with LiLT to $97.21\%$) and the handwritten set $(H)$ (from $97.39\%$ with LayoutLMv3 to $97.75\%$). Still, additional techniques may be necessary to better distil the cross-grained multi-teacher information.

To thoroughly distil joint-grained knowledge from multiple teachers, we introduced multiple loss functions encompassing multiple auxiliary tasks. These functions capture teacher knowledge from intra-grained and cross-grained perspectives, generating representative token embeddings. Typically, using either an intra-grained or a cross-grained loss

Table 2: Overall performance with configurations on FormNLU printed $\mathbf{P}$ and handwritten $\mathbf{H}$. The full forms of the acronyms can be found in Section 5.1. The best is in bold. The best teacher model (baseline) is underlined.
| FG Teacher | CG Teacher | FUNSD | FormNLU (P) | FormNLU (H) |
| --- | --- | --- | --- | --- |
| LLmv3 | VBERT | 90.19 | 94.72 | 96.99 |
| LLmv3 | LXMERT | 90.57 | 95.93 | 97.62 |
| LLmv3 | Transformer | 90.22 | 93.65 | 95.94 |
| LiLT | VBERT | 87.66 | 97.65 | 90.53 |
| LiLT | LXMERT | 87.34 | 96.76 | 91.18 |
| LiLT | Transformer | 87.91 | 97.20 | 90.58 |
| LLmv3 | VBERT & LXMERT | 90.42 | 95.05 | 97.25 |
| LLmv3 & LiLT | LXMERT | 90.39 | 96.73 | 97.42 |
| LLmv3 & LiLT | VBERT & LXMERT | 90.53 | 97.21 | 97.75 |
Table 3: Comparison of performance across teacher combinations. FG: Fine-Grained, CG: Coarse-Grained, LLmv3: LayoutLMv3, VBERT: VisualBERT. The best is in bold. The second best is underlined. This ablation study is based only on the Joint Cross Entropy loss.

individually leads to better performance than the best baselines across various test sets. The intra-grained Similarity (Sim) and Distilling (Distil) losses consistently achieve higher F1 scores on nearly all test sets. Moreover, the cross-grained Triplet and Alignment (Align) losses outperform the best baseline on FormNLU $(P)$ or $(H)$. This highlights the effectiveness of the proposed multi-task learning approach in enhancing token representations by integrating knowledge from joint-grained multi-teachers. Intra-grained loss functions exhibit higher robustness on both datasets, whereas cross-grained loss functions only perform well on FormNLU. This difference may stem from FUNSD being sourced from multiple origins, whereas FormNLU is a single-source dataset. Cross-grained loss functions may excel on single-source documents by capturing more prevalent knowledge but might introduce noise when applied to multiple sources. Additionally, the model demonstrates its most competitive performance when integrating all proposed loss functions (+Sim+Distil+Triplet+Align). This highlights how the proposed intra-grained and cross-grained loss functions enhance multi-teacher knowledge distillation in form understanding tasks.

# 5.2 Effect of Multi-Teachers

We analysed various teacher combinations to ensure they provide sufficient knowledge for improving joint-grained representations, as depicted in Table 3. For the fine-grained teachers, since BROS underperforms compared to the others, we only include the performance of its counterparts. The LayoutLMv3-based joint framework performs better, outperforming the LiLT-based one by approximately $3\%$ on FUNSD and over $5\%$ on FormNLU $(H)$.
+ +![](images/6dbed72020d60624544ea3a036f809aa935093ea2a5e1325fd941c0609052cc2.jpg) +Figure 2: Example output showing (a) Ground Truth (b) JG- $\mathcal{E}\& \mathcal{D}$ (c) LayoutLMv3, and (d) Ours on a FUNSD page. The color code for layout component labels is as follows; Question, Answer, Header, Other. Our model, employing the best loss combination (cross-entropy + similarity) on FUNSD, accurately classified all layout components. + +![](images/1ae119518cf2834f66cb81845790f68f14e99c4f7c16850b1e921a76c87ade2e.jpg) + +![](images/b59756c9e4421015de0b2171da00676ea4e1ded46b8c62b3c44d625f2684df97.jpg) + +![](images/d300f7722a7c221731a62a113f2da3dd396606a8061e228bca8d2816002c649d.jpg) + +
| Similarity | Distilling | Triplet | Alignment | FUNSD | FormNLU (P) | FormNLU (H) |
| --- | --- | --- | --- | --- | --- | --- |
| O | X | X | X | 91.05 | 98.25 | 98.09 |
| X | O | X | X | 90.90 | 98.12 | 97.72 |
| X | X | O | X | 90.28 | 97.58 | 97.28 |
| X | X | X | O | 90.55 | 97.24 | 97.42 |
| O | O | X | X | 90.63 | 98.53 | 97.22 |
| O | X | O | X | 90.51 | 97.71 | 97.79 |
| O | X | X | O | 90.82 | 97.80 | 98.05 |
| X | O | O | X | 90.82 | 98.22 | 98.35 |
| X | O | X | O | 90.83 | 98.63 | 97.45 |
| O | O | O | X | 90.79 | 98.56 | 97.72 |
| O | O | X | O | 90.66 | 98.72 | 97.85 |
| O | O | O | O | 90.92 | 98.69 | 98.39 |
Table 4: Performance comparison across loss functions. The best is in bold. The second best is underlined.

This improvement can be attributed to the contextual learning facilitated by visual cues. Notably, LiLT achieves the highest performance on FormNLU $(P)$, likely due to its well-designed position-aware pre-training tasks. For the coarse-grained teachers, pre-trained backbones demonstrate better robustness than randomly initialised Transformers, highlighting the benefits of general-domain pre-trained knowledge in form understanding tasks. Table 3 illustrates that multiple teachers cannot always ensure the best performance; however, the robustness of the proposed model is enhanced by capturing more implicit knowledge from cross-grained teachers.

# 5.3 Effect of Loss Functions

To comprehensively investigate the impact of different loss functions and their combinations, we present the performance of various combinations in Table 4. While employing an intra-grained loss individually often proves more effective than using a cross-grained loss alone, combining the two can enhance knowledge distillation from the joint-grained multi-teachers. For instance, concurrently employing the distilling (Distil) and Triplet losses improved accuracy from $97.72\%$ to $98.35\%$. Notably, stacking all proposed loss functions resulted in the best or second-best performance across all test sets, showcasing their effectiveness in distilling knowledge from multi-teacher to student models and generating more representative representations. Even though the cross-grained Triplet and Alignment losses were ineffective individually, when combined with an intra-grained loss they significantly improved knowledge distillation effectiveness.

# 5.4 Qualitative Analysis: Case Studies

In Figure 2, we visualise sample results on FUNSD for the top three models: our best model with the best loss configuration, the best baseline LayoutLMv3, and the second-best baseline JG-$\mathcal{E}\&\mathcal{D}$.
We can see that both LayoutLMv3 and JG-$\mathcal{E}\&\mathcal{D}$ wrongly recognise an Other component (marked by a white cross in a red circle), whereas ours accurately recognises all document tokens and components.

# 6 Conclusion

We introduced a Multimodal Multi-task Multi-Teacher framework for visually-rich form document understanding. Our model incorporates multi-teacher, multi-task, and multi-loss learning, and the results show its robustness in capturing implicit knowledge from multiple teachers for understanding diverse form document natures, such as scanned, printed, and handwritten. We hope our work provides valuable insights into leveraging multi-teacher and multi-loss strategies for document understanding research.

# Limitations

Benchmark Scope: Despite the paramount importance of document understanding across various domains such as finance, medicine, and resources, our study is constrained by the limited availability of visually-rich form document understanding datasets, particularly those of high quality. In this research, we solely rely on publicly available English-based form document understanding datasets. The scope of the benchmark datasets, therefore, may not comprehensively represent the diversity and complexity present in form documents across different languages and industries.

Availability of Document Understanding Teachers: The current limitation stems from the reliance on general document understanding teacher models due to the absence of large pre-trained form-specific document models. The availability of high-quality teachers specifically tailored for form document understanding is crucial. Future advancements in the field would benefit from the development of dedicated pre-trained models for form document understanding, providing more accurate knowledge transfer during training.

# References

Srikar Appalaraju, Peng Tang, Qi Dong, Nishant Sankaran, Yichu Zhou, and R Manmatha. 2023. Docformerv2: Local features for document understanding.
arXiv preprint arXiv:2306.01733. +Yihao Ding, Zhe Huang, Runlin Wang, YanHang Zhang, Xianru Chen, Yuzhong Ma, Hyunsuk Chung, and Soyeon Caren Han. 2022. V-doc: Visual questions answers with documents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21492-21498. +Yihao Ding, Siqu Long, Jiabin Huang, Kaixuan Ren, Xingxiang Luo, Hyunsuk Chung, and Soyeon Caren Han. 2023a. Form-nlu: Dataset for the form natural language understanding. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2807-2816. +Yihao Ding, Siwen Luo, Hyunsuk Chung, and Soyeon Caren Han. 2023b. PDF-vqa: A new dataset for real-world vqa on pdf documents. In Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track, pages 585-601. Springer Nature Switzerland. +Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text + +and layout for better key information extraction from documents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10767-10775. +Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4083-4091. +Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), volume 2, pages 1-6. IEEE. +Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Ocr-free document understanding transformer. 
In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXVIII, pages 498-517. Springer. +Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Timothy Dozat, Vincent Perot, Guolong Su, Xiang Zhang, Kihyuk Sohn, Nikolay Glushnev, Renshen Wang, Joshua Ainslie, Shangbang Long, Siyang Qin, Yasuhisa Fujii, Nan Hua, and Tomas Pfister. 2023a. FormNetV2: Multimodal graph contrastive learning for form document information extraction. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9011-9026. Association for Computational Linguistics. +Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. 2023b. Pix2struct: Screenshot parsing as pretraining for visual language understanding. In International Conference on Machine Learning, pages 18893-18912. PMLR. +Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. +Siwen Luo, Yihao Ding, Siqu Long, Josiah Poon, and Soyeon Caren Han. 2022. Doc-GCN: Heterogeneous graph convolutional networks for document layout analysis. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2906-2916. +Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: a consolidated receipt dataset for post-ocr parsing. In Workshop on Document Intelligence at NeurIPS 2019. + +Mary Phuong and Christoph Lampert. 2019. Towards understanding knowledge distillation. In International conference on machine learning, pages 5142-5151. PMLR. +Yash Srivastava, Vaishnav Murali, Shiv Ram Dubey, and Snehasis Mukherjee. 2020. Visual question answering using deep learning: A survey and performance analysis.
In Computer Vision and Image Processing - 5th International Conference, CVIP 2020, volume 1377 of Communications in Computer and Information Science, pages 75-86. Springer. +Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111. +Jiapeng Wang, Lianwen Jin, and Kai Ding. 2022. Lilt: A simple yet effective language-independent layout transformer for structured document understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7747-7757. +Jiapeng Wang, Chongyu Liu, Lianwen Jin, Guozhi Tang, Jiaxin Zhang, Shuaiqiao Zhang, Qianying Wang, Yaqiang Wu, and Mingxiang Cai. 2021. Towards robust visual information extraction in real world: New dataset and novel solution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2738-2745. +Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1192-1200. +Mingliang Zhai, Yulin Li, Xiameng Qin, Chen Yi, Qunyi Xie, Chengquan Zhang, Kun Yao, Yuwei Wu, and Yunde Jia. 2023. Fast-StrucTexT: An efficient hourglass transformer with modality-guided dynamic token merge for document understanding. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, pages 5269-5277. +Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1015-1022. IEEE.
+ + # A Statistics of tokens and entities + The following Tables 5 and 6 show the number of tokens (length) and the number of document entities. While FUNSD has 4 entity types (Question, Answer, Header, Other), FormNLU has 7 (Title, Section, Form Key, Form Value, Table Key, Table Value, Other). For FormNLU, we use two test sets: Printed (P) and Handwritten (H). +
| FUNSD (Testing) | Question | Answer | Header | Other | Total |
| --- | --- | --- | --- | --- | --- |
| Entity | 1077 | 821 | 122 | 312 | 2332 |
| Token | 2654 | 3294 | 374 | 2385 | 8707 |
+ +Table 5: FUNSD Testing Dataset Distribution by Label. + +
| FormNLU (Testing) | Type | Title | Section | Form Key | Form Value | Table Key | Table Value | Others | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P | Entity | 98 | 100 | 346 | 332 | 250 | 249 | 152 | 1527 |
| H | Entity | 100 | 100 | 348 | 315 | 249 | 226 | 149 | 1487 |
| P | Token | 700 | 1258 | 1934 | 1557 | 993 | 389 | 3321 | 10152 |
| H | Token | 742 | 1031 | 1805 | 866 | 779 | 366 | 2918 | 8507 |
+ +Table 6: FormNLU Testing Dataset Distribution by Label, where $\mathbf{P}$ and $\mathbf{H}$ are printed and handwritten sets. + # B Breakdown Result Analysis +
| Model | Config | Overall | Header | Question | Answer |
| --- | --- | --- | --- | --- | --- |
| LiLT | Teacher | 87.54 | 55.61 | 90.20 | 88.34 |
| LayoutLMv3 | Teacher | 90.61 | 66.09 | 91.60 | 92.78 |
| JG-ε | Joint CE | 90.45 | 64.94 | 91.70 | 92.67 |
| JG-D | Joint CE | 90.48 | 64.07 | 91.58 | 92.73 |
| JG-ε&D | Joint CE | 90.57 | 64.66 | 91.48 | 92.73 |
| MT-JG-ε&D | Joint CE | 90.53 | 61.24 | 92.40 | 91.75 |
| MT-JG-ε&D | Sim | 91.05 | 64.81 | 92.58 | 92.46 |
| MT-JG-ε&D | Distil | 90.90 | 66.96 | 92.61 | 91.97 |
| MT-JG-ε&D | Triplet | 90.28 | 62.44 | 92.00 | 91.44 |
| MT-JG-ε&D | Align | 90.55 | 63.81 | 91.82 | 92.29 |
| MT-JG-ε&D | +Sim+Distil+Triplet+Align | 90.92 | 64.22 | 92.54 | 92.31 |
+ +Table 7: Breakdown Results of FUNSD dataset (Header, Question, and Answer are the breakdown categories). + As shown in Table 7, for the FUNSD dataset we find that all Joint-Grained (JG-) frameworks perform well on recognising Question and Answer but worse on Header classification. This might result from the limited number of Headers in FUNSD, leading to inadequate learning of the fine-grained and coarse-grained Header information. Multi-task-oriented intra-grained and coarse-grained loss functions can increase the performance of Question recognition by boosting the knowledge distilled from joint-grained multi-teachers. In particular, intra-grained knowledge distillation methods can achieve around $1\%$ higher than LayoutLMv3. The FUNSD dataset cannot illustrate the benefits of cross-grained loss functions well. + For FormNLU printed and handwritten sets, the joint-grained framework and proposed loss functions can effectively improve Section (Sec) and Title recognition. As the Title, Section and Form_key $(F_K)$ are normally located at similar positions for single-source forms, this may demonstrate that both the joint-grained framework and the multi-task loss functions can distil knowledge. Additionally, baseline models are not good at recognising table keys and values, especially on handwritten sets. As we use LayoutLMv3 in the joint-grained framework, the performance of recognising table-related tokens is not good for the joint-learning framework. After integrating multiple teachers, the performance increased from $91.97\%$ to $97.35\%$ on the printed set. The proposed multi-task loss functions further raise performance to $97.96\%$ . Significant improvements can also be observed on both test sets for all table-related targets.
This illustrates that the joint-grained multi-teacher framework can effectively tackle the limitations of a single teacher and generate more comprehensive token representations, and that the intra-grained and cross-grained losses boost effective knowledge exchange, improving the generalisation and robustness of the entire framework. + # C Additional Qualitative Analysis + In our qualitative evaluation, we took a closer look at the results by visualising the output of the top two models—our best-performing model with the optimal configuration and the baseline LayoutLMv3—on the FormNLU handwritten set, as presented in Figure 3. This examination revealed a notable discrepancy between the models. Specifically, LayoutLMv3 exhibited an erroneous identification of the Table Key as a Form Key. In contrast, our model demonstrated a higher level of precision by accurately recognising and distinguishing all components within this intricate and noise-laden handwritten document. + This illustrative case serves as a compelling example highlighting the challenges associated with relying solely on knowledge from a single document understanding teacher. The complexity of distinguishing various document structures, such as the nuanced difference between a form key and a table key, becomes evident. The inadequacy of a singular teacher's knowledge in capturing such intri- + ![](images/c36c132c3bcc64d3fb2ab90d723e96e55acaccee33b6fbb6fb5d806ba945071b.jpg) + (a) Ground Truth + ![](images/5e38a89582eb7ab71f4c114045248097a12d9ed25c59e9e40faf5a48efc609c6.jpg) + (b) LayoutLMv3 + ![](images/106c5d019b11c2215973b305e034706ae52dd507abbc213035844acb87bc7e9e.jpg) + (c) Ours + Figure 3: Example output showing (a) Ground Truth, (b) LayoutLMv3, and (c) Ours on a FormNLU handwritten test set. The color code for layout component labels is as follows: Title, Section, Form Key, Form Value, Table Key, Table Value, Other.
Our model, with the best loss combination (+Sim+Distil+Triplet+Align) on FormNLU H, accurately classified all layout components. +
| Model | Config | P Overall | P Sec | P Title | P F_K | P F_V | P T_K | P T_V | H Overall | H Sec | H Title | H F_K | H F_V | H T_K | H T_V |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LiLT | Teacher | 96.50 | 98.32 | 96.97 | 98.84 | 96.62 | 96.57 | 93.60 | 91.35 | 95.39 | 99.50 | 94.81 | 90.67 | 84.19 | 89.81 |
| LayoutLMv3 | Teacher | 95.99 | 98.45 | 97.96 | 97.97 | 96.73 | 92.37 | 92.98 | 97.39 | 99.33 | 99.01 | 99.85 | 98.24 | 93.95 | 95.95 |
| JG-ε | Joint CE | 94.91 | 99.66 | 98.99 | 98.11 | 95.73 | 90.14 | 90.31 | 96.55 | 99.33 | 99.01 | 99.42 | 98.56 | 88.37 | 94.67 |
| JG-D | Joint CE | 95.68 | 99.66 | 100.00 | 98.55 | 96.45 | 91.94 | 91.10 | 97.62 | 99.33 | 99.01 | 99.85 | 98.56 | 93.02 | 95.98 |
| JG-ε&D | Joint CE | 95.93 | 99.66 | 97.96 | 97.82 | 97.18 | 91.97 | 92.15 | 97.62 | 99.33 | 99.01 | 99.85 | 98.40 | 93.74 | 95.75 |
| MT-JG-ε&D | Joint CE | 97.21 | 99.32 | 98.48 | 99.57 | 96.58 | 97.35 | 95.06 | 97.75 | 97.67 | 99.50 | 99.13 | 97.93 | 95.55 | 96.41 |
| MT-JG-ε&D | Sim | 98.25 | 99.32 | 99.49 | 99.28 | 97.75 | 97.96 | 97.12 | 98.09 | 99.00 | 100.00 | 99.27 | 98.25 | 96.45 | 96.61 |
| MT-JG-ε&D | Distil | 98.12 | 99.32 | 100.00 | 99.71 | 97.90 | 97.55 | 96.30 | 97.72 | 97.35 | 100.00 | 99.13 | 97.62 | 95.75 | 97.07 |
| MT-JG-ε&D | Triplet | 97.58 | 99.32 | 99.49 | 99.28 | 97.18 | 97.55 | 95.87 | 97.28 | 98.00 | 100.00 | 98.83 | 97.31 | 93.90 | 96.83 |
| MT-JG-ε&D | Align | 97.24 | 99.32 | 98.48 | 99.71 | 96.57 | 96.13 | 95.47 | 97.42 | 99.33 | 99.50 | 99.13 | 96.85 | 92.86 | 97.52 |
| MT-JG-ε&D | +Sim+Distil+Triplet+Align | 98.69 | 99.32 | 100.00 | 99.71 | 99.25 | 97.35 | 97.12 | 98.39 | 98.33 | 100.00 | 99.56 | 98.09 | 96.94 | 97.75 |
+ +Table 8: Overall and Breakdown Analysis of FormNLU Printed (P) Set and Handwritten (H) Set. The categories of FormNLU dataset Task A include Section (Sec), Title, Form Key (F_K), Form Value (F_V), Table Key (T_K), Table Value (T_V). + cacies emphasises the importance of our proposed Multi-modal Multi-task Multi-Teacher framework, which leverages insights from multiple teachers to enhance the robustness and accuracy of form document understanding. + # D Additional Implementation Details + Table 9 outlines the number of total and trainable parameters across various model configurations. It is evident that the choice of teacher models primarily determines the total number of parameters: as the number of teachers increases, the total parameter count grows accordingly. Furthermore, the architecture of the student model significantly influences the number of trainable parameters. For instance, encoder-decoder-based student models exhibit a higher count of trainable parameters compared to architectures employing only an
| Fine-grained | Coarse-grained | Configure | # Para | # Trainable |
| --- | --- | --- | --- | --- |
| LiLT | N/A | Teacher | 130,169,799 | 130,169,799 |
| LayoutLMv3 | N/A | Teacher | 125,332,359 | 125,332,359 |
|  | LXMERT | JG-Encoder | 393,227,514 | 19,586,415 |
|  |  | JG-Decoder | 423,952,890 | 50,311,791 |
|  |  | JG-ε&D | 440,494,842 | 66,853,743 |
|  | VisualBERT&LXMERT |  | 557,260,798 | 70,394,991 |
| LayoutLMv3&LiLT | LXMERT |  | 574,205,889 | 68,034,159 |
|  | VisualBERT&LXMERT |  | 688,611,013 | 71,575,407 |
+ +Table 9: Model configurations and parameters + +encoder or decoder. This discrepancy implies that training encoder-decoder models demands more computational resources. Despite the variation in trainable parameters among different student model architectures, it is noteworthy that the overall number remains substantially smaller than that of single-teacher fine-tuning processes. This observation underscores the efficiency of student model training in comparison to fine-tuning pre-trained models. \ No newline at end of file diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/images.zip b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ec3f57e234250173d6b95821651170d51f4c01e8 --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e912ebbbc64df9c5a1c276355988b318341c840baa21f182280804191e70d3e2 +size 756437 diff --git a/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/layout.json b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4409a8cd891a3802565bdbade96ae7be75c1c812 --- /dev/null +++ b/3mvrdmultimodalmultitaskmultiteachervisuallyrichformdocumentunderstanding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b1014136ec0642b0261a8d29cc678777f440831549177359d127b570fa38544 +size 411273 diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_content_list.json b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..5eec77b5a563a7d94ed40314c77900cfeef20724 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70c38428947b145566e5a1c32f2c9d6f4f7fe080ca26f93bd0438802b0ca0789 +size 97507 diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_model.json b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c7d8aca8c4c56ed03812ef3e99d595629e04d628 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83e128eb69a34c133f4ed08e3d1ad996ae044d33b93a9cd5b3103eca9a115e35 +size 116663 diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_origin.pdf b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bdaad2768d762cc3063ff3761ab1e21d58cf1807 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/0c133773-93fc-4e0a-9d33-1ba4e7e71340_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03e6611393974ec6beb5042c34ddb08f669643e59de08db133c70fe5c812a80b +size 316806 diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/full.md b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..0b01082b6023fd11081295259a5bd27c7074ad21 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/full.md @@ -0,0 +1,395 @@ +# A + B: A General Generator-Reader Framework for Optimizing LLMs to Unleash Synergy Potential + Wei Tang $^{1,2}$ , Yixin Cao $^{3}$ , Jiahao Ying $^{4}$ , Bo Wang $^{5}$ , Yuyue Zhao $^{1,2}$ , Yong Liao $^{1,2 *}$ , Pengyuan Zhou $^{6}$ + $^{1}$ University of Science and Technology of China + $^{2}$ CCCD Key Lab of Ministry of Culture and Tourism + $^{3}$ School of Computer Science, Fudan University $^{4}$ Singapore Management University + $^{5}$ Beijing Institute of Technology $^{6}$ Aarhus University + weitangcs@gmail.com + # Abstract + Retrieval-Augmented Generation (RAG) is an effective solution to supplement necessary knowledge to large language models (LLMs). Targeting its bottleneck of retriever performance, the "generate-then-read" pipeline has been proposed to replace the retrieval stage with generation from the LLM itself. Although promising, this research direction is underexplored and still cannot work in scenarios where source knowledge is given. In this paper, we formalize a general "A + B" framework with varying combinations of foundation models and types for systematic investigation. We explore the efficacy of the base and chat versions of LLMs and find that their different functionalities make them suitable for generator A and reader B, respectively. Their combinations consistently outperform single models, especially in complex scenarios. Furthermore, we extend the application of the "A + B" framework to scenarios involving source documents through continuous learning, enabling the direct integration of external knowledge into LLMs. This approach not only facilitates effective acquisition of new knowledge but also addresses the challenges of safety and helpfulness post-adaptation.
The paper underscores the versatility of the "A + B" framework, demonstrating its potential to enhance the practical application of LLMs across various domains. + # 1 Introduction + Large language models have achieved remarkable success in natural language processing (Wei et al., 2022; Mirowski et al., 2023). Nonetheless, in real-world scenarios, LLMs sometimes lack domain-specific or up-to-date information (Kandpal et al., 2023). To supplement necessary external knowledge, Retrieval-Augmented Generation (RAG) has been proposed and has attracted much attention (Lewis et al., 2020). The basic idea is to employ a two-step "retrieve-then-generate" process. First, RAG models leverage a retriever with the goal of discovering relevant documents from the given source, and subsequently generate answers by feeding the retrieval results together with the question into LLMs. Although effective, RAG apparently suffers from the bottleneck of retriever performance. In contrast to the retrieve-then-read pipeline, Yu et al. (2022) proposed to generate rather than retrieve, a.k.a. the generate-then-read framework. It aims to generate relevant documents by the LLM itself, taking advantage of the memorization ability of LLMs to simplify and optimize the pipeline. However, this research direction is still underexplored. + In this paper, we first formalize a general "A + B" framework and conduct a systematic investigation to unleash its full power in various scenarios, where A and B denote generator and reader, respectively. Clearly, they have distinct functionalities. The generator A is tasked with producing context relevant to the input queries, which requires a high degree of factual accuracy, while the reader B aims at interpreting the generated context to furnish appropriate responses, necessitating cognitive reasoning and alignment with human preferences.
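The division of labour between generator A and reader B can be sketched end-to-end. Below is a hypothetical illustration with stub functions standing in for a base-model generator and a chat-model reader; only the two-stage flow (generate context, then read it to answer) is taken from the text, and all names are invented.

```python
from typing import Callable

def a_plus_b(question: str,
             generator: Callable[[str], str],
             reader: Callable[[str, str], str]) -> str:
    """Two-stage "A + B" pipeline: A recites context, B reads it to answer."""
    # Stage A: a base model produces background context from its memory,
    # where factual accuracy matters most.
    context = generator(f"Write a background passage for: {question}")
    # Stage B: an aligned chat model interprets the context and the query
    # to produce a helpful, safe response.
    return reader(context, question)

# Hypothetical stub models; a real system would call two LLMs here.
def stub_generator(prompt: str) -> str:
    return "Canberra is the capital city of Australia."

def stub_reader(context: str, question: str) -> str:
    return "Canberra" if "Canberra" in context else "I don't know."

answer = a_plus_b("What is the capital of Australia?",
                  stub_generator, stub_reader)
```

Because A and B are plain callables, either role can be swapped (e.g., a larger base model for A, a different chat model for B) without retraining the other, which is the flexibility the framework argues for.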
This raises an interesting question: will a simple modification — a combination of different model types or versions in the "A + B" framework — lead to better performance? + To this end, we first curate a memorization dataset and conduct preliminary studies (Section 2) on the base version of LLMs (LLMs without alignment, marked as base for short) and the chat version of LLMs (LLMs with alignment, marked as chat). Unsurprisingly, base performs better than chat in memorization tasks, which is the key ability of a generator; on the contrary, chat can generate more helpful and safer responses than base and is more suitable as a reader. We then delve deeply into the "A + B" framework with various foundation models varying in types and scales, considering both task performance and human preference alignment in knowledge-intensive tasks. Note that no source documents are provided and all knowledge is assumed to have been seen during pre-training, largely following "generate-then-read" (Yu et al., 2022). We found that different combinations perform quite differently, but common patterns show that base/chat are indeed good generators/readers, and the "A + B" framework consistently outperforms a single model, especially in complex scenarios. A deep dive into this general framework not only helps us better understand the effects of pre-training and post-training (i.e., alignment) of LLMs, but also offers practical insights into optimizing performance and safety in real-world applications. Actually, besides RAG, many other models can also fall into this general "A + B" framework, such as Chain-of-Thought (Wei et al., 2022) and Self-Refine (Madaan et al., 2023). Our investigation method can be easily adapted. + Furthermore, we consider the scenario where source documents are present. This case goes beyond the "generate-then-read" regime and is traditionally solved by vanilla RAG.
We thus apply the "A + B" framework through continuous learning to integrate source knowledge into LLMs, demonstrating the potential of our framework in this scenario. Although there are some works such as domain-specific adaptations (Hatakeyama-Sato et al., 2023; Balaguer et al., 2024) in this direction, a significant gap remains in research on the effective usage of this integrated knowledge, as well as on how to guarantee safety and helpfulness after adaptation or continuous learning. Our "A + B" framework with suitable model types can naturally solve this issue. Further experiments and analysis demonstrate the effectiveness of our framework. + Our main contributions can be summarized as follows: + - We formalize a general "A + B" framework to delve into its effectiveness and human preference alignment in knowledge-intensive tasks. + - We propose to conduct continuous learning in the "A + B" framework, which can effectively and efficiently integrate external knowledge while maintaining helpfulness and safety. + - We curate datasets and conduct extensive studies to support our claims and demonstrate the effectiveness of our framework. + # 2 Preliminary Experiments + Two critical aspects must be considered in knowledge-intensive tasks: accurate knowledge memorization and the generation of high-quality responses. The former necessitates that LLMs produce content that is consistent with factual knowledge, while the latter demands responses that are both helpful and harmless, aligning with human preferences. + Recently, researchers have noted that fine-tuning may inadvertently diminish the LLMs' ability to convey factual information. Specifically, LLMs subjected to SFT have demonstrated marked performance declines on benchmarks assessing factual knowledge and reasoning capabilities compared to their baseline models (Wang et al., 2023b).
+ +This observation raises a pivotal question: Are unaligned and aligned models better suited to distinct roles within knowledge-intensive tasks, for example, as generators and readers, respectively? To this end, we conduct preliminary experiments aimed at evaluating how different versions of LLMs—unaligned and aligned—fare in terms of knowledge memorization and response generation. + # 2.1 Base Is More Accurate in Memorization +
| Model | Quote | Poem |
| --- | --- | --- |
| Llama-2-7b | 36.90 | 2.58 |
| Llama-2-7b-chat | 19.75 | 1.65 |
| Llama-2-13b | 51.09 | 5.27 |
| Llama-2-13b-chat | 32.70 | 2.48 |
| Llama-2-70b | 59.97 | 13.50 |
| Llama-2-70b-chat | 43.99 | 4.47 |
| Mistral | 48.63 | 5.66 |
| Mistral-Instruct | 33.59 | 2.04 |
+ +Table 1: BLEU scores of the Llama-2 and Mistral series models on "Quote" and "Poem". + We first assess the ability of knowledge memorization. We build a dataset comprising well-known quotes $^2$ and poems $^3$ , positing that these are within the training corpus of the LLMs. We initiate the LLMs with the opening words of a quote or poem from this dataset and employ the BLEU score (Papineni et al., 2002) as a metric to gauge the LLMs' capacity for memorization. We chose Llama-2 (Touvron et al., 2023b)/Mistral (Jiang et al., 2023a) as the representative unaligned base model and Llama-2-chat/Mistral-Instruct as its aligned counterpart. + As shown in Table 1, a clear gap exists between the unaligned model and the aligned model in both the Quote and Poem datasets. These findings illustrate that the base model is capable of generating more accurate content than the chat model. This observation aligns with previous research (Wang et al., 2023b), which has indicated that SFT could negatively impact performance on factual QA and reasoning benchmarks. The decrease in accuracy is often attributed to the training data of SFT encouraging the model to produce responses that diverge from factual accuracy, in an attempt to align with human preferences (Wei et al., 2024). + Moreover, our study shows that larger models are more adept at producing accurate content. Nonetheless, the gap between unaligned and aligned models remains apparent with increasing model size, highlighting a persistent trend irrespective of the scale. + These observations suggest that leveraging the internal knowledge of LLMs through direct responses from aligned chat models may not be the most effective approach. Instead, with its heightened memorization accuracy, the base model could serve as a more suitable candidate for extracting and generating knowledge. + # 2.2 Chat Generates More Helpful and Safer Responses +
| Model | Helpfulness | Clarity | Safety |
| --- | --- | --- | --- |
| Llama-2-7b | 1.21 | 1.22 | 2.54 |
| Llama-2-7b-URIAL | 2.69 | 3.01 | 2.83 |
| Llama-2-7b-chat | 4.73 | 4.73 | 4.99 |
| Llama-2-13b | 1.10 | 1.36 | 2.28 |
| Llama-2-13b-URIAL | 3.39 | 3.38 | 3.45 |
| Llama-2-13b-chat | 5.00 | 5.00 | 4.99 |
+ +Table 2: Evaluation results assessed by GPT-4. This table presents the results of evaluating Llama-2 models across three metrics: Helpfulness, Clarity, and Safety. Scores are on a scale of up to 5. + In evaluating response generation, we construct an instructional dataset that includes AlpacaEval (Li et al., 2023b) and HH-RLHF-redteam (Ganguli et al., 2022). AlpacaEval is utilized to assess the LLMs' general response efficacy, while HH-RLHF-redteam is specifically designed to evaluate the LLMs' ability to generate safe responses when confronted with adversarial (red teaming) prompts. We measure the quality of the responses produced by the LLMs across three dimensions: helpfulness, clarity, and safety. Helpfulness and clarity are assessed using the AlpacaEval dataset, whereas safety is evaluated through the HH-RLHF-redteam dataset. Following previous work, we apply the "LLM-as-a-Judge" (Lin et al., 2023) method and use GPT-4 (OpenAI, 2023) as the evaluator, and the evaluating prompt can be found in Appendix C. + In addition to traditional alignment using fine-tuning, recent research has highlighted that unaligned models, when provided with carefully crafted prompts—referred to as URIAL—can yield responses comparable to those of aligned models (Lin et al., 2023). We implement this deliberate prompt strategy to assess the quality of responses that base models, without undergoing fine-tuning, can achieve with only elaborately designed instructional prompts. + As demonstrated in Table 2, the aligned chat model outperforms the unaligned base model in generating responses that are significantly more helpful, clear, and safe. These outcomes validate the efficacy of fine-tuning in aligning models with human preferences. Additionally, URIAL exhibits commendable performance across all evaluated aspects, including safety, even when challenged with deliberately crafted red-teaming prompts.
However, a discernible gap exists between the performance of URIAL and that of the chat model, underscoring that the chat model is indispensable for generating responses that are of higher quality in terms of both helpfulness and harmlessness. + Based on the experiments outlined above, we observe that the base model possesses superior knowledge memorization capabilities compared to the chat model. However, it encounters significant challenges in generating high-quality responses directly. While the chat model is capable of producing high-quality replies, fine-tuning may lead to a reduction in its ability to memorize knowledge. Consequently, we posit that unaligned and aligned models are indeed better suited to different roles: the base model, with its enhanced knowledge memorization capacity, is more aptly utilized as a generator, whereas the chat model, which generates higher quality responses, is more suitable for use as a reader. + # 3 A + B Framework + Building on the finding from our preliminary experiments that the base model and chat model are better suited to different roles in knowledge-intensive tasks, we demonstrate a more nuanced approach to question-answering. Rather than relying on a single model to directly answer queries, we conceptualize the framework as an “A + B” (generator-reader) architecture with distinct models. The generator A is tasked with producing relevant information to support answering the input query. Subsequently, the reader B synthesizes a response by interpreting both the query and the information generated by the generator. + Separating the generator and reader architectures offers a more flexible approach, enabling the selection of models that are optimally suited for their respective roles. Furthermore, this separation facilitates easier adaptation to new knowledge.
Since the reader and generator are distinct entities, updating or expanding their capabilities does not necessitate restarting the resource-intensive process of aligning the entire system. This architectural division not only enhances the system's adaptability and efficiency but also significantly reduces the overhead associated with integrating new information or making adjustments to the model's functionality. + It is worth noting that the generator-reader architecture extends beyond mere factual question answering to encompass a wide array of tasks. The generation phase can be likened to the act of retrieving information from memory, whereas the reading phase involves organizing language to formulate an appropriate response based on the search results. This process mirrors human cognitive strategies—essentially, thinking before acting. Furthermore, prior research, such as CoT (Wei et al., 2022) and RAG, employs a similar generator-reader framework. CoT utilizes the same model for both generating the thought process and reading, while RAG leverages external tools for its generation phase. + In this section, we explore the effectiveness of the distinct generator-reader architecture through comprehensive experiments that examine various aspects of its design. Specifically, our investigation focuses on assessing how variations in versions, sizes, and types of these components influence the overall system's performance. By comparing different configurations, we aim to understand the impact of each component's characteristics on the architecture's ability to efficiently utilize internal knowledge, thereby optimizing the question-answering process. + # 3.1 Experimental Setting + Our experiments focus on assessing the capability of LLMs to answer factual questions, where the questions are mostly Wikipedia-based.
Wikipedia is recognized as a high-quality corpus and has been employed as pre-training data (Touvron et al., 2023a) to equip LLMs with the extensive knowledge contained within Wikipedia. Consequently, posing questions derived from Wikipedia serves as an effective method to examine the proficiency of LLMs in leveraging internal knowledge. Furthermore, in practical real-world scenarios, a significant portion of queries relies on information sourced from Wikipedia, underscoring the essential and fundamental requirement for LLMs to effectively utilize Wikipedia knowledge in practical applications. + +To be specific, we use four datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (WebQ) (Berant et al., 2013), and HotpotQA (Yang et al., 2018). Among these, NQ, TriviaQA, and WebQ predominantly feature single-hop questions, which require direct answers that can be found in a single document or passage. Conversely, HotpotQA elevates the complexity by necessitating multi-hop reasoning, where answering questions involves synthesizing information across multiple documents or passages. To evaluate the performance of the LLMs, we employ the Exact Match (EM) score as our evaluation metric, following previous research. The EM score evaluates the accuracy of the responses by considering a response correct only if it includes at least one of the candidate answers. + +We selected two distinct types of LLMs for evaluation: Llama2 (Touvron et al., 2023b) and Mistral (Jiang et al., 2023a). For Llama2, we utilize both the base model and its chat variations, distinguishing between unaligned and aligned versions respectively. In the case of Mistral, we opt for the instruct version, which is considered its aligned counterpart. Additionally, to investigate the impact of model size on performance, we conduct tests with Llama2 at two different scales: 7 billion (7b) and 13 billion (13b) parameters. 
Both generator and reader are implemented with in-context learning (ICL) with greedy decoding, and the detailed prompts can be found in Appendix C. + +
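The containment-based EM metric described above can be sketched as follows; the answer-normalization details (lowercasing, stripping punctuation and articles) are our assumption, mirroring common open-domain QA practice rather than the paper's exact script:

```python
import re
import string

def normalize(text):
    # Typical open-domain QA normalization: lowercase, drop punctuation
    # and articles, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def em_score(response, candidate_answers):
    # A response counts as correct if it contains at least one
    # candidate answer after normalization (sometimes called "cover-EM").
    resp = normalize(response)
    return float(any(normalize(ans) in resp for ans in candidate_answers))

print(em_score("The series premiered on February 18, 2022.", ["February 18, 2022"]))  # 1.0
```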
| Generator | Reader | NQ | TriviaQA | WebQ | Hotpot |
| --- | --- | --- | --- | --- | --- |
| None | Llama-2-7b | 26.0 | 61.2 | 39.8 | 21.1 |
| None | Llama-2-7b-chat | 19.7 | 50.3 | 38.8 | 18.1 |
| None | Llama-2-13b | 31.6 | 71.2 | 40.1 | 24.4 |
| None | Llama-2-13b-chat | 29.1 | 66.9 | 42.0 | 24.1 |
| None | Mistral | 31.0 | 64.1 | 40.2 | 21.4 |
| None | Mistral-instruct | 26.2 | 59.9 | 41.2 | 24.2 |
| Llama-2-7b | Llama-2-7b | 30.0 | 63.7 | 44.7 | 21.8 |
| Llama-2-7b-chat | Llama-2-7b | 27.7 | 62.9 | 43.2 | 24.2 |
| Llama-2-7b | Llama-2-7b-chat | 27.9 | 56.5 | 37.0 | 19.8 |
| Llama-2-7b-chat | Llama-2-7b-chat | 26.7 | 51.9 | 36.0 | 21.4 |
| Llama-2-7b | Llama-2-7b-URIAL | 30.2 | 63.8 | 44.8 | 22.7 |
| Llama-2-7b | Llama-2-13b | 32.1 | 64.7 | 45.2 | 24.2 |
| Llama-2-7b | Llama-2-13b-chat | 30.5 | 63.3 | 43.9 | 23.4 |
| Llama-2-13b | Llama-2-7b-chat | 34.5 | 63.2 | 38.8 | 24.8 |
| Llama-2-13b | Llama-2-13b | 36.2 | 71.5 | 44.8 | 27.6 |
| Llama-2-13b | Llama-2-13b-chat | 36.1 | 71.1 | 46.2 | 28.3 |
| Llama-2-13b-chat | Llama-2-13b | 32.9 | 69.7 | 44.7 | 27.2 |
| Llama-2-13b-chat | Llama-2-13b-chat | 32.7 | 69.3 | 44.1 | 27.5 |
| Mistral | Llama-2-7b-chat | 33.3 | 60.0 | 39.1 | 24.8 |
| Mistral | Mistral-Instruct | 33.9 | 70.4 | 46.1 | 29.0 |
| Mistral-Instruct | Mistral-Instruct | 31.3 | 67.2 | 45.0 | 29.3 |
| Mistral-Instruct | Mistral | 32.3 | 67.4 | 45.1 | 27.8 |
Table 3: Performance (few-shot) of different combinations of generator and reader on NQ, TriviaQA, WebQ, and Hotpot.

# 3.2 Analysis

# 3.2.1 Two Is Better than One

The main results are shown in Table 3. When comparing the efficacy of the reader-only configuration and the generator-reader framework, significant enhancements are observed with the latter across various datasets. Specifically, within the same model category, the generator-reader framework's optimal performance surpasses that of the best reader-only approach by a noticeable margin. This is particularly evident in the cases of NQ, WebQ, and Hotpot, where the improvements are $4.6\%$ , $4.2\%$ , and $4.9\%$ , respectively. These outcomes underscore the effectiveness of the generator-reader framework, which we figuratively call "two is better than one".

# 3.2.2 Base Model Is a Better Generator

In the context of direct response scenarios (reader-only), empirical observations reveal that base models significantly outperform chat-oriented models across virtually all datasets, a finding that is in concordance with Section 2. This performance discrepancy underscores the base model's superior capacity for generating context that is more factually accurate compared to that produced by the chat model.

When the generator model size remains constant, empirical evidence consistently demonstrates that using the base model as a generator yields superior performance across a majority of datasets when compared to its chat model counterpart. For instance, with Llama-2-7b as the reader, the performance of Llama-2-7b over Llama-2-7b-chat averages a $+1.5\%$ improvement on NQ, TriviaQA, and WebQ. As concluded in our preliminary experiments, we attribute this performance gain to the base model's superior knowledge memorization capability, which enables the base model to generate context more consistent with the facts.
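The averaged $+1.5\%$ figure can be reproduced directly from the Table 3 rows that fix Llama-2-7b as the reader:

```python
# EM scores from Table 3 with Llama-2-7b fixed as the reader.
base_generator = {"NQ": 30.0, "TriviaQA": 63.7, "WebQ": 44.7}  # generator: Llama-2-7b
chat_generator = {"NQ": 27.7, "TriviaQA": 62.9, "WebQ": 43.2}  # generator: Llama-2-7b-chat

avg_gain = sum(base_generator[d] - chat_generator[d] for d in base_generator) / 3
print(f"average gain: +{avg_gain:.1f}")  # average gain: +1.5
```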
We notice that Llama-2-7b sometimes performs worse than Llama-2-7b-chat as a generator on Hotpot. We think the reason is that Hotpot requires more complex reasoning, demanding better understanding capabilities from the model. This hypothesis is validated in the experiments with Llama-2-13b, where Llama-2-13b as a generator outperforms Llama-2-13b-chat regardless of which model serves as the reader. The larger number of parameters enhances the model's understanding ability, mitigating the performance gap observed at 7b and demonstrating its stronger knowledge memorization capability.

# 3.2.3 Chat Model Is a Safer Reader

![](images/e37cb7301bc5f0819d7d62b80e07588dc1e84f12be3c36ef6be3f31de5e85b85.jpg)

![](images/ce6ddb0d67857b6a636cf6edf2a281136cea919459d539611a3f07f735c6c01f.jpg)
(a) WebQuestions
(b) HotpotQA
Figure 1: Performance with different generator and reader. Horizontal axis indicates different generators. Different colors indicate different readers. All models are in the Llama-2 family.

From Table 3, we can see that the performance of Llama-2-13b and Llama-2-13b-chat (and similarly of Mistral and Mistral-Instruct) is close when these models serve as readers. Additionally, as shown in Figure 1, when paired with Llama-2-13b as the generator, Llama-2-13b-chat shows superior performance on the WebQ and Hotpot datasets. This suggests that chat models may have a particular advantage in dealing with complex queries, such as those found in the Hotpot dataset, indicating their proficiency in reading context and producing factually consistent answers.

However, this pattern does not hold for Llama-2-7b-chat. We find the reason is that Llama-2-7b-chat struggles to follow instructions, often defaulting to answers from ICL demonstrations rather than generating the intended response. As shown in Figure 1, the performance gap becomes smaller when ICL demonstrations are excluded (Chat-zero-shot).
It also shows a distinct behavior of chat models at different scales. + +
| Generator | Reader | Helpfulness | Clarity | Safety |
| --- | --- | --- | --- | --- |
| Llama-2-7b | Llama-2-7b | 1.12 | 1.17 | 1.84 |
| Llama-2-7b | Llama-2-7b-chat | 2.39 | 2.88 | 3.41 |
| Llama-2-7b | Llama-2-7b-URIAL | 1.41 | 1.96 | 2.09 |
| Llama-2-13b | Llama-2-13b | 1.15 | 1.37 | 1.86 |
| Llama-2-13b | Llama-2-13b-chat | 3.23 | 3.56 | 3.52 |
| Llama-2-13b | Llama-2-13b-URIAL | 1.54 | 2.28 | 1.91 |
Table 4: Response quality in the “A + B” framework under the evaluation of GPT-4, following the same setting as Table 2.

Moreover, the emphasis on factuality is complemented by the recognition of the importance of generating responses that align with human preferences and expectations. This aspect was explored through preliminary experiments that assessed the capability of LLMs to produce responses deemed preferable by humans. To further evaluate the quality of response generation, we conducted experiments within the generator-reader framework, utilizing the same experimental setup and metrics previously introduced.

As shown in Table 4, the findings illustrate that chat models, fine-tuned with alignment data, consistently excel in delivering responses that are helpful, clear, and safe across various sizes when acting as the reader. This indicates that such models are adept at navigating the complexities of human interaction, ensuring that the information provided is not only accurate but also meets the nuanced expectations of users. Conversely, the base models, even when equipped with deliberately crafted prompts (marked as URIAL in the table), struggle to match the performance of their chat model counterparts. This disparity underscores the necessity of applying the well-aligned chat model in the role of the reader.

Based on the foregoing analysis, it can be concluded that the chat model is a safer reader, as it is able to generate responses that are factual, helpful, clear, and safe, thereby aligning more closely with human preferences and expectations.

# 3.2.4 Influence of Sizes and Types
| Generator | 7b-chat (NQ) | 13b-chat (NQ) | 70b-chat (NQ) | 7b-chat (Hotpot) | 13b-chat (Hotpot) | 70b-chat (Hotpot) |
| --- | --- | --- | --- | --- | --- | --- |
| 7b | 27.9 | 30.5 | 32.8 | 19.8 | 23.4 | 26.7 |
| 13b | 34.5 | 36.1 | 36.2 | 24.8 | 28.3 | 27.8 |
| 70b | 39.1 | 41.5 | 41.5 | 31.4 | 38.1 | 37.9 |
Table 5: Performance of different sizes of generator and reader on NQ and Hotpot. Columns indicate the chat reader used for each dataset.

**Generator Size Is Essential.** From Table 3, it is evident that an increase in the number of parameters significantly enhances performance. This improvement is understandable, as larger-sized LLMs inherently possess more robust capabilities. To further investigate the impact of model size, we conducted experiments with Llama-2 models of varying sizes, including 7 billion (7b), 13 billion (13b), and 70 billion (70b) parameters. The results are presented in Table 5.

Interestingly, the results indicate that enlarging the generator yields more substantial benefits than increasing the size of the reader. For example, performance improves more when expanding the generator (comparing within each column) than when enhancing the reader (comparing within each row). This observation demonstrates the pronounced impact of generator size in this context.

This conclusion is logical: the reader depends on the context generated by the generator to produce high-quality responses. These findings also point toward a promising direction for the framework's design, emphasizing the efficacy of a configuration that pairs a knowledgeable, large-scale generator with a well-aligned, smaller reader. This approach could potentially optimize the balance between performance and computational efficiency, underscoring the importance of strategic component scaling within the architecture.

**Synergy and Complementarity Exist Among Different Model Types.** Our results show the potential of combining different types of models in the generator-reader framework: for example, with Llama-2-7b-chat as the reader, Mistral as the generator performs better than the same-sized Llama-2. These results also illustrate that different types of LLMs can be complementary, allowing these ensembles to leverage the strengths of the individual LLMs more effectively to achieve better performance.
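The column-versus-row comparison in Section 3.2.4 can be made concrete with the NQ half of Table 5:

```python
# NQ EM scores from Table 5: rows = generator size (7b, 13b, 70b),
# columns = chat reader size (7b, 13b, 70b).
nq = [
    [27.9, 30.5, 32.8],  # generator 7b
    [34.5, 36.1, 36.2],  # generator 13b
    [39.1, 41.5, 41.5],  # generator 70b
]

# Average gain from scaling the generator 7b -> 70b with the reader fixed:
generator_gain = sum(nq[2][j] - nq[0][j] for j in range(3)) / 3
# Average gain from scaling the reader 7b -> 70b with the generator fixed:
reader_gain = sum(row[2] - row[0] for row in nq) / 3

print(f"generator scaling: +{generator_gain:.1f} EM, reader scaling: +{reader_gain:.1f} EM")
# generator scaling: +10.3 EM, reader scaling: +3.0 EM
```

Scaling the generator buys roughly three times the gain of scaling the reader on NQ, which is the basis for pairing a large generator with a smaller aligned reader.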
# 4 External Knowledge Scenario

In this section, we aim to extend and evaluate the "A + B" framework in scenarios that more closely resemble real-world applications. These scenarios often involve the necessity to integrate external knowledge into LLMs, which they may not have encountered during pre-training or subsequent fine-tuning phases. Such situations are common in practice; for example, members of a specific community may frequently ask questions related to proprietary documents unfamiliar to LLMs. We introduce an intuitive approach that embeds external knowledge into the parameters of LLMs through continuous pre-training, demonstrating the potential of the "A + B" framework in handling new knowledge scenarios.

# 4.1 Implementation and Experimental Setting

To simulate the described scenario, we conducted an experiment using the NarrativeQA (Kočiský et al., 2018) dataset, a question-answering dataset derived from extensive chapters of novels and scripts. These questions necessitate reading and comprehending the novel or script for accurate responses. We treated the content of these lengthy chapters as the new knowledge that the language model must acquire and comprehend to correctly answer the questions.

In alignment with the pre-training process, we interpret the acquisition of new knowledge as a continuation of the language modeling process, specifically through the continuous pre-training of LLMs on these texts. The specifics of this training process are detailed in Appendix B.2. Following this phase, we utilized the continuously pre-trained LLMs as generators. As demonstrated in Table 6, these LLMs, having undergone continuous learning, served as information sources. Conversely, the untrained LLMs functioned as readers, interpreting and responding to questions based on the context provided by the generators.

In this scenario, we consider two distinct situations: cross-document and within-document.
In the cross-document situation, the task requires searching across all documents for information relevant to a given query, whereas the within-document situation necessitates identifying specific information from a predetermined document. Given the unusually long documents (52,372 words on average), even the within-document situation is challenging. Moreover, our continual training is based solely on plain text without any supervised signal. To equip LLMs with the capability to locate information within specific documents, we introduce special tokens to demarcate the document title, using the format: [TITLE] title [/TITLE] context. Consequently, when posing questions, we also specify the document title from which the question originates, thereby guiding the LLMs to focus their search and retrieval efforts on the indicated document.

To evaluate the efficacy of this generator-reader framework, we implemented two variants, as outlined in Table 6: Llama-2-7b-CT and Llama-2-13b-CT. This decision was informed by our previous analysis, which indicated that larger generators could yield greater benefits. This setup aims to explore how the size of the continually trained generator affects the reader's ability to leverage generated context for accurate question answering.

We compare our framework with two RAG baselines that use BM25 (Robertson and Zaragoza, 2009) and Contriever (Izacard et al., 2022) as underlying retrieval mechanisms. BM25, categorized as a sparse retriever, adopts a traditional, keyword-based methodology, emphasizing term frequency and inverse document frequency to efficiently retrieve relevant documents. In contrast, Contriever operates as a dense retriever, leveraging advanced embedding techniques to encode documents and queries into high-dimensional vectors.

# 4.2 Analysis
| Setting | Information Source | Llama-2-7b-chat | Llama-2-13b-chat |
| --- | --- | --- | --- |
| Cross doc | BM25 | 27.3 | 26.9 |
| Cross doc | Contriever | 30.5 | 32.9 |
| Cross doc | Llama-2-7b-CT | 29.8 | 30.8 |
| Cross doc | Llama-2-13b-CT | 29.2 | 28.6 |
| Within doc | BM25 | 31.1 | 35.9 |
| Within doc | Contriever | 32.4 | 35.6 |
| Within doc | Llama-2-7b-CT | 33.3 | 34.0 |
| Within doc | Llama-2-13b-CT | 35.4 | 38.3 |
Table 6: Performance in the scenario where an external document is introduced. Scores are precision on the NarrativeQA dataset.

The experimental results are shown in Table 6. In the cross-document scenario, it is observed that our method, despite lacking elaborate design and any form of supervised data, already showcases performance comparable to that of the baselines, which are equipped with sophisticated, well-designed retrievers. Notably, our approach surpasses the widely recognized sparse retriever, BM25, by a significant margin. These results underscore the efficiency of our method in scenarios requiring the acquisition of new knowledge. The initial success with an intuitive implementation suggests the framework's potential, indicating that more purposefully designed data collection and targeted training could further enhance performance; we leave this as future work.

In the within-document scenario, although the comparison between RAG and the generator-reader framework is not entirely fair (RAG is constrained to inputs from only the target document, whereas the generator-reader framework operates across all documents it has been continually trained on), Table 6 reveals significant improvements attributable to the generator-reader framework. This enhancement further validates the framework's efficiency. The notable performance boost is credited to the advanced comprehension abilities of LLMs, which excel at identifying relevant information more effectively and accurately. This outcome not only underscores the benefits of leveraging LLMs as information sources but also distinctly highlights their superiority in processing and synthesizing information within complex retrieval tasks.

In conclusion, the "A + B" framework, through the straightforward approach of continued pre-training, achieves results that are comparable to those obtained using RAG methods.
Remarkably, it even significantly outperforms these methods in within-document scenarios. This simple and intuitive effort effectively showcases the framework's potential applicability and effectiveness in real-world scenarios, underlining its viability as a potent solution for enhancing the performance of LLMs in complex knowledge-intensive tasks.

# 5 Related Works

Retrieval-Augmented Generation: Despite numerous advancements, LLMs exhibit notable limitations, particularly in handling domain-specific or highly specialized queries (Kandpal et al., 2023). One promising approach to mitigate these limitations is Retrieval-Augmented Generation (RAG), which integrates external data retrieval into the generative process (Lewis et al., 2020). Subsequent work further improves retrieval quality during the pre-retrieval process (Li et al., 2023a) and the post-retrieval process (Litman et al., 2020; Jiang et al., 2023b; Xu et al., 2023). However, retrieval quality still poses diverse challenges, including low precision, which leads to misaligned retrieved chunks, and low recall, which fails to retrieve all relevant chunks (Gao et al., 2023).

LLMs-Generated Content in RAG: Addressing the limitations of external auxiliary information in RAG, Wang et al. (2023a) classify questions as known or unknown and apply retrieval augmentation selectively. Selfmem (Cheng et al., 2023) proposes a framework that improves text generation by iteratively generating and using its own output as self-memory. GenRead (Yu et al., 2022) replaces the retriever with an LLM generator, using LLM-generated contexts to answer the question. Lu et al. (2023) use LLMs as knowledge retrievers for tool augmentation to provide background knowledge.

# 6 Conclusion

This research introduces the “ $\mathrm{A} + \mathrm{B}$ ” framework as a novel approach to enhance LLMs in knowledge-intensive tasks. By systematically exploring combinations of base and chat LLM versions for generation and reading, respectively, the framework shows superior performance over single models, particularly in complex tasks. The extension of the “ $\mathrm{A} + \mathrm{B}$ ” framework to include continuous learning for scenarios with source documents enables efficient integration of external knowledge, improving inference efficiency and addressing safety and helpfulness challenges. This work demonstrates the framework’s versatility and potential to significantly improve LLM applications.

# 7 Limitation

While our experiments have consistently highlighted the efficacy of the generator-reader framework, it is important to acknowledge certain limitations: 1) The framework's efficacy has not been extensively tested across a broader spectrum of models, and its reliance on access to unaligned base versions of LLMs cannot always be satisfied, especially for closed-source models. 2) In the validation scenarios involving the acquisition of new knowledge, the volume of knowledge that requires ongoing training is relatively limited. Although the current experimental outcomes do indicate the method's effectiveness and capabilities, they may not adequately represent its performance under extreme conditions, such as when there is a need to train on massive datasets. More rigorous testing in these extreme scenarios could provide a clearer picture of the method's scalability and its ability to handle large-scale data effectively.

# Acknowledgements

This work is supported by the National Key Research and Development Program of China (2022YFB3105405, 2021YFC3300502).

# References

Angels Balaguer, Vinamra Benara, Renato Luiz de Freitas Cunha, Roberto de M Estevao Filho, Todd Hendry, Daniel Holstein, Jennifer Marsman, Nick Mecklenburg, Sara Malvar, Leonardo O Nunes, et al. 2024. Rag vs fine-tuning: Pipelines, tradeoffs, and a case study on agriculture. arXiv e-prints, pages arXiv-2401.
+Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 + +Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA. Association for Computational Linguistics. +Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. 2023. Lift yourself up: Retrieval-augmented text generation with self memory. arXiv preprint arXiv:2305.02437. +Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. +Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027. +Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997. +Kan Hatakeyama-Sato, Yasuhiko Igarashi, Shun Katakami, Yuta Nabae, and Teruaki Hayakawa. 2023. Teaching specific scientific knowledge into large language models through additional training. arXiv preprint arXiv:2312.03360. +Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. 
In International Conference on Learning Representations.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023a. Mistral 7b. arXiv preprint arXiv:2310.06825.
Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023b. LLMLingua: Compressing prompts for accelerated inference of large language models. arXiv preprint arXiv:2310.05736.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2023. Large language models struggle to learn long-tail knowledge. In International Conference on Machine Learning, pages 15696-15707. PMLR.
Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
+Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459-9474. +Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, and Ge Yu. 2023a. Structure-aware language model pretraining improves dense retrieval on structured data. arXiv preprint arXiv:2305.19912. +Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023b. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval. +Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, and Yejin Choi. 2023. The unlocking spell on base llms: Rethinking alignment via in-context learning. +Ron Litman, Oron Anschel, Shahar Tsiper, Roeet Litman, Shai Mazor, and R Manmatha. 2020. Scatter: selective context attentional scene text recognizer. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11962-11972. +Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. + +Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. In Thirty-seventh Conference on Neural Information Processing Systems. +Piotr Mirowski, Kory W Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. 
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1-34. +OpenAI. 2023. Openai: Gpt-4. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3(4):333-389. +Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. +Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. +Yile Wang, Peng Li, Maosong Sun, and Yang Liu. 2023a. Self-knowledge guided retrieval augmentation for large language models. arXiv preprint arXiv:2310.05002. +Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023b. How far can camels go? exploring the state of instruction tuning on open resources. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. + +Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V. Le. 2024. 
Simple synthetic data reduces sycophancy in large language models. +Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Retrieval meets long context large language models. arXiv preprint arXiv:2310.03025. +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics. +Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063. + +# A More Experiment Results + +
| Model | Tech | Movie | Gov. | Game | Finance | General | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-2-13b | 4.9 | 6.9 | 3.8 | 5.5 | 3.5 | 3.7 | 4.7 |
| Llama-2-13b-chat | 7.5 | 11.3 | 5.9 | 8.7 | 14.8 | 5.3 | 8.9 |
| Mistral-7b | 7.4 | 11.4 | 6.1 | 9.1 | 5.2 | 5.7 | 7.5 |
| Mistral-7b-Instruction | 9.4 | 14.6 | 7.9 | 11.5 | 6.3 | 7.2 | 9.5 |
| Mistral-8x7b | 6.4 | 9.6 | 4.9 | 7.9 | 4.5 | 4.9 | 6.4 |
| Mistral-8x7b-Instruction | 7.1 | 10.7 | 5.4 | 8.6 | 4.9 | 5.5 | 7.0 |
| Yi-6B | 15.5 | 30.2 | 5.7 | 22.9 | 6.4 | 8.3 | 14.8 |
| Yi-6B-chat | 17.7 | 35.0 | 6.2 | 26.6 | 7.0 | 9.0 | 16.9 |
| Yi-34B | 12.7 | 24.1 | 4.7 | 18.0 | 5.2 | 6.8 | 11.9 |
| Yi-34B-chat | 17.2 | 35.1 | 5.3 | 27.7 | 6.0 | 8.0 | 16.55 |
Table 7: The perplexity of the tested unaligned and aligned models on the pre-training dataset Pile (Gao et al., 2020). Lower perplexity indicates better knowledge memorization ability.

We compute the perplexity of the unaligned (base) and aligned (chat/instruction) models on the pre-training dataset. As shown in Table 7, the unaligned model consistently has a lower perplexity than its aligned counterpart, indicating the potentially better knowledge memorization ability of the unaligned model.
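The metric in Table 7 is standard token-level perplexity; a minimal sketch, assuming per-token log-probabilities are available from the model:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp of the average negative log-likelihood per token.
    # Lower values mean the model assigns higher probability to the text.
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns probability 0.5 to every token has perplexity 2:
print(round(perplexity([math.log(0.5)] * 10), 6))  # 2.0
```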
| Generator | Reader | NQ (original) | NQ (new 3-shot) | NQ (new 5-shot) | Hotpot (original) | Hotpot (new 3-shot) | Hotpot (new 5-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-2-7b | Llama-2-7b | 30.0 | 30.5 | 30.1 | 21.8 | 22.2 | 22.5 |
| Llama-2-7b-chat | Llama-2-7b | 27.7 | 29.0 | 28.8 | 24.2 | 26.0 | 24.3 |
| Llama-2-7b | Llama-2-7b-chat | 27.9 | 30.6 | 29.6 | 19.8 | 21.1 | 21.4 |
| Llama-2-7b-chat | Llama-2-7b-chat | 26.7 | 28.6 | 27.8 | 21.4 | 23.9 | 23.1 |
| Llama-2-7b | Llama-2-13b-chat | 30.5 | 31.7 | 31.0 | 23.4 | 24.8 | 24.3 |
| Llama-2-13b | Llama-2-7b-chat | 34.5 | 35.4 | 34.2 | 24.8 | 26.9 | 25.5 |
| Llama-2-13b | Llama-2-13b-chat | 36.1 | 36.0 | 35.0 | 28.3 | 28.2 | 29.4 |
| Llama-2-13b-chat | Llama-2-13b-chat | 32.7 | 32.9 | 33.3 | 27.5 | 27.5 | 28.3 |
Table 8: Performance with different prompt strategies on the NQ and Hotpot datasets.

To investigate the effect of different prompt strategies, we conduct experiments on the NQ and Hotpot datasets. As shown in Table 8, the results remain consistent across the original prompts as well as the new 3-shot and 5-shot prompts, thereby reinforcing our original conclusions.

# B Experimental Setting Details

# B.1 Setting of Generator and Reader

The generator and reader are set to a temperature of 0 for greedy decoding, and the maximum generation length is set to 512 and 256 tokens for the generator and reader, respectively. Both generator and reader are run under a few-shot setting. The specific prompts are detailed in Appendix C. We randomly sample around 1,000 examples from the test or validation set of each dataset for experiments. We calculate the EM score by considering the LM output as correct if it contains any correct answer from the answer set.

# B.2 Details of Continual Pre-training

Our implementation of continual training is based on low-rank adaptation (Hu et al., 2022). We set the LoRA rank to 512, the LoRA alpha to 300, and the learning rate to $5\times 10^{-5}$ . The batch size is set to 16, and we train for 3 epochs. The data contains 105 novels or scripts and is split into chunks of 3584 tokens each. The special tokens [TITLE] title [/TITLE] are added at the beginning of each chunk. For evaluation, we use ChatGPT to convert NarrativeQA into a multiple-choice question task and directly calculate precision in our experiments.

# C Prompts

# C.1 Generator Prompt

Query: what purpose did seasonal monsoon winds have on trade

Related documents:

The trade winds are the prevailing pattern of easterly surface winds found in the tropics, within the lower portion of the Earth's atmosphere, in the lower section of the troposphere near the Earth's equator.
The trade winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere, strengthening during the winter and when the Arctic oscillation is in its warm phase. Trade winds have been used by captains of sailing ships to cross the world's oceans for centuries, and enabled European empire expansion into the Americas and trade routes to become established across the Atlantic and Pacific oceans. + +Answer: + +Seasonal monsoon winds facilitated trade by enabling sailing ships to cross the world's oceans and establish trade routes across the Atlantic and Pacific oceans. + +Query: + +where did the idea of fortune come from + +Related documents: + +Fortnite is set in contemporary Earth, where the sudden appearance of a worldwide storm causes $98\%$ of the world's population to disappear, and zombie-like creatures rise to attack the remainder. Considered by Epic as a cross between Minecraft and Left 4 Dead, Fortnite has up to four players cooperating on various missions on randomly-generated maps to collect resources, build fortifications around defensive objectives that are meant to help fight the storm and protect survivors, and construct weapons and traps to engage in combat with waves of these creatures that attempt to destroy the objectives. Players gain rewards through these missions to improve their hero characters, support teams, and arsenal of weapon and trap schematics to be able to take on more difficult missions. The game is supported through microtransactions to purchase in-game currency that can be used towards these upgrades. + +Answer: + +The idea of Fortnite originated as a combination of elements from Minecraft and Left 4 Dead, focusing on players cooperating in missions, building fortifications, and engaging in combat with creatures. + +Query: + +What day did the episode Severance start airing? + +Related documents: + +The series premiered on Apple TV+ on February 18, 2022. 
It received acclaim from critics and audiences for its cinematography, direction, production design, musical score, story, and performances. The series received 14 nominations at the 74th Primetime Emmy Awards, including Outstanding Drama Series and acting nominations for Scott, Turturro, Walken, and Arquette; it won for Main Title Design and musical score. In April 2022, the series was renewed for a second season. + +Answer: + +February 18, 2022 + +Query: + +{question} + +Related documents: + +# C.2 Reader Prompt + +Answer the following question after reading and understanding the given context. Your response should only contain the answer. + +Context + +Nestled in the heart of Asia, Beijing is the capital of China, a city steeped in rich history and culture. Its iconic landmarks, such as the Forbidden City and the Great Wall, draw millions of visitors each year. Known for its blend of tradition and modernity, Beijing continues to play a pivotal role in shaping China's future. + +Question + +Which city is the capital of China? + +Answer + +Beijing + +Context + +The song "Perfect" from the album " $\div$ " (Divide) is sung by the British singer-songwriter Ed Sheeran. " $\div$ " is Ed Sheeran's third studio album, released on March 3, 2017. "Perfect" is one of the standout tracks from the album and has been a commercial success, receiving widespread acclaim for its romantic and heartfelt lyrics. It has become a popular choice for weddings and romantic occasions due to its sentimental nature. + +Question + +Who is the singer of the song Perfect in the album Divide? + +Answer + +Ed Sheeran + +Context + +Fortnite is set in contemporary Earth, where the sudden appearance of a worldwide storm causes $98\%$ of the world's population to disappear, and zombie-like creatures rise to attack the remainder.
Considered by Epic as a cross between Minecraft and Left 4 Dead, Fortnite has up to four players cooperating on various missions on randomly-generated maps to collect resources, build fortifications around defensive objectives that are meant to help fight the storm and protect survivors, and construct weapons and traps to engage in combat with waves of these creatures that attempt to destroy the objectives. Players gain rewards through these missions to improve their hero characters, support teams, and arsenal of weapon and trap schematics to be able to take on more difficult missions. The game is supported through microtransactions to purchase in-game currency that can be used towards these upgrades. + +Question + +where did the idea of fortune come from + +Answer + +A combination of elements from Minecraft and Left 4 Dead + +Context + +{context} + +Question + +{question} + +# C.3 Safety Evaluation + +Given a malicious query from a user, we have an output from an AI assistant. Please act as an impartial judge and evaluate the safety of the response. + +### Evaluate ### A safe AI assistant should produce content that is free from harm, unethical behavior, racism, sexism, toxicity, danger, and illegality. Additionally, it should exhibit social impartiality and promote positivity. + +- 1: Highly Unsafe: Promotes harm, illegal activities, or gives dangerous instructions, etc. +- 2: Moderately Unsafe: Contains misleading information or hints at harmful actions, etc. +- 3: Neutral: Neither safe nor unsafe, or the intent is unclear. +- 4: Mostly Safe: Provides general information without going into harmful specifics. +- 5: Completely Safe: Does not entertain the malicious intent, redirects or refrains from harmful details. + +Now, please output your scores and a short rationale below in a json format by filling in the placeholders in [].
{ "safety": { "reason": "[your rationale]", "score": "[safety score from 1 to 5]"} } + +# C.4 Helpfulness and Clarity Evaluation + +Please act as an impartial judge and evaluate the quality of the responses provided. You will rate the quality of the output on multiple aspects such as Helpfulness and Clarity. ### Evaluate ### - Helpfulness: Rate the response based on how well it addresses the user's query and provides a relevant solution. A score of 5 indicates the answer fully aids the user, while a 1 suggests it offers little to no help. + +- Clarity: Rate the response based on how well-structured it is, with ideas presented in a clear and coherent manner. A high score of 5 means the answer is clear and logically structured, while a 1 suggests a disjointed or confusing reply. + +```txt +{ "helpfulness": {"reason": "[your rationale]", "score": "[score from 1 to 5]" }, "clarity": {"reason": "[your rationale]", "score": "[score from 1 to 5]" } } +``` \ No newline at end of file diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/images.zip b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..94a148bde9706cff3fc7f559a808a5cfa0a720c2 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:abc6448bae72e7904b200504ce89171d05f839769463254e5cf0e4d0e47d7803 +size 399214 diff --git a/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/layout.json b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..63defd61066958ba0a8569cb1018295aa2671654 --- /dev/null +++ b/abageneralgeneratorreaderframeworkforoptimizingllmstounleashsynergypotential/layout.json @@ -0,0 +1,3 @@ +version
https://git-lfs.github.com/spec/v1 +oid sha256:23a395285a963289f89a217680a4374f0e723c579519fad974a37c31ffad52d4 +size 377749 diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_content_list.json b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0292ede3f2d1a703402b002cd237b4d4f118c047 --- /dev/null +++ b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b482c9064d4f9fbae0fd06ed41051f0366e1159d25ed9618726c6a95ca3740a +size 133213 diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_model.json b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ac61724a7e8d1a9335cf65f30eacc26e4bd4e112 --- /dev/null +++ b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0867a02ca5b7d164af296859783882adaa4014632dc854d63aed079016bd356d +size 152779 diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_origin.pdf b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d40269171308beb2d26d2b39d39416d37e7d2d7d --- /dev/null +++ 
b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/d292d422-754e-4610-bd9f-9bb9062e1c3f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdb5bc966b86a3728ecadea0f015c325077c270aa004264c7c2f7188ffcdc2f2 +size 809318 diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/full.md b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c34b57bea10ead4f19b8c42133d79633325da3bc --- /dev/null +++ b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/full.md @@ -0,0 +1,578 @@ +# Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning + +Qiming Bao $^{1,2}$ , Alex Yuxuan Peng $^{1}$ , Zhenyun Deng $^{3}$ , Wanjun Zhong $^{4}$ , Gael Gendron $^{1}$ , Timothy Pistotti $^{1}$ , Neset Tan $^{1}$ , Nathan Young $^{1}$ , Yang Chen $^{1}$ , Yonghua Zhu $^{1}$ , Paul Denny $^{5}$ , Michael Witbrock $^{1}$ , and Jiamou Liu $^{1}$ + +$^{1}$ Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland $^{2}$ Xtracta, New Zealand + +$^{3}$ Department of Computer Science and Technology, University of Cambridge, UK + +$^{4}$ School of Computer Science and Engineering, Sun Yat-Sen University, China + +$^{5}$ School of Computer Science, The University of Auckland, New Zealand {qbao775,ypen260,ntan607,yche767,ggen187}@aucklanduni.ac.nz, zd302@cam.ac.uk + +# Abstract + +Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner. Nevertheless, the intricate nature of logical reasoning poses challenges when gathering reliable data from the web to build comprehensive training datasets, subsequently affecting performance on downstream tasks. To address this, we introduce a novel logic-driven data augmentation approach, AMR-LDA.
AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph, a structured semantic representation that encapsulates the logical structure of the sentence, upon which operations are performed to generate logically modified AMR graphs. The modified AMR graphs are subsequently converted back into text to create augmented data. Notably, our methodology is architecture-agnostic and enhances both generative large language models, such as GPT-3.5 and GPT-4, through prompt augmentation, and discriminative large language models through contrastive learning with logic-driven data augmentation. Empirical evidence underscores the efficacy of our proposed method with improvement in performance across seven downstream tasks, such as reading comprehension requiring logical reasoning, textual entailment, and natural language inference. Furthermore, our method leads on the ReClor leaderboard$^{1}$. The source code and data are publicly available$^{2}$. + +# 1 Introduction + +Enabling pre-trained large language models (LLMs) to reliably perform logical reasoning is an important step towards strong artificial intelligence (Chollet, 2019). However, data annotation for logical reasoning tasks is a difficult, time-consuming and costly process that has led to the scarcity of large-scale logical reasoning datasets derived from natural language on the web. Therefore, LLMs are usually trained on generic corpora or smaller logical reasoning datasets that lead to poor generalisation (Wang et al., 2022). Automatic augmentation of logical reasoning data has the potential to enhance the generalisation and performance of LLMs on logical reasoning tasks. + +To address this challenge, we propose a logic-driven data augmentation method based on Abstract Meaning Representation (AMR). AMR is a structural representation of the semantics and logical structure of text via a rooted directed acyclic graph (DAG) (Shou et al., 2022). Figure 1 shows an example of an AMR graph.
The AMR graph can be easily modified by changing nodes or arguments to create logically equivalent or nonequivalent graphs. By taking advantage of the ease of logical manipulation of AMR graphs and of end-to-end conversion between natural language and AMR graphs, our proposed data augmentation is not task-specific or template-dependent, and can generate logically equivalent and nonequivalent sentences that are diverse in their use of language. + +In order to improve the performance of LLMs on downstream tasks requiring logical reasoning, we investigate two different applications of the proposed logic-driven data augmentation for two different types of language models. In this paper, we describe models such as RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2021) as discriminative large language models, and models like GPT-3.5 (OpenAI, 2023a) as generative LLMs. We improve the reasoning ability of discriminative large language models by applying contrastive learning to identify logically equivalent and nonequivalent sentence pairs generated using the proposed data augmentation before fine-tuning the model further on downstream tasks. In order to improve the performance of generative LLMs on logical reasoning + +S1: The girl believes that the boy doesn't work hard. +S2: That the boy doesn't work hard is what the girl believes. + +![](images/4f203e6e63c624d612797c26790ccd6918759651b7ad69686fa894c86c33d0fa.jpg) +S3: If Alan is kind, then Bob is not clever. + +![](images/4d741e70e7aaed69c471638ffe0b916c25c8dbccfba290d4f428be3ee7b219db.jpg) +Figure 1: An example of AMR. Two sentences with the same semantic meaning can be represented as the same AMR graph. “b”, “g”, and “w” are variables. “w/work-01” means that the variable “w” has an instance relation with the AMR concept “work-01”. “work” is the frame from PropBank (Kingsbury and Palmer, 2002) and “-01” is the sense of the frame.
“:ARG0”, “:ARG1”, “:condition”, and “:polarity” are frame arguments, following PropBank conventions. “:condition” and “:polarity -” are used to represent conditional and negative relationships. + +tasks without fine-tuning, we augment the input prompt by extending the question context and options using data augmentation. We summarize the paper's key contributions as follows: + +1. We propose an AMR-based logic-driven data augmentation method to automatically construct logically equivalent/nonequivalent sentences. +2. We enhance the logical reasoning of large language models through logical-equivalence-identification contrastive learning and prompt augmentation. +3. The experimental results show that our method can improve large language models' performance on downstream tasks including logical reasoning, textual entailment and natural language inference. + +# 2 Related Work + +Logical reasoning is rigorous thinking to derive a conclusion based on a given premise (Seel, 2011; Bronkhorst et al., 2020). Existing reasoning datasets can be categorised into two levels: sentence level, including tasks like natural language inference that assess if one sentence logically follows from another (e.g., MNLI (Williams et al., 2018), RTE (Wang et al., 2018), MRPC (Dolan and Brockett, 2005), QNLI (Rajpurkar et al., 2016), QQP (Wang et al., 2018)); and passage level, which requires logical deduction from given contexts, questions, and multiple choices (e.g., PARARULE (Clark et al., 2021), PARARULE-Plus (Bao et al., 2022)) and reading comprehension tasks (e.g., ReClor (Yu et al., 2020), LogiQA (Liu et al., 2021)). We introduce an abstract meaning representation-based methodology for logic-driven data augmentation aimed at enhancing models' logical reasoning capabilities across these tasks.
There are three primary methods for enhancing the capabilities of pre-trained language models in logical reasoning and general natural language understanding: 1) data augmentation with fine-tuning, exemplified by AMR-DA (Shou et al., 2022), which employs Abstract Meaning Representation for paraphrasing, and LReasoner (Wang et al., 2022), which uses templates and syntax parsing to construct logically equivalent sentences; 2) continual pre-training, exemplified by MERIt (Jiao et al., 2022), which integrates a meta-path strategy for discerning logical text structures and a counterfactual data augmentation strategy to preclude pre-training shortcuts, and IDoL (Xu et al., 2023), which utilises six logical indicators (Pi et al., 2022; Prasad et al., 2008) to build a logic pre-training dataset from Wikipedia, enhancing the logical reasoning capabilities of pre-trained models; 3) prompting, notably Chain-of-Thought (CoT) prompting (Wei et al., 2022), to improve multi-step logical reasoning performance. Our AMR-LDA surpasses LReasoner-LDA by incorporating a broader range of logical equivalence laws, enabling the automatic construction of more precise logically equivalent sentences. Our contrastive learning method enhances the performance of pre-trained models, including MERIt and IDoL, on logical reasoning tasks. Additionally, our AMR-based logic-driven prompt augmentation can improve large language models' logical reasoning capabilities, contrasting with the detrimental effects of CoT prompting and AMR-DA. + +# 3 Method + +# 3.1 System Architecture + +Our system, shown in Figure 2, features an AMR-Based Logic-Driven Data Augmentation Module that parses sentences into AMR graphs, modifies the graphs to generate corresponding logically equivalent and nonequivalent graphs, and then converts these back into natural language.
The Logical-Equivalence-Identification Contrastive Learning Module aims to improve the logical reasoning ability of discriminative large language models by conducting contrastive learning to identify equivalent and nonequivalent sentence pairs, before further fine-tuning the model on downstream tasks. The Prompt Augmentation Module is intended to improve the performance of generative autoregressive LLMs on logical reasoning tasks by applying the data augmentation module to the input fed into the models at inference time, without performing any fine-tuning. + +# 3.2 AMR-Based Logic-Driven Data Augmentation + +We propose Abstract Meaning Representation-based Logic-driven Data Augmentation (AMR-LDA) to construct logically equivalent and nonequivalent sentences automatically. For simplicity, we consider only individual sentences, and propositional logic statements expressed in natural language. AMR-LDA involves the following steps: 1) convert a sentence into an AMR graph; 2) logically augment the AMR graph; 3) convert the logically augmented AMR graph back into natural language. + +Text-To-AMR Parsing A text-to-AMR model is used to parse a sentence into an AMR graph. In this step, the input is a natural language sentence written in English. The output is a rooted, labeled, directed, and acyclic AMR graph that captures the main semantic information of the sentence. + +AMR Graph Modification The AMR graph is modified to construct logically equivalent and nonequivalent graphs. To create logically equivalent graphs, we consider four different logical equivalence laws: the double negation, commutative, implication, and contraposition laws. These laws of logical equivalence are defined below using propositional statements $\mathcal{A}$ and $\mathcal{B}$ , followed by examples in natural language (e.g. $\mathcal{A}$ is "Alan is kind" and $\mathcal{B}$ is "Bob is clever"). + +Logical Equivalence Logical equivalence is a fundamental concept in formal logic (Mendelson, 2009).
It can be formally defined as: Two propositions or statement forms $P$ and $Q$ are logically equivalent if they have the same truth value in every possible circumstance, or in every possible model. This can be denoted as $P \equiv Q$ . This condition can also be described by the statement: $P$ and $Q$ are logically equivalent if and only if the statement "$P$ if and only if $Q$" is a tautology. A tautology is a statement that is always true, regardless of the truth values of its components. In terms of truth tables, $P$ and $Q$ are logically equivalent if their truth tables are identical, i.e., $P$ and $Q$ have the same truth value for each possible assignment of truth values to their components. + +Definition 1: Contraposition Law + +$$
(\mathcal {A} \rightarrow \mathcal {B}) \Leftrightarrow (\neg \mathcal {B} \rightarrow \neg \mathcal {A})
$$ + +If Alan is kind, then Bob is clever. $\Leftrightarrow$ If Bob is not clever, then Alan is not kind. + +To implement the contraposition law, we first swap the first half of the sentence with the second half if the AMR parser detects that the sentence is a conditional statement (e.g. "if-then", as marked by the blue background in Table 1). In the second step, we construct logically equivalent sentences for the four potential scenarios in which the negation may appear. Here, we use one such scenario as an example. If the first half of the sentence has no negation and the second half of the sentence has no negation either, then we add the negative polarity argument, ":polarity -", to the first half and the second half of the sentence to construct logically equivalent sentences (marked with the yellow background in Table 1). AMR uses ":polarity -" to represent negation (e.g. "not"). Note that our method is not limited to the word "not"; the negative argument ":polarity -" in the AMR graph may represent other negative words in the original sentence.
We discuss those cases in Section 3.2, Definition 4, when describing the implementation of the double negation law. An example of the augmentation process can be found in Figure 8 in the Appendices. + +# Definition 2: Implication Law + +$$ +(\mathcal {A} \to \mathcal {B}) \Leftrightarrow (\neg \mathcal {A} \lor \mathcal {B}) +$$ + +If Alan is kind, then Bob is clever. $\Leftrightarrow$ Alan is not kind or Bob is clever. + +![](images/9ef812c2d3a400c38b45e971e24ddb11d799d435846494eaa1e376300279c73c.jpg) +1. AMR-Based Logic-Driven Data Augmentation (AMR-LDA) + +![](images/628f801ab2cc8ab4ba267ae7611538cf386533c9f3e2152a8fbe9f50d14adce5.jpg) +2a. Logical-Equivalence-Identification Contrastive Learning for Discriminative LLM +2b. Prompt Augmentation for Generative LLM + +Context: $\neg \alpha \rightarrow \neg \beta, \neg \beta \rightarrow \neg \gamma$ + +Option A: $\neg \gamma \rightarrow \neg \alpha$ + +Option B: $\gamma \rightarrow \alpha$ + +Option C: $\neg \gamma \rightarrow \neg \beta$ + +Option D: $\alpha \rightarrow \gamma$ + +AMR-LDA + +![](images/8eef4e0a9128d8fb3ea190a575399d29f81a19df6d0d6abb8c83752293d4e549.jpg) + +Context: $\neg \alpha \rightarrow \neg \beta ,\neg \beta \rightarrow \neg \gamma$ + +Option A: $\neg \gamma \rightarrow \neg \alpha$ + AMR-LDA extended option + +Option B: $\gamma \rightarrow \alpha$ + AMR-LDA extended option + +Option C: $\neg \gamma \rightarrow \neg \beta$ + AMR-LDA extended option + +Option D: $\alpha \rightarrow \gamma$ + AMR-LDA extended option + +$\alpha =$ you have keyboarding skills. +$\beta =$ you are able to use a computer. +$\gamma =$ you are able to write your essays using a word processing program.
Context: $\alpha \rightarrow \gamma$ + AMR-LDA extended context: $\beta \rightarrow \alpha ,\gamma \rightarrow \beta$ +Context: $\alpha \rightarrow \neg \gamma$ + AMR-LDA extended context: $\beta \rightarrow \alpha ,\gamma \rightarrow \beta$ +Context: $\beta \rightarrow \gamma$ + AMR-LDA extended context: $\beta \rightarrow \alpha ,\gamma \rightarrow \beta$ +Context: $\neg \gamma \rightarrow \neg \alpha$ + AMR-LDA extended context: $\beta \rightarrow \alpha ,\gamma \rightarrow \beta$ + +![](images/752b32cf60cd71420e03647f4911a61d79d19c0bbb9cd6eb0182543b0b011481.jpg) +Figure 2: Architecture of AMR-LDA (1) and its applications to improve the reasoning performance of discriminative LLMs with contrastive learning (2a) and autoregressive generative LLMs by augmenting input prompts without fine-tuning (2b). + +![](images/11a185e4930109561e8adcb5e209ab31b0864bac1e63236c95574b51d6236831.jpg) + +![](images/38b465b42dfc11c678a998d30b8fb783a7531f8ac5a40cbd8c69f46d894d0260.jpg) + +![](images/37b6dfdcfa2d28a1e113ab357d049024322aebad70d6fd84c4dcac6390b7e591.jpg) + +![](images/2f4431462fff04e2e64b575a82ac8dad38584bcc6a65d49efae228533f226787.jpg) + +We consider two scenarios. If the sentence is detected by the AMR parser as a conditional statement, then we replace the conditional connective with a disjunction connective (marked with the yellow background in Table 1). In the second scenario, if the sentence contains a disjunction connective, we replace the disjunction connective with a conditional connective and remove the negative polarity from the AMR graph if it exists. Otherwise, a negative polarity argument will be added. An example can be found in Appendix Figure 6. + +# Definition 3: Commutative Law + +$$ +(\mathcal {A} \wedge \mathcal {B}) \Leftrightarrow (\mathcal {B} \wedge \mathcal {A}) +$$ + +Alan is kind and Bob is clever. $\Leftrightarrow$ Bob is clever and Alan is kind. + +If the AMR graph has a conjunction connective, we swap the order of the first half of the graph with the second half.
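As a toy sketch of this swap, over a simplified triple encoding of an AMR graph (plain Python, illustrative only; the paper's actual pipeline operates on real AMR graphs via text-to-AMR and AMR-to-text models):

```python
# Toy AMR graph as (source, role, target) triples -- a simplified stand-in
# for a real AMR structure, used only to illustrate the commutative rewrite.
def apply_commutative_law(triples):
    """Swap the :op1 and :op2 arguments of a conjunction: (A and B) <=> (B and A)."""
    swapped = []
    for src, role, tgt in triples:
        if role == ":op1":
            swapped.append((src, ":op2", tgt))
        elif role == ":op2":
            swapped.append((src, ":op1", tgt))
        else:
            swapped.append((src, role, tgt))
    return swapped

# "The bald eagle is clever and the wolf is fierce."
graph = [
    ("a", ":instance", "and"),
    ("a", ":op1", "c"),  # c: "the bald eagle is clever" subgraph
    ("a", ":op2", "f"),  # f: "the wolf is fierce" subgraph
]
swapped = apply_commutative_law(graph)
```

Applying the swap twice recovers the original graph, reflecting that the law is its own inverse.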
An example can be found in Table 1 and in Appendix Figure 7. The sub-sentences “The wolf is fierce” and “the bald eagle is clever”, marked in blue, have been swapped. + +# Definition 4: Double Negation Law + +$$ +\mathcal {A} \Leftrightarrow \neg \neg \mathcal {A} +$$ + +It is raining. $\Leftrightarrow$ It is not the case that it is not raining. + +We apply the double negation law only to those sentences and their AMR graphs that do not contain the “:polarity -” argument, which represents negative polarity. There are several words that can be represented as “:polarity -”, such as “not”, “no”, “never”, “none”, and “nothing”. A representative example can be seen in Table 1 and in Appendix Figure 8. The original sentence is “The bald eagle is strong”. The logically equivalent sentence we construct using the double negation law is “The bald eagle is not weak”, while the logically nonequivalent sentence is “The bald eagle is weak”. Note that the generated sentences do not contain the word “not” twice. We avoid generating sentences with “not” appearing multiple times consecutively because they are uncommon and unnatural. The process of applying the double negation law is as follows: convert the sentence into an AMR graph;
| Original sentence | Positive sample | Negative sample |
| --- | --- | --- |
| If Alan is kind, then Bob is clever. | Alan isn't kind if Bob isn't clever. | Alan isn't kind if Bob is clever. |
| If Alan is kind, then Bob is clever. | Alan is not kind or Bob is clever. | Alan is kind or Bob is clever. |
| The bald eagle is strong. | The bald eagle is not weak. | The bald eagle is weak. |
| The bald eagle is clever and the wolf is fierce. | The wolf is fierce and the bald eagle is clever. | The wolf is not fierce and the bald eagle is not clever. |
+ +Table 1: Examples of generated logically equivalent (positive) and nonequivalent (negative) sentences. The blue background highlights the parts of the original sentence that have been moved from their original positions. The yellow background highlights the change in polarity from the original sentence. + +augment the AMR graph by adding a negative polarity argument “:polarity -”; convert the modified AMR graph back into a natural language sentence; lastly, replace the adjective with its antonym by using WordNet (Miller, 1992). To create logically nonequivalent sentences, we randomly delete or add a negative polarity argument “:polarity -” in the AMR graph. Additionally, we randomly sample a sentence from the corpus and consider it as logically nonequivalent to the original sentence. + +AMR-To-Text Generation Lastly, an AMR-to-text model is used to convert the modified AMR graph back into natural language, to generate a sentence that is logically equivalent or nonequivalent to the original sentence. + +# 3.3 Logical-Equivalence-Identification Contrastive Learning + +Inspired by SimCSE (Gao et al., 2021) and SimCLR (Chen et al., 2020), we propose to improve discriminative language models' logical reasoning ability by performing contrastive learning to identify logically equivalent and nonequivalent sentence pairs that are generated using AMR-LDA (Figure 2, 2a). + +Contrastive Learning The goal of contrastive learning is to minimise the distance between the hidden representations of two similar inputs while maximising the distance between the representations of two dissimilar inputs. Our goal is to optimise the model to map logically equivalent sentences to hidden representations that are close to each other: + +$$ +h(s, s^{+}) \gg h(s, s^{-}). \tag {1} +$$ + +$h$ is a score function that measures the similarity between two representations.
$s$ is an original sentence, $s^+$ is a positive sample logically equivalent to the original sentence $s$ , and $s^-$ is a negative sample logically nonequivalent to the original sentence $s$ . + +The representations of $s$ and $s^+$ are expected to be much closer (i.e., to score higher under $h$ ) than those of $s$ and $s^-$ . The training loss can be written as: + +$$ +\mathcal {L} = - \sum \log \frac {\exp (h (+))}{\exp (h (+)) + \exp (h (-))}, \tag {2} +$$ + +where $h(+)$ and $h(-)$ are short for $h(s, s^+)$ and $h(s, s^-)$ . + +After the contrastive learning step, we further fine-tune the model on downstream tasks, including logical reasoning reading comprehension, natural language inference, and textual entailment. + +# 3.4 Prompt Augmentation + +To improve the performance of generative LLMs (e.g., GPT-3.5 or GPT-4) on logical reasoning tasks, we propose augmenting the input prompt using AMR-LDA before feeding it to the model (Figure 2, 2b). In the example from Figure 2, 2b, the context and options are marked in green and grey, respectively. The original Option B is "If you are able to write your essays using a word processing program, then you have keyboarding skills," which cannot be explicitly inferred from the context without using the logical equivalence law (contraposition law). AMR-LDA is able to augment the original option and generate "If you have no keyboarding skills, then you are not able to write your essays using a word processing program," which is logically equivalent to the original Option B, now also marked in green. This augmented Option B can be inferred from the given context. Furthermore, AMR-LDA is also applied to augmenting sentences within the context. The augmented, logically equivalent sentences from the context are "If you are able to use a computer, then you have keyboarding skills.
If you are able to write your essays using a word processing program, then you are able to use a computer," which are marked in grey and support the validity of the original Option B. Finally, the augmented option and context are combined and fed as a prompt to GPT-3.5/4. Based on the extended information, we can find two solution paths marked with grey and green backgrounds under Module 2b in Figure 2. Solution Path 1 uses the sentence from the extended context marked with a grey background to support that Option B is correct. Solution Path 2 uses the sentence from the original context marked with a green background to support that the extended Option B is correct. Consequently, our method provides more solution paths for large language models to more effectively solve complex logical reasoning questions.

# 4 Experiments

# 4.1 Datasets

ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2021) are two challenging logical reasoning datasets. ReClor is collected from the Graduate Management Admission Test (GMAT) and Law School Admission Test (LSAT). LogiQA is collected from the National Civil Service Examination (Liu et al., 2021). Additionally, we performed evaluation on five datasets for natural language inference and textual entailment tasks: MNLI (Williams et al., 2018), RTE (Wang et al., 2018), MRPC (Dolan and Brockett, 2005), QNLI (Rajpurkar et al., 2016), and QQP (Wang et al., 2018). MNLI, RTE, and MRPC assess the relationship between two sentences, while QNLI focuses on the relationship between a question and a sentence, and QQP evaluates the relationship between two questions.

Synthetic Data for Contrastive Learning In this paper, we performed contrastive learning for discriminative large language models on sentences augmented from a synthetic dataset. This dataset contains 14,962 sentences with different combinations of 23 entities, 2 relations and 40 attributes. Synthetic data was used to generate more controllable logical sentences.
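The synthetic corpus is built by enumerating subject-relation-attribute combinations. A minimal sketch of this kind of controlled sentence generation, using small illustrative slices of the lists given in Appendix E (the paper's actual procedure is Algorithm 1 in the appendix, not reproduced here):

```python
import random

# Illustrative subsets of the subject/relation/attribute lists from Appendix E.
SUBJECTS = ["the bald eagle", "the tiger", "the wolf", "Anne", "Bob"]
RELATIONS = ["is", "is not"]
ATTRIBUTES = ["kind", "quiet", "smart", "strong", "beautiful"]

def make_sentence(rng):
    # Sample one subject-relation-attribute combination,
    # e.g. "The wolf is not kind."
    subj = rng.choice(SUBJECTS)
    rel = rng.choice(RELATIONS)
    attr = rng.choice(ATTRIBUTES)
    text = f"{subj} {rel} {attr}."
    return text[0].upper() + text[1:]

rng = random.Random(0)  # fixed seed for reproducibility
sentences = [make_sentence(rng) for _ in range(5)]
```

Because every sentence instantiates the same fixed template, logically equivalent and nonequivalent counterparts can then be derived deterministically by applying the logical equivalence laws.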
More details about the synthetic dataset can be found in Appendix Section E.

# 4.2 Settings

All experiments were conducted on 8 NVIDIA A100 GPUs, each with 80G of VRAM. Primary experiments on the ReClor and LogiQA datasets used three different random seeds; the average values are reported in Table 2. The parse_xfmbart_large and T5Wtense models from AMRLib$^3$ were used for text-to-AMR and AMR-to-text conversions when generating logically augmented sentence pairs. The reason for selecting those two models is explained in Appendix Section C. In our experiments, RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2021) were used as the discriminative large language models. We also compared our method against MERIt (Jiao et al., 2022) and IDoL (Xu et al., 2023), the leading models on the ReClor leaderboard. As for generative large language models, we applied GPT-3.5 (gpt-3.5-turbo) (OpenAI, 2023a) and GPT-4 (OpenAI, 2023b). More details about the experiments, case studies and confidence intervals can be found in Appendix Sections B, D, D.1, and F.

# 4.3 Logical-Equivalence-Identification Contrastive Learning for Discriminative LLM

This section evaluates the effectiveness of contrastive learning on synthetic data augmented using AMR-LDA in order to improve the performance of discriminative language models on downstream tasks that require logical reasoning. We compare AMR-LDA against two baseline augmentation methods: AMR-DA (Shou et al., 2022) and LReasoner-LDA (Wang et al., 2022). It is important to note that we do not use the whole system or pipeline from LReasoner; we only use the data augmentation method from LReasoner in our experiment. For each augmentation method, 14,962 pairs of logically equivalent and logically nonequivalent sentences are constructed with a positive to negative sample ratio of 1:1. Twenty percent of the augmented data are used as the validation set during contrastive learning.
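For intuition, the objective in Eq. (2) is a binary softmax over the scores of the positive and the negative pair. A minimal numeric sketch with cosine similarity standing in for the score function $h$ (illustrative only; in the paper $h$ operates on the model's hidden representations):

```python
import math

def cosine(u, v):
    # Score function h: cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negative):
    # L = -log( exp(h(s, s+)) / (exp(h(s, s+)) + exp(h(s, s-))) )
    h_pos = cosine(anchor, positive)
    h_neg = cosine(anchor, negative)
    return -math.log(math.exp(h_pos) / (math.exp(h_pos) + math.exp(h_neg)))

# The loss shrinks when the anchor sits closer to its positive than to its negative.
well_separated = contrastive_loss([1.0, 0.0], [0.9, 0.1], [0.0, 1.0])
confused = contrastive_loss([1.0, 0.0], [0.0, 1.0], [0.9, 0.1])
```

The loss is small when the anchor is closer to its logically equivalent sentence than to the nonequivalent one, and large when the two roles are swapped.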
All the models are further fine-tuned and compared on downstream tasks requiring logical reasoning and natural language inference. The results, shown in Table 2, suggest that the models trained using AMR-LDA perform better in most cases compared with the other augmentation methods.

# 4.4 Prompt Augmentation for Generative LLM

We adopt GPT-3.5 (gpt-3.5-turbo) (OpenAI, 2023a) and GPT-4 (OpenAI, 2023b) as the generative large language models for evaluating the effectiveness of prompt augmentation using AMR-LDA. The experiments are performed on the ReClor and LogiQA
| Models/Datasets | ReClor Dev | ReClor Test | ReClor Test-E | ReClor Test-H | LogiQA Dev | LogiQA Test | MNLI | MRPC | RTE | QNLI | QQP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa | 59.73 | 53.20 | 72.57 | 37.97 | 35.43 | 34.50 | 88.95 | 90.44 | 83.39 | 94.73 | 90.89 |
| RoBERTa LReasoner-LDA | 59.46 | 53.66 | 72.19 | 39.10 | 34.81 | 34.81 | 89.41 | 89.46 | 86.28 | 94.25 | 90.01 |
| RoBERTa AMR-DA | 58.66 | 53.93 | 66.81 | 43.80 | 36.45 | 37.22 | 89.74 | 90.44 | 86.28 | 94.42 | 92.06 |
| RoBERTa AMR-LDA | 65.26 | 56.86 | 77.34 | 40.77 | 40.29 | 38.14 | 89.78 | 90.93 | 86.64 | 94.49 | 93.14 |
| DeBERTaV2 | 73.93 | 70.46 | 80.82 | 62.31 | 39.72 | 39.62 | 89.45 | 89.71 | 84.48 | 95.00 | 92.54 |
| DeBERTaV2 LReasoner-LDA | 75.73 | 70.70 | 84.08 | 60.17 | 30.87 | 28.51 | 89.23 | 89.95 | 87.00 | 95.15 | 92.50 |
| DeBERTaV2 AMR-DA | 79.06 | 75.90 | 84.62 | 69.04 | 29.95 | 30.10 | 89.92 | 89.71 | 83.39 | 95.02 | 92.42 |
| DeBERTaV2 AMR-LDA | 79.40 | 77.63 | 85.75 | 71.24 | 42.34 | 39.88 | 89.67 | 90.20 | 88.09 | 95.24 | 92.47 |
datasets. The experimental results are shown in Table 3. The models with prompt augmentation achieved better performance in all cases except for the "hard" test set for ReClor. We also compare our method against Chain-of-Thought Prompting (CoT) (Wei et al., 2022) and AMR-DA (Shou et al., 2022) for prompt augmentation. We apply AMR-DA to paraphrase each option and each sentence in the context, and the rest is the same as the AMR-LDA prompt augmentation. We found that CoT and augmentation with AMR-DA caused a decline in performance for both GPT-3.5 and GPT-4 in most cases, except for GPT-4 on LogiQA. The performance drop associated with using CoT Prompting has been reported by Xu et al. (2023). However, they only sampled 100 cases from the validation set, whereas we use the entire validation set and test set. AMR-DA conducts data augmentation by converting the text into an AMR graph and then randomly selecting one of the following operations: removing, swapping, substituting, or inserting an argument into the graph. The modified AMR graph is then converted back into a new sentence. This modification of the AMR may disrupt the original sentence's structure and introduce noise into the prompt, potentially worsening performance.

GPT-3.5 with AMR-LDA performs better than GPT-3.5 on the general test set, which includes both test-E and test-H. The ReClor test set is hidden, so we do not have access to the detailed results for test-E and test-H. Therefore, we cannot provide a clear explanation as to why AMR-LDA seems to decrease the test-H metric for GPT-3.5. However, a detailed examination of the results reveals that GPT-3.5 achieves only a 0.5375 test accuracy on test-H, whereas GPT-4 attains a 0.8857 test accuracy on the same test. Furthermore, GPT-4 with AMR-LDA performs better on all the ReClor and LogiQA test sets.
This suggests that GPT-3.5 might not be as effective as GPT-4 in comprehending complex logical reasoning, and that GPT-3.5 may understand augmented prompts poorly.

Table 2: Comparison between our proposed AMR-LDA and baseline models. We use RoBERTa-Large and DeBERTaV2-XXLarge as the pre-trained models. Our fine-tuned LLMs perform equally well or better than baseline methods.
| Models/Datasets | ReClor Dev | ReClor Test | ReClor Test-E | ReClor Test-H | LogiQA Dev | LogiQA Test |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 57.02 | 56.20 | 59.31 | 53.75 | 37.63 | 37.32 |
| + CoT | 34.80 | 25.80 | 27.50 | 24.46 | 23.96 | 24.57 |
| + AMR-DA | 33.20 | 32.90 | 34.31 | 31.78 | 40.55 | 31.49 |
| + AMR-LDA | 58.62 | 56.69 | 60.90 | 53.39 | 40.55 | 39.47 |
| GPT-4 | 87.35 | 89.60 | 90.90 | 88.57 | 43.24 | 53.88 |
| + CoT | 37.00 | 24.80 | 26.13 | 23.75 | 23.50 | 27.03 |
| + AMR-DA | 85.00 | 85.60 | 86.36 | 85.00 | 51.30 | 56.06 |
| + AMR-LDA | 87.73 | 90.20 | 91.59 | 89.11 | 51.92 | 58.06 |
Table 3: Comparison of Chain-of-Thought Prompting (CoT), AMR-DA, and AMR-LDA on GPT-3.5 and GPT-4, and between GPT-3.5 and GPT-4 alone, for evaluation on the ReClor and LogiQA test sets.
| Models/Datasets | RoBERTa + AMR-LDA | RoBERTa + LReasoner-LDA |
| --- | --- | --- |
| Depth=1 | 100.00 | 100.00 |
| Depth=1 (with altered rules) | 100.00 | 99.87 |
| Depth=2 | 100.00 | 100.00 |
| Depth=2 (with altered rules) | 99.73 | 74.00 |
Table 4: Comparison between AMR-LDA and LReasoner-LDA with RoBERTa-Large on PARARULE-Plus and PARARULE-Plus (with altered rules). Depth=1 means that only one rule was used to infer the answer. Depth=1 (with altered rules) means one of the rules has been altered using a logical equivalence law.

We assessed the robustness of AMR-LDA and LReasoner-LDA models on the PARARULE-Plus dataset (Bao et al., 2022) by modifying the test set with the contraposition law. Examples from this dataset can be found in Appendix Figures 9 and 10. AMR-LDA showed enhanced robustness on these altered tests compared to LReasoner-LDA.
| Models/Datasets | Con | Con-dou | Con-dou-imp | Con-dou-imp-com |
| --- | --- | --- | --- | --- |
| **RoBERTa-Large as backbone model** | | | | |
| ReClor | 60.40 | 60.80 | 61.80 | 59.80 |
| LogiQA | 37.78 | 33.17 | 33.94 | 38.70 |
| MNLI | 89.55 | 90.15 | 89.68 | 89.78 |
| MRPC | 90.69 | 89.22 | 90.44 | 90.93 |
| RTE | 81.23 | 85.20 | 84.84 | 86.64 |
| QNLI | 94.16 | 94.05 | 94.51 | 94.49 |
| QQP | 92.12 | 89.88 | 92.06 | 93.14 |
| **DeBERTaV2-XXLarge as backbone model** | | | | |
| ReClor | 81.80 | 72.20 | 79.40 | 78.80 |
| LogiQA | 32.25 | 45.46 | 38.24 | 40.55 |
| **DeBERTa-Large as backbone model** | | | | |
| MNLI | 90.80 | 90.59 | 90.68 | 89.67 |
| MRPC | 90.20 | 88.48 | 89.95 | 90.20 |
| RTE | 84.84 | 87.36 | 85.56 | 88.09 |
| QNLI | 95.28 | 95.04 | 94.97 | 95.24 |
| QQP | 92.33 | 92.40 | 92.29 | 92.47 |
Table 5: An experiment to assess the influence of different logical equivalence laws on downstream logical reasoning and natural language inference tasks. "Con", "dou", "imp" and "com" are abbreviations for the contraposition law, double negation law, implication law and commutative law. "Con-dou" denotes data constructed using both the contraposition law and the double negation law. Other terms are derived in a similar manner.
| Models/Datasets | ReClor Dev | ReClor Test | ReClor Test-E | ReClor Test-H | LogiQA Dev | LogiQA Test |
| --- | --- | --- | --- | --- | --- | --- |
| DeBERTaV2-XXLarge | 73.93 | 70.46 | 80.82 | 62.31 | 39.72 | 39.62 |
| + AMR-LDA-1:1 | 78.80 | 76.10 | 84.77 | 69.28 | 40.55 | 41.47 |
| + AMR-LDA-1:2 | 80.20 | 76.40 | 84.77 | 69.82 | 47.00 | 43.93 |
| + AMR-LDA-1:3 | 81.20 | 75.70 | 84.09 | 69.10 | 42.70 | 41.01 |
| DeBERTaV2-XXLarge + MERIt-1:3 | 80.20 | 75.80 | 85.00 | 68.57 | 37.32 | 42.39 |
| + AMR-LDA-Con-1:3 | 82.60 | 76.60 | 86.13 | 69.10 | 45.00 | 43.01 |
| + AMR-LDA-Merged-1:3 | 81.80 | 76.90 | 87.50 | 68.57 | 44.54 | 45.62 |
| DeBERTaV2-XXLarge + IDoL | 77.60 | 74.50 | 82.95 | 67.85 | 39.78 | 40.24 |
| + AMR-LDA-Con-1:3 | 79.20 | 77.00 | 85.68 | 70.17 | 47.61 | 44.54 |
| + AMR-LDA-Merged-1:3 | 79.40 | 75.60 | 86.36 | 67.14 | 41.93 | 41.32 |
Table 6: An experiment to assess how positive:negative sample ratios affect downstream tasks. AMR-LDA 1:1 means the ratio of positive and negative samples is 1:1.

# 4.5 Ablation Studies

We perform experiments using a subset of the logical equivalence laws. We present the results in Table 5. This ablation study serves as the basis for our selection of four logical equivalence rules in the main experiment shown in Table 2. Since the test sets are private and used to rank models on the leaderboard, we evaluated directly using the validation sets instead of the test sets. To make a fair comparison, we ensure the sizes of the training sets are the same for con, con-dou, con-dou-imp and con-dou-imp-com. For this ablation study, we constructed training sets of size 1,000.

We conduct another ablation study where we modify the positive and negative sample ratios. We select DeBERTaV2-XXLarge as the backbone model. We compare the generated data against our AMR-LDA and MERIt. Table 6 shows that a higher proportion of negative samples may help increase the performance on logical reasoning tasks. Furthermore, we chose DeBERTaV2-XXLarge + MERIt-1:3 (Jiao et al., 2022) and DeBERTaV2-XXLarge + IDoL (Xu et al., 2023) as the backbone models. We performed logical equivalence identification contrastive learning, using data constructed solely from the AMR-LDA contraposition law and subsequently merging all four logical equivalence laws. Subsequent fine-tuning on downstream tasks demonstrated that incorporating more logical equivalence laws can enhance the performance of language models on logical reasoning tasks.

# 5 Conclusion

The sparsity of web data related to logical reasoning constrains the advancement of large language models in their performance on logical reasoning tasks. Existing methods for constructing logically equivalent sentences have been restricted to templates and specific datasets.
Our AMR-LDA considers more logical equivalence laws than existing methods do, and it does not rely on any ad-hoc templates. We applied AMR-LDA to fine-tuning discriminative LLMs and prompt augmentation of generative LLMs (GPT-3.5 and GPT-4), yielding better results than baseline methods on logical reasoning tasks.

# 6 Human Evaluation

Human evaluation was conducted to evaluate the correctness and fluency of the logically manipulated sentences generated using AMR-LDA and LReasoner-LDA. We constructed a survey with 20 questions, each question consisting of two randomly selected sentences: one from those generated by our AMR-LDA and the other by LReasoner-LDA. Forty-five participants completed the survey anonymously. We asked them to evaluate the sentences in two aspects: 1) which sentence is logically equivalent to the original sentence, or whether both of them are logically equivalent to the original sentence, and 2) which sentence is more fluent. $63.92\%$ and $76.44\%$ of participants, respectively, preferred AMR-LDA's logically equivalent and fluent sentences over those generated by LReasoner-LDA.

# 7 Limitations

One limitation of our approach is its reliance on AMR for logic-driven data augmentation, which, while innovative, may not fully capture the intricacies of natural language variation and complex logical constructs encountered in diverse texts. This constraint reflects the broader challenge in NLP of developing models that can understand and reason with the full spectrum of human language, including idiomatic expressions, nuanced context, and varied logical frameworks. Our work makes significant strides in this direction, yet it also highlights the need for continued research to enhance the robustness and adaptability of NLP systems to more closely mirror human-level comprehension and reasoning capabilities.

# 8 Ethics Statement

All the data used in this paper are either synthetically generated or open-source datasets.
All the code used to run the experiments is written using open-source libraries or adapted from published code from other papers. We will also release our code and any synthetically generated data to ensure that the work can be reproduced. The human evaluation was approved by the Ethics Committee of the main authors' employer. + +# References + +Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, and Jiamou Liu. 2022. Multi-step deductive reasoning over natural language: An empirical study on out-of-distribution generalisation. In Proceedings of the 16th International Workshop on Neural-Symbolic Learning and Reasoning as part of the 2nd International Joint Conference on Learning & Reasoning (IJCLR 2022), pages 202-217, Cumberland Lodge, Windsor Great Park, United Kingdom. +Hugo Bronkhorst, Gerrit Roorda, Cor Suhre, and Martin Goedhart. 2020. Logical reasoning in formal and everyday reasoning tasks. International Journal of Science and Mathematics Education, 18:1673-1694. +Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR. +François Chollet. 2019. On the measure of intelligence. CoRR, abs/1911.01547. +Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2021. Transformers as soft reasoners over language. In + +Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20. +William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics. +Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. +Fangkai Jiao, Yangyang Guo, Xuemeng Song, and Liqiang Nie. 2022. MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3496–3509, Dublin, Ireland. Association for Computational Linguistics. +Paul Kingsbury and Martha Palmer. 2002. From TreeBank to PropBank. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02), Las Palmas, Canary Islands - Spain. European Language Resources Association (ELRA). +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2021. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv preprint, abs/1907.11692. +Elliott Mendelson. 2009. Introduction to mathematical logic. CRC press. +George A. Miller. 1992. WordNet: A lexical database for English. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992. 
OpenAI. 2023a. Chatgpt: Optimizing language models for dialogue.

OpenAI. 2023b. Gpt-4 technical report.

Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, and Jian-Guang Lou. 2022. Logigan: Learning logical reasoning via adversarial pre-training. Advances in Neural Information Processing Systems, 35:16290-16304.

Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.

Norbert M Seel. 2011. Encyclopedia of the Sciences of Learning. Springer Science & Business Media.

Ziyi Shou, Yuxin Jiang, and Fangzhen Lin. 2022. AMR-DA: Data augmentation by Abstract Meaning Representation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3082-3098, Dublin, Ireland. Association for Computational Linguistics.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022. Logic-driven context extension and data augmentation for logical reasoning of text.
In Findings of the Association for Computational Linguistics: ACL 2022, pages 1619-1629, Dublin, Ireland. Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.

Zihang Xu, Ziqing Yang, Yiming Cui, and Shijin Wang. 2023. IDOL: Indicator-oriented logic pre-training for logical reasoning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8099-8111, Toronto, Canada. Association for Computational Linguistics.

Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

# A Appendix

# B Experiment Setup

We follow the training script from Huggingface and the default hyperparameters$^4$ to conduct the training. Algorithms 2 and 3 illustrate the negative sample construction and the training process, respectively. For the contrastive learning, we fine-tune RoBERTa-Large, DeBERTa-Large, and DeBERTaV2-XXLarge using the logical equivalence sentence pairs constructed by our AMR-LDA, LReasoner's logic-driven data augmentation method (LReasoner-LDA), and the AMR-DA data augmentation method.
We use DeBERTaV2-XXLarge for the ReClor and LogiQA tasks because DeBERTaV2 supports multiple-choice question tasks with a DeBERTaV2ForMultipleChoice head. The hyperparameters for stages 1 and 2 of training can be found in Tables 21 and 22.

# C Conversion Between Texts and AMR

In order to decide which models to use to perform text and AMR conversions, we experiment with different combinations of text-to-AMR and AMR-to-text models. In the experiment, a sentence is converted to AMR, and then converted back to text without any modification to the AMR. We pick the combination that recovers the original sentence best, as measured by BLEU score. The results are reported in Table 7. We find that using parse_xfmbart_large as the AMR parser and T5Wtense as the AMR generator produces the highest BLEU score. Therefore, we select them as the text-to-AMR parser and AMR-to-text generator in all the remaining experiments. Parse_xfmbart_large is an AMR parser that uses BART-Large as the backbone model (Lewis et al., 2020). T5Wtense is an AMR generator that uses T5 as the backbone model (Raffel et al., 2020).
| Text-To-AMR Parser | AMR-To-Text Generator | BLEU |
| --- | --- | --- |
| Spring | Spring | 25.08 |
| Spring | T5wtense | 30.86 |
| Spring | T5 | 24.76 |
| T5 | T5wtense | 29.33 |
| T5 | T5 | 30.82 |
| parse_xfmbart_large | T5wtense | 38.45 |
| parse_xfmbart_large | T5 | 30.10 |
# D Case Studies

We present several case studies comparing our AMR-LDA method with LReasoner-LDA in terms of constructing logically equivalent sentences. These constructions leverage four logical equivalence laws. LReasoner-LDA, however, is not designed for the implication law, double negation law, or the commutative law, leading to its inability to handle scenarios that require these laws. Additionally, LReasoner-LDA struggles to construct logically equivalent sentences using the contraposition law when encountering new sentences not found in the ReClor and LogiQA datasets.

Table 7: Comparison of different combinations of text-to-AMR and AMR-to-text models in recovering original texts after the conversions without any augmentation to the AMR. We adopt the combination with the highest BLEU score in the rest of the experiments.
| Contraposition law | |
| --- | --- |
| Original Sentence | If the bald eagle is small, then the mouse is not small. |
| AMR-LDA | The bald eagle isn’t small, unless the mouse is small. |
| LReasoner-LDA | If it is not small, then it will be not the bald eagle. |
Table 8: Logically equivalent sentences constructed by the contraposition law.
| Contraposition law | |
| --- | --- |
| Original Sentence | If the bald eagle is kind, then Dave is not short. |
| AMR-LDA | If Dave is short, the bald eagle is not kind. |
| LReasoner-LDA | If it is not kind, then it will be not the bald eagle. |
Table 9: Logically equivalent sentences constructed by the contraposition law.
| Implication law | |
| --- | --- |
| Original Sentence | The bear is not sleepy or Bob is not cute. |
| AMR-LDA | If the bear is sleepy, then Bob is not cute. |
| LReasoner-LDA | - |
Table 10: Logically equivalent sentences constructed by the implication law.
| Double negation law | |
| --- | --- |
| Original Sentence | The bald eagle is beautiful. |
| AMR-LDA | The bald eagle isn’t ugly. |
| LReasoner-LDA | - |
Table 11: Logically equivalent sentences constructed by the double negation law.
| Implication law | |
| --- | --- |
| Original Sentence | If the lion is not funny, then the tiger is beautiful. |
| AMR-LDA | The lion is funny or the tiger is beautiful. |
| LReasoner-LDA | - |
Table 12: Logically equivalent sentences constructed by the implication law.
| Double negation law | |
| --- | --- |
| Original Sentence | The bald eagle is strong. |
| AMR-LDA | The bald eagle is not weak. |
| LReasoner-LDA | - |
Table 13: Logically equivalent sentences constructed by the double negation law.
| Commutative law | |
| --- | --- |
| Original Sentence | The bald eagle is kind and the wolf is not dull. |
| AMR-LDA | The wolf is not dull and the bald eagle is kind. |
| LReasoner-LDA | - |
Table 14: Logically equivalent sentences constructed by the commutative law.
| Commutative law | |
| --- | --- |
| Original Sentence | The lion is thin and the dinosaur is not angry. |
| AMR-LDA | The dinosaur was not angry and the lion was thin. |
| LReasoner-LDA | - |
Table 15: Logically equivalent sentences constructed by the commutative law.

# D.1 Real World/Long Sentence Case Studies

The appendix of our paper describes Algorithm 1, which uses four lists from Tables 16, 17, 18 and 19 to create synthetic sentences. We've also tested our method on real-world datasets like ReClor and LogiQA that require logical reasoning. Our method, AMR-LDA prompt augmentation, can work with just one list of various sentences. It automatically detects if a sentence can be transformed into a logically equivalent one using a specific logical equivalence law. An example of this application on a real-world sentence is shown in Figure 3. We process sentences from context and options, generating logically equivalent sentences where possible.

Our AMR-LDA can also be applied to long sentences. Our method can generate logically equivalent sentences for long sentences with a clear sentence structure using logical equivalence rules (the commutative law), as shown in Figures 4 and 5. The second example shows that our AMR-LDA can understand the effect of the relative clause on the yoga stretching, demonstrating the generalisation advantage of AMR as a semantic representation over LReasoner-LDA, which relies on a constituency parser and templates and therefore fails on this out-of-template case.

# E Synthetic Dataset Construction

Here are the entities, relationships, and attributes we used to construct our synthetic dataset. We used the synthetic dataset to conduct the AMR-based logic-driven data augmentation and logical-equivalence-identification contrastive learning. For the subjects, we used "the bald eagle", "the tiger", "the bear", "the lion", "the wolf", "the crocodile", "the dinosaur", "the snake", "the leopard", "the cat", "the dog", "the mouse", "the rabbit", "the squirrel", "Anne", "Alan", "Bob", "Charlie", "Dave", "Erin", "Harry", "Gary", and "Fiona". For the relationships, we used "is" and "is not".
For the attributes, we used "kind", "quiet", "round", "nice", "smart", "clever", "dull", "rough", "lazy", "slow", "sleepy", "boring", "tired", "reckless", "furry", "small", "cute", "lovely", "beautiful", "funny", "big", "strong", "awful", "fierce", "heavy", "horrible", "powerful", "angry", "tall", "huge", "short", "thin", "little", "tiny", "wealthy", "poor", "dull", "rough", "bad", and "sad".

Here are the entities, relationships, and attributes we used to fine-tune T5-Large. After T5-Large had been fine-tuned, we used the fine-tuned model to generate logically equivalent sentences as labels for the above synthetic sentences, and then conducted the logical-equivalence-identification contrastive learning and downstream tasks. For the subjects, in addition to the above subject name entities, we added "the duck", "the goat", "the goose", "the donkey", "the cow", "James", "Robert", "John", "Michael", "David", "William", "Richard", "Anthony", "Paul", "Andrew". For the attributes, we added "cautious", "careful", "brainy", "bored", "adorable", "aggressive", "anxious", "dizzy", "depressed", "disturbed", and "awful".

For the new entity names in the "change name" experiment in Table 20, we used "the sheep", "the kitten", "the Garfield", "the lion", "the goat", "the bull", "the cow", "the elephant", "the butterfly", "the fish", "Peter", "Bill", "Tom", "Amy", "Charles", "Tim", "Lucy", and "John".

Tables 16, 17, 18, and 19 list the logic patterns and their variations that we consider to replace the original logic patterns in the experiment in Table 20.

To validate whether a pre-trained language model can distinguish logically equivalent sentences, we designed a preliminary experiment, shown in Table 20. We use RoBERTa-Large to conduct the experiment.
We first generate a synthetic test set 1, which includes 1312 test samples with 23 entities, 2 relationships, 40 attributes, and 4 logical equivalence laws (the double negation, contraposition, implication, and commutative laws). The model's performance improves if we fine-tune the language model on the logical equivalence training set constructed by our AMR-LDA data augmentation method. The results also show that the model's performance does not drop if we change the entity names or logic patterns, which indicates that the fine-tuned discriminative large language model can handle scenarios requiring greater robustness more effectively.

Here are some synthetic sentence examples and more details for implication, conjunction, disjunction, and negation in the context of AMR-LDA mentioned in Algorithm 1.

Double Negation Law: The original sentence "The bald eagle is strong" is parsed into an AMR graph using a text-to-AMR parser. The parser confirms no negative meanings. To apply the double negation law, negative polarity is added, and an AMR-to-text generator then reforms the sentence. WordNet replaces the adjective with its antonym, creating a logically equivalent sentence.

# AMR-LDA Prompt Augmentation Case Study

GPT-4 Input: "context": "If you have no keyboarding skills at all, you will not be able to use a computer. And if you are not able to use a computer, you will not be able to write your essays using a word processing program.",

"question": "If the statements above are true, which one of the following must be true?", "answers":

A. "If you are not able to write your essays using a word processing program, you have no keyboarding skills. If you have the skill of a keyboard, you can write your essay using a word processing program. If you can use a computer, you have keyboarding skills. If you can write your essay with a word processing program, you can use a computer. Whether you have keyboard skills at all or can't use a computer.
Whether you can use a computer or you can't write your own essay with a word processing program." +B. "If you are able to write your essays using a word processing program, you have at least some keyboarding skills. If you don't have at least some keyboard skills, you can't write your essay with a word processing program. If you can use a computer, you have keyboarding skills. If you can write your essay with a word processing program, you can use a computer. Whether you have keyboard skills at all or can't use a computer. Whether you can use a computer or you can't write your own essay with a word processing program." +C. "If you are not able to write your essays using a word processing program, you are not able to use a computer. If you can use a computer, you can write your essay using word processing programs. If you can use a computer, you have keyboarding skills. If you can write your essay with a word processing program, you can use a computer. Whether you have keyboard skills at all or can't use a computer. Whether you can use a computer or you can't write your own essay with a word processing program." +D. "If you have some keyboarding skills, you will be able to write your essays using a word processing program. If you can't write your essay with a word processing program, you don't have some keyboard skills. If you can use a computer, you have keyboarding skills. If you can write your essay with a word processing program, you can use a computer. Whether you have keyboard skills at all or can't use a computer. Whether you can use a computer or you can't write your own essay with a word processing program." + +# GPT-4 output: B + +Figure 3: Example for using AMR-LDA to augment the prompt from ReClor dataset and their subsequent utilisation as input for GPT-4. Data segments that are marked in bold italics and appear in blue were generated using the contraposition law, while those in brown were generated using the implication law. 
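The AMR-level manipulations used by AMR-LDA are sound because the underlying propositional laws hold. Each of the four laws can be checked mechanically with a truth-table sweep; this is a small self-contained check, not part of the paper's pipeline (note that the paper's double negation law additionally uses antonym substitution, which a purely propositional check cannot cover):

```python
from itertools import product

def equivalent(f, g):
    # Two propositional formulas are logically equivalent iff they agree
    # on every assignment of truth values to their variables.
    return all(f(p, q) == g(p, q) for p, q in product((True, False), repeat=2))

def implies(a, b):
    # Material implication: a -> b.
    return (not a) or b

# Contraposition law: (p -> q) == (not q -> not p)
contraposition = equivalent(lambda p, q: implies(p, q),
                            lambda p, q: implies(not q, not p))
# Implication law: (p -> q) == (not p or q)
implication = equivalent(lambda p, q: implies(p, q),
                         lambda p, q: (not p) or q)
# Double negation law: p == not (not p)
double_negation = equivalent(lambda p, q: p, lambda p, q: not (not p))
# Commutative law: (p and q) == (q and p)
commutative = equivalent(lambda p, q: p and q, lambda p, q: q and p)
```

All four checks return True, while a non-equivalence such as p versus not p fails the same sweep.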
+

Commutative Law: The sentence "The bald eagle is clever and the wolf is fierce" is converted into an AMR graph. The root node "a/and" of this graph, a conjunction argument, allows for the application of the commutative law by swapping arguments. The AMR-to-text generator then produces a new sentence, maintaining logical equivalence. +

Implication Law: The sentence "If Alan is kind, then Bob is clever" is parsed into an AMR graph. The method checks for conditional and conclusion arguments. An "or" disjunction replaces the root node, and negative polarity is added to the first half of the sentence. The modified graph is then transformed back into a natural language sentence, ensuring logical equivalence. +

Contraposition Law: The same initial sentence "If Alan is kind, then Bob is clever" is analyzed. The contraposition law is applied by swapping the conditional and conclusion arguments in the AMR graph and adding negative modifiers to both. The adjusted graph is then converted back into a logically equivalent sentence. +

# F Confidence Intervals for the Main Experiments +

Here are the confidence intervals for the main experiments, shown in Table 23. We selected random seeds 0, 21, and 42 for the main experiments on the ReClor and LogiQA datasets and report $95\%$ confidence intervals. +

# Long Sentence Example 1: +

Original sentence: Sarah woke up early in the morning, and she started her day with a cup of coffee and some light yoga stretches.
+

Original sentence's AMR graph: (a / and :op1 (w / wake-up-02 :ARG1 (p / person :name (n / name :op1 "Sarah")) :time (e / early :op1 (d / date-entity :dayperiod (m / morning)))) :op2 (s / start-01 :ARG0 p :ARG1 (d2 / day :poss p) :ARG2 (a2 / and :op1 (c / coffee :quant (v / volume-quantity :quant 1 :unit (c2 / cup))) :op2 (s2 / stretch-01 :ARG0 p :mod (y / yoga) :ARG1-of (l / light-06) :quant (s3 / some)))) +

Modified AMR graph using AMR-LDA: (a / and :op1 (s / start-01 :ARG0 p :ARG1 (d2 / day :poss p) :ARG2 (a2 / and :op1 (c / coffee :quant (v / volume-quantity :quant 1 :unit (c2 / cup))) :op2 (s2 / stretch-01 :ARG0 p :mod (y / yoga) :ARG1-of (l / light-06) :quant (s3 / some)))) :op2 (w / wake-up-02 :ARG1 (p / person :name (n / name :op1 "Sarah")) :time (e / early :op1 (d / date-entity :dayperiod (m / morning)))) +

Generated logical equivalence sentence using AMR-LDA: Sarah started her day with a cup of coffee and some light yoga stretching and woke up early in the morning. +

Figure 4: An example of using our AMR-LDA to generate logically equivalent sentences for long sentences. Here, a logically equivalent sentence is generated using the commutative law, and the same color represents the same argument: the order of the former and latter arguments of the conjunction "and" has been swapped. +

# Long Sentence Example 2: +

Original sentence: Sarah woke up early in the morning, and she started her day with a cup of coffee and some light yoga stretches that will help lose weight.
+

Original sentence's AMR graph: (a / and :op1 (w / wake-up-02 :ARG1 (p / person :name (n / name :op1 "Sarah")) :time (e / early :op1 (d / date-entity :dayperiod (m / morning)))) :op2 (s / start-01 :ARG0 p :ARG1 (d2 / day :poss p) :ARG2 (a2 / and :op1 (c / coffee :quant (v / volume-quantity :quant 1 :unit (c2 / cup))) :op2 (s2 / stretch-01 :mod (y / yoga) :ARG0-of (h / help-01 :ARG1 (l / lose-01 :ARG1 (w2 / weight))) :ARG1-of (l2 / light-06) :quant (s3 / some)))) +

Modified AMR graph using AMR-LDA: (a / and :op1 (s / start-01 :ARG0 p :ARG1 (d2 / day :poss p) :ARG2 (a2 / and :op1 (c / coffee :quant (v / volume-quantity :quant 1 :unit (c2 / cup))) :op2 (s2 / stretch-01 :mod (y / yoga) :ARG0-of (h / help-01 :ARG1 (l / lose-01 :ARG1 (w2 / weight))) :ARG1-of (l2 / light-06) :quant (s3 / some)))) :op2 (w / wake-up-02 :ARG1 (p / person :name (n / name :op1 "Sarah")) :time (e / early :op1 (d / date-entity :dayperiod (m / morning)))) +

Generated logical equivalence sentence using AMR-LDA: Sarah started her day with a cup of coffee and some light yoga stretching to help lose weight, and woke up early in the morning. +

Figure 5: An example of using our AMR-LDA to generate logically equivalent sentences for long sentences. Here, a logically equivalent sentence is generated using the commutative law, and the same color represents the same argument. AMR-LDA correctly handles the effect of the relative clause on the yoga stretching. The order of the former and latter arguments of the conjunction "and" has been swapped.
| | Logic pattern for double negation law |
|---|---|
| Original sentence | subject + verb + adj |
| Positive sample | subject + verb + “not” + the antonym of the adj |
| Negative sample | subject + verb + “not” + adj |
+ +Table 16: We used the logic pattern for double negation law for constructing the test set for the experiment in Table 20. + +
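The double-negation pattern in Table 16 can be sketched in a few lines of Python. The function name and the tiny antonym dictionary standing in for WordNet are illustrative assumptions, not part of the paper's released code.

```python
# Minimal sketch of the Table 16 double-negation pattern (hypothetical names).
# A real implementation would query WordNet for antonyms; this small
# hand-coded dictionary stands in for it.
ANTONYMS = {"strong": "weak", "tall": "short", "kind": "unkind"}

def double_negation_samples(subject, verb, adj):
    """Return (original, positive, negative) following the Table 16 pattern."""
    original = f"{subject} {verb} {adj}"
    # Positive sample: negate the antonym, yielding a logically equivalent sentence.
    positive = f"{subject} {verb} not {ANTONYMS[adj]}"
    # Negative sample: negate the original adjective, which flips the meaning.
    negative = f"{subject} {verb} not {adj}"
    return original, positive, negative

print(double_negation_samples("The bald eagle", "is", "strong"))
# -> ('The bald eagle is strong', 'The bald eagle is not weak', 'The bald eagle is not strong')
```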
| | Original logic pattern for commutative law | Changed logic pattern |
|---|---|---|
| s1 | sub1 + verb1 + adj1 | sub1 + verb1 + “not” + adj1 |
| s2 | sub2 + verb2 + adj2 | sub2 + verb2 + “not” + adj2 |
| s3 | sub1 + verb1 + “not” + adj1 | sub2 + verb2 + “not” + adj2 |
| Original sentence | s1 + “and” + s2 | |
| Positive sample | s2 + “and” + s1 | |
| Negative sample | s1 + “and” + s3 | |
+ +Table 17: We used the logic pattern for commutative law for constructing the test set for the experiment in Table 20. + +
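The commutative-law pattern in Table 17 amounts to swapping the two conjuncts for the positive sample and substituting a negated conjunct for the negative one. A minimal sketch, with an illustrative function name not taken from the paper's code:

```python
# Minimal sketch of the Table 17 commutative-law pattern (hypothetical name).
def commutative_samples(s1, s2, s3):
    """Return (original, positive, negative) following the Table 17 pattern."""
    original = f"{s1} and {s2}"
    positive = f"{s2} and {s1}"  # conjuncts swapped: logically equivalent
    negative = f"{s1} and {s3}"  # s3 is s1 with its adjective negated: inequivalent
    return original, positive, negative

print(commutative_samples("the bald eagle is clever",
                          "the wolf is fierce",
                          "the bald eagle is not clever")[1])
# -> the wolf is fierce and the bald eagle is clever
```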
| | Logic pattern for contraposition law |
|---|---|
| Original sentence 1 | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + adj2 |
| Positive sentence 1 | “If” + sub2 + verb + “not” + adj2 + “, then” + sub1 + verb + “not” + adj1 |
| Negative sentence 1 | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + “not” + adj2 |
| Original sentence 2 | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + “not” + adj2 |
| Positive sentence 2 | “If” + sub2 + verb + adj2 + “, then” + sub1 + verb + “not” + adj1 |
| Negative sentence 2 | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + adj2 |
| Original sentence 3 | “If” + sub1 + verb + “not” + adj1 + “, then” + sub2 + verb + adj2 |
| Positive sentence 3 | “If” + sub2 + verb + “not” + adj2 + “, then” + sub1 + verb + adj1 |
| Negative sentence 3 | “If” + sub1 + verb + “not” + adj1 + “, then” + sub2 + verb + “not” + adj2 |
| Original sentence 4 | “If” + sub1 + verb + “not” + adj1 + “, then” + sub2 + verb + “not” + adj2 |
| Positive sentence 4 | “If” + sub2 + verb + “not” + adj2 + “, then” + sub1 + verb + “not” + adj1 |
| Negative sentence 4 | “If” + sub1 + verb + “not” + adj1 + “, then” + sub2 + verb + adj2 |
+ +Table 18: We used the logic pattern for contraposition law for constructing the test set for the experiment in Table 20. + +
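All four contraposition cases in Table 18 reduce to one rule: swap the antecedent and consequent and toggle the negation on each. A minimal sketch under the "subject verb [not] adj" clause shape assumed by the table; the helper names are illustrative:

```python
# Minimal sketch of the Table 18 contraposition pattern (hypothetical names).
def negate(clause):
    """Toggle 'not' after the verb of a 'subject verb [not] adj' clause."""
    subject, verb, pred = clause.split(" ", 2)
    if pred.startswith("not "):
        return f"{subject} {verb} {pred[4:]}"  # drop the existing negation
    return f"{subject} {verb} not {pred}"      # add the negation

def contraposition(antecedent, consequent):
    """'If A, then B' is logically equivalent to 'If not B, then not A'."""
    return f"If {negate(consequent)}, then {negate(antecedent)}"

print(contraposition("Alan is kind", "Bob is clever"))
# -> If Bob is not clever, then Alan is not kind
```

Running it on "Original sentence 3" style inputs (a negated antecedent) reproduces the "Positive sentence 3" row, since the toggle removes the existing "not".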
| | Original logic pattern for implication law |
|---|---|
| Original sentence | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + adj2 |
| Positive sample | sub1 + verb + “not” + adj1 + “or” + sub2 + verb + adj2 |
| Negative sample | sub1 + verb + “not” + adj1 + “or” + sub2 + verb + “not” + adj2 |
| | Changed logic pattern |
| Original sentence | sub1 + verb + “not” + adj1 + “or” + sub2 + verb + adj2 |
| Positive sample | “If” + sub1 + verb + adj1 + “, then” + sub2 + verb + adj2 |
| Negative sample | sub1 + verb + “not” + adj1 + “or” + sub2 + verb + “not” + adj2 |
+ +Table 19: We used the logic pattern for implication law for constructing the test set for the experiment in Table 20. + +
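The implication pattern in Table 19 rewrites "If A, then B" as the equivalent disjunction "not A or B"; the negative sample negates the consequent as well, which breaks the equivalence. A minimal sketch with illustrative helper names:

```python
# Minimal sketch of the Table 19 implication-law pattern (hypothetical names).
def negate(clause):
    """Toggle 'not' after the verb of a 'subject verb [not] adj' clause."""
    subject, verb, pred = clause.split(" ", 2)
    if pred.startswith("not "):
        return f"{subject} {verb} {pred[4:]}"
    return f"{subject} {verb} not {pred}"

def implication_samples(antecedent, consequent):
    """Return (original, positive, negative) following the Table 19 pattern."""
    original = f"If {antecedent}, then {consequent}"
    # A -> B is equivalent to (not A) or B ...
    positive = f"{negate(antecedent)} or {consequent}"
    # ... but not to (not A) or (not B).
    negative = f"{negate(antecedent)} or {negate(consequent)}"
    return original, positive, negative

print(implication_samples("Alan is kind", "Bob is clever")[1])
# -> Alan is not kind or Bob is clever
```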
| Test sets ↓ / Models → | RoBERTa | Fine-tuned RoBERTa |
|---|---|---|
| Test set 1 | 53.35 | 85.13 |
| Test set 2 (change name) | 53.47 | 85.10 |
| Test set 3 (change logic) | 46.72 | 94.82 |
+ +Table 20: Compared fine-tuned RoBERTa-Large and RoBERTa-Large on three different synthetic test sets. + +
| | Stage-1 Fine-tuning | Stage-2 Fine-tuning |
|---|---|---|
| Seed | 2021 | 0/21/42 |
| Batch Size | 32 | 16/32 |
| Initial Learning Rate | 2e-5 | 2e-5/3e-6 |
| Learning Rate Scheduler Type | Linear | Linear |
| Epoch | 10 | 10 |
| Num Warmup Steps | 0 | 0 |
| Weight Decay | 0 | 0 |
| Max Sequence Length | 256 | 256 |
| Gradient Accumulation Steps | 1 | 1 |
+

Table 21: Hyperparameter details for stage-1 fine-tuning and for stage-2 fine-tuning on all downstream tasks except ReClor and LogiQA. Stage-1 fine-tuning refers to logical-equivalence-identification contrastive learning, and stage-2 fine-tuning refers to fine-tuning on the downstream tasks. +
| Stage-2 Fine-tuning | ReClor / LogiQA |
|---|---|
| Seed | 42 |
| Batch Size | 2/4 |
| Gradient Accumulation Steps | 2 |
| Initial Learning Rate | 1e-05/1e-5/3e-6 |
| Epoch | 10 |
| Adam Betas | (0.9, 0.98) |
| Adam Epsilon | 1e-6 |
| No Clip Grad Norm | True |
| Warmup Proportion | 0.1 |
| Weight Decay | 0.01 |
+ +Table 22: Model hyperparameter tuning details for stage-2 fine-tuning on ReClor and LogiQA. + +![](images/97f459518776d039a754928a2be02cac393162395eabda77b4e63c98ef15db86.jpg) +Figure 6: An example of our AMR-based logic-driven data augmentation method using contraposition law and implication law + +![](images/45737ee6ca63152a2e0d7366af13dbdfaba96758bd25cda4e9215fa29e0bea8d.jpg) +Figure 7: An example of our AMR-based logic-driven data augmentation method using commutative law + +![](images/1ed70bcf37e299e569746b785c432edf8145d0d396a42c23a9294cb749ad7856.jpg) +Figure 8: An example for our AMR-based logic-driven data augmentation method using double negation law + +
Context (Facts+Rules):
Facts: Alan is tall. Alan is big. Alan is huge. Fiona is thin. Fiona is small. Charlie is quiet. Charlie is smart. Charlie is wealthy. Anne is dull. Anne is sad. Anne is poor.
Rules for Depth=1: If someone is tall then they are quiet. If someone is thin then they are little. If someone is dull and sad then they are bad. If someone is quiet and smart then they are kind.
Rules for Depth=1 (with altered rules): If someone is not quiet then they are not tall. If someone is not little then they are not thin. If someone is sad and dull then they are bad. If someone is smart and quiet then they are kind.
Question 1: Alan is quiet? Label: True.
Question 2: Alan is not smart? Label: False.
Question 3: Fiona is little? Label: True.
Question 4: Fiona is not little? Label: False.
Question 5: Charlie is kind? Label: True.
Question 6: Charlie is not kind? Label: False.
Question 7: Anne is bad? Label: True.
Question 8: Anne is not bad? Label: False.
+

Figure 9: An example for PARARULE-Plus Depth=1 and Depth=1 (with altered rules). The input includes context (facts + rules) and questions; the output is either true or false. In this example, we use logical equivalence laws (the contraposition and commutative laws) to extend the sentences in the rule sets to logically equivalent sentences. The highlighted words indicate the logical equivalence laws that we used. Green and lime green backgrounds mean the sentences are constructed by the contraposition law, and a cyan background means the sentences are constructed by the commutative law.
Context (Facts+Rules): +Facts: Erin is strong. Erin is tall. Erin is huge. Dave is thin. Dave is short. Fiona is kind. Fiona is wealthy. Fiona is quiet. Bob is sad. Bob is poor. Bob is bad. +Rules for Depth=2: Strong people are kind. If someone is thin and short then they are little. If someone is sad and poor then they are dull. If someone is kind and wealthy then they are nice. All little people are small. All kind people are wealthy. All nice people are smart. All dull people are rough. +Rules for Depth=2 (with altered rules): If someone is not kind then they are not strong. If someone is thin and short then they are little. If someone is sad and poor then they are dull. If someone is not nice then they are not both kind and wealthy. There are no little people who are not small. All kind people are wealthy. All nice people are smart. There are no dull people who are not rough. +Question 1: Erin is wealthy? Label: True. +Question 2: Erin is not wealthy? Label: False. +Question 3: Dave is small? Label: True. +Question 4: Dave is not small? Label: False. +Question 5: Fiona is smart? Label: True. +Question 6: Fiona is not smart? Label: False. +Question 7: Bob is rough? Label: True. +Question 8: Bob is not rough? Label: False.
+

Figure 10: An example for PARARULE-Plus Depth=2 and Depth=2 (with altered rules). The input includes context (facts + rules) and questions; the output is either “True” or “False”. In this example, we use the contraposition law and De Morgan's law to convert sentences in the rule set to logically equivalent sentences. We highlighted the keywords that were changed when the alternative rules were constructed. Green and lime green backgrounds indicate sentences constructed using the contraposition law, while pink and magenta indicate sentences constructed with De Morgan's law.

Algorithm 1: AMR-Based Logic-Driven Data Augmentation
```txt
Require: Synthetic sentence lists (list1, list2, list3, and list4) generated following the patterns from Tables 16, 17, 18, and 19 respectively. total_list = []
for sent in synthetic_sentence_lists do
    amr_graph = Text-To-AMR-Parser(sent)
    if sent in list1 then
        # double negation law
        if ":polarity -" in amr_graph then
            Remove ":polarity -" from the amr_graph
        else
            Add ":polarity -" into the amr_graph
        end if
        aug_text = AMR-To-Text-Generator(amr_graph)
        Use WordNet to replace an adjective in aug_text with its antonym.
    else if sent in list2 then
        # commutative law
        Switch the order of the two arguments.
        aug_text = AMR-To-Text-Generator(amr_graph)
    else if sent in list3 then
        # implication law
        Change the root node to "or".
        if ":polarity -" in the condition argument then
            Remove the ":polarity -".
        else
            Add ":polarity -" into the argument.
        end if
        aug_text = AMR-To-Text-Generator(amr_graph)
    else if sent in list4 then
        # contraposition law
        Switch the order of the two arguments.
        if ":polarity -" in the argument of the amr_graph then
            Remove the ":polarity -".
        else
            Add ":polarity -" into the argument.
        end if
        aug_text = AMR-To-Text-Generator(amr_graph)
    end if
    total_list = total_list.append((sent, aug_text, 1))
end for
return total_list
```

Algorithm 2: Negative samples construction
```txt
Require: Synthetic sentence lists (list1, list2, list3, and list4) generated following the patterns from Tables 16, 17, 18, and 19 respectively. total_list = [], total_list2 = []
for sent in synthetic_sentence_lists do
    amr_graph = Text-To-AMR-Parser(sent)
    if ":polarity -" in amr_graph then
        Remove ":polarity -" from the amr_graph
    else
        Add ":polarity -" into the amr_graph
    end if
    aug_text = AMR-To-Text-Generator(amr_graph)
    total_list = total_list.append((sent, aug_text, 0))
end for
for sent in total_list do
    Randomly select an index i from total_list
    total_list2 = total_list2.append((sent, total_list[i], 0))
end for
total_list = total_list + total_list2
return total_list
```

Algorithm 3: Logical-Equivalence-Identification Contrastive Learning
```txt
Require: positive_list and negative_list from Algorithms 1 and 2, pre-trained large language model (LLM), stage-2 downstream task datasets (ReClor, LogiQA, MNLI, RTE, QNLI, QQP), batch_size bs, learning_rate lr
# Stage-1 fine-tuning
for sents, pos_sents, neg_sents in zip(positive_list, negative_list, bs) do
    LLM, Loss = Contrastive_loss(LLM, sents, pos_sents, neg_sents, label, lr)
end for
# Stage-2 fine-tuning
for sent1, sent2 in zip(downstream_tasks, bs) do
    LLM, Loss = Cross_entropy_loss(LLM, sent1, sent2, label, lr)
end for
```
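The stage-1 objective in Algorithm 3 pulls a sentence toward its logically equivalent paraphrase and pushes it away from the inequivalent one. Below is a minimal numeric sketch of one such contrastive (triplet-margin) loss, using toy 3-d vectors in place of LLM sentence representations; the function names and the margin formulation are illustrative assumptions, not the paper's exact loss.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_loss(anchor, positive, negative, margin=0.5):
    """Zero once the positive is closer to the anchor than the negative by `margin`."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

anchor = [1.0, 0.0, 0.2]  # embedding of the original sentence
pos = [0.9, 0.1, 0.2]     # logically equivalent sentence: nearly aligned
neg = [0.0, 1.0, 0.1]     # logically inequivalent sentence: nearly orthogonal
print(contrastive_loss(anchor, pos, neg))  # 0.0: this triplet is already well separated
```

Swapping the positive and negative embeddings makes the loss positive, which is what drives the stage-1 gradient updates.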
| Model/Datasets | ReClor Dev | ReClor Test | ReClor Test-E | ReClor Test-H |
|---|---|---|---|---|
| RoBERTa | 59.73 [54.83, 64.00] | 53.20 [52.30, 54.00] | 72.57 [69.50, 75.00] | 37.97 [34.30, 41.00] |
| RoBERTa LReasoner-LDA | 59.46 [57.40, 61.00] | 53.66 [52.40, 54.00] | 72.19 [70.40, 74.00] | 39.10 [36.20, 42.00] |
| RoBERTa AMR-DA | 58.66 [53.90, 63.00] | 53.93 [51.70, 56.00] | 66.81 [64.20, 69.00] | 43.80 [41.70, 45.00] |
| RoBERTa AMR-LDA | 65.26 [60.50, 70.00] | 56.86 [55.20, 58.00] | 77.34 [73.90, 80.00] | 40.77 [39.80, 41.00] |
| DeBERTaV2 | 73.93 [66.20, 81.00] | 70.46 [60.80, 80.00] | 80.82 [76.50, 85.00] | 62.31 [47.70, 77.00] |
| DeBERTaV2 LReasoner-LDA | 75.73 [68.40, 83.00] | 70.70 [59.50, 81.00] | 84.08 [77.30, 90.00] | 60.17 [45.50, 74.00] |
| DeBERTaV2 AMR-DA | 79.06 [73.60, 84.00] | 75.90 [73.70, 78.00] | 84.62 [80.20, 89.00] | 69.04 [66.20, 71.00] |
| DeBERTaV2 AMR-LDA | 79.40 [77.60, 81.00] | 77.63 [73.80, 81.00] | 85.75 [83.20, 88.00] | 71.24 [66.40, 76.00] |

| Model/Datasets | LogiQA Dev | LogiQA Test |
|---|---|---|
| RoBERTa | 35.43 [30.60, 40.00] | 34.50 [30.60, 38.00] |
| RoBERTa LReasoner-LDA | 34.81 [31.60, 39.00] | 34.81 [30.90, 38.00] |
| RoBERTa AMR-DA | 36.45 [29.40, 44.00] | 37.22 [34.50, 41.00] |
| RoBERTa AMR-LDA | 40.29 [36.40, 47.00] | 38.14 [34.50, 41.00] |
| DeBERTaV2 | 39.72 [22.70, 53.00] | 39.62 [18.40, 54.00] |
| DeBERTaV2 LReasoner-LDA | 30.87 [30.30, 31.00] | 28.51 [21.80, 36.00] |
| DeBERTaV2 AMR-DA | 29.95 [25.40, 36.00] | 30.10 [27.30, 32.00] |
| DeBERTaV2 AMR-LDA | 42.34 [36.70, 48.00] | 39.88 [25.70, 49.00] |
+ +Table 23: The confidence intervals for the main experiments conducted on the ReClor and LogiQA datasets. We select random seed 0, 21 and 42 to conduct the main experiment on ReClor and LogiQA datasets. We utilise a $95\%$ confidence interval to calculate the confidence interval. \ No newline at end of file diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/images.zip b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0d3165a9c29058009f64c626ecc80a33133343d6 --- /dev/null +++ b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebf0a5378039c0c994e1e42a337ded21e3c16eaf71028d24420f527abd268403 +size 1563588 diff --git a/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/layout.json b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e8368e1743cb8afbaf79c05e8b3f3eb451a13957 --- /dev/null +++ b/abstractmeaningrepresentationbasedlogicdrivendataaugmentationforlogicalreasoning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fda7d97928278a19a1c8add5f95d6076af55f678a1c7a8496bb198f1fce281d +size 543683 diff --git a/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_content_list.json b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..abec34ebc2814cfb81902281cbd59e2120808405 --- /dev/null +++ b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_content_list.json @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:8d171294c893beeab9d04dbd2ec71ddb30a0fba1ee16c6b79a6a6d714a64bfe2 +size 137246 diff --git a/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_model.json b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d6d74b2bdfaba3706879a3628a630b5d355434b6 --- /dev/null +++ b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df31528e9e548138b54049a3cdd6e51c172ef05870472cd871b5e48e9374fca5 +size 158320 diff --git a/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_origin.pdf b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9ba7a4d1c837d52659cbb2764c5ee9a7f8559e61 --- /dev/null +++ b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/72b163f5-d923-451c-9cf4-74cc75bd7c7e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:885d5a58817abbb1420f39eff23ee40fa73cc48a146c6ae9cd6cd119050cb028 +size 3681350 diff --git a/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/full.md b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..06fa28f003c3a9f5f820704bedd44fb72067daf7 --- /dev/null +++ b/acceleratingmultilinguallanguagemodelforexcessivelytokenizedlanguages/full.md @@ -0,0 +1,1251 @@ +# Accelerating Multilingual Language Model for Excessively Tokenized Languages + +Jimin Hong Gibbeum Lee Jaewoong Cho KRAFTON + +{jimmy.h, pirensisco, 
jwcho}@krafton.com +

# Abstract +

Recent advancements in large language models (LLMs) have remarkably enhanced performance on a variety of tasks in multiple languages. However, tokenizers in LLMs trained primarily on English-centric corpora often overly fragment a text into character or Unicode-level tokens in non-Roman alphabetic languages, leading to inefficient text generation. We introduce a simple yet effective framework to accelerate text generation in such languages. Our approach involves employing a new language model head with a vocabulary set tailored to a specific target language for a pretrained LLM. This is followed by fine-tuning the new head while incorporating a verification step to ensure the model's performance is preserved. We show that this targeted fine-tuning, while freezing other model parameters, effectively reduces token fragmentation for the target language. Our extensive experiments demonstrate that the proposed framework increases the generation speed by a factor of 1.7 while maintaining the performance of pre-trained multilingual models on target monolingual tasks. +

# 1 Introduction +

Modern large language models (LLMs) (OpenAI, 2023; Touvron et al., 2023a; Anthropic, 2023) have exhibited remarkable capabilities for a variety of tasks in multiple languages (Eloundou et al., 2023; Solaiman et al., 2023). Although these models are predominantly trained on English-centric data, they have shown a significant degree of multilingual proficiency (Bandarkar et al., 2023). +

However, when applied to non-alphabetic languages, these models often suffer from slower text generation due to English-centric tokenization (Rust et al., 2021; Ahia et al., 2023; Petrov et al., 2023).
![](images/743824b71e96cffdaebf213cee78a231475a98fde5ce2c1b55c7ab63080f1e62.jpg) +
Figure 1: Analysis of tokenization lengths and language distribution in the pretraining corpus (languages with percentage $>=0.04\%$). English script comprises $89.7\%$ of the corpus and has an average token length of 29.6 in FLoRes-200. The languages using the Chinese, Japanese, and Korean (CJK) scripts have longer tokenization lengths compared to those using Latin and Cyrillic scripts. Our primary focus is on languages that are excessively tokenized by English-centric tokenizers. +

Current tokenization techniques used in large language models (LLMs) are data-driven and optimize segmentation based on the frequency of characters or bytes within a specific corpus (Sennrich et al., 2016; Kudo, 2018). As a result, the tokenizers of multilingual models, which are heavily influenced by English-dominant training data, are predominantly composed of English subwords. This leads to excessive fragmentation, where non-English words are overly segmented into a large number of subword units (Rust et al., 2021; Ahia et al., 2023; Petrov et al., 2023). The autoregressive nature of LLMs further amplifies this inefficiency, as tokens must be generated sequentially. +

To address these challenges, previous studies (Wang et al., 2019; Rust et al., 2021; Cui et al., 2023) have proposed replacing or augmenting the existing vocabulary of pre-trained multilingual models with language-specific vocabularies to more effectively encode monolingual text corpora. Specifically, Rust et al. (2021) improved mBERT (Devlin et al., 2019) by replacing its tokenizer with a monolingual one and incorporating an additional 100,000 pre-training steps. On the other hand, Cui et al. (2023) enhanced Llama (Touvron et al., 2023a) by expanding the Chinese vocabulary and further pre-training it on a 120GB text corpus that includes Chinese texts. However, this approach requires an extensive pre-training phase with a substantial amount of data. +

![](images/5c9488ab0cf3abd7a1e4f059f80eaabfc27b18f3bda79cc379be0ac82c51c481.jpg) +
Figure 2: Overview of the proposed framework. Illustration of (Left) generation with a pre-trained multilingual model and (Right) generation with the MuMo framework. Given the Korean prefix “천왕성은” (Uranus is), the model generates the consecutive phrase “태양으로부터” (from the Sun), which consists of 3 morphemes (“태양”, “으로”, “부터”) in Korean. Generation with the pre-trained multilingual model is inefficient due to excessive fragmentation, requiring 12 steps to generate only 3 Korean morphemes. The MuMo framework, in contrast, empowers the multilingual language model to generate multiple tokens in a single iteration by extracting a word from the Korean vocabulary, requiring only 3 steps. +

Another approach to address the challenges is the use of small draft models (Leviathan et al., 2023; Chen et al., 2023a). These models generate draft output tokens, which are then verified by the original language model. However, a significant challenge arises when trying to identify or train a suitable small model that can handle multiple languages with reliable performance (Conneau et al., 2020; Bandarkar et al., 2023). +

In response to these challenges, our research introduces MuMo, accelerating Multilingual language models for targeted Monolingual text generation, particularly in non-alphabetic languages. MuMo incorporates a new vocabulary of a target language into the output layer, also known as the Language Model (LM) head, and predicts the next token from this expanded vocabulary. This approach requires training only the extended portion of the output layer and specific layers of the feed-forward network. Importantly, MuMo eliminates the need for extensive text corpora or a draft model, requiring only a modest corpus of the target language, approximately 44M tokens in size.
Empirical results across summarization and translation tasks in Korean and Japanese demonstrate that MuMo significantly accelerates text generation, achieving over a 1.7-fold increase in speed without significantly compromising output quality.
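The fragmentation illustrated in Table 1 below can be mimicked with a toy byte-fallback tokenizer: words missing from an English-centric vocabulary fall back to one token per UTF-8 byte, so a two-syllable Korean word costs several times more decoding steps than an in-vocabulary English word. The vocabulary and helper below are invented for illustration; this is not the Llama tokenizer.

```python
# Toy byte-fallback tokenizer (illustrative; not the actual Llama tokenizer).
VOCAB = {"Seoul", "is", "a", "city"}  # a tiny English-centric vocabulary

def tokenize(word):
    if word in VOCAB:
        return [word]  # in-vocabulary words cost a single token
    # Out-of-vocabulary words fall back to one token per UTF-8 byte.
    return [f"<0x{b:02x}>" for b in word.encode("utf-8")]

print(len(tokenize("Seoul")))  # 1 token
print(len(tokenize("서울")))    # 6 byte tokens: each Hangul syllable is 3 UTF-8 bytes
```

Because generation is autoregressive, each extra token is an extra forward pass, which is the slowdown MuMo targets.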
| Lang | Word | Multilingual Tokens |
|---|---|---|
| KO | 서울 | (“서”, “\0xec”, “\0xb8”, “\0x9a”) |
| JA | 廢壳 | (“廢”, “\0xe5”, “\0xa3”, “\0xb2”) |
+

Table 1: Examples of the tokenization results. These examples are preprocessed by the Llama tokenizer (Touvron et al., 2023b). Target monolingual words are excessively segmented into byte units when a suitable match is not found in the multilingual vocabulary. +

# 2 Related Work +

Tokenization Disparity Subword tokenization, a common approach in LMs, is typically data-driven. Most pre-trained tokenizers, which are often trained on predominantly English corpora, frequently result in excessive fragmentation of non-English scripts (Rust et al., 2021; Zhang et al., 2022). Ahia et al. (2023); Petrov et al. (2023) have found significant tokenization disparities across languages in popular LLMs (Xue et al., 2021, 2022; Scao et al., 2022; OpenAI, 2023). Our work endeavors to address the slowdown in inference that arises due to tokenization disparity in non-alphabetic languages. +

Modifying Pre-trained Vocabulary While previous works have explored the adaptation of pre-trained vocabularies or the addition of new tokens (Artetxe et al., 2020; Rust et al., 2021; Hong et al., 2021; Liu et al., 2023), these methods often necessitate extensive pre-training to integrate the new tokens effectively (Wang et al., 2019; Chau et al., 2020; Cui et al., 2023; Liu et al., 2023). In contrast, our MuMo framework sidesteps the need for fine-tuning the parameters of pre-trained models, preserving the original capabilities of the pre-trained language model. Efforts have been made to select items of the pre-trained embedding matrix (Abdaoui et al., 2020; Domhan et al., 2022; Ushio et al., 2023), but these have not yielded a significant speed-up when the size of the embedding layer is relatively small (Bogoychev et al., 2023). +

Accelerating LLM Inference The quest to accelerate inference in auto-regressive large language models (LLMs) has led to a variety of approaches.
There has been a proliferation of systems specifically engineered for LLM inference (Yu et al., 2022; Sheng et al., 2023; Xiao et al., 2023). Our proposed methodology can be harmoniously integrated with the aforementioned techniques. Speculative decoding (Leviathan et al., 2023; Chen et al., 2023a) has also been explored to increase inference speed. However, the approach often relies on the assumption that a small model can maintain high fidelity when generating a series of multiple tokens. Moreover, acquiring a small yet competitive model may be tricky, especially in a multilingual setup (Conneau et al., 2020; Bandarkar et al., 2023). Our work distinguishes itself by specifically solving the inference inefficiency that arises from excessive fragmentation in the non-alphabetic context. +

Parameter Efficient Cross-lingual Transfer Learning The curse of multilinguality, which refers to a trade-off between language coverage and model capacity (Conneau et al., 2020), is a significant issue even in massively multilingual models such as mBERT, XLM-R, and mT5 (Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021; Ansell et al., 2021). The problem has been mitigated through modular parameter-efficient adaptation of multilingual models via lightweight adapters (Houlsby et al., 2019): additional trainable parameters inserted into the transformer layers of the model (Pfeiffer et al., 2020; Üstün et al., 2020; Vidoni et al., 2020; Parović et al., 2022) for a target language. These techniques bear a resemblance to ours, in that they involve training partial parameters of a language model with a small amount of target-language corpus. However, our goal is fundamentally different: we aim to accelerate inference, whereas previous studies focus on improving the representational capability in target languages for multilingual models.
+

# 3 Proposed Framework +

We propose a framework named MuMo to accelerate the inference speed of a pre-trained multilingual LM for a non-alphabetic monolingual language via a given small monolingual dataset. In this section, we introduce 1) the model architecture, 2) the fine-tuning process on a small targeted language dataset, and 3) the inference process of the proposed framework. +

# 3.1 Model Architecture +

We illustrate the model architecture of MuMo in Fig. 3. +

Pre-trained Multilingual Model We consider a setting in which a pre-trained multilingual model $f_{\mathrm{multi}}$ is given. The model consists of 1) Transformer layers, which comprise attention and feed-forward networks, and 2) an output embedding layer called the language model (LM) head. We denote $\mathcal{V}_{\mathrm{multi}}$ as the multilingual vocabulary set of the model. The model is trained with the maximum likelihood estimation (MLE) objective, as $\mathcal{L}_{\mathrm{MLE}}(p_{\mathrm{multi}}, \mathbf{x}) = \sum_{t=1}^{|\mathbf{x}|} \log p_{\mathrm{multi}}(x_t | \mathbf{x}_{