is inside of ?y. This predicate applies when ?x is a block and ?y is a drawer or a slider.
- (stacked ?x - item ?y - item): ?x is stacked on top of ?y. This predicate applies when ?x and ?y are blocks.
- (unstacked ?x - item ?y - item): ?x is unstacked from ?y. This predicate applies when ?x and ?y are blocks.

**Actions:**
There are the following actions that the robot can perform. They are:
- lift_red_block_table: lift the red block from the table.
- lift_red_block_slider: lift the red block from the slider.
- lift_red_block_drawer: lift the red block from the drawer.
- lift_blue_block_table: lift the blue block from the table.
- lift_blue_block_slider: lift the blue block from the slider.
- lift_blue_block_drawer: lift the blue block from the drawer.
- lift_pink_block_table: lift the pink block from the table.
- lift_pink_block_slider: lift the pink block from the slider.
- lift_pink_block_drawer: lift the pink block from the drawer.
- stack_block: stack the blocks.
- place_in_slider: place the block in the slider.
- place_in_drawer: place the block in the drawer.
- place_on_table: place the block on the table.
- rotate_red_block_right: rotate the red block to the right.
- rotate_red_block_left: rotate the red block to the left.
- rotate_blue_block_right: rotate the blue block to the right.
- rotate_blue_block_left: rotate the blue block to the left.
- rotate_pink_block_right: rotate the pink block to the right.
- rotate_pink_block_left: rotate the pink block to the left.
- push_red_block_right: push the red block to the right.
- push_red_block_left: push the red block to the left.
- push_blue_block_right: push the blue block to the right.
- push_blue_block_left: push the blue block to the left.
- push_pink_block_right: push the pink block to the right.
- push_pink_block_left: push the pink block to the left.
- move_slider_left: move the slider to the left.
- move_slider_right: move the slider to the right.
- open_drawer: open the drawer.
- close_drawer: close the drawer.
- turn_on_lightbulb: turn on the lightbulb.
- turn_off_lightbulb: turn off the lightbulb.
- turn_on_led: turn on the led.
- turn_off_led: turn off the led.

**Example Task:**
Now, you will help to parse the goal predicate and generate a sequence of actions to accomplish this task.
Goal predicate: (is-turned-off led)
Symbolic state: is-turned-on(led), is-turned-on(lightbulb), not(is-turned-off(led)), not(is-turned-off(lightbulb))
Task output:
```python
['turn_off_led']
```

**Example Task:**
Goal predicate: (is-turned-on led)
Symbolic state: is-turned-on(led), is-turned-on(lightbulb), not(is-turned-off(led)), not(is-turned-off(lightbulb))
Task output:
```python
[]
```

**Example Task:**
Goal predicate: (is-in red_block drawer)
Symbolic state: not(is-in(red_block, drawer)), not(is-in(red_block, slider)), is-on(red_block, table), not(is-open(drawer)), is-close(drawer), is-slider-left(slider), not(is-slider-right(slider)), not(lifted(red_block))
Task output:
```python
['open_drawer', 'lift_red_block_table', 'place_in_drawer']
```

**Example Task:**
Goal predicate: (is-in red_block drawer)
Symbolic state: not(is-in(red_block, drawer)), not(is-in(red_block, slider)), not(is-on(red_block, table)), is-open(drawer), not(is-close(drawer)), is-slider-left(slider), not(is-slider-right(slider)), lifted(red_block)
Task output:
```python
['place_in_drawer']
```

**Example Task:**
Goal predicate: (and (is-turned-on lightbulb) (is-slider-right slider))
Symbolic state: is-slider-left(slider), not(is-slider-right(slider)), is-turned-off(lightbulb), not(is-turned-on(lightbulb))
Task output:
```python
['turn_on_lightbulb', 'move_slider_right']
```

**Additional Instructions:**
1. Make sure the generated plan is a list of actions. Place the list between ```python and ends with ```.
2.
https://arxiv.org/abs/2505.21981v1
arXiv:2505.21985v1 [cs.MA] 28 May 2025

Reward-Independent Messaging for Decentralized Multi-Agent Reinforcement Learning

Naoto Yoshida 1 [0000-0002-9813-0668] and Tadahiro Taniguchi 1,2 [0000-0002-5682-2076]
1 Kyoto University, Kyoto, Japan
2 Research Organization of Science and Technology, Ritsumeikan University
{yoshida.naoto.8x,taniguchi.tadahiro.7j}@kyoto-u.ac.jp

Abstract. In multi-agent reinforcement learning (MARL), effective communication improves agent performance, particularly under partial observability. We propose MARL-CPC, a framework that enables communication among fully decentralized, independent agents without parameter sharing. MARL-CPC incorporates a message learning model based on collective predictive coding (CPC) from emergent communication research. Unlike conventional methods that treat messages as part of the action space and assume cooperation, MARL-CPC links messages to state inference, supporting communication in non-cooperative, reward-independent settings. We introduce two algorithms—Bandit-CPC and IPPO-CPC—and evaluate them in non-cooperative MARL tasks. Benchmarks show that both outperform standard message-as-action approaches, establishing effective communication even when messages offer no direct benefit to the sender. These results highlight MARL-CPC's potential for enabling coordination in complex, decentralized environments.

Keywords: Multi-agent Reinforcement Learning · Emergent Communication · Predictive Coding · Collective Predictive Coding

1 Introduction

In multi-agent reinforcement learning (MARL), inter-agent communication for sharing private observations can yield group-level benefits [47,11,2]. Under partial observability, agents can improve decision-making by integrating localized perceptual inputs. Such communicative behaviors are common in nature; for example, vervet monkeys use alarm calls to warn conspecifics of threats, enabling collective risk avoidance beyond the perceptual range of any individual [36]. Similarly, in human language, the exchange of individual experiences facilitates knowledge integration, supporting collective adaptation and utility [29,43].

Natural agents that engage in communication function as autonomous units with decentralized learning mechanisms. Reinforcement signals (rewards) are assigned individually and may not be aligned, making the environment inherently non-cooperative. As a result, effective communication must emerge under decentralized and potentially conflicting incentive structures. Despite its importance for both practical MARL applications [31,23,45] and foundational research on emergent communication [19], the challenge of enabling functional communication among independently learning, decentralized agents remains underexplored.

The main contribution of this study is the proposal of MARL-CPC, a novel deep MARL framework that enables independent agents to establish and utilize communication through decentralized learning. Building on the concept of Collective Predictive Coding (CPC)—recently introduced in emergent communication research [40]—we develop two algorithms: Bandit-CPC and IPPO-CPC. These methods facilitate communication among independently optimizing agents, regardless of cooperative conditions. Empirical evaluations demonstrate that MARL-CPC significantly improves group-level performance by enabling effective information sharing, even in non-cooperative scenarios.
2 Related Work

2.1 MARL with Communication

Communication has been shown in numerous studies to enhance performance in MARL [11,47,45]. In many cases, such optimization relies on two key assumptions.

First, from an engineering-oriented perspective, centralized training with decentralized execution (CTDE) has been widely adopted, linking agents through a centralized optimization framework [38]. This includes the use of global value functions [39,25,46] and architectures like RIAL, DIAL [9], and CommNet [38], which allow gradient propagation across agents to improve policy optimization [47]. Parameter sharing is also commonly employed to facilitate learning from individual experiences [47]. However, these centralized approaches often diverge from the fully decentralized learning observed in natural agents. They typically assume unrealistic access to shared
information and coordination mechanisms, limiting their relevance to models of emergent communication. Additionally, reliance on agent homogeneity has been criticized for its inability to support role differentiation based on context [45].

Second, most prior work on communication in MARL assumes cooperative environments [20]. Insights from simulation studies on language evolution suggest that in systems of independently learning, decentralized agents with individual objectives, communication is often hindered by free-riding and deceptive signaling, impeding the stability of shared communication protocols [26]. Similar challenges arise in the "cheap-talk" framework from economics [8], where communication is costless and non-binding. Consequently, MARL studies using this framework typically adopt cooperative settings such as signaling games [22,37,44] or referential games [21].

Given these challenges, the emergence of communication among decentralized, independently learning agents in non-cooperative environments remains difficult. Under such conditions, it is reasonable to assume that introducing a communication-inducing module is necessary for facilitating communication. While several studies have explored decentralized acquisition of such modules [23,7,27,31], these efforts primarily focus on cooperative settings. In this study, we investigate the possibility and implications of communication emerging under non-cooperative conditions.

2.2 Collective Predictive Coding

Collective Predictive Coding (CPC) is an emergent communication model for independent, decentralized agents, originally proposed in the field of emergent communication research [41,16,40]. It extends predictive coding theory from computational neuroscience [32,10] to multi-agent systems. In developmental psychology, human communication that benefits others is known to exhibit altruistic properties in natural environments [42]. CPC assumes that human linguistic behavior and its acquisition are driven by innately altruistic mechanisms, drawing inspiration from the cognitive and motor development processes involved in human language learning [40].

Rather than modeling communication emergence through RL, CPC formulates it as inference within a single, large generative model representing a group of agents. This model is decomposed across individuals, yielding an objective function that each agent can optimize via communication [16,40]. This formulation enables decentralized Bayesian inference in a distributed manner [12,41,28]. In this context, messages are interpreted as auxiliary variables supporting distributed optimization. Viewing the multi-agent system as a single generative model also allows communication learning to be understood as a process of knowledge integration based on individually acquired observations.

Previous applications of CPC to MARL have been explored by Ebara et al. [7] and Nakamura et al. [27], but their approaches rely on posterior sampling via Markov Chain Monte Carlo (MCMC), limiting compatibility with neural network-based function approximation. In contrast, the present study introduces a CPC formulation grounded in variational inference, enabling implementation with deep neural networks and thus offering greater scalability and practical applicability.

3 Multi-Agent Reinforcement Learning with Collective Predictive Coding

3.1 Preliminaries

The problem addressed in this study is formulated as a Partially Observable Markov Game (POMG) [13].
Formally, a POMG is defined by the tuple ⟨I, S, A, X, µ_0, P, R⟩, where I denotes the set of N agents; S, the set of environmental states; and A = A_1 × A_2 × ··· × A_N, the joint action space composed of each agent i's action set A_i. Similarly, X = X_1 × X_2 × ··· × X_N denotes the joint observation space.
The initial state distribution is denoted by µ_0 ∈ △(S), where △(·) represents the set of probability distributions over the given set. The transition-observation function is given by P(s′, x | s, a), where s′ ∈ S is the next state, x = (x_1, x_2, ..., x_N) ∈ X is the joint observation, and a = (a_1, a_2, ..., a_N) ∈ A is the joint action. The reward function for agent i is defined as R_i : S × A → R. In a POMG, each agent independently aims to maximize its expected return, E_π[Σ_{t=0}^∞ γ^t r_{i,t}], based on its own experiences. Here, r_{i,t} is the reward received by agent i at time step t, π_i is the policy of agent i, and E_π[·] denotes the expectation over trajectories induced by the joint policy π = (π_1, π_2, ..., π_N) under the environment dynamics.

To formalize effective communication in this study, we consider a setting within the POMG framework where the state space is factorized as S = S_1 × S_2 × ··· × S_N. We assume X_i ≜ S_i for all i ∈ I, meaning that each agent observes only its own component of the state. Thus, x_i represents private information for agent i, and access to other agents' observations x_j (for j ≠ i) effectively reveals the full environmental state s, which may enhance the agent's expected future return. We specifically focus on non-cooperative reward settings, where rewards differ across agents: R_i(s, a) ≠ R_j(s, a) for some (s, a) ∈ S × A [4], while assuming that agents are not in direct competition. We investigate the emergence and role of communication under these conditions.

3.2 Variational CPC by Joint Auto-encoder

[Fig. 1. Graphical model of the CPC module (2 agents).]

This study formulates CPC using variational inference within a deep generative modeling framework. In this context, a joint generative model is constructed by aggregating the observations x_i of individual agents (i = 1, 2, ..., N). This joint model is then decomposed to derive an objective function for the communication modules of individual agents. An overview of the generative model employed in this study is provided in Figure 1. The model defines the joint probability distribution over all agents' observations x_i and messages m_i ∈ {1, ..., K} as follows:

P_θ(x, m) ≜ P(m) ∏_{i=1}^{N} P_{θ_i}(x_i | m),    (1)

where x = (x_1, x_2, ..., x_N) and m = (m_1, m_2, ..., m_N) is the joint message. We consider performing variational inference by deriving an evidence lower bound (ELBO) of the above joint probability, as in variational autoencoders (VAE) [18]. Let Q(m) denote the variational distribution over the joint message m. Then, the likelihood of the observations can be lower-bounded as follows:

log P_θ(x) ≥ Σ_m Q(m) log [ P_θ(x | m) P(m) / Q(m) ].    (2)

Here, we introduce the variational distribution and the prior distribution over m as follows, and substitute them into the ELBO above:

Q(m) ≜ Q_ϕ(m | x) = ∏_i Q_{ϕ_i}(m_i | x_i)    (3)
P(m) ≜ ∏_{i=1}^{N} P(m_i)    (4)

As a result, the ELBO can be decomposed into agent-wise terms as follows:

Σ_m Q(m) log [ P_θ(x | m) P(m) / Q(m) ] = Σ_{i=1}^{N} [ Σ_m Q_ϕ(m | x) log ( P_{θ_i}(x_i | m) P(m_i) / Q_{ϕ_i}(m_i | x_i) ) ].    (5)

This suggests that the ELBO in CPC can be decomposed with respect to each individual agent i. Expressing each agent's term as

J_CPC(θ_i, ϕ_i) ≜ E_{Q_ϕ(m|x)}[ log P_{θ_i}(x_i | m) ] − D_KL( Q_{ϕ_i}(m_i | x_i) ∥ P(m_i) ),    (6)

where D_KL represents the Kullback–Leibler divergence, we obtain

log P_θ(x) ≥ Σ_{i=1}^{N} J_CPC(θ_i, ϕ_i).    (7)
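The agent-wise decomposition in Eqs. (5)-(7) is an algebraic identity once Q(m) and P(m) factorize as in Eqs. (3)-(4). The short NumPy sketch below (our own illustration, not code from the paper; the toy distributions are arbitrary assumptions) verifies the identity numerically for two agents and a three-symbol message alphabet.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 2, 3                      # two agents, K possible messages each

# Toy per-agent factors for a fixed observation x (assumed, for illustration):
Q = rng.dirichlet(np.ones(K), size=N)          # Q_{phi_i}(m_i | x_i), rows sum to 1
P_prior = np.full((N, K), 1.0 / K)             # flat prior P(m_i)
# Likelihood P_{theta_i}(x_i | m) depends on the joint message m = (m_1, m_2).
P_lik = rng.uniform(0.1, 1.0, size=(N, K, K))  # P_lik[i, m1, m2]

# Left-hand side: the joint ELBO of Eq. (2) with the factorizations of Eqs. (3)-(4).
lhs = 0.0
for m1 in range(K):
    for m2 in range(K):
        q_joint = Q[0, m1] * Q[1, m2]
        log_lik = np.log(P_lik[0, m1, m2]) + np.log(P_lik[1, m1, m2])
        log_prior = np.log(P_prior[0, m1]) + np.log(P_prior[1, m2])
        lhs += q_joint * (log_lik + log_prior - np.log(Q[0, m1]) - np.log(Q[1, m2]))

# Right-hand side: sum of per-agent terms J_CPC(theta_i, phi_i), Eqs. (5)-(6).
rhs = 0.0
for i in range(N):
    for m1 in range(K):
        for m2 in range(K):
            q_joint = Q[0, m1] * Q[1, m2]
            mi = (m1, m2)[i]
            rhs += q_joint * (np.log(P_lik[i, m1, m2])
                              + np.log(P_prior[i, mi]) - np.log(Q[i, mi]))

assert np.isclose(lhs, rhs)      # the joint ELBO equals the sum of agent-wise terms
```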
Therefore, by maximizing J_CPC(ϕ_i, θ_i) for each agent, the variational distribution Q_ϕ(m | x) approximates the posterior P_θ(m | x), and the message variable m, which integrates the observations x from both agents, corresponds to a state estimation of the entire environment s ∈ S. Furthermore, Q_ϕ(m | x) in J_CPC(ϕ_i, θ_i) can be interpreted as sampling based on the utterances of all agents. In this study, we use the following one-sample approximation:

J_CPC(θ_i, ϕ_i) ≈ log P_{θ_i}(x_i | m) − D_KL( Q_{ϕ_i}(m_i | x_i) ∥ P(m_i) ),    (8)
m ∼ Q_ϕ(m | x).    (9)

The optimization of J_CPC(ϕ_i, θ_i) is performed independently by each individual agent. Practically, each agent's discrete message m_i is represented as a one-hot feature vector, and m = (m_1, ..., m_N) denotes the concatenation of messages from N agents, forming a vector of dimension N × K. For optimization, we use the following straight-through gradient estimator [3] for each agent's own message:

m̃_i(x_i) = m_i + log Q_{ϕ_i}(m_i | x_i) − sg[ log Q_{ϕ_i}(m_i | x_i) ]    (10)

Here, sg[·] denotes the stop-gradient operator. Accordingly, in the training of agent i, gradients are computed using the input m(x_i) = (m_1, ..., m̃_i(x_i), ..., m_N), where only the agent's own message is treated as a function of its input and the rest are detached. In addition, the KL divergence is approximated using a sampling-based technique inspired by methods from deep RL [35], as follows:

D_KL( Q_{ϕ_i}(m_i | x_i) ∥ P(m_i) ) ≈ (κ − 1) − log κ,    (11)
κ = Q_{ϕ_i}(m_i | x_i) / P(m_i).    (12)

[Fig. 2. Overview of the MARL-CPC architecture. The figure is a model with two agents. The components of each agent are represented by filled regions, white and gray, respectively. The central panel corresponds to the CPC module, which forms a pseudo-joint agent and enables message generation and exchange. Based on the messages m and the hidden states z acquired through the CPC module, the agent performs action selection and value estimation. The dashed arrows in the figure indicate paths through which gradients do not propagate during learning.]

The CPC-based communication module resembles the autoencoder-based method proposed by Lin et al. [23]. However, our approach differs in that the decoder reconstructs from the entire message vector m, and each agent's objective function includes an additional KL divergence term. When the KL term is weighted as in a β-VAE [15] with β = 0, and the decoder is constrained such that P_{θ_i}(x_i | m) ≜ P_{θ_i}(x_i | m_i), our formulation reduces to that of Lin et al., indicating that our method generalizes their approach. Furthermore, whereas the effectiveness of message learning based on autoencoders was not theoretically justified in prior work, our CPC-based derivation interprets the joint message as supporting state estimation, thus providing a principled account of inter-agent information sharing.
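As a concrete illustration of Eqs. (8)-(12), the following PyTorch-style sketch (our own illustration, not the authors' implementation; layer sizes and names are assumptions) shows how one agent's message head could sample a one-hot message, build the straight-through input of Eq. (10) for its own message, and evaluate the sampled KL approximation of Eqs. (11)-(12) against a flat prior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCMessageModule(nn.Module):
    """One agent's message encoder Q_{phi_i}(m_i | x_i) with K discrete messages."""
    def __init__(self, obs_dim: int, num_messages: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.GELU(), nn.Linear(hidden, num_messages)
        )
        self.num_messages = num_messages

    def forward(self, x_i: torch.Tensor):
        logits = self.encoder(x_i)
        log_q = F.log_softmax(logits, dim=-1)          # log Q_{phi_i}(. | x_i)
        m_i = F.one_hot(
            torch.distributions.Categorical(logits=logits).sample(),
            num_classes=self.num_messages,
        ).float()                                      # sampled one-hot message
        # Straight-through input for the agent's own message, Eq. (10):
        # m~_i = m_i + log Q - sg[log Q]  (gradients flow through log Q only).
        m_tilde_i = m_i + log_q - log_q.detach()
        # Sample-based KL to a flat prior P(m_i) = 1/K, Eqs. (11)-(12):
        # kappa = Q(m_i | x_i) / P(m_i), evaluated at the sampled message.
        q_at_sample = (log_q.exp() * m_i).sum(-1)
        kappa = q_at_sample * self.num_messages
        kl_approx = (kappa - 1.0) - torch.log(kappa)
        return m_i, m_tilde_i, kl_approx

# Tiny usage example: a batch of 4 observations of dimension 2, K = 5 messages.
module = CPCMessageModule(obs_dim=2, num_messages=5)
m, m_tilde, kl = module(torch.randn(4, 2))
print(m.shape, m_tilde.shape, kl.shape)   # torch.Size([4, 5]) torch.Size([4, 5]) torch.Size([4])
```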
3.3 MARL-CPC

This study proposes a MARL framework that facilitates communication learning in MARL via a CPC module, termed MARL-CPC. Based on this framework, we introduce two algorithms. The first, Bandit-CPC, is designed for multi-agent contextual bandit problems. The second, IPPO-CPC, extends the approach to more complex scenarios involving state transitions.

Algorithm 1 MARL-CPC pseudocode
1: Initialize parameters (η_i, θ_i, ϕ_i) for each agent.
2: for iteration = 0, 1, 2, ... do
3:   Collect sample sets {D_i}_{i=1,...,N} in the environment using {π_i, Q_{ϕ_i}}_{i=1,...,N}.
4:   for i = 0, 1, 2, ..., N do
5:     Make mini-batches {D_i^k}_{k=0,1,...} using data D_i
6:     for all mini-batches do
7:       Calculate the RL loss J_RL(η_i) using data D_i^k and (14) or (15).
8:       Calculate the CPC loss J_CPC(θ_i, ϕ_i) using data D_i^k and (6).
9:       Update parameters using some gradient ascent: (η_i, θ_i, ϕ_i) ← (η_i, θ_i, ϕ_i) + α ∇_{η_i,θ_i,ϕ_i} J(η_i, θ_i, ϕ_i).
10:    end for
11:  end for
12: end for

During execution, each agent samples a message m_i from the variational distribution Q_{ϕ_i}(m_i | x_i). The concatenated global message m, together with each agent's hidden representation z_i obtained by embedding its observation x_i, is then provided as input to the subsequent RL modules (Figure 2). The objective function of both algorithms is expressed as the sum of the RL term J_RL and the CPC term J_CPC. The overall objective function to maximize for agent i of MARL-CPC is expressed as follows:

J(η_i, θ_i, ϕ_i) = E_t[ J_RL(η_i) + J_CPC(θ_i, ϕ_i) ].    (13)

Here, η_i denotes the parameters of agent i's policy π_i. The expectation operator E_t represents an empirical average computed over a finite batch of samples. The pseudocode is provided in Algorithm 1. During RL optimization, gradients are not propagated through the CPC module. Instead, gradient computations for the CPC and RL components are performed independently.

Bandit-CPC. This algorithm is effective in environments where each agent is provided with individually defined observations and rewards in a contextual bandit setting, and where agents can potentially benefit from sharing information with one another. In this context, letting r_i denote the reward obtained by agent i in a single trial, the objective function to be maximized for improving the policy π_i is given as follows.

J_RL(η_i) = r_i log π_i(a_i | z_i, m)    (14)

Here, z_i is the internal representation of the encoder Q(m_i | x_i) (Figure 2).

IPPO-CPC. For environments beyond contextual bandits, we incorporate the CPC module into Proximal Policy Optimization (PPO) [34] for representation learning. PPO is an on-policy, actor-critic deep RL method that optimizes a policy π_i and value function V_{π_i}, both parameterized by deep neural networks and trained via gradient-based methods [17]. Independent PPO (IPPO) extends PPO to multi-agent settings, allowing each agent to optimize its policy independently using local updates [6]. We denote the combined parameters of agent i's policy and value networks as η_i. In our implementation, π_i and V_{π_i} are modeled using separate multilayer perceptrons (MLPs). Let π_old denote the policy at the time of sampling, and define r_t(η_i) = π_i(a_t | z_{i,t}, m_t) / π_old(a_t | z_{i,t}, m_t). The subscript z_{i,t} denotes the internal representation of agent i's encoder Q(m_i | x_i) at time step t. The PPO objective is expressed as the minimization of the following loss function:

J_RL(η_i) = J_π(η_i) − c_1 J_V(η_i) + c_2 H_π(η_i)    (15)
J_π(η_i) = min( r_t(η_i) A_{i,t}, clip(r_t(η_i), 1 − ϵ, 1 + ϵ) A_{i,t} )    (16)
J_V(η_i) = ( V_{π_i}(z_{i,t}, m_t) − V^targ_{i,t} )^2    (17)

Here, the advantage A_{i,t} is computed using the value estimator at sampling time, based on generalized advantage estimation [33]. H_π denotes the entropy of the policy. c_1 = 0.5, c_2 = 0.01, and ϵ = 0.2 are all positive constants. The function clip(·, 1 − ϵ, 1 + ϵ) restricts its input to the range [1 − ϵ, 1 + ϵ] [34]. The target value V^targ_t is the empirical return computed from a trajectory of length T, defined as V^targ_t = r_t + γ r_{t+1} + ··· + γ^{T−t} V_{π_i}(z_{i,T}, m_T).
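A minimal sketch of how the RL term of Eqs. (15)-(17) could be assembled for one agent is given below (our own illustration, not the authors' code; the constants match those stated above, while the tensor interface is an assumption).

```python
import torch

def ippo_cpc_rl_objective(log_prob_new, log_prob_old, advantage,
                          value_pred, value_target, entropy,
                          c1=0.5, c2=0.01, eps=0.2):
    """Clipped PPO objective of Eqs. (15)-(17) for one agent, averaged over a batch."""
    ratio = torch.exp(log_prob_new - log_prob_old)              # r_t(eta_i)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    j_pi = torch.min(unclipped, clipped).mean()                 # Eq. (16)
    j_v = ((value_pred - value_target) ** 2).mean()             # Eq. (17)
    h_pi = entropy.mean()                                       # policy entropy H_pi
    return j_pi - c1 * j_v + c2 * h_pi                          # Eq. (15)

# Dummy batch of 8 transitions, just to show the call signature.
b = 8
objective = ippo_cpc_rl_objective(torch.randn(b), torch.randn(b), torch.randn(b),
                                  torch.randn(b), torch.randn(b), torch.rand(b))
print(float(objective))
```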
4 Experiments

We constructed MARL environments where information sharing via communication influences performance and compared different agent architectures (Figure 3). Two environments were evaluated: a multi-agent contextual bandit and the "observer" environment. We tested four agent types: independent agents without communication (no-comm); agents using messages as actions (message), as in classical studies [5,9]; and agents with CPC-based communication (cpc). To estimate
the performance upper bound under full information sharing, we also evaluated a shared condition, where each agent's policy and value networks receive both observations (x_1, x_2) as joint input.

Implementation details. For all agent architectures, the policy and value networks consisted of multilayer perceptrons (MLPs) with two hidden layers of 64 units and Tanh activation. In the message condition, message inputs and outputs were added to the policy network, while the cpc condition incorporated a CPC module. Specifically, P_{θ_i}(x_i | m) and Q_{ϕ_i}(m_i | x_i) were implemented as MLPs with a single 64-unit hidden layer and GELU activation [14]. For the message prior P(m_i) we used a flat prior in both experiments. For agents with communication, messages were used as additional inputs to the value network. We used the Adam optimizer with a learning rate of 3×10^-4 for Bandit-CPC and 2.5×10^-4 for IPPO-CPC. The discount factor for IPPO-CPC was set to γ = 0.99.

[Fig. 3. Agent architectures compared in these experiments. A) Independent agents without communication [6]. B) Message agents, where communication is defined as an extension of action [5,9]. C) CPC-based agents in which messages function as auxiliary variables for the state inference process (ours). D) Agents whose observations are shared in advance (performance upper bound).]

4.1 Contextual Bandit with Information Sharing

This experiment serves as a proof of concept for communication via CPC in a non-cooperative setting where communication benefits each individual agent (Figure 5). The environment consists of two independently acting agents (Agent-A and Agent-B), each making a single decision per episode and receiving an individual reward based on the environmental state and their chosen action.

Each episode proceeds as follows. The environment has a true state s ∈ {LEFT, RIGHT}, sampled uniformly. Only one agent observes the true state, encoded as a binary vector: the informed agent receives x_i = [1, 0]^⊤ if s = LEFT, and x_i = [0, 1]^⊤ if s = RIGHT; the uninformed agent receives x_i = [0, 0]^⊤. Each agent selects an action a_i ∈ {LEFT, RIGHT}, and receives +1 if a_i = s, and −0.1 otherwise.

This setting is non-cooperative: the informed agent can maximize its own reward without relying on the other, and thus has no incentive to communicate. As a result, reward-based learning alone does not promote communication. However, for the uninformed agent, accessing the true state is critical, and shared communication can increase the total group reward. CPC has the potential to establish such communication autonomously.

[Fig. 4. Multi-agent conditional bandit environment. (The correct answer, Left or Right, is sampled with p = 0.5; a matching action yields reward +1 and a mismatch −0.1; the agent that observes the answer as a one-hot vector is chosen with p = 0.5, and the other agent observes [0, 0].)]

We evaluated Bandit-CPC under the cpc condition and compared it with three baselines: independent, message, and shared. Each condition was run for 3.0×10^4 episodes. Messages were discrete (5 values), and each agent could send one message per timestep. Performance was measured using group welfare W = r_0 + r_1 [2], which approaches 2.0 when cooperative communication is achieved.
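A minimal sketch of the bandit environment just described (our own illustration; the paper provides no code, and the interface is an assumption):

```python
import numpy as np

def play_episode(rng, policy_a, policy_b):
    """One episode of the two-agent bandit: a uniform hidden state, one informed agent,
    actions in {0: LEFT, 1: RIGHT}, reward +1 for a correct guess and -0.1 otherwise."""
    state = rng.integers(2)            # 0 = LEFT, 1 = RIGHT
    informed = rng.integers(2)         # which agent observes the state
    obs = [np.zeros(2), np.zeros(2)]
    obs[informed][state] = 1.0         # informed agent gets a one-hot state; other gets zeros
    actions = [policy_a(obs[0]), policy_b(obs[1])]
    rewards = [1.0 if a == state else -0.1 for a in actions]
    return rewards                     # group welfare W = rewards[0] + rewards[1]

rng = np.random.default_rng(0)
random_policy = lambda obs: rng.integers(2)
print(play_episode(rng, random_policy, random_policy))
```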
[Fig. 5. Results in Bandit environment (group welfare over episodes for the cpc, no-comm, message, and shared conditions).]

Results. Figure 5 presents the experimental results. The results are reported as the interquartile mean (IQM) in N runs, accompanied by bootstrapped 95% confidence intervals (CIs) with 2,000 iterations of resampling [1]. First, the shared condition (information sharing is assumed in advance) achieves a group welfare close to 2.0, confirming that this is the maximum achievable value under the given learning conditions. As shown in the results, agents with the CPC module also attain comparable levels of group welfare, indicating that information sharing is successfully established and utilized by each agent without the need for an explicitly cooperative setting.

In contrast, message agents fail to establish effective information sharing in this non-cooperative environment, resulting in group welfare that remains at the same suboptimal level as in the independent condition. These results demonstrate that MARL-CPC enables the emergence of beneficial communication even in non-cooperative environments.

4.2 Observer: Information Sharing without Rewards

[Fig. 6. Overview of observer environment: Agent A (immobile), Agent B (movable), and a reward location only observable from Agent A.]

This environment evaluates communication in a non-cooperative setting with asymmetric information access (Figure 6). It features two agents: Agent-A, who remains stationary and receives no reward, and Agent-B, who navigates a 4×4 grid and can earn +1 by selecting the DIG action on cells with buried rewards. All other actions yield a penalty of −0.01. Agent-B cannot observe the reward locations, while Agent-A can, but has no incentive to share this information, as it receives zero reward regardless of its actions.

Agent-A's observation is a 16-dimensional one-hot vector indicating the reward location, with a single dummy action. Agent-B observes its grid position (also as a 16-dimensional one-hot vector) and selects from six actions: A_B = {up, down, left, right, stand-still, dig}. Each agent can send one of 20 discrete messages per time step, enabling bidirectional communication. Episodes last up to 1,000 steps.

As this environment includes state transitions, we used IPPO-based methods for evaluation. Four conditions were compared: no-comm, message, cpc, and shared. In each setting, agents collected 1,024 time steps of experience per iteration using 8 parallel threads, with data split into four mini-batches for optimization. Training was run for 3×10^6 time steps. Evaluation was performed periodically by averaging test performance. Metrics included group welfare W = E[r_0 + r_1] and episode length E[T], where expectations are empirical averages over test runs.

Results. Figure 7 shows the experimental results. As in previous experiments, IQM and 95% bootstrapped confidence intervals were computed over 10 trials. The shared condition confirms that task performance improves with information sharing. Compared to the independent and message baselines, the cpc condition shows significant gains across both metrics. An ablation study further evaluated the informativeness of messages generated by CPC agents for Agent-B through statistical testing.

[Fig. 7. Results in observer environment. A) Episode length. B) Group welfare.]
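The IQM and bootstrapped confidence intervals used to report these results can be computed as in the sketch below (our own illustration of the standard procedure from [1], not the authors' code; the 2,000 resamples match the number stated above, and the welfare values are made-up placeholders).

```python
import numpy as np

def iqm(values):
    """Interquartile mean: average of the middle 50% of the values."""
    v = np.sort(np.asarray(values, dtype=float))
    lo, hi = np.percentile(v, [25, 75])
    middle = v[(v >= lo) & (v <= hi)]
    return middle.mean()

def bootstrap_ci(values, stat=iqm, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic over independent runs."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    stats = [stat(rng.choice(values, size=len(values), replace=True))
             for _ in range(n_resamples)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

welfare_per_run = np.array([1.9, 1.8, 1.95, 1.7, 1.85, 1.9, 1.6, 1.88, 1.92, 1.75])
print(iqm(welfare_per_run), bootstrap_ci(welfare_per_run))
```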
5 Additional analysis

5.1 Ablation Study

[Fig. 8. Overview of observer environment. (Bar chart: welfare under the Normal, Random, and No message conditions in the Contextual Bandit and Observer environments.)]

The effectiveness of communication in
MARL remains under debate [24,30]. To assess message utility in MARL-CPC, we conducted an ablation study to evaluate the impact of disrupting trained agents' messages. Two conditions were tested: random, where messages were replaced with random values, and no, where message vectors were set to zero when input as one-hot features. We evaluated both interventions across 100 trials in the contextual bandit and observer environments. Results, summarized in Figure 8, show a significant performance drop under both conditions, confirming that MARL-CPC establishes meaningful communication that contributes to task success.

5.2 Comparing in Cooperative Scenario

[Fig. 9. Results in cooperative environment (group welfare over episodes for the shared, no-comm, cpc, and message conditions).]

We examined communication learning under cooperative conditions by modifying the reward structure in the contextual bandit environment. Both agents now receive a reward of +1 only if they select the correct answer simultaneously; otherwise, they receive −0.1. Thus, an agent benefits from sharing information only when it knows the correct answer. This setting represents a cooperative MARL scenario.

Figure 9 presents IQM and 95% bootstrapped confidence intervals over 10 runs for four conditions. As expected, the cpc and shared conditions achieve the highest rewards, while no-comm fails to learn cooperative behavior. The message condition, previously shown to enable communication in cooperative settings, also performs well, but learns more slowly and achieves lower final performance than cpc. This likely reflects a fundamental difference: CPC integrates communication into representation learning as inference of global state, whereas the message-as-action paradigm treats communication as a learned action, requiring agents to jointly acquire both message generation and interpretation.

6 Conclusions

This study proposed MARL-CPC, a framework that applies CPC from emergent communication research to on-policy RL. MARL-CPC formulated the emergence of communication as a form of representation learning, based on a pseudo-joint generative modeling of multiple agents. This formulation was then decomposed into an objective function that each agent could optimize independently. The framework enabled agents to establish communication independently of the reward-driven mechanisms typical in conventional RL, and demonstrated improved performance in non-cooperative environments—settings that traditional MARL with communication has struggled to address effectively.

Acknowledgments. This research was supported by the Japan Society for the Promotion of Science Grant in Aid for Transformative Research Areas (A) (23H04835). We thank Masatoshi Nagano, Nguyen Le Hoang, Noburo Saji, and Moe Ohkuma for the discussion. Figures are partially created with BioRender.com.

Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.

References

1. Agarwal, R., Schwarzer, M., Castro, P.S., Courville, A.C., Bellemare, M.: Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems 34 (2021)
2. Albrecht, S.V., Christianos, F., Schäfer, L.: Multi-agent reinforcement learning: Foundations and modern approaches. MIT Press (2024)
3. Bengio, Y., Léonard, N., Courville, A.: Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432 (2013)
4. Busoniu, L., Babuska, R., De Schutter, B.: A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on
Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38(2), 156–172 (2008)
5. Cangelosi, A., Parisi, D.: The emergence of a 'language' in an evolving population of neural networks. Connection Science 10(2), 83–97 (1998)
6. De Witt, C.S., Gupta, T., Makoviichuk, D., Makoviychuk, V., Torr, P.H., Sun, M., Whiteson, S.: Is independent learning all you need in the starcraft multi-agent challenge? arXiv preprint arXiv:2011.09533 (2020)
7. Ebara, H., Nakamura, T., Taniguchi, A., Taniguchi, T.: Multi-agent reinforcement learning with emergent communication using discrete and indifferentiable message. In: 2023 15th International Congress on Advanced Applied Informatics Winter (IIAI-AAI-Winter). pp. 366–371. IEEE (2023)
8. Farrell, J., Rabin, M.: Cheap talk. Journal of Economic Perspectives 10(3), 103–118 (1996)
9. Foerster, J., Assael, I.A., De Freitas, N., Whiteson, S.: Learning to communicate with deep multi-agent reinforcement learning. Advances in Neural Information Processing Systems 29 (2016)
10. Friston, K., Kilner, J., Harrison, L.: A free energy principle for the brain. Journal of Physiology-Paris 100(1-3), 70–87 (2006)
11. Gronauer, S., Diepold, K.: Multi-agent deep reinforcement learning: a survey. Artificial Intelligence Review 55(2), 895–943 (2022)
12. Hagiwara, Y., Kobayashi, H., Taniguchi, A., Taniguchi, T.: Symbol emergence as an interpersonal multimodal categorization. Frontiers in Robotics and AI 6, 134 (2019)
13. Hansen, E.A., Bernstein, D.S., Zilberstein, S.: Dynamic programming for partially observable stochastic games. In: AAAI. vol. 4, pp. 709–715 (2004)
14. Hendrycks, D., Gimpel, K.: Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415 (2016)
15. Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., Lerchner, A.: beta-vae: Learning basic visual concepts with a constrained variational framework. In: International Conference on Learning Representations (2017)
16. Hoang, N.L., Taniguchi, T., Hagiwara, Y., Taniguchi, A.: Emergent communication of multimodal deep generative models based on metropolis-hastings naming game. Frontiers in Robotics and AI 10, 1290604 (2024)
17. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
18. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
19. Lazaridou, A., Baroni, M.: Emergent multi-agent communication in the deep learning era. arXiv preprint arXiv:2006.02419 (2020)
20. Lazaridou, A., Hermann, K.M., Tuyls, K., Clark, S.: Emergence of linguistic communication from referential games with symbolic and pixel input. In: International Conference on Learning Representations (2018)
21. Lazaridou, A., Peysakhovich, A., Baroni, M.: Multi-agent cooperation and the emergence of (natural) language. In: International Conference on Learning Representations (2017)
22. Lewis, D.: Convention: A philosophical study. Harvard University Press (1969)
23. Lin, T., Huh, J., Stauffer, C., Lim, S.N., Isola, P.: Learning to ground multi-agent communication with autoencoders. Advances in Neural Information Processing Systems 34, 15230–15242 (2021)
24. Lowe, R., Foerster, J., Boureau, Y.L., Pineau, J., Dauphin, Y.: On the pitfalls of measuring emergent communication. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. pp. 693–701 (2019)
25. Lowe, R., Wu, Y.I., Tamar, A., Harb, J., Pieter Abbeel, O., Mordatch, I.: Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems 30 (2017)
26. Mirolli, M., Parisi, D.: Producer Biases and Kin Selection in the Evolution of Communication, pp. 135–159. Springer Berlin Heidelberg, Berlin, Heidelberg (2010)
27. Nakamura, T., Taniguchi, A., Taniguchi, T.: Control as probabilistic inference as an emergent communication
mechanism in multi-agent reinforcement learning. arXiv preprint arXiv:2307.05004 (2023)
28. Nomura, K., Aoki, T., Taniguchi, T., Horii, T.: Decentralized collective world model for emergent communication and coordination. arXiv preprint arXiv:2504.03353 (2025)
29. Nowak, M.A., Krakauer, D.C.: The evolution of language. Proceedings of the National Academy of Sciences 96(14), 8028–8033 (1999)
30. Peters, J., de Puiseau, C.W., Tercan, H., Gopikrishnan, A., De Carvalho, G.A.L., Bitter, C., Meisen, T.: A survey on emergent language. arXiv preprint arXiv:2409.02645 (2024)
31. Pina, R., De Silva, V., Artaud, C., Liu, X.: Fully independent communication in multi-agent reinforcement learning. arXiv preprint arXiv:2401.15059 (2024)
32. Rao, R.P., Ballard, D.H.: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2(1), 79–87 (1999)
33. Schulman, J., Moritz, P., Levine, S., Jordan, M., Abbeel, P.: High-dimensional continuous control using generalized advantage estimation. In: International Conference on Learning Representations (ICLR) (2016)
34. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)
35. Schulman, J.: Approximating kl divergence. http://joschu.net/blog/kl-approx.html (3 2020)
36. Seyfarth, R.M., Cheney, D.L., Marler, P.: Monkey responses to three different alarm calls: evidence of predator classification and semantic communication. Science 210(4471), 801–803 (1980)
37. Skyrms, B.: Signals: Evolution, learning, and information (2010)
38. Sukhbaatar, S., Fergus, R., et al.: Learning multiagent communication with backpropagation. Advances in Neural Information Processing Systems 29 (2016)
39. Sunehag, P., Lever, G., Gruslys, A., Czarnecki, W.M., Zambaldi, V., Jaderberg, M., Lanctot, M., Sonnerat, N., Leibo, J.Z., Tuyls, K., et al.: Value-decomposition networks for cooperative multi-agent learning based on team reward. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. pp. 2085–2087 (2018)
40. Taniguchi, T.: Collective predictive coding hypothesis: Symbol emergence as decentralized bayesian inference. Frontiers in Robotics and AI 11, 1353870 (2024)
41. Taniguchi, T., Yoshida, Y., Matsui, Y., Le Hoang, N., Taniguchi, A., Hagiwara, Y.: Emergent communication through metropolis-hastings naming game with deep generative models. Advanced Robotics 37(19), 1266–1282 (2023)
42. Tomasello, M.: Why we cooperate. MIT Press (2009)
43. Tomasello, M.: Origins of human communication. MIT Press (2010)
44. Ueda, R., Taniguchi, T.: Lewis's signaling game as beta-vae for natural word lengths and segments. In: The Twelfth International Conference on Learning Representations (2024)
45. Wong, A., Bäck, T., Kononova, A.V., Plaat, A.: Deep multiagent reinforcement learning: challenges and directions. Artificial Intelligence Review 56(6), 5023–5056 (2023)
46. Yu, C., Velu, A., Vinitsky, E., Gao, J., Wang, Y., Bayen, A., Wu, Y.: The surprising effectiveness of ppo in cooperative multi-agent games. Advances in Neural Information Processing Systems 35, 24611–24624 (2022)
47. Zhu, C., Dastani, M., Wang, S.: A survey of multi-agent deep reinforcement learning with communication. Autonomous Agents and Multi-Agent Systems 38(1), 4 (2024)
arXiv:2505.21988v1 [cs.AI] 28 May 2025

Functional Matching of Logic Subgraphs: Beyond Structural Isomorphism

Ziyang Zheng, Kezhi Li, Zhengyuan Shi, Qiang Xu
The Chinese University of Hong Kong
{zyzheng23,kzli24,zyzshi21,qxu}@cse.cuhk.edu.hk

Abstract

Subgraph matching in logic circuits is foundational for numerous Electronic Design Automation (EDA) applications, including datapath optimization, arithmetic verification, and hardware trojan detection. However, existing techniques rely primarily on structural graph isomorphism and thus fail to identify function-related subgraphs when synthesis transformations substantially alter circuit topology. To overcome this critical limitation, we introduce the concept of functional subgraph matching, a novel approach that identifies whether a given logic function is implicitly present within a larger circuit, irrespective of structural variations induced by synthesis or technology mapping. Specifically, we propose a two-stage multi-modal framework: (1) learning robust functional embeddings across AIG and post-mapping netlists for functional subgraph detection, and (2) identifying fuzzy boundaries using a graph segmentation approach. Evaluations on standard benchmarks (ITC99, OpenABCD, ForgeEDA) demonstrate significant performance improvements over existing structural methods, with average 93.8% accuracy in functional subgraph detection and a dice score of 91.3% in fuzzy boundary identification.

1 Introduction

Subgraph matching—identifying smaller graphs within larger ones—is a fundamental task in graph analysis, with pivotal applications spanning social network mining, bioinformatics, and Electronic Design Automation (EDA). In the context of EDA, subgraph matching involves searching for specific circuit patterns embedded within larger circuits. This capability directly supports critical tasks such as circuit optimization, verification, and security analyses. For example, verifying complex arithmetic circuits like multipliers typically requires recognizing embedded small functional units (e.g., half-adders) within larger netlists, enabling algebraic simplifications and correctness proofs [1,2]. Similarly, during template-based synthesis, accurately locating predefined subgraphs allows their replacement with highly optimized standard cells, thereby significantly improving power, performance, and area (PPA) metrics [3]. Moreover, subgraph matching also plays an essential role in hardware security by enabling the identification of potentially malicious substructures or "hardware trojans"—anomalous subcircuits intentionally embedded to compromise system integrity [4,5].

Traditionally, subgraph matching in graphs is formulated as a structural isomorphism problem: determining whether a smaller query graph exactly matches part of a larger target graph in terms of node and edge connectivity. This problem is extensively studied in general graph theory, and classical approaches rely primarily on combinatorial search algorithms [6,7,8]. However, subgraph isomorphism is an NP-complete problem and thus often suffers from exponential computational complexity in worst-case scenarios. Recently, deep learning methods have emerged to mitigate this computational cost by embedding graphs into continuous latent spaces, significantly accelerating matching tasks [9,10,11]. Within the EDA domain, these techniques have been successfully adapted for transistor-level subcircuit identification [12].
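For reference, the classical structural formulation described above can be checked with an off-the-shelf VF2-style matcher. The sketch below (our own illustration, not part of the paper; the tiny netlist and the networkx API choice are assumptions) tests whether a small NOT-AND pattern occurs structurally inside a larger netlist graph.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Target netlist graph and a small query pattern, with a 'type' attribute per node.
target = nx.DiGraph()
target.add_nodes_from([("a", {"type": "input"}), ("b", {"type": "input"}),
                       ("n1", {"type": "not"}), ("g1", {"type": "and"}),
                       ("y", {"type": "output"})])
target.add_edges_from([("a", "n1"), ("n1", "g1"), ("b", "g1"), ("g1", "y")])

query = nx.DiGraph()
query.add_nodes_from([("p", {"type": "not"}), ("q", {"type": "and"})])
query.add_edge("p", "q")

matcher = isomorphism.DiGraphMatcher(
    target, query, node_match=isomorphism.categorical_node_match("type", None))
print(matcher.subgraph_is_isomorphic())   # True: the NOT-AND pattern occurs structurally
```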
However, structure-based matching methods encounter significant limitations in practical EDA tasks, as circuit topologies frequently undergo substantial transformations during logic synthesis and technology mapping. Equivalent logic functions can thus be realized through widely differing structural implementations, driven by design considerations such as timing performance, power consumption, or silicon area. Consequently, exact structural correspondence rarely persists throughout the design process, even when the underlying logic function remains unchanged. This inherent limitation severely restricts the utility of
traditional structural matching techniques, particularly in applications requiring cross-stage queries—for example, identifying subgraphs from an abstract netlist (like an And-Inverter Graph, or AIG) within a synthesized, technology-mapped netlist.

Motivated by this critical gap, we introduce an approach explicitly designed to recognize logic functionality irrespective of structural differences. Specifically, our framework determines whether the logic represented by a query subgraph exists implicitly within a candidate graph, independent of structural transformations. To formalize this, we propose two key concepts: (1) functional subgraph, representing the circuit logic containment relation independent of structure, and (2) fuzzy boundary, minimal graph regions encapsulating the query's logic despite unclear structural boundaries. Consequently, our methodology, termed functional subgraph matching, addresses two sub-tasks:

1. Functional Subgraph Detection: Determining whether the logic function of a query graph is implicitly contained within a candidate graph;
2. Fuzzy Boundary Identification: Precisely locating the smallest possible region (the fuzzy boundary) in the candidate graph that encapsulates the query's logic.

To achieve these objectives, we propose a novel two-stage multi-modal framework. In the first stage, we train our model with intra-modal and inter-modal alignment across different graph modalities, enabling robust and cross-stage detection of functional subgraphs. In the second stage, we fine-tune our model and formulate fuzzy boundary detection as a graph segmentation task, moving beyond prior approaches that treated boundary identification as an input-output classification problem [13,14]. By leveraging information from nodes located within the true boundaries, our segmentation approach significantly enhances the performance and continuity of fuzzy boundary prediction.

Our experiments demonstrate the effectiveness of the proposed framework. Evaluations conducted across several widely-used benchmarks, ITC99 [15], OpenABCD [16] and ForgeEDA [17], show that our approach significantly surpasses traditional structure-based methods. Specifically, our framework achieves an average accuracy of 93.8% for functional subgraph detection and attains a DICE score of 91.3% for fuzzy boundary detection tasks. In contrast, structure-based baseline methods typically exhibit near-random performance (accuracy close to 50%) and high variability in precision, recall, and F1-score, underscoring their limitations in capturing implicit functionality. To further validate our method's robustness and generalizability, we additionally propose three function-aware baseline variants by integrating different graph encoders into our framework.

In summary, the contributions of this work include:
• Introducing and formally defining the novel concept of functional subgraph matching, clearly distinguishing it from structural isomorphism and functional equivalence.
• Developing a two-stage multi-modal embedding framework, leveraging both intra-modal and inter-modal alignments to capture structure-agnostic and function-invariant graph representations. This allows effective functional subgraph detection across different modalities.
• Proposing an innovative approach for fuzzy boundary identification by formulating the task as a graph segmentation problem rather than a simple input-output classification problem, significantly enhancing boundary continuity and localization accuracy.
2 Preliminaries

2.1 Subgraph Isomorphism Matching

Subgraph isomorphism matching is a fundamental problem in graph theory with applications across bioinformatics [18], social network analysis [19], and knowledge graphs [20,21]. We first recall the standard definition of subgraph isomorphism in Definition 1.
[Figure 1: We present the query and candidate graphs. Above: 1-bit adder and multiplier. Below: AIG netlist. The query b+c is explicitly contained within the candidate (b+c)×a, making it straightforward to identify the exact subgraph in the candidate. In contrast, the query b+c is implicitly contained within the candidate a×b+a×c, which implies no subgraph of a×b+a×c has the same structure or function as the query graph.]

Definition 1 (Subgraph Isomorphism). A graph Q is an isomorphic subgraph of G if there exists a subgraph G′ of G such that Q is isomorphic to G′.

Then, based on the definition of subgraph isomorphism, the subgraph isomorphism matching task is defined as follows: given a query graph Q and a target graph G, determine if Q is isomorphic to a subgraph of G. Classical approaches to subgraph isomorphism matching rely primarily on combinatorial search algorithms [7,8,6]. Its NP-complete nature, however, makes exact matching computationally intensive. More recently, graph-neural-network-based (GNN-based) methods have been introduced to learn compact graph embeddings that accelerate the matching process [9,10,11]. In the EDA domain, Li et al. [12] adapt the NeuroMatch architecture [10] to solve subcircuit isomorphism on transistor-level netlists.

However, in the EDA flow, graphs often represent circuits or computations where structural modifications can preserve the underlying function. Standard subgraph isomorphism struggles with such cases. For instance, as illustrated in Figure 1, a model based on Definition 1 can identify that the structure representing b+c is contained within a×(b+c), but it cannot identify the functional presence of b+c within the structurally different but functionally related expression a×b+a×c.

2.2 Subgraph Equivalence

The limitation of structure-based subgraph matching motivates considering functional properties. Function-aware representation learning has emerged as a pivotal subfield in EDA. Many recent works emphasize functional equivalence, denoted G_1 ≡_func G_2. DeepGate [22,23,24] and DeepCell [25] employ disentanglement to produce separate embeddings for functionality and structure, pretraining across various EDA benchmarks and predicting functional similarity with a task head. PolarGate [26] enhances functional embeddings by integrating ambipolar device principles. FGNN [27,28] applies contrastive learning to align circuit embeddings according to functional similarity.

While graph isomorphism requires structural identity, functional equivalence relates graphs based on their input-output behavior. Building on this, we can define a notion of subgraph relationship based on function, as shown in Definition 2.

Definition 2 (Subgraph Equivalence). A graph Q is an equivalent subgraph of G if there exists a subgraph G′ of G such that Q ≡_func G′.

This definition allows for functional matching within existing subgraphs. Some works adopt similar ideas for tasks such as arithmetic block identification [13,29] and symbolic reasoning [14,30], which aim to find a subgraph with specific functionality rather than structure. Compared to subgraph isomorphism, subgraph equivalence offers more flexibility against local structure modifications. However, Definition 2 still falls short for cases involving global restructuring. As shown in Figure 1, in the example a×b+a×c, no single subgraph is functionally equivalent to b+c. The function b+c is implicitly present but not explicitly represented by a contiguous subgraph.
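The distinction drawn in Figure 1 can be checked mechanically with truth tables: no intermediate signal of a×b+a×c computes b+c, yet the function appears once the circuit is rewritten into the equivalent form a×(b+c). The sketch below (our own illustration in plain Python, not code from the paper) enumerates the cone functions of both forms.

```python
from itertools import product

def cone_functions(nodes):
    """Map each named intermediate signal to its truth table over inputs (a, b, c)."""
    return {name: tuple(fn(a, b, c) for a, b, c in product([0, 1], repeat=3))
            for name, fn in nodes.items()}

query = tuple(b | c for a, b, c in product([0, 1], repeat=3))   # b + c (OR)

# a*b + a*c: intermediate signals are a*b, a*c, and the final sum.
flat = cone_functions({
    "a*b": lambda a, b, c: a & b,
    "a*c": lambda a, b, c: a & c,
    "a*b+a*c": lambda a, b, c: (a & b) | (a & c),
})
# a*(b+c): the rewritten, functionally equivalent form exposes b+c as a cone.
factored = cone_functions({
    "b+c": lambda a, b, c: b | c,
    "a*(b+c)": lambda a, b, c: a & (b | c),
})

print(query in flat.values())      # False: no subcircuit of a*b+a*c computes b+c
print(query in factored.values())  # True: b+c appears after the equivalence-preserving rewrite
print(flat["a*b+a*c"] == factored["a*(b+c)"])  # True: the two forms are functionally equivalent
```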
2.3 Functional Subgraph

To address the limitations of both Definition 1 and Definition 2, we introduce the concept of a functional subgraph, which aims to identify
the implicit containment relation between graphs.

Definition 3 (Functional Subgraph). A graph Q is a functional subgraph of G, denoted Q ≼ G, if there exists a graph G′ such that G′ ≡_func G and Q is isomorphic to a subgraph of G′.

This definition captures the idea that the query's function is implicitly contained within the target's function, even if the target's structure has undergone functional transformations, and no exact subgraph isomorphic to the query graph can be found in the target graph. By this definition, we know that b+c is a functional subgraph of a×b+a×c since a×b+a×c ≡_func a×(b+c) and b+c is an isomorphic subgraph of a×(b+c). Furthermore, Definition 3 encompasses Definition 2, i.e., Definition 2 is a special case of Definition 3, as discussed in Proposition 1 (proof in Appendix A).

Proposition 1. If a graph Q is an equivalent subgraph of G, then Q is a functional subgraph of G.

Properties of Functional Subgraph. In this paper, we assume that a graph obtained by removing some nodes and edges is not functionally equivalent to the original graph, i.e. ∀g ≠ ∅, G\g ≢_func G. For example, we consider it illegal to directly connect two NOT gates. Therefore, such connections do not appear in our graph structures. In fact, EDA tools such as ABC [31] inherently enforce this constraint. According to Definition 3, functional subgraphs exhibit the following properties:

• Reflexivity: For any graph G, G is the functional subgraph of G, i.e. ∀G, G ≼ G.
• Functional Equivalence Preservation: If G_1 is a functional subgraph of G_2, and G′_1 is functionally equivalent to G_1, then G′_1 is a functional subgraph of G_2, i.e. if G_1 ≼ G_2 and G′_1 ≡_func G_1, then G′_1 ≼ G_2.
• Transitivity: If G_1 is a functional subgraph of G_2 and G_2 is a functional subgraph of G_3, then G_1 is a functional subgraph of G_3, i.e. if G_1 ≼ G_2 and G_2 ≼ G_3, then G_1 ≼ G_3.
• Anti-symmetry: If G_1 is a functional subgraph of G_2, then G_2 is a functional subgraph of G_1 if and only if they are functionally equivalent, i.e. G_1 ≼ G_2 and G_2 ≼ G_1 if and only if G_1 ≡_func G_2.

For detailed proofs of the above properties, please refer to Appendix A. It is worth noting that the subgraph equivalence defined in Definition 2 does not satisfy the Transitivity property. This highlights the improved completeness of the functional subgraph in Definition 3.

2.4 Task Definition

Based on Definition 3, we define our primary task:

Task #1: Functional Subgraph Detection. Given a query graph Q and a candidate graph G, determine if Q ≼ G.

While functional subgraph detection is a decision problem (yes/no), it is often desirable to identify which part of the target graph G corresponds to the query function Q. However, as shown in Figure 1, due to potential functional transformations, identifying an exact boundary in the original graph G that perfectly represents Q can be challenging or impossible. This leads to our second task, which aims to find the smallest region in G that encapsulates the function of Q.

Definition 4 (Fuzzy Boundary). Given a query graph Q and a candidate graph G = (V, E), a subgraph G∗ = (V∗, E∗) of G, where V∗ ⊆ V and E∗ = E ∩ (V∗ × V∗), is a fuzzy boundary for Q in G if:
1. Q ≼ G∗
2. For any proper subgraph H of G∗ (i.e., H ⊂ G∗ and H ≠ G∗), Q ̸≼ H

As illustrated in Figure 1, for G representing a×b+a×c and Q representing b+c, the fuzzy boundary G∗ would likely encompass the components corresponding to b, c, the two multiplications, and the addition, as this
minimal collection is required to functionally contain b+c via transformation.

Based on Definition 4, we further define another task as:

Task #2: Fuzzy Boundary Identification. Given a query graph Q and a candidate graph G such that Q ≼ G, determine for each node in G whether it belongs to the fuzzy boundary G∗ of Q.

3 Method

3.1 Stage #1: Functional Subgraph Detection

[Figure 2: The pipeline of Stage #1. Left: Our data processing pipeline. For a given G_aig, we first randomly extract a subgraph G_sub. Then, we obtain G_syn and G_pm through synthesis and mapping, respectively. Right: Our training pipeline via intra-modal and inter-modal alignments for functional subgraph detection. We first encode the query and target graphs using their respective encoders. Next, we perform intra-modal and inter-modal alignment on the target graph to obtain function-invariant and structure-agnostic embeddings. These embeddings are then sent to a task head to determine whether the query graph is contained within the target graph.]

Data Processing. As illustrated in Figure 2, given an AIG netlist G_aig, we first randomly drop nodes, while ensuring legality, to obtain the subgraph G_sub. Next, we use the ABC tool [31] to generate G_syn by randomly selecting a synthesis flow. Importantly, in this step we ensure that G_syn is not isomorphic to G_aig. Finally, we apply the ABC tool again to map G_syn to G_pm using the Skywater Open Source PDK [32]. This data processing pipeline ensures that G_aig is equivalent to both G_syn and G_pm. Since G_sub is an isomorphic subgraph of G_aig, it follows from Definition 3 that G_sub is a functional subgraph of both G_syn and G_pm. For negative pairs, following the approach in Li et al. [12], we randomly sample G_aig, G_syn, and G_pm from other pairs within the same batch. It is important to note that all circuits in this paper have multiple inputs and a single output. For more details, please refer to Section 4.1 and Appendix C.

Retrieval. In this paper, we adopt DeepGate2 [22] and DeepCell [25] as backbones for encoding AIG netlists and post-mapping netlists, respectively. Given a query graph G_sub, along with positive candidates G+_aig, G+_syn, G+_pm and negative candidates G−_aig, G−_syn, G−_pm, we first use the AIG encoder

[Figure 3: The pipeline of Stage #2. Left: Our data processing pipeline. For a given G_pm, we replace each node in G_pm with the AIG implementation according to the functionality in the library. Then, we randomly sample a subgraph G_sub from G_aig. Finally, we annotate the nodes in G_pm if one of the corresponding AIG nodes still exists in G_sub. Right: Our training pipeline for fuzzy boundary identification via graph segmentation.
Given the query graph Gsuband the target graph Gpm, we first useEnc aigto obtain the graph embedding of GsubandEnc pmto obtain the node embeddings of Gpm. These embeddings are then concatenated and passed to a task head to
Next, we concatenate the embeddings of the query graph and the candidate graphs and feed them into a classification head, a 3-layer MLP:
ŷ_aig = MLP([f_sub, f_aig]), ŷ_syn = MLP([f_sub, f_syn]), ŷ_pm = MLP([f_sub, f_pm])
Finally, we compute the binary cross-entropy (BCE) loss for each prediction:
L_cls = BCELoss(ŷ_aig, y_aig) + BCELoss(ŷ_syn, y_syn) + BCELoss(ŷ_pm, y_pm)

Function-Invariant Alignment EDA flows such as synthesis and mapping modify the circuit structure while preserving functional equivalence. As defined in Definition 3, the functional subgraph relation focuses on the functionality of the candidate circuits rather than their structure, as they can be transformed into an equivalent circuit with any structure. Therefore, learning function-invariant embeddings for equivalent circuits across different stages is crucial for functional subgraph detection. While Gaig and Gsyn share the same gate types, Gaig and Gpm differ significantly in modality, i.e., the gate types in Gpm are substantially dissimilar to those in Gaig. Therefore, we employ both intra-modal and inter-modal alignment techniques to acquire function-invariant and structure-agnostic embeddings with the InfoNCE loss [33]. We select Gaig as the anchor and compute the intra-modal and inter-modal losses as follows:
L_intra = InfoNCE(f+_aig, f+_syn, f−_syn)
L_inter = InfoNCE(f+_aig, f+_pm, f−_pm)
Finally, we summarize the losses for Stage #1 as:
L_stage1 = L_cls + L_intra + L_inter

3.2 Stage #2: Fuzzy Boundary Identification

Figure 3: The pipeline of Stage #2. Left: our data processing pipeline. For a given Gpm, we replace each node in Gpm with the AIG implementation according to its functionality in the library. Then, we randomly sample a subgraph Gsub from Gaig. Finally, we annotate the nodes in Gpm if one of the corresponding AIG nodes still exists in Gsub. Right: our training pipeline for fuzzy boundary identification via graph segmentation. Given the query graph Gsub and the target graph Gpm, we first use Enc_aig to obtain the graph embedding of Gsub and Enc_pm to obtain the node embeddings of Gpm. These embeddings are then concatenated and passed to a task head to determine whether a node in Gpm lies within the fuzzy boundary of Gsub.

Data Processing Given a post-mapping netlist Gpm, we replace the cells in Gpm with the corresponding AIGs from the library to acquire the netlist Gaig. This process yields a mapping function ϕ that associates the node indices of Gaig with those of Gpm. Next, we randomly drop nodes to obtain Gsub, which serves as a functional subgraph of both Gpm and Gaig. Using the subgraph Gsub, we annotate the nodes in Gpm by mapping the node indices of Gsub to those of Gpm through the function ϕ. Specifically, for each node in Gsub, if it maps to a node i in Gpm, we annotate node i as 1; otherwise, we annotate it as 0. This annotation process strictly follows the fuzzy boundary definition in Definition 4.

Cross-modal Retrieval Given a query graph Gsub and a target graph Gpm = (Vpm, Epm), we first compute the embedding of Gsub and the node embeddings of Gpm:
f_sub = Enc_aig(Gsub), f^1_pm, f^2_pm, ..., f^|Vpm|_pm = Enc_pm(Gpm)
Next, we use f_sub as the query embedding and concatenate it with the node embeddings from Gpm. These concatenated embeddings are then fed into a 3-layer MLP for node classification: ŷ_i = MLP([f_sub, f^i_pm]). While previous works [13, 29] treat this task as an input-output classification problem, we frame it as a graph segmentation task. This approach arises from the observation that nodes near the input-output nodes contribute to identifying fuzzy boundaries and thus should not simply be labeled as zero. During training, we optimize the model using the cross-entropy loss:
L_stage2 = −Σ_i [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ]   (1)
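To make this per-node formulation concrete, here is a minimal PyTorch-style sketch of the segmentation head and the loss in Eq. (1); module names, dimensions, and the random tensors are illustrative assumptions rather than the implementation used in the paper.

```python
import torch
import torch.nn as nn

class FuzzyBoundaryHead(nn.Module):
    """Illustrative 3-layer MLP head for per-node fuzzy-boundary segmentation."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, f_sub: torch.Tensor, f_pm_nodes: torch.Tensor) -> torch.Tensor:
        # f_sub: (dim,) graph embedding of the query G_sub
        # f_pm_nodes: (|V_pm|, dim) node embeddings of the target G_pm
        query = f_sub.unsqueeze(0).expand(f_pm_nodes.size(0), -1)
        logits = self.mlp(torch.cat([query, f_pm_nodes], dim=-1)).squeeze(-1)
        return logits  # one logit per node of G_pm

# Binary cross-entropy over all nodes, matching Eq. (1)
head = FuzzyBoundaryHead(dim=128)
f_sub = torch.randn(128)
f_pm_nodes = torch.randn(300, 128)            # e.g. 300 nodes in G_pm (toy data)
labels = torch.randint(0, 2, (300,)).float()  # 1 = node lies inside the fuzzy boundary
loss = nn.functional.binary_cross_entropy_with_logits(head(f_sub, f_pm_nodes), labels)
```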
4 Experiment

4.1 Experimental Setup

We evaluate our method on three AIG datasets: ITC99 [15], OpenABCD [16], and ForgeEDA [17]. Each metric in Tables 1 and 2 is reported as the mean ± standard deviation over three independent runs. For data processing, we begin by randomly sampling k-hop subgraphs (with k ranging from 8 to 12) to partition large circuits into smaller circuits. Next, we randomly sample subgraphs from these smaller circuits. For logic synthesis, we use the ABC tool [31] with a randomly selected flow from src_rw, src_rs, src_rws, resyn2rs, and compress2rs. We then apply the VF2 algorithm [6] to verify that the synthesis process has modified the circuit structure; if no modification is detected, we repeat this step until we obtain a circuit with a different structure. For technology mapping, we invoke ABC with the Skywater Open Source PDK [32]. For additional details on the environment, evaluation metrics, and dataset statistics, please refer to Appendix C.

4.2 Stage #1: Functional Subgraph Detection

Table 1: Results of Functional Subgraph Detection (%). For each dataset and method, the first four columns report Accuracy / Precision / Recall / F1-score on the Gsub→Gsyn task and the last four on the Gsub→Gpm task.

| Dataset | Method | Acc. (syn) | Prec. (syn) | Rec. (syn) | F1 (syn) | Acc. (pm) | Prec. (pm) | Rec. (pm) | F1 (pm) |
|---|---|---|---|---|---|---|---|---|---|
| ITC99 | NeuroMatch | 49.8±0.3 | 16.7±23.6 | 33.3±47.1 | 22.2±31.4 | 49.8±0.2 | 16.7±23.6 | 50.0±50.0 | 33.4±33.4 |
| ITC99 | HGCN | 44.5±7.7 | 35.0±21.2 | 67.3±46.3 | 45.3±30.2 | 49.5±0.8 | 35.7±20.2 | 66.8±47.0 | 44.7±31.2 |
| ITC99 | Gamora | 50.6±12.8 | 21.1±27.7 | 33.0±46.0 | 25.4±34.8 | 51.7±4.4 | 34.0±24.2 | 51.2±40.9 | 40.2±29.6 |
| ITC99 | ABGNN | 56.4±9.1 | 20.8±29.4 | 32.7±46.3 | 25.4±35.9 | 54.1±5.8 | 19.0±26.9 | 33.3±47.1 | 24.2±34.2 |
| ITC99 | Ours | 95.3±0.1 | 94.4±0.2 | 96.3±0.1 | 95.4±0.0 | 93.1±0.3 | 92.3±0.3 | 94.2±0.9 | 93.2±0.4 |
| OpenABCD | NeuroMatch | 44.2±9.8 | 17.3±23.9 | 33.4±47.1 | 22.7±31.8 | 44.9±8.4 | 17.0±23.9 | 33.4±47.1 | 22.5±31.7 |
| OpenABCD | HGCN | 52.5±3.6 | 18.0±25.5 | 32.5±46.0 | 23.2±32.8 | 50.0±0.0 | 20.4±21.4 | 33.0±46.7 | 22.2±31.3 |
| OpenABCD | Gamora | 50.8±1.1 | 33.7±23.9 | 66.6±47.1 | 44.8±31.7 | 49.8±0.3 | 33.2±23.5 | 62.1±44.3 | 43.3±30.6 |
| OpenABCD | ABGNN | 34.1±5.4 | 5.2±3.9 | 2.6±2.6 | 3.4±3.2 | 41.3±4.0 | 9.7±7.6 | 3.5±3.2 | 5.1±4.5 |
| OpenABCD | Ours | 92.3±0.2 | 93.7±0.2 | 90.6±0.4 | 92.1±0.2 | 90.8±0.4 | 92.4±0.4 | 88.9±0.9 | 90.6±0.5 |
| ForgeEDA | NeuroMatch | 50.0±0.0 | 16.7±23.6 | 33.3±47.1 | 22.2±31.4 | 50.0±0.0 | 16.7±23.6 | 33.3±47.1 | 22.2±31.4 |
| ForgeEDA | HGCN | 44.0±8.5 | 18.2±22.6 | 33.9±46.7 | 23.1±30.8 | 48.8±1.6 | 18.8±22.2 | 33.5±47.0 | 22.5±31.2 |
| ForgeEDA | Gamora | 40.6±6.3 | 2.4±1.6 | 0.7±0.8 | 1.0±1.1 | 48.2±1.5 | 51.0±8.2 | 31.0±31.6 | 28.5±22.9 |
| ForgeEDA | ABGNN | 52.3±3.3 | 34.6±24.5 | 66.6±47.1 | 45.5±32.2 | 52.0±2.9 | 34.4±24.4 | 66.6±47.1 | 45.4±32.1 |
| ForgeEDA | Ours | 96.0±0.1 | 96.8±0.4 | 95.2±0.5 | 96.0±0.1 | 95.3±0.0 | 95.9±0.5 | 94.7±0.5 | 95.3±0.0 |

We evaluate the performance of our proposed method on three datasets: ITC99, OpenABCD, and ForgeEDA. Our method is compared against several state-of-the-art models, including NeuroMatch [10] and HGCN [12], which are designed for isomorphic subgraph matching in the general domain and the EDA domain respectively, and Gamora [14] and ABGNN [13], which are designed for reasoning in the EDA domain, i.e., for equivalent subgraph matching. Since Gamora and ABGNN focus on boundary detection instead of subgraph matching, we integrate them into the NeuroMatch framework for Stage #1. Further integration of Gamora and ABGNN with our method is discussed in Appendix B. The evaluation metrics include accuracy, precision, recall, and F1-score, computed for two tasks: Gsub→Gsyn and Gsub→Gpm. As shown in Table 1, the results on the ITC99, OpenABCD, and ForgeEDA datasets demonstrate that our method significantly outperforms all baseline models. Specifically, for the Gsub→Gsyn task, our model achieves an average accuracy of 94.5%, precision of 95.0%, recall of 94.0%, and F1-score of 94.5%, surpassing all other methods by a large margin. Similarly, for the Gsub→Gpm task, our method also shows superior performance with an accuracy of 93.1%, precision of 93.5%, recall of 92.6%, and F1-score of 93.0%. In contrast, structure-based methods show an accuracy close to 50% and large standard errors in precision, recall, and F1-score. Such unreliable performance typically arises because these methods indiscriminately predict all pairs as either entirely positive or negative, highlighting their limitations in functional subgraph detection.
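For the structural-difference check used during data generation (Section 4.1), an off-the-shelf VF2 implementation can be used; the sketch below relies on networkx and assumes gate types are stored in a node attribute called "type", which is an illustrative convention rather than part of the released pipeline. The `run_random_abc_flow` helper in the usage comment is hypothetical.

```python
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def structure_changed(g_aig: nx.DiGraph, g_syn: nx.DiGraph) -> bool:
    """Return True if synthesis produced a structurally different circuit.

    Both graphs are DAGs whose nodes carry a "type" attribute (e.g. AND, NOT, PI, PO).
    networkx's is_isomorphic relies on a VF2-style matcher under the hood.
    """
    same_structure = nx.is_isomorphic(
        g_aig, g_syn, node_match=categorical_node_match("type", default=None)
    )
    return not same_structure

# Usage sketch: resynthesize until the structure actually changes.
# while not structure_changed(g_aig, run_random_abc_flow(g_aig)):
#     pass  # run_random_abc_flow is a hypothetical wrapper around the ABC tool
```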
4.3 Stage #2: Fuzzy Boundary Identification

Table 2: Results of Fuzzy Boundary Identification (%).

| Method | ITC99 IoU | ITC99 DICE | OpenABCD IoU | OpenABCD DICE | ForgeEDA IoU | ForgeEDA DICE |
|---|---|---|---|---|---|---|
| NeuroMatch | 44.2±0.0 | 61.3±0.0 | 41.2±0.0 | 58.3±0.0 | 42.0±0.0 | 59.1±0.0 |
| HGCN | 44.1±0.0 | 61.2±0.0 | 41.2±0.0 | 58.3±0.0 | 42.0±0.0 | 59.2±0.0 |
| Gamora | 39.1±2.8 | 56.2±2.9 | 44.2±1.2 | 61.3±1.1 | 39.5±0.6 | 56.6±0.6 |
| ABGNN | 26.7±6.2 | 41.7±7.5 | 37.5±0.8 | 54.5±0.8 | 31.9±2.6 | 48.2±3.0 |
| Ours | 83.0±1.4 | 90.7±0.9 | 85.2±0.9 | 92.0±0.5 | 83.8±0.8 | 91.2±0.4 |
In this stage, we treat Gsub as the query and aim to locate its fuzzy boundary within the post-mapping netlist Gpm. Since Gamora and ABGNN are designed for the detection of the input-output boundary, we first apply each to identify the input and output nodes in Gpm. We then perform a BFS between inputs and outputs to recover the corresponding fuzzy boundary, and evaluate the result using Intersection-over-Union (IoU) and the DICE score.

Table 2 reports results on ITC99, OpenABCD, and ForgeEDA, demonstrating that our model substantially outperforms all baselines. Specifically, we achieve an average IoU of 84.0% and a Dice score of 91.3%, significantly outperforming all other methods. Structure-based methods (e.g., NeuroMatch and HGCN) fail to capture functional boundaries and often generate trivial solutions (predicting all nodes as boundary nodes), yielding low variance but poor performance. Although Gamora and ABGNN can detect clear block boundaries for specific arithmetic modules, they struggle with the variable, function-driven fuzzy boundaries required here, resulting in significantly lower performance. Further integration of Gamora and ABGNN within our framework is detailed in Appendix B.

4.4 Ablation Study

Table 3: Ablation study on the ITC99 dataset (%). Stage #1 columns report Accuracy / F1-score on Gsub→Gsyn and Gsub→Gpm; Stage #2 columns report IoU / DICE on Gsub→Gpm.

| Setting | Acc. (syn) | F1 (syn) | Acc. (pm) | F1 (pm) | IoU | DICE |
|---|---|---|---|---|---|---|
| Stage #1 wo. alignment | 94.6 | 94.6 | 91.4 | 91.5 | - | - |
| Stage #2 wo. Stage #1 | - | - | - | - | 76.3 | 86.5 |
| Stage #2 wo. seg. | - | - | - | - | 29.6 | 45.7 |
| Ours | 95.3 | 95.4 | 93.1 | 93.2 | 83.0 | 90.7 |

We perform an ablation study on the ITC99 dataset and compare the performance of the ablation settings with our proposed method to evaluate the contribution of the various components in our method.

Stage #1 without alignment achieves accuracy and F1-scores of 94.6% and 94.6% on the Gsub→Gsyn task, which are lower than our method's 95.3% and 95.4%. Our model also improves accuracy and F1-score by 1.7% on the Gsub→Gpm task. These results demonstrate the importance of function-invariant alignment, particularly inter-modal alignment, i.e., aligning Gpm and Gaig.

Stage #2 without Stage #1 shows a performance drop, with IoU and DICE scores of 76.3% and 86.5%, compared to our method's improved values of 83.0% and 90.7%. This highlights the crucial role of the pretraining knowledge from Stage #1.

Stage #2 without segmentation also shows a significant drop in performance, with IoU and DICE values of 29.6% and 45.7%, compared to our method's improved 83.0% and 90.7%. These results suggest that directly predicting the input-output nodes of the fuzzy boundary is challenging, as the boundary varies with different functional transformations, and this formulation omits the information of the nodes inside the fuzzy boundary.

5 Limitations

While our proposed framework demonstrates strong performance and significant improvements over existing structural approaches, several limitations remain and should be addressed in future research:

Scalability to Large-scale Circuits: Currently, our method has primarily been evaluated on moderately-sized circuits due to computational resource constraints. Real-world EDA applications often involve extremely large netlists with millions of nodes. Scaling our detection and segmentation approaches to handle such large-scale graphs efficiently is non-trivial. Future research could investigate more computationally efficient embedding methods, hierarchical segmentation approaches, or incremental graph processing techniques to enhance scalability.

Multiple
and Overlapping Fuzzy Boundaries: Our fuzzy boundary identification method presently assumes a single, minimal enclosing region within the target graph. In practical scenarios, multiple occurrences or overlapping functional subgraphs might exist within a single large circuit, compli- cating boundary identification tasks. Extending our methodology to effectively handle multiple or overlapping fuzzy boundaries within the same circuit remains an open and challenging direction for further investigation. Single-output Circuit Assumption: The current approach assumes single-output logic circuits. In real-world scenarios, however, most circuits possess multiple outputs and complex internal functional dependencies. The direct applicability of our method to multi-output circuits, particularly when outputs share significant internal logic, remains unexplored. Generalizing the definitions and embedding strategies to model multi-output scenarios could further enhance practical relevance. Non-trivial Function Assumption: In this paper, we assume that a graph obtained by removing some nodes and edges is not functionally equivalent to the original graph, i.e. ∀g̸=∅,G\g̸≡funcG. While EDA tools inherently enforce this constraint, it may limit the generalizability of the functional subgraph in other domains. By systematically addressing these limitations, subsequent research can extend our approach to broader, more realistic settings, thereby increasing its practical utility in EDA domain and beyond. 6 Conclusion In this paper, we introduce the concept of functional subgraph matching , a method to identify implicit logic functions within larger circuits, despite structural variations. We propose a two-stage framework: first, we train models across different modalities with alignment to detect functional subgraphs; second, we fine-tune our model and treat fuzzy boundary identification as a graph segmentation task for precise localization of fuzzy boundary. Evaluations on benchmarks (ITC99, OpenABCD, ForgeEDA) show that our approach outperforms structure-based methods, achieving 93.8%accuracy in functional subgraph detection and a 91.3%DICE score for fuzzy boundary detection. Broader Impact Our method contributes to the advancement of deep learning, particularly in graph-based functional relationship analysis. By improving the detection of functional relationships in complex systems, it has the potential to impact a wide range of applications, from circuit design to other domains that rely on graph functionality, e.g. molecular and protein graphs. 9 References [1]Alireza Mahzoon, Daniel Große, and Rolf Drechsler. Polycleaner: clean your polynomials before backward rewriting to verify million-gate multipliers. In 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) , pages 1–8. IEEE, 2018. [2]Alireza Mahzoon, Daniel Große, and Rolf Drechsler. Revsca: Using reverse engineering to bring light into backward rewriting for big and dirty multipliers. In Proceedings of the 56th Annual Design Automation Conference 2019 , pages 1–6, 2019. [3]Xing Wei, Yi Diao, Tak-Kei Lam, and Yu-Liang Wu. A universal macro block mapping scheme for arithmetic circuits. In 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE) , pages 1629–1634. IEEE, 2015. [4]Travis Meade, Shaojie Zhang, Yier Jin, Zheng Zhao, and David Pan. Gate-level netlist reverse engineering tool set for functionality recovery and malicious logic detection. 
In International Symposium for Testing and Failure Analysis , volume 81368, pages 342–346. ASM International, 2016. [5]Haocheng Li, Satwik Patnaik, Abhrajit Sengupta, Haoyu Yang, Johann Knechtel, Bei Yu, Evangeline FY Young, and Ozgur Sinanoglu. Attacking split manufacturing from a deep learning
perspective. In Proceedings of the 56th Annual Design Automation Conference 2019 , pages 1–6, 2019. [6]Luigi P Cordella, Pasquale Foggia, Carlo Sansone, and Mario Vento. A (sub) graph isomorphism algorithm for matching large graphs. IEEE transactions on pattern analysis and machine intelligence , 26(10):1367–1372, 2004. [7]Julian R Ullmann. An algorithm for subgraph isomorphism. Journal of the ACM (JACM) , 23 (1):31–42, 1976. [8]Luigi Pietro Cordella, Pasquale Foggia, Carlo Sansone, Mario Vento, et al. An improved algo- rithm for matching large graphs. In 3rd IAPR-TC15 workshop on graph-based representations in pattern recognition , pages 149–159. Citeseer, 2001. [9]Yunsheng Bai, Hao Ding, Song Bian, Ting Chen, Yizhou Sun, and Wei Wang. Simgnn: A neural network approach to fast graph similarity computation. In Proceedings of the twelfth ACM international conference on web search and data mining , pages 384–392, 2019. [10] Zhaoyu Lou, Jiaxuan You, Chengtao Wen, Arquimedes Canedo, Jure Leskovec, et al. Neural subgraph matching. arXiv preprint arXiv:2007.03092 , 2020. [11] Rex Ying, Tianyu Fu, Andrew Wang, Jiaxuan You, Yu Wang, and Jure Leskovec. Representation learning for frequent subgraph mining. arXiv preprint arXiv:2402.14367 , 2024. [12] Bohao Li, Shizhang Wang, Tinghuan Chen, Qi Sun, and Cheng Zhuo. Efficient subgraph matching framework for fast subcircuit identification. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD , pages 1–7, 2024. [13] Ziyi Wang, Zhuolun He, Chen Bai, Haoyu Yang, and Bei Yu. Efficient arithmetic block identification with graph learning and network-flow. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 42(8):2591–2603, 2022. [14] Nan Wu, Yingjie Li, Cong Hao, Steve Dai, Cunxi Yu, and Yuan Xie. Gamora: Graph learning based symbolic reasoning for large-scale boolean networks. In 2023 60th ACM/IEEE Design Automation Conference (DAC) , pages 1–6. IEEE, 2023. [15] Scott Davidson. Characteristics of the itc’99 benchmark circuits. In ITSW , 1999. [16] Animesh Basak Chowdhury, Benjamin Tan, Ramesh Karri, and Siddharth Garg. Openabc-d: A large-scale dataset for machine learning guided integrated circuit synthesis. arXiv preprint arXiv:2110.11292 , 2021. 10 [17] Zhengyuan Shi, Zeju Li, Chengyu Ma, Yunhao Zhou, Ziyang Zheng, Jiawei Liu, Hongyang Pan, Lingfeng Zhou, Kezhi Li, Jiaying Zhu, Lingwei Yan, Zhiqiang He, Chenhao Xue, Wentao Jiang, Fan Yang, Guangyu Sun, Xiaoyan Yang, Gang Chen, Chuan Shi, Zhufei Chu, Jun Yang, and Qiang Xu. Forgeeda: A comprehensive multimodal dataset for advancing eda, 2025. URL https://arxiv.org/abs/2505.02016 . [18] Vincenzo Bonnici, Rosalba Giugno, Alfredo Pulvirenti, Dennis Shasha, and Alfredo Ferro. A subgraph isomorphism algorithm and its application to biochemical data. BMC bioinformatics , 14:1–13, 2013. [19] Wenfei Fan. Graph pattern matching revised for social network analysis. In Proceedings of the 15th international conference on database theory , pages 8–21, 2012. [20] Jinha Kim, Hyungyu Shin, Wook-Shin Han, Sungpack Hong, and Hassan Chafi. Taming subgraph isomorphism for rdf query processing. arXiv preprint arXiv:1506.01973 , 2015. [21] Jorge Pérez, Marcelo Arenas, and Claudio Gutierrez. Semantics and complexity of sparql. ACM Transactions on Database Systems (TODS) , 34(3):1–45, 2009. [22] Zhengyuan Shi, Hongyang Pan, Sadaf Khan, Min Li, Yi Liu, Junhua Huang, Hui-Ling Zhen, Mingxuan Yuan, Zhufei Chu, and Qiang Xu. Deepgate2: Functionality-aware
circuit repre- sentation learning. In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD) , pages 1–9. IEEE, 2023. [23] Zhengyuan Shi, Ziyang Zheng, Sadaf Khan, Jianyuan Zhong, Min Li, and Qiang Xu. Deepgate3: Towards scalable circuit representation learning. arXiv preprint arXiv:2407.11095 , 2024. [24] Ziyang Zheng, Shan Huang, Jianyuan Zhong, Zhengyuan Shi, Guohao Dai, Ningyi Xu, and Qiang Xu. Deepgate4: Efficient and effective representation learning for circuit design at scale. arXiv preprint arXiv:2502.01681 , 2025. [25] Zhengyuan Shi, Chengyu Ma, Ziyang Zheng, Lingfeng Zhou, Hongyang Pan, Wentao Jiang, Fan Yang, Xiaoyan Yang, Zhufei Chu, and Qiang Xu. Deepcell: Multiview representation learning for post-mapping netlists. arXiv preprint arXiv:2502.06816 , 2025. [26] Jiawei Liu, Jianwang Zhai, Mingyu Zhao, Zhe Lin, Bei Yu, and Chuan Shi. Polargate: Breaking the functionality representation bottleneck of and-inverter graph neural network. In 2024 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) , 2024. [27] Ziyi Wang, Chen Bai, Zhuolun He, Guangliang Zhang, Qiang Xu, Tsung-Yi Ho, Bei Yu, and Yu Huang. Functionality matters in netlist representation learning. In Proceedings of the 59th ACM/IEEE Design Automation Conference , pages 61–66, 2022. [28] Ziyi Wang, Chen Bai, Zhuolun He, Guangliang Zhang, Qiang Xu, Tsung-Yi Ho, Yu Huang, and Bei Yu. Fgnn2: A powerful pre-training framework for learning the logic functionality of circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 2024. [29] Zhuolun He, Ziyi Wang, Chen Bai, Haoyu Yang, and Bei Yu. Graph learning-based arithmetic block identification. In 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD) , pages 1–8. IEEE, 2021. [30] Chenhui Deng, Zichao Yue, Cunxi Yu, Gokce Sarar, Ryan Carey, Rajeev Jain, and Zhiru Zhang. Less is more: Hop-wise graph attention for scalable and generalizable learning on circuits. In Proceedings of the 61st ACM/IEEE Design Automation Conference , pages 1–6, 2024. [31] Robert Brayton and Alan Mishchenko. Abc: An academic industrial-strength verification tool. InCAV 2010, Edinburgh, UK, July 15-19, 2010. Proceedings 22 , pages 24–40. Springer, 2010. [32] Google. Skywater open source pdk. URL https://github.com/google/skywater-pdk. git. 2020. [33] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748 , 2018. 11 A Proofs of the Proposed Properties In this section, we use G1∼=G2to denote that G1is isomorphic to G2. Also, we use G1≡funcG2 to denote that G1is functional equivalent to G2. Proposition 2. If a graph Qis an equivalent subgraph of G, thenQis a functional subgraph of G. Proof. According to the Definition 2, there exists a subgraph G′ofGsuch that Q ≡ funcG′. By replacing G′withQ, we get ¯G=G \ G′∪ Q which is equivalent to Gand a subgraph of Gis isomorphic to Q. By the Definition 3, Qis a functional subgraph of G. Proposition 3 (Reflexivity) .∀G,G≼G. Proof. Gis a subgraph of itself, and G ≡ funcG. By the definition of functional subgraph, it follows thatG≼G. Proposition 4 (Functional Equivalence Preservation) .IfG1≼G2andG′ 1≡funcG1, thenG′ 1≼G2. Proof. By definition, there exists a graph G′ 2≡funcG2, such that G1∼=¯G2, (2) where ¯G2is a subgraph of G′ 2. Since G1∼=¯G2, it follows that G1≡func¯G2. (3) By the transitivity
of functional equivalence, we then have G1≡func¯G2≡funcG′ 1. (4) Thus, by replacing ¯G2inG′ 2withG′ 1, we obtain a new graph G′′ 2= (G′ 2\¯G2)∪ G′ 1, (5) which satisfies G′′ 2≡funcG2. (6) From the definition of functional equivalence, we know that G′′ 2≡funcG2and that G′ 1is a subgraph ofG′′ 2. Therefore, it follows that G′ 1≼G2. (7) Proposition 5 (Transitivity) .IfG1≼G2andG2≼G3, thenG1≼G3. Proof. By definition, there exists a graph G′ 2≡funcG2, such that G1∼=¯G1,and¯G1is a subgraph of G′ 2. (8) SinceG′ 2≡funcG2, by Proposition 4, it follows that G′ 2≼G3. Therefore, there exists a graph G′ 3≡funcG3, andG′ 2is a subgraph of G′ 3. Since ¯G1is a subgraph of G′ 2andG′ 2is a subgraph of G′ 3, it follows that ¯G1is a subgraph of G′ 3. SinceG1∼=¯G1,G′ 3≡funcG3and¯G1is a subgraph of G′ 3, by the definition of functional subgraph, we conclude that G1≼G3. (9) Proposition 6 (Anti-symmetry) .G1≼G2andG2≼G1if and only if G1≡funcG2. 12 Proof. (⇒) Since G1≼G2, we have G1∼=G′ 2\g, andG2≡funcG′ 2. (10) SinceG2≼G1, we have G′ 2≼G′ 2\g. By the definition of functional subgraphs, there exists a graph G3such that G3≡funcG′ 2\gandG′ 2is a subgraph of G3. This implies that G′ 2∼=G3\g′, so we also have G′ 2≡funcG3\g′. (11) SinceG3≡funcG′ 2\g, it follows that G3∪g≡funcG′ 2. (12) Thus, we have G3∪g≡funcG′ 2≡funcG3\g′. (13) Note that in Section 2.3, we assume that a graph obtained by removing some nodes and edges is not functionally equivalent to the original graph, i.e., ∀g̸=∅,G\g̸≡funcG. Therefore, we must have g=g′=∅, which implies G1∼=G′ 2\g∼=G′ 2,andG2≡funcG′ 2. (14) Thus, we conclude that G1≡funcG2. (15) (⇐) IfG1≡funcG2, since G1≼G1andG2≼G2, according to Functional Equivalence Preservation property, it follows that G1≼G2andG2≼G1. B Additional Experimental Results B.1 Functional Subgraph Matching Considering that the encoder in our method can be replaced with other backbones, we test our approach with different encoders and propose baselines for the functional subgraph detection task, as shown in Table 4. Table 4: Result of baselines in stage #1. Dataset MethodGsub→ G syn Gsub→ G pm Accuracy Precision Recall F1-score Accuracy Precision Recall F1-score ITC99Ours+Gamora 90.8 91.1 90.4 90.7 86.4 88.6 83.5 86.0 Ours+ABGNN 87.9 83.1 95.1 88.7 88.2 82.8 96.5 89.1 Ours 95.3 94.4 96.3 95.4 93.1 92.3 94.2 93.2 OpenABCDOurs+Gamora 90.1 89.6 90.7 90.2 91.0 89.3 93.2 91.2 Ours+ABGNN 81.7 78.5 87.5 82.7 83.3 78.9 91.1 84.5 Ours 92.3 93.7 90.6 92.1 90.8 92.4 88.9 90.6 ForgeEDAOurs+Gamora 94.2 95.9 92.4 94.1 80.6 93.8 65.5 77.1 Ours+ABGNN 89.7 88.5 91.2 89.8 87.6 88.3 86.8 87.5 Ours 96.0 96.8 95.2 96.0 95.3 95.9 94.7 95.3 B.2 Fuzzy Boundary Identification We futher evalute these methods on fuzzy boundary identification. The results are shown in Table 5. Table 5: Result of baselines in stage #2. MethodITC99 OpenABCD ForgeEDA IoU DICE IoU DICE IoU DICE Ours+Gamora 82.1 90.2 81.4 89.8 83.6 91.1 Ours+ABGNN 82.7 90.5 84.4 91.5 88.4 93.8 Ours 83.0 90.7 85.2 92.0 83.8 91.2 13 C Datasets and Implementation Details Dataset Dataset statistics and splits are shown in Table 6. For dataset split, we first split the training circuits and test circuits in the source dataset, then we
cut subgraphs from the training circuits and test circuits to generate our small-circuit dataset. For ITC99 and OpenABCD, the split follows the previous work [24]. For ForgeEDA, we randomly select 10% of the circuits in the dataset as test circuits. For the small circuits, we apply Algorithm 1 to randomly sample subgraphs.

Table 6: Dataset statistics. We report the average and standard error as avg.±std.

| Source Dataset | Split | #Pair | Gsub #Node | Gsub Depth | Gaig #Node | Gaig Depth | Gsyn #Node | Gsyn Depth | Gpm #Node | Gpm Depth |
|---|---|---|---|---|---|---|---|---|---|---|
| ITC99 | train | 36592 | 248±132 | 15.0±2.0 | 320±166 | 19.1±3.0 | 315±164 | 19.0±3.0 | 179±91 | 6.9±1.0 |
| ITC99 | test | 5917 | 218±113 | 14.0±2.0 | 282±141 | 17.3±2.2 | 278±138 | 17.0±2.0 | 157±79 | 6.3±0.9 |
| OpenABCD | train | 54939 | 155±113 | 13.0±2.0 | 203±140 | 16.4±3.2 | 198±134 | 16.0±3.0 | 108±75 | 5.8±1.1 |
| OpenABCD | test | 9726 | 100±66 | 13.0±2.0 | 132±84 | 16.0±2.2 | 128±82 | 15.0±2.0 | 69±46 | 5.5±0.9 |
| ForgeEDA | train | 60183 | 126±102 | 13.4±3.5 | 161±129 | 16.6±4.2 | 156±125 | 16.2±4.5 | 88±69 | 5.8±1.4 |
| ForgeEDA | test | 7753 | 127±96 | 13.6±3.3 | 163±122 | 17.0±3.8 | 159±120 | 16.4±4.2 | 89±65 | 5.9±1.3 |

Environment All experiments are run on an NVIDIA A100 GPU with 64 GB of memory. Models are trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 1024. We train our model in Stage #1 for 100 epochs and fine-tune it in Stage #2 for 10 epochs. Training our model on one dataset takes approximately 10 hours. Model architectures follow the configurations specified in the original works, except that we set the hidden dimension to 128 for all models.

Evaluation Metrics For Stage #1, we measure classification performance by accuracy and report precision, recall, and F1-score according to the counts of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):
Precision = TP / (TP + FP), Recall = TP / (TP + FN),
Accuracy = (TP + TN) / (TP + TN + FP + FN), F1-score = 2 × Precision × Recall / (Precision + Recall).
For Stage #2, which is similar to a segmentation task, we use Intersection over Union (IoU) and the Dice coefficient. Let P be the set of predicted positive nodes and G the set of ground-truth positive nodes:
IoU = |P ∩ G| / |P ∪ G|, Dice = 2|P ∩ G| / (|P| + |G|)   (16)

Algorithm 1 Random Sample Subgraph
Input: nodes V, edges E, root r
Output: nodes V, edges E, root r
 Build adjacency G from E
 if rand(0, 1) < 0.5 then
  Set r to the predecessor p ∈ G[r] that maximizes predCount(p)
 end if
 ρ ← rand(0.6, 0.95)
 T ← ρ · |V|, V ← {r}, Q ← [r], E ← ∅
 while Q ≠ ∅ ∧ |V| < T do
  n ← pop(Q)
  for all v ∈ shuffle(G[n]) do
   if v ∉ V then
    push(Q, v); V ← V ∪ {v}; E ← E ∪ {(n, v)}
   end if
  end for
 end while
 return V, E, r
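A Python rendering of Algorithm 1 may be easier to follow than the pseudocode; this is a minimal sketch that assumes the circuit is given as node and directed-edge lists, with helper choices (e.g., counting a predecessor's fan-in for the root move) interpreted from the pseudocode rather than taken from released code.

```python
import random

def random_sample_subgraph(nodes, edges, root):
    """Randomly sample a connected subgraph rooted at `root` (Algorithm 1).

    nodes: iterable of node ids; edges: list of directed (u, v) pairs; root: start node.
    Returns the sampled node set, the sampled edge list, and the (possibly re-chosen) root.
    """
    adj = {n: [] for n in nodes}     # successors G[n]
    preds = {n: [] for n in nodes}   # predecessors, used for predCount
    for u, v in edges:
        adj[u].append(v)
        preds[v].append(u)

    # With probability 0.5, move the root to the predecessor with the largest fan-in.
    if random.random() < 0.5 and preds[root]:
        root = max(preds[root], key=lambda p: len(preds[p]))

    rho = random.uniform(0.6, 0.95)          # fraction of nodes to keep
    target = rho * len(adj)
    sampled_nodes, sampled_edges = {root}, []
    queue = [root]
    while queue and len(sampled_nodes) < target:
        n = queue.pop(0)
        for v in random.sample(adj[n], len(adj[n])):   # shuffled successors
            if v not in sampled_nodes:
                queue.append(v)
                sampled_nodes.add(v)
                sampled_edges.append((n, v))
    return sampled_nodes, sampled_edges, root
```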
arXiv:2505.21996v1 [cs.CV] 28 May 2025Learning World Models for Interactive Video Generation Taiye Chen1∗Xun Hu2∗Zihan Ding3∗Chi Jin3† 1School of EECS, Peking University 2Department of Engineering Science, University of Oxford 3Department of Electrical and Computer Engineering, Princeton University Abstract Foundational world models must be both interactive and preserve spatiotemporal coherence for effective future planning with action choices. However, present mod- els for long video generation have limited inherent world modeling capabilities due to two main challenges: compounding errors and insufficient memory mechanisms. We enhance image-to-video models with interactive capabilities through additional action conditioning and autoregressive framework, and reveal that compounding error is inherently irreducible in autoregressive video generation, while insufficient memory mechanism leads to incoherence of world models. We propose video retrieval augmented generation (VRAG) with explicit global state conditioning, which significantly reduces long-term compounding errors and increases spatiotem- poral consistency of world models. In contrast, naive autoregressive generation with extended context windows and retrieval-augmented generation prove less effective for video generation, primarily due to the limited in-context learning capa- bilities of current video models. Our work illuminates the fundamental challenges in video world models and establishes a comprehensive benchmark for improving video generation models with internal world modeling capabilities. 1 Introduction Figure 1: A world model possesses memory capa- bilities and enables faithful long-term future predic- tion by maintaining awareness of its environment and generating predictions based on the current state and actions. Example is in Minecraft game.Foundational world models capable of simulat- ing future outcomes based on different actions are crucial for effective planning and decision- making [ 1,2,3]. To achieve this, these mod- els must exhibit both interactivity, allowing for action conditioning, and spatiotemporal consis- tency over long horizons. While recent advance- ments in video generation, particularly diffu- sion models [ 4,5,6,7], have shown promise, extending them to generate long, interactive, and consistent videos remains a significant chal- lenge [8, 9, 10]. Autoregressive approaches [ 11,12,13,14], which generate videos frame by frame or chunk by chunk conditioned on previous outputs, are a natural fit for modeling long temporal dependen- cies and incorporating interactivity. However, ∗Equal contribution. †Corresponding Author: Chi Jin (chij@princeton.edu). these methods face significant challenges stemming from two fundamental, often coupled, limitations: compounding errors andinsufficient memory mechanisms . Compounding errors arise as small inaccuracies in early predictions accumulate over time, leading to significant divergence from plau- sible future states. Our analysis suggests this may be inherent to current autoregressive paradigms. Insufficient memory mechanisms hinder the models’ ability to maintain consistent object identities, spatial layouts, and world states over extended durations, resulting in inconsistent world models. These two issues often exacerbate one another, making long-term consistent generation difficult. Inspired by the success of large language models (LLMs) [ 15][16] in handling long sequences, we investigate analogous techniques for video generation. 
Extending the context window, while potentially alleviating compounding errors to some degree, introduces substantial computational and memory overhead. More critically, we find that unlike LLMs, current video generation models exhibit weaker in-context learning capabilities, making longer context less effective in resolving fundamental consistency issues. Similarly, retrieval-augmented generation (RAG) [ 17][18], a
powerful technique for incorporating external knowledge in LLMs, shows limited benefits in our experiments with video models. Neither static retrieval with heuristic sampling nor dynamic retrieval based on similarity search significantly improved world model consistency. These findings suggest that implicitly learning world consistency solely from autoregressive prediction on pixel or latent representations is insufficient. We argue that explicit global state conditioning is necessary. Incorporating explicit representations like world maps, object states, or coordinate systems as conditioning information could provide the necessary grounding for generating consistent long-term interactive simulations. Furthermore, evaluating the specific failure modes of long video generation demands appropriate metrics. Existing metrics often conflate the distinct issues of compounding errors and long-term consistency (memory faithfulness), providing a coupled assessment that obscures the underlying problems. To enable a clearer analysis, we advocate for and introduce a decoupled evaluation strategy by separately quantify the severity of compounding errors and the faithfulness of memory retrieval in long interactive video generation. Our main contributions are: (1). We systematically decouple and analyze the challenges of com- pounding errors and insufficient memory in autoregressive video generation for interactive world modeling. (2). We propose video retrieval augmented generation (VRAG) with explicit global state conditioning, which significantly improves long-term spatiotemporal coherence and reduces compounding errors for interactive video generation. (3). We conduct a comprehensive comparison with various long-context methods adapted from LLM techniques, including position interpolation, neural memory augmentation, and historical frame retrieval, demonstrating their limited effectiveness due to the inherent weak in-context learning capabilities of video diffusion models. This work sheds light on the fundamental obstacles in building consistent, interactive video world models and provides a benchmark and evaluation framework for future research in this direction. 2 Related Works Video Diffusion Models Diffusion generative modeling has significantly advanced the fields of image and video generation [ 19,12,20,21,22,23,24,25,26,27,28,29]. Latent video diffusion models [ 21] operate on video tokens within a latent space derived from a variational auto-encoder (V AE) [ 30], building upon prior work in latent image diffusion models [ 31]. The Diffusion Trans- former (DiT) [ 32] introduced the Transformer [ 33] backbone as an alternative to the previously prevalent U-Net architecture [23, 21, 22] in diffusion models. Long Video Generation Autoregressive video generation [ 11,12,13,14,34,35,36,37,38,39] represents a natural approach for long video synthesis by conditioning on preceding frames, drawing inspiration from successes in large language models. This can be implemented using techniques such as masked conditional video diffusion [ 40,34] or Diffusion Forcing [ 41]. Diffusion Forcing introduces varying levels of random noise per frame to facilitate autoregressive generation conditioned on frames at inference time. Furthermore, the autoregressive framework naturally supports interactive world simulation by allowing action inputs at each step to influence future predictions. 
Nevertheless, compounding errors remain a significant challenge in long video generation, particularly within the autoregressive paradigm, as will be discussed subsequently. 2 Interactive Video World Models World models [ 1,2,3] are simulation systems designed to predict future trajectories based on the current state and chosen actions. Diffusion-based world models [ 42,43,10] facilitate the modeling of high-dimensional distributions, enabling high-fidelity prediction of
diverse trajectories, even directly in pixel space. The Sora model [ 8] introduced the concept of leveraging video generation models as world simulators. Extending video generation models with interactive capabilities has led to promising applications in diverse domains, including game simulation like Genie [ 9], GameNGen [ 10], Oasis [ 44], Gamegen-x [ 45], The Matrix [ 37], Mineworld [ 46], GameFactory [ 47] and so on [ 43], autonomous driving [ 48], robotic manipula- tion [ 35,49], and navigation [ 50]. While existing work on interactive video world models has made significant engineering advances, there remains a notable gap in systematically analyzing and addressing the fundamental challenges underlying long-term consistency and compounding errors. A lack of spatiotemporal consistency is a primary bottleneck for developing internal world models using current video generation techniques. One line of research addressing this involves predicting the underlying 3D world structure like Genie2 [ 51], Aether [ 52], Gen3C [ 53] and others [ 54,55,56]; however, these approaches often suffer from lower resolution compared to direct video generation due to the complexity of 3D representations, exhibit limited interaction capabilities, and typically operate only within localized regions. Consequently, our work focuses on enhancing the consistency of video-based world models [ 10,34,57]. SlowFast-VGen [ 34] employs a dual-speed learning system to progressively trained LoRA modules for memory recall, utilizing semantic actions but offering limited interactivity. Concurrent work [ 57] explores interactive world simulation through the integration of supplementary memory blocks. 3 Methodology 3.1 Preliminary: Latent Video Diffusion Model Video diffusion models have emerged as a powerful framework for video generation. We adopt a latent video diffusion model [ 21] that operates in a compressed latent space rather than pixel space for computational efficiency. Specifically, given an input video sequence x∈RL×H×W×3, where Lis the number of frames, HandWare the height and width of each frame respectively, we first encode it into a latent representation z=E(x)using a pretrained variational autoencoder (V AE). The forward process gradually adds Gaussian noise to the latent according to a variance schedule {βt}T t=1: q(zt|zt−1) =N(zt;p 1−βtzt−1, βtI) (1) The model learns to reverse this process by predicting the noise ϵθat each step: L=Et,ϵ,z[∥ϵ−ϵθ(zt, t)∥2 2] (2) where zt=√¯αtz0+√1−¯αtϵwithϵ∼ N(0,I). At inference time, we can sample new videos by starting from random noise zT∼ N(0,I)and iteratively denoising: zt−1=1√αt(zt−βt√1−¯αtϵθ(zt, t)) +σtϵ (3) where αt= 1−βtand¯αt=Qt s=1αs. The final latent sequence z0is decoded back to pixel space using the decoder Dto obtain the generated video. 3.2 Interactive Long Video Generation To enable interactive long video generation conditioned on action sequences, we augment the base diffusion model with two techniques: (1) additional action condition with adaptive layer normalization (AdaLN), and (2) random frame noise for autoregressive modeling. Action Conditioning To enable interactive video generation conditioned on action sequences, we augment the base diffusion model with adaptive layer normalization (AdaLN). Given an action sequence a∈RL×Awhere Ais the action dimension, we first embed it into a latent space using a learnable embedding layer: ea=Embed (a)∈RL×Dewhere Deis the embedding dimension. 
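For reference, the reverse process of Eq. (3) in the preliminary above can be written as a short sampling loop; `eps_theta` and the noise schedule below are placeholders rather than the authors' model, and sigma_t is taken as sqrt(beta_t), one common choice.

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_theta, shape, betas, device="cpu"):
    """Iteratively denoise z_T ~ N(0, I) into a clean latent z_0 (Eq. 3).

    eps_theta(z_t, t) -> predicted noise; betas: (T,) variance schedule.
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape, device=device)                    # z_T
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_theta(z, t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / torch.sqrt(alphas[t])         # posterior mean of z_{t-1}
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)  # sigma_t * noise
    return z  # decode back to pixels with the VAE decoder D afterwards
```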
Figure 2: Overview of our VRAG framework for interactive video generation.
The framework incorporates global state conditioning and memory retrieval mechanisms to ensure spatiotemporal consistency and mitigate error accumulation. During both training and inference, retrieved memory serves as context for joint self-attention in spatiotemporal DiT blocks. The model employs per-frame noise injection during training to facilitate autoregressive sampling at inference time. For each normalization layer in the diffusion model, we learn action-dependent scale and shift parameters through linear projections: γa=Wγea+bγ∈RL×Dh, βa=Wβea+bβ∈RL×Dh, where Dhmatches the hidden dimension of the feature maps. The AdaLN operation is then defined asAdaLN (h) =γa⊙LayerNorm (h) +βa, where h∈RL×Dhrepresents the intermediate feature maps and ⊙denotes dot production. These AdaLN layers are integrated into each DiT block of the diffusion model, allowing the network to modulate its feature representations based on the input actions while maintaining the core diffusion architecture. Auto-regressive Video Generation To enable long video generation, we adopt an autoregressive approach where we generate frames sequentially. At each step, we condition on a fixed-length context window Lcof previously generated frames. However, naive autoregressive generation with teacher forcing can suffer from large compounding errors where mistakes accumulate over time. We apply the Diffusion Forcing [41] technique during training. Specifically, during training, we randomly add noise to each frame in the entire input video sequence according to the diffusion schedule: zi t=√¯αtzi 0+√1−¯αtϵi, ϵi∼ N(0,I), where zi trepresents the noised latent of the i-th frame. This forces the model to be robust to noise in the conditioning frames and prevents it from relying too heavily on the context. With above two techniques, the training objective for action-conditioned auto-regressive video models become: LDF=E[t],ϵ,z,a[∥ϵ−ϵθ(z[t],[t],a)∥2 2],ϵ={ϵi}L i=1,z[t]={zi t}L i=1 (4) where [t]is vector of Ltimesteps with different t∈[T]for each frame. The noise prediction model ϵθconditioned on both the action sequence aand noised frames z[t]. Architecture We apply diffusion transformer (DiT) for video generation modeling. We adopt spatiotemporal DiT block with separate spatial and temporal attention modules. Rotary Position Embedding (RoPE) [ 58] is applied for both attention modules, and temporal attention is implemented with causal masking. 3.3 Retrieval Augmented Video World Model with Global State While the vanilla model in Sec. 3.2 provides a foundation for interactive video generation, it lacks robust mechanisms for maintaining long-term consistency and world model coherence. To address these limitations, we integrate memory retrieval and context enhancement with inspiration from LLMs, and incorporate video-specific approaches such as historical frame buffer and global state con- ditioning. These enhancements enable more consistent and coherent autoregressive video generation by providing the model with better access to historical context and spatial awareness. 4 Global State Conditioning To enhance spatial consistency in video generation, we incorporate global state information—specifically the character’s current coordinates and pose—as an additional conditioning signal. The global state vector s∈RSconsists of two key components: sposrepresenting 3D position coordinates and soricapturing orientation angles. 
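As a small illustration, the per-frame global state can be assembled from the agent's pose and, as noted in the training details (Section 4.2), expressed relative to the initial state; tensor shapes and names here are illustrative assumptions.

```python
import torch

def build_global_state(positions: torch.Tensor, yaws: torch.Tensor) -> torch.Tensor:
    """Build per-frame global state vectors s = [x, y, z, yaw] with shape (L, 4).

    positions: (L, 3) world coordinates; yaws: (L,) orientation angles.
    Values are expressed relative to the first frame to ease diffusion training.
    """
    rel_pos = positions - positions[0]            # translate so the first frame is the origin
    rel_yaw = (yaws - yaws[0]).unsqueeze(-1)      # relative heading
    return torch.cat([rel_pos, rel_yaw], dim=-1)  # (L, 4) global state sequence s
```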
Given an action sequence a∈RL×A and the global state sequence s∈RL×S, both are transformed by a learnable embedding layer, ec=Embed c(a,s), to produce conditioning features. These features are then fed into AdaLN layers within the diffusion model. This mechanism allows the model to modulate its generation process,
adapting to both the input actions and the character’s spatial context, thereby improving overall coherence. Video Retrieval Augmented Generation (VRAG) Beyond global state conditioning, we propose memory retrieval augmented generation to enhance the model’s ability to leverage historical context while maintaining temporal coherence, namely video retrieval augmented generation (VRAG). For VRAG, we combine the concatenated historical and current frames with their corresponding action sequences ˜a∈RL×Aand global state sequences ˜s= [shist,s]∈RL×Sas conditional inputs to the model. The historical frames are retrieved from a fixed-length buffer B, which stores previously generated frames. The per-frame retrieval process is based on a heuristic sampling strategy, where we select the most relevant historical frames based on similarity search to concatenate with the current context. The similarity score based on global state is defined as: r(ˆs) =fsim(ˆs⊙w, sL−1⊙w),ˆs∈ B (5) where fsimis a distance metric (e.g., Euclidean distance) between the history frame and the last frame to be predicted sL−1, andw∈RSis a weight vector that modulates the importance of different state components. The top Lhmost similar historical states and frames are selected and sorted to form the retrieved context. Unlike RAG in LLMs which leverages strong in-context learning capabilities, video diffusion models exhibit weak in-context learning abilities, making direct inference with historical frames as context ineffective , as demonstrated later in our experiments. To address this limitation, we propose VRAG training with key modifications to the standard RAG approach, enabling effective memory-augmented video generation. During training, we retrieve historical frames zhist∈RLh×Dand concatenate them with the current context window z∈RLc×Dto form the extended context ˜z= [zhist,z]. For effective VRAG, we make several key modifications: (1). To distinguish retrieved frames from normal context frames, we modify the RoPE embeddings by adding a temporal offset ∆tto the retrieved frames’ position indices. (2). Additionally, we apply lower noise levels βt′< β tto the retrieved frames zhistto simulate partially denoised historical frames during inference. This enhances the robustness of the model with imperfect historical frames generated previously during the autoregressive process. The model is trained to denoise for the entire context ˜zincluding both retrieved and current frames. (3). To ensure the model focuses on denoising the current context while leveraging historical information, we mask the diffusion loss LDFfor retrieved frames. (4). Furthermore, for retrieved frames, we only condition on their global states shist∈RLh×S, masking out action conditions ahist∈RLh×Ato avoid temporal discontinuity in action sequences. This selective conditioning approach helps maintain spatial consistency while preventing action-related artifacts from propagating through the generation process. Overall, the training objective of VRAG on diffusion models is defined as: LVRAG =E[t],[t′],ϵ,˜z,a,s[∥ϵt−ϵθ(˜z˜t,˜t,˜a,˜s)∥2 2⊙m], (6) ˜z˜t= [zhist,[t′],z[t]],˜a= [∅Lh,a],˜s= [shist,s],m= [0Lh,1Lc], (7) where ˜tis a concatenation of [t′]and[t], with t′< tandt′, t∈[T]. 
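A minimal sketch of the retrieval step in Eq. (5) is given below: buffered frames are scored by the weighted distance between their global states and the state of the last frame to be predicted, and the top-Lh entries are returned in temporal order; the buffer layout and the example weights are assumptions for illustration, not the released implementation.

```python
import torch

def retrieve_history(buffer_states: torch.Tensor,   # (N, S) global states of buffered frames
                     buffer_frames: torch.Tensor,   # (N, ...) corresponding latent frames
                     query_state: torch.Tensor,     # (S,) state s_{L-1} of the frame to predict
                     weights: torch.Tensor,         # (S,) per-component importance w
                     top_k: int = 10):
    """Select the top_k buffered frames whose weighted states are closest to the query (Eq. 5)."""
    diff = (buffer_states - query_state) * weights           # elementwise weighting, broadcast over N
    dist = diff.norm(dim=-1)                                  # Euclidean distance as f_sim
    idx = torch.topk(-dist, k=min(top_k, dist.numel())).indices
    idx, _ = torch.sort(idx)                                  # keep retrieved frames in temporal order
    return buffer_frames[idx], buffer_states[idx]

# Example: weight position more heavily than orientation (hypothetical values)
# frames, states = retrieve_history(B_states, B_frames, s_last, torch.tensor([1., 1., 1., 0.5]))
```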
3.4 Long-context Extension Baselines To investigate whether established long-context extension techniques from LLMs can effectively enhance video generation models, we design three complementary approaches that leverage either explicit frame context or neural memory hidden states, based on vanilla models in Sec. 3.2. These methods serve as baseline comparisons to our main approach, specifically targeting the model’s ability to maintain spatial coherence and temporal consistency in long video generation. Through
these baselines, we aim to verify the in-context learning capabilities of video diffusion models and assess their effectiveness in handling extended sequences. 5 Long-context Enhancement We extend the temporal context window using YaRN [ 59] modifica- tion for RoPE in temporal attention. RoPE encodes relative positions via complex-valued rotations, where the inner product between query qmand key kndepends on relative distance (m−n). YaRN extends the context window by applying a frequency transformation to the rotary position embed- dings. This transformation scales the rotation angles in a way that preserves the relative positioning information while allowing the model to handle longer video sequences, after small-scale fine-tuning on longer video clips. Frame Retrieval from History Buffer We implement a fixed-length buffer Bstoring historical latent frames with a heuristic sampling strategy. The buffer is partitioned into NS= 5exponentially decreasing segments Gj, where Lj=L1·αj−1. From each segment Gj, we sample kframes to form subset Fj. The retrieved memory zmem= [F1, . . . , F NS]is concatenated with current frame window zas additional context: ˜z= [zmem,z], which is then passed into the spatiotemporal DiT blocks. This design ensures higher sampling density for recent frames, emphasizing recent visual information while maintaining access to historical context for temporal consistency. Neural Memory Augmented Attention Instead of using explicit frames as context in above two methods, we explore a neural memory mechanism to store and retrieve hidden states. This approach is inspired by the success of Infini-attention [ 60] in LLMs, which utilizes a compressed memory representation to enhance attention mechanisms. The model processes video in overlapping segments to maintain temporal continuity. For each video segment zs, we compute query qs, keyks and value vsmatrices. The model retrieves hidden state Amemfrom compressive memory Ms−1: Amem=σ(qs)Ms−1 σ(qs)ns−1. Memory Ms−1and normalization vector ns−1are then updated. The final attention output combines retrieved hidden state Amemand standard attention using learnable gating to maintain visual consistency across the long video sequence. More details of the above methods can be found in the supplementary material. 4 Experiments Figure 3: Visual comparison of VRAG with ground truth videos on world coherence evaluation. With 100 initial frames as history buffer, VRAG predicts 200 subsequent frames. 4.1 Datasets and Evaluation Protocol For training, we collected 1000 long Minecraft gameplay videos (17 hours total) using MineRL [ 61]. All videos have a fixed resolution of 640 ×360 pixels. Each sequence spans 1200 frames, annotated 6 with action vectors (forward/backward movement, jumping, camera rotation) and world coordinates (x, y, z positions and yaw angle). For evaluation, we assembled two distinct test sets: (1) for compounding error evaluation, we use 20 long videos of 1200 frames with randomized actions and locations, and (2) for world coherence, we use 60 carefully curated 300-frame video sequences designed to systematically assess spatiotemporal consistency. These curated sequences feature controlled motion patterns including in-place rotation, direction reversal, and circular trajectory following. The first 100 frames of each sequence serve as initialization buffer for methods requiring buffer frames or are excluded from evaluation for others. Each model autoregressively generates next single frame with stride 1 until the desired
length. We evaluate the models against ground-truth test sets using several metrics: Structural Similarity Index (SSIM) [62] to measure spatial consistency, Peak Signal-to-Noise Ratio (PSNR) for pixel-level reconstruction quality, and Learned Perceptual Image Patch Similarity (LPIPS) [63] to assess perceptual similarity. For the compounding error evaluation, we find that SSIM more accurately reflects the faithfulness of frames over long sequences.

4.2 Training Details

A consistent window size of 20 frames is applied for both model training and evaluation for fair comparison. For vanilla Diffusion Forcing, we additionally train a variant with a window size of 10 frames for context-length evaluation. For our VRAG method, we combine 10 retrieved frames with 10 current frames for both training and inference. We represent the agent's state using a global state vector s = [x, y, z, yaw] during training, which can be extended to incorporate a full 3D pose representation when needed. To facilitate training convergence, these values are normalized relative to the initial state, thereby reducing the complexity of the diffusion process. The YaRN implementation extends the vanilla model (window size 20) by replacing the position embeddings with YaRN with a stretching factor of 4, followed by fine-tuning for 10^4 steps on 80-frame sequences. During evaluation of YaRN, we use a 40-frame window. The Infini-attention variant with neural memory employs a sliding window of size 20 and stride 10, using the first 10 frames for memory state updates and the last 10 for local attention computation. The History Buffer method maintains a 124-frame buffer partitioned into 5 exponentially decreasing segments (L1 = 2, α = 2), sampling 2 frames per segment to form 10 historical frames that are concatenated with the 10 current frames. All models are trained for 3 epochs on the dataset with a batch size of 32 across 8 A100 GPUs.

4.3 World Coherence Results

Figure 4: Visual comparison of different methods, evaluated for world coherence analysis.

Figure 5: SSIM scores over time for different methods on world coherence evaluation.

Table 1: Quantitative comparison of world coherence across different methods, evaluated on videos with 300 frames.

| Method | SSIM ↑ | PSNR ↑ | LPIPS ↓ |
|---|---|---|---|
| DF (window 10) | 0.455 | 16.161 | 0.509 |
| DF (window 20) | 0.466 | 16.643 | 0.538 |
| YaRN | 0.462 | 16.567 | 0.532 |
| History Buffer | 0.459 | 16.922 | 0.543 |
| VRAG | 0.506 | 17.097 | 0.506 |

We investigate the spatiotemporal consistency of internal world models by evaluating the predicted videos given initial frames and action sequences. As visualized in Fig. 4, our VRAG provides an effective approach to enhance the model's ability to leverage historical context for improving world coherence. Fig. 3 shows more visual comparisons of VRAG with ground-truth videos. We evaluate the world coherence of different methods using multiple metrics: Figure 5 shows the SSIM scores over time, while Table 1 presents a comprehensive comparison across all metrics. Our VRAG method achieves the best performance across all metrics, demonstrating its superior ability to maintain world coherence in generated videos. Our experimental results demonstrate that expanding the window size from 10 to 20 frames in the baseline DF model improves world coherence, indicating that longer context windows enhance consistency. However,
further context extension using YaRN shows no improvement over the vanilla DF model. This suggests that YaRN's context extension capabilities, while effective in language models, do not transfer effectively to video generation for maintaining world coherence. Similarly, the History Buffer method fails to effectively utilize historical frames for spatiotemporal consistency without explicit in-context training. These findings from both the YaRN and History Buffer approaches reveal that video diffusion models at the current scale possess limited in-context learning capabilities, preventing them from effectively leveraging historical frames for maintaining long-term consistency. The Neural Memory method performs poorly due to its instability during model training.

4.4 Compounding Error Results

Figure 6: Visual comparison of long-term video prediction (1200 frames) across different methods, evaluated for compounding error analysis.

Figure 7: SSIM scores over time for compounding error evaluation.

Table 2: Average SSIM scores across all frames in the compounding error evaluation.

| Method | SSIM ↑ |
|---|---|
| DF (window 10) | 0.297 |
| DF (window 20) | 0.321 |
| YaRN | 0.316 |
| History Buffer | 0.188 |
| Neural Memory | 0.283 |
| VRAG | 0.349 |

We evaluate the compounding error in long video generation across different methods using the SSIM metric. As shown in Figure 7 and Table 2, our VRAG method achieves superior performance with an SSIM score of 0.349, demonstrating better structural-similarity preservation compared to baseline methods. Increasing the window size in DF from 10 to 20 frames improves SSIM, indicating that longer context helps mitigate compounding errors. However, this improvement is still inferior to VRAG's performance, suggesting that our retrieval-augmented approach provides more effective long-term consistency. As visualized in Fig. 6, our VRAG method generates more coherent and consistent frames over long sequences, while other methods exhibit noticeable artifacts and inconsistencies. The History Buffer method performs poorly, with an SSIM score of 0.188, indicating that naive historical frame retrieval without effective in-context training fails to maintain long-term consistency. Given its limited performance in the world coherence experiments (Section 4.3), we exclude the Neural Memory method from this longer video prediction visualization.

4.5 Ablation: Memory and Training of VRAG

Figure 8: Comparison of SSIM scores over time for VRAG variants.

Table 3: Ablation study of VRAG components. We compare the full model with variants that remove either the memory component (additional global state conditioning only) or the training component (in-context learning only).

| Method | SSIM ↑ | PSNR ↑ | LPIPS ↓ |
|---|---|---|---|
| VRAG | 0.506 | 17.097 | 0.506 |
| VRAG (no training) | 0.455 | 16.670 | 0.528 |
| VRAG (no memory) | 0.436 | 16.372 | 0.547 |

We ablate the key designs of the VRAG method, including the memory and training components. The ablation results are shown in Figure 8 and Table 3. We compare the full VRAG model with two variants: (1) VRAG without the memory component, which only uses additional global state conditioning, and (2) VRAG without the training component, i.e., the vanilla model with retrieval-augmented generation for in-context learning at inference. The ablation study is conducted on the world coherence evaluation dataset. The ablation results reveal several key insights about the VRAG components. First, removing the memory component leads to the largest performance drop across all metrics, with SSIM decreasing by 13.8%
|
https://arxiv.org/abs/2505.21996v1
|
4.5 Ablation: Memory and Training of VRAG

We ablate the key designs of VRAG, including the memory and training components. The ablation results are shown in Figure 8 and Table 3. We compare the full VRAG model with two variants: (1) VRAG without the memory component, which only uses additional global state conditioning, and (2) VRAG without the training component, i.e., the vanilla model with retrieval-augmented generation for in-context learning at inference. The ablation study is conducted on the world coherence evaluation dataset. The ablation results reveal several key insights about the VRAG components. First, removing the memory component leads to the largest performance drop across all metrics, with SSIM decreasing by 13.8% and LPIPS increasing by 8.1%. This demonstrates that the memory mechanism is crucial for maintaining spatiotemporal consistency and quality. Second, removing the training component also causes significant degradation, with SSIM dropping by 10.1% and LPIPS increasing by 4.3%, highlighting the weak in-context learning capabilities of current video models. The full VRAG model achieves the best performance across all metrics, showing that both components work synergistically to improve video generation quality.

5 Conclusions and Discussions

In conclusion, VRAG tackles the fundamental challenge of maintaining long-term consistency in interactive video world models through an innovative combination of memory retrieval-augmented generation and global state conditioning. By maintaining a buffer of past frames associated with spatial information, VRAG effectively recalls relevant context and preserves coherent dynamics across extended sequences. Its memory mechanism with an explicit in-context training process substantially mitigates compounding errors and improves spatiotemporal consistency. Extensive experiments on long-horizon interactive tasks demonstrate the superior performance of VRAG over both long-context and memory-based baselines, establishing a scalable framework for faithful video-based world modeling. Notably, we discovered that context enhancement techniques from LLMs fail to transfer effectively to the video generation domain, even with shared transformer backbones, due to the inherent limitations of in-context learning capabilities for video models. This finding underscores the critical importance of VRAG's in-context training approach. We hope our work will inspire further exploration into memory retrieval mechanisms for long video generation and interactive simulation.

Limitations. We acknowledge the current computational limitations preventing effective scaling to longer sequences or larger architectures. GPU memory constraints severely restricted the memory buffer size and training sequence length, potentially impacting long-horizon consistency and model performance. The higher computational cost of memory retrieval-augmented generation may further limit deployment in resource-constrained settings such as edge devices. Future work could explore more efficient memory mechanisms, adaptive optimization strategies, and hardware-aware algorithms.

Broader Impacts. We acknowledge serious ethical concerns regarding the potential misuse of such technology for creating highly convincing misleading or manipulated video content in games or simulation systems. We strongly encourage responsible development and deployment of video generation technologies, with appropriate technical and ethical safeguards, clear accountability frameworks, and transparency measures in place to mitigate risks.

References

[1] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. Advances in neural information processing systems, 28, 2015. [2] David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. Advances in neural information processing systems, 31, 2018. [3] Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020. [4] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics.
In International Conference on Machine Learning, pages 2256–2265, 2015. [5] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019. [6] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020. [7] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar,
|
https://arxiv.org/abs/2505.21996v1
|
Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. [8] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. [9] Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, et al. Genie: Generative interactive environments. In Forty-first International Conference on Machine Learning, 2024. [10] Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter. Diffusion models are real-time game engines. arXiv preprint arXiv:2408.14837, 2024. [11] Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. arXiv preprint arXiv:1906.02634, 2019. [12] William Harvey, Søren Nørskov, Niklas Kölch, and George Vogiatzis. Flexible diffusion modeling of long videos. arXiv preprint arXiv:2205.11495, 2022. [13] Zongyi Li, Shujie Hu, Shujie Liu, Long Zhou, Jeongsoo Choi, Lingwei Meng, Xun Guo, Jinyu Li, Hefei Ling, and Furu Wei. Arlon: Boosting diffusion transformers with autoregressive models for long video generation. arXiv preprint arXiv:2410.20502, 2024. [14] Desai Xie, Zhan Xu, Yicong Hong, Hao Tan, Difan Liu, Feng Liu, Arie Kaufman, and Yang Zhou. Progressive autoregressive video diffusion models. arXiv preprint arXiv:2410.08151, 2024. [15] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [16] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [17] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2:1, 2023. [18] Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473, 2024. [19] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22563–22575, 2023. [20] Patrick Esser, Robin Rombach, and Björn Ommer. Structure-aware video generation with latent diffusion models. arXiv preprint arXiv:2303.07332, 2023. [21] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. [22] Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7310–7320, 2024. [23] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and
|
https://arxiv.org/abs/2505.21996v1
|
David J. Fleet. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022. [24] Uriel Singer, Adam Polyak, Eliya Nachmani, Guy Dahan, Eli Shechtman, and Haggai Hacohen. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. [25] Yu Hong, Jing Wei, Xing Liu, Xiaodi Wang, Yutong Bai, Haitao Li, Ming Zhang, and Hao Xu. Cogvideo: Large-scale pretraining for text-to-video generation with transformers. arXiv preprint arXiv:2205.15868, 2022. [26] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. [27] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. [28] Zihan Ding, Chi Jin, Difan Liu, Haitian Zheng, Krishna Kumar Singh, Qiang Zhang, Yan Kang, Zhe Lin, and Yuchen Liu. Dollar: Few-step video generation via distillation and latent reward optimization. arXiv preprint arXiv:2412.15689, 2024. [29] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all. arXiv preprint arXiv:2412.20404, 2024. [30] Diederik P Kingma, Max Welling, et al. Auto-encoding variational bayes, 2013. [31] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. [32] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4195–4205, 2023. [33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. [34] Yining Hong, Beide Liu, Maxine Wu, Yuanhao Zhai, Kai-Wei Chang, Linjie Li, Kevin Lin, Chung-Ching Lin, Jianfeng Wang, Zhengyuan Yang, et al. Slowfast-vgen: Slow-fast learning for action-driven long video generation. arXiv preprint arXiv:2410.23277, 2024. [35] Jialong Wu, Shaofeng Yin, Ningya Feng, Xu He, Dong Li, Jianye Hao, and Mingsheng Long. ivideogpt: Interactive videogpts are scalable world models. Advances in Neural Information Processing Systems, 37:68082–68119, 2024. [36] Jihwan Kim, Junoh Kang, Jinyoung Choi, and Bohyung Han. Fifo-diffusion: Generating infinite videos from text without training. arXiv preprint arXiv:2405.11473, 2024. [37] Ruili Feng, Han Zhang, Zhantao Yang, Jie Xiao, Zhilei Shu, Zhiheng Liu, Andy Zheng, Yukun Huang, Yu Liu, and Hongyang Zhang. The matrix: Infinite-horizon world generation with real-time moving control. arXiv preprint arXiv:2412.03568, 2024. [38] Sand-AI. Magi-1: Autoregressive video generation at scale, 2025. [39] Roberto Henschel, Levon Khachatryan, Daniil Hayrapetyan, Hayk Poghosyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Streamingt2v: Consistent, dynamic, and extendable long video generation from text. arXiv preprint arXiv:2403.14773, 2024. [40] Vikram Voleti, Alexia Jolicoeur-Martineau, and Chris Pal. Mcvd-masked conditional video diffusion for prediction, generation, and interpolation.
Advances in neural information processing systems, 35:23371–23385, 2022. [41] Boyuan Chen, Diego Martí Monsó, Yilun Du, Max Simchowitz, Russ Tedrake,
|
https://arxiv.org/abs/2505.21996v1
|
and Vincent Sitzmann. Diffusion forcing: Next-token prediction meets full-sequence diffusion. Advances in Neural Information Processing Systems, 37:24081–24125, 2024. [42] Zihan Ding, Amy Zhang, Yuandong Tian, and Qinqing Zheng. Diffusion world model: Future modeling beyond step-by-step rollout for offline reinforcement learning. arXiv preprint arXiv:2402.03570, 2024. [43] Eloi Alonso, Adam Jelley, Vincent Micheli, Anssi Kanervisto, Amos J Storkey, Tim Pearce, and François Fleuret. Diffusion for world modeling: Visual details matter in atari. Advances in Neural Information Processing Systems, 37:58757–58791, 2024. [44] Decart, Etched, Julian Quevedo, Quinn McIntyre, Spruce Campbell, Xinlei Chen, and Robert Wachen. Oasis: A universe in a transformer. 2024. [45] Haoxuan Che, Xuanhua He, Quande Liu, Cheng Jin, and Hao Chen. Gamegen-x: Interactive open-world game video generation. arXiv preprint arXiv:2411.00769, 2024. [46] Junliang Guo, Yang Ye, Tianyu He, Haoyu Wu, Yushu Jiang, Tim Pearce, and Jiang Bian. Mineworld: a real-time and open-source interactive world model on minecraft. arXiv preprint arXiv:2504.08388, 2025. [47] Jiwen Yu, Yiran Qin, Xintao Wang, Pengfei Wan, Di Zhang, and Xihui Liu. Gamefactory: Creating new games with generative interactive videos. arXiv preprint arXiv:2501.08325, 2025. [48] Anthony Hu, Lloyd Russell, Hudson Yeo, Zak Murez, George Fedoseev, Alex Kendall, Jamie Shotton, and Gianluca Corrado. Gaia-1: A generative world model for autonomous driving. arXiv preprint arXiv:2309.17080, 2023. [49] Alisson Azzolini, Hannah Brandon, Prithvijit Chattopadhyay, Huayu Chen, Jinju Chu, Yin Cui, Jenna Diamond, Yifan Ding, Francesco Ferroni, Rama Govindaraju, et al. Cosmos-reason1: From physical common sense to embodied reasoning. arXiv preprint arXiv:2503.15558, 2025. [50] Amir Bar, Gaoyue Zhou, Danny Tran, Trevor Darrell, and Yann LeCun. Navigation world models. arXiv preprint arXiv:2412.03572, 2024. [51] Jack Parker-Holder, Philip Ball, Jake Bruce, Vibhavari Dasagi, Kristian Holsheimer, Christos Kaplanis, Alexandre Moufarek, Guy Scully, Jeremy Shar, Jimmy Shi, Stephen Spencer, Jessica Yung, Michael Dennis, Sultan Kenjeyev, Shangbang Long, Vlad Mnih, Harris Chan, Maxime Gazeau, Bonnie Li, Fabio Pardo, Luyu Wang, Lei Zhang, Frederic Besse, Tim Harley, Anna Mitenkova, Jane Wang, Jeff Clune, Demis Hassabis, Raia Hadsell, Adrian Bolton, Satinder Singh, and Tim Rocktäschel. Genie 2: A large-scale foundation world model. 2024. [52] Aether Team, Haoyi Zhu, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Junyi Chen, Chunhua Shen, Jiangmiao Pang, et al. Aether: Geometric-aware unified world modeling. arXiv preprint arXiv:2503.18945, 2025. [53] Xuanchi Ren, Tianchang Shen, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas Müller, Alexander Keller, Sanja Fidler, and Jun Gao. Gen3c: 3d-informed world-consistent video generation with precise camera control. arXiv preprint arXiv:2503.03751, 2025. [54] Fangfu Liu, Wenqiang Sun, Hanyang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, and Yueqi Duan. Reconx: Reconstruct any scene from sparse views with video diffusion model. arXiv preprint arXiv:2408.16767, 2024. [55] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. arXiv preprint arXiv:2405.10314, 2024.
[56] Haoyu Zhen, Qiao Sun, Hongxin Zhang, Junyan Li, Siyuan Zhou, Yilun Du, and Chuang Gan. Tesseract: Learning 4d embodied world models. arXiv preprint arXiv:2504.20995, 2025. [57] Zeqi
|
https://arxiv.org/abs/2505.21996v1
|
Xiao, Yushi Lan, Yifan Zhou, Wenqi Ouyang, Shuai Yang, Yanhong Zeng, and Xingang Pan. Worldmem: Long-term consistent world simulation with memory. arXiv preprint arXiv:2504.12369, 2025. [58] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024. [59] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023. [60] Tsendsuren Munkhdalai, Manaal Faruqui, and Siddharth Gopal. Leave no context behind: Efficient infinite context transformers with infini-attention, 2024. [61] William H Guss, Brandon Houghton, Nicholay Topin, Phillip Wang, Cayden Codel, Manuela Veloso, and Ruslan Salakhutdinov. Minerl: A large-scale dataset of minecraft demonstrations. arXiv preprint arXiv:1907.13440, 2019. [62] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004. [63] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018. [64] Jianwen Jiang, Chao Liang, Jiaqi Yang, Gaojie Lin, Tianyun Zhong, and Yanbo Zheng. Loopy: Taming audio-driven portrait avatar with long-term motion dependency. arXiv preprint arXiv:2409.02634, 2024.

Appendix

A Baseline Method Details

For the baseline methods in Sec. 3.4, we implemented the following techniques to enhance the temporal context window of our video generation model.

Long-context Enhancement. To extend the temporal context window of our video generation model, we apply the YaRN [59] modification of RoPE in the temporal attention module for improved extrapolation. RoPE encodes relative position via complex-valued rotations, such that the inner product between the $m$-th query $\mathbf{q}_m$ and the $n$-th key $\mathbf{k}_n$ depends only on the relative distance $(m-n)$:

$\langle \mathbf{q}_m, \mathbf{k}_n \rangle = \langle f_{W_q}(\mathbf{z}_m, m), f_{W_k}(\mathbf{z}_n, n) \rangle_{\mathbb{R}}$  (8)
$= \mathrm{Re}\big[\langle (W_q \mathbf{z}_m) e^{im\theta}, (W_k \mathbf{z}_n) e^{in\theta} \rangle\big]$  (9)
$= \mathrm{Re}\big[(W_q \mathbf{z}_m)(W_k \mathbf{z}_n)^{*} \cdot e^{i(m-n)\theta}\big]$  (10)
$= g(\mathbf{z}_m, \mathbf{z}_n, m-n)$  (11)

where $\mathrm{Re}[\cdot]$ is the real part of a complex value and $(\cdot)^{*}$ denotes the complex conjugate, $\mathbf{z}_m, \mathbf{z}_n \in \mathbb{R}^D$ are input vectors, $W_q, W_k$ are learned projections, and $\theta \in \mathbb{R}^D$ encodes rotation frequencies per dimension: $\theta_d = b^{-2d/D}$, with $b = 10000$. YaRN modifies the rotated input vector $f_W(\mathbf{z}_m, m, \theta_d)$ by applying a frequency transformation:

$f'_W(\mathbf{z}_m, m, \theta_d) = f_W(\mathbf{z}_m, g(m), h(\theta_d))$  (12)

with $g(m) = m$ and frequency warping function:

$h(\theta_d) = (1 - \gamma(r_d)) \cdot \dfrac{\theta_d}{s} + \gamma(r_d) \cdot \theta_d$  (13)

Here, $s$ is a stretching factor and $r_d = L_c / \lambda_d$ is the context-to-wavelength ratio with $\lambda_d = 2\pi/\theta_d = 2\pi (b')^{2d/D}$ and $b' = b \cdot s^{D/(D-2)}$. The ramp function $\gamma(\cdot)$ interpolates low-frequency dimensions to improve extrapolation while preserving high-frequency components.
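To make the frequency warping of Eqs. (12)-(13) concrete, the following is a minimal sketch; the linear ramp and its cutoff values are illustrative assumptions, as the exact ramp parameters are not restated here.

```python
# Sketch of YaRN-style warping of RoPE frequencies (Eq. 13). The ramp
# gamma and its cutoffs are illustrative assumptions.
import numpy as np

def rope_frequencies(dim, base=10000.0):
    """theta_d = base^(-2d/D) for each rotary dimension pair."""
    d = np.arange(0, dim, 2)
    return base ** (-d / dim)

def yarn_warp(theta, context_len, s, low=1.0, high=32.0):
    """Interpolate low-frequency dimensions by 1/s, keep high-frequency ones (Eq. 13)."""
    wavelength = 2.0 * np.pi / theta
    r = context_len / wavelength                          # r_d = L_c / lambda_d
    gamma = np.clip((r - low) / (high - low), 0.0, 1.0)   # ramp gamma(r_d) in [0, 1]
    return (1.0 - gamma) * theta / s + gamma * theta

theta = rope_frequencies(dim=64)
theta_warped = yarn_warp(theta, context_len=20, s=2.0)    # stretched temporal context
```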
Frame Retrieval from History Buffer. We also experimented with a fixed-length buffer $B$ that stores a history of previously generated latent frames, employing a heuristic sampling strategy for retrieval. Following [64], this strategy involves partitioning $B$ into $N_S = 5$ segments $G_j$ for $j \in \{1, \ldots, N_S\}$, ordered from oldest ($G_1$) to most recent ($G_{N_S}$). The total number of frames in the buffer is $N_B = \sum_{j=1}^{N_S} |G_j|$. The lengths of these segments, $L_j = |G_j|$, decrease exponentially (e.g., $L_j = L_1 \cdot \alpha^{j-1}$ for a base $\alpha < 1$, with $L_1$ being the length of the oldest segment $G_1$), ensuring that more recent segments are shorter.
|
https://arxiv.org/abs/2505.21996v1
|
From each segment $G_j$, $k$ frames are randomly sampled to form a subset $F_j \subseteq G_j$ (where $|F_j| = k$). The retrieved memory $z_{mem}$ is constructed as the concatenation of these sampled frames, $z_{mem} = [F_1, F_2, \ldots, F_{N_S}]$, totaling $N_S \cdot k$ frames. This design with recency bias implies that the sampling density $k / L_j$ is higher for more recent segments, thereby placing greater emphasis on recent information. This retrieved information $z_{mem}$ is concatenated with the current frame window $z$ along the temporal dimension as additional context, $\tilde{z} = [z_{mem}, z]$, which is then passed as input to the spatiotemporal DiT blocks, enabling the model to jointly attend to both recent and historical frames.

Neural Memory Augmentation. To extend video generation capabilities to longer sequences beyond a fixed attention window while retaining memory of past scenes, we adapt Infini-attention [60] as a neural memory mechanism for our video diffusion model. Infini-attention is a recurrent mechanism that augments standard dot-product attention (local context) with a compressed summary of past context (global context) stored in an evolving memory. The model processes the video in segments using a sliding window. To maintain the high degree of temporal continuity crucial for video generation, we employ overlapping segments. This is a modification from the original Infini-attention, which typically processes non-overlapping segments. The input latent video segment $z_s \in \mathbb{R}^{N \times D}$ ($s$ is the segment index) is processed to derive query $q_s$, key $k_s$, and value $v_s$ matrices using standard attention mechanisms. Key-value pairs from processed segments are incrementally summarized and stored in a compressive memory $M$, which can be efficiently queried by subsequent segments using their query vectors. After each slide, the model first retrieves a hidden state $A_{mem}$ by querying the compressive memory $M_{s-1}$:

$A_{mem} = \dfrac{\sigma(q_s) M_{s-1}}{\sigma(q_s) n_{s-1}}$  (14)

where $\sigma(\cdot)$ is an element-wise nonlinear activation function (e.g., $\mathrm{ELU}(\cdot) + 1$) and $n_{s-1}$ is a normalization vector (accumulated up to segment $s-1$). Next, the compressive memory $M_s$ and normalization vector $n_s$ are updated using the KV entries of the current segment $s$:

$M_s = M_{s-1} + \sigma(k_s)^{T}\left( v_s - \dfrac{\sigma(k_s) M_{s-1}}{\sigma(k_s) n_{s-1}} \right), \qquad n_s = n_{s-1} + \sigma(k_s)^{T} \mathbf{1}_N$  (15)

Here, $N$ is the length of the current segment $s$, $\sigma(\cdot)$ is applied element-wise, and $\mathbf{1}_N$ is an $N \times 1$ vector of ones. The final attention output for segment $s$, denoted $A_s$, combines the standard dot-product attention output $A_{local}$ (local context from the current segment) with the retrieved memory state $A_{mem}$ (global context from past segments) using a learnable gating scalar $\beta \in \mathbb{R}$:

$A_s = \mathrm{sigmoid}(\beta) \odot A_{mem} + (1 - \mathrm{sigmoid}(\beta)) \odot A_{local}$  (16)

As in standard multi-head attention, a final linear projection is applied to $A_s$ to produce the output of the Infini-attention layer.

B Implementation Details

The VAE compresses each input frame of size $3 \times 640 \times 360$ into a latent representation of size $16 \times 32 \times 18$ before processing by the diffusion model. All diffusion models employ a hidden size of 1024 and a depth of 16, with one temporal and one spatial attention module in each spatiotemporal DiT block. We use a uniform learning rate of $8 \times 10^{-5}$ during training. For Infini-attention, we apply a learning rate of $3 \times 10^{-3}$ specifically to the global weight parameter to effectively balance global and local attention contributions while maintaining stable convergence. In VRAG, we set the weights as $[10.0, 10.0, 10.0, 3.0]$ across the global state dimensions ($[x, y, z, \mathrm{yaw}]$) in the similarity function, to accommodate the wider range of yaw values.
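As an illustration of how such a weighted similarity over the global state can drive retrieval from the frame buffer, here is a minimal sketch; the negative weighted squared distance and the buffer layout are our assumptions, since only the weights over [x, y, z, yaw] are specified above.

```python
# Sketch of weighted global-state retrieval from the memory buffer. Only the
# weights [10, 10, 10, 3] over [x, y, z, yaw] come from the text; the squared
# distance form of the similarity and all names are illustrative assumptions.
import numpy as np

STATE_WEIGHTS = np.array([10.0, 10.0, 10.0, 3.0])       # [x, y, z, yaw]

def retrieve_frames(buffer_states, buffer_frames, query_state, k=4):
    """Return the k stored frames whose global states best match the query."""
    diff = buffer_states - query_state                   # (B, 4)
    similarity = -np.sum(STATE_WEIGHTS * diff ** 2, axis=-1)
    top_k = np.argsort(similarity)[-k:][::-1]            # most similar first
    return [buffer_frames[i] for i in top_k]

# buffer_states: (B, 4) array of stored [x, y, z, yaw]; buffer_frames: list of
# B latent frames; query_state: (4,) global state of the current window.
```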
|
https://arxiv.org/abs/2505.21996v1
|
To differentiate retrieved historical frames from current context frames along the temporal dimension, we incorporate a temporal offset of 100 in the rotary position embeddings of the temporal attention for retrieved frames.

C Additional Experiments

C.1 Analysis of Compounding Error Evaluation Metrics

Traditional metrics like SSIM, PSNR, and LPIPS measure pixel-level or feature-level differences between original and generated images. However, these metrics lose effectiveness when the generated video sample deviates significantly from the original video sample, especially in the compounding error evaluation, even if the generated frames fall in the same distribution and are visually reasonable. As shown in Figure 9, we normalize all metrics to a 0-1 scale where higher values indicate better generation quality (with the SSIM score flipped). While all metrics perform well on the initial frame (index 0), assigning high scores to the ground truth, their values begin to deteriorate after frame 100. To address this limitation, we developed a discriminator-based evaluation metric. We train a discriminator using 1000 videos from the vanilla DF model (window size 20), with each video containing 1000 frames. This yielded a dataset of $10^6$ ground truth frames and $10^6$ generated frames as fake ones. We implemented the discriminator as a binary classifier using a lightweight architecture with 4 ResNet blocks; an overly large discriminator architecture would lead to less meaningful discriminative signals. Each block contains two convolutional layers with batch normalization and activation functions. This design provides discriminative outputs while maintaining computational efficiency.

Figure 9: Comparison of SSIM, PSNR, LPIPS, and discriminator metrics. All metrics are normalized to the [0,1] range, where higher values indicate better performance for all scores. The discriminator score can accurately capture variations in generated image quality, while the other metrics are affected by distribution shift and fail to properly reflect compounding errors.

As shown in Fig. 9, the decrease of the discriminator score faithfully reflects the distortions in generated images, while the other metrics decline for two reasons: image quality degradation and distributional shift from the original video. This shift prevents traditional metrics from accurately assessing generation performance in terms of the compounding error. For instance, while the 270th frame shows significantly better generation quality than the 1080th frame, SSIM, PSNR, and LPIPS assign similar scores to both. This indicates that the distribution shift has become the dominant factor in lowering the metric scores, making these metrics unreliable for evaluating compounding error in long-range video generation. Unlike traditional metrics, the discriminator's evaluation remains robust to distribution shifts since it does not depend on the original image, but only on the distortion of the generated images. This makes the discriminator score a more reliable metric for evaluating compounding errors in this case. However, the discriminator approach has several limitations. First, training requires sampling from a pre-trained diffusion model, which incurs computational overhead. Second, the training of the discriminator heavily depends on human judgment. We find that even a shallow ResNet architecture can effectively distinguish between ground truth and generated images.
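For concreteness, a minimal sketch of such a lightweight 4-block ResNet discriminator is given below; PyTorch is assumed, and the channel widths, strides, and classification head are our own choices rather than the paper's exact configuration.

```python
# Sketch of a lightweight ResNet-style binary discriminator for real vs.
# generated frames. Only "4 ResNet blocks, two conv layers each, with batch
# norm and activations" comes from the text; the rest is illustrative.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride)   # match the residual shape
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return self.act(h + self.skip(x))

class FrameDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(
            ResBlock(3, 32), ResBlock(32, 64), ResBlock(64, 128), ResBlock(128, 256)
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1))

    def forward(self, frames):                 # frames: (B, 3, H, W)
        return self.head(self.blocks(frames))  # logits; sigmoid gives a realness score

# Trained with nn.BCEWithLogitsLoss on ground-truth (label 1) vs. generated (label 0) frames.
```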
This finding suggests that an overly complex model might assign uniformly low scores to all generated content, making the discriminator metric less informative. Finally, the discriminator shows limited
|
https://arxiv.org/abs/2505.21996v1
|
generalization capability. When evaluating videos generated by new methods or datasets, the discriminator may be deceived into assigning inappropriately high scores. Therefore, we do not report the discriminator score in the main paper, and advocate further investigation into faithful evaluation of compounding error in future work.

C.2 Vanilla Long-context Extension vs. YaRN

To ensure a fair comparison, we evaluate YaRN against a baseline that directly extrapolates the vanilla model's window size from 20 during training to 40 at inference, matching YaRN's inference window length in our experiments in Sec. 4. Evaluation of the quantitative metrics LPIPS, SSIM, and PSNR, shown in Figures 10, 11, and 12, indicates that YaRN maintains lower compounding error for long video generation (1100 frames). This demonstrates YaRN's effectiveness in extending the context window of diffusion video models to 40 frames after minimal fine-tuning. Vanilla extension of the context length on DF models performs poorly due to the out-of-distribution window size at inference. While YaRN effectively extends the context window, its performance improvements are constrained by the inherent limitations of diffusion models in in-context learning. As demonstrated in Figure 13, the model exhibits difficulties in effectively leveraging long-range dependencies, leading to suboptimal spatiotemporal consistency against the ground truth. In addition, YaRN also incurs greater computational overhead during inference, as it has a larger window size compared with the other methods in our experiments in Sec. 4, making it less suitable for real-time gameplay applications.

Figure 10: Comparison of vanilla long-context extension for the DF model and YaRN with a window length of 40 frames at inference. Lower is better for the LPIPS score.
Figure 11: Comparison of vanilla long-context extension for the DF model and YaRN with a window length of 40 frames at inference. Higher is better for the SSIM score.
Figure 12: Comparison of vanilla long-context extension for the DF model and YaRN with a window length of 40 frames at inference. Higher is better for the PSNR score.
Figure 13: Visual comparison of vanilla long-context extension for the DF model and YaRN. Both models are inferred with a 40-frame window.

C.3 More Discussions on Main Results

For the main results in Sec. 4.3 and Sec. 4.4, we provide more discussion here. The Infini-attention model faces significant training challenges due to its global attention mechanism. As evidenced in Figure 14, the model struggles to converge during training. For VRAG without the memory component, we incorporated global state conditioning (specifically $[x, y, z, \mathrm{yaw}]$) into the input. However, compared to the vanilla diffusion model, the training process becomes significantly more difficult. This may be due to the higher dimensionality and larger ranges of the spatial condition, whereas the action condition mostly consists of binary states ([0, 1]), making it harder for the model to learn and increasing perplexity.

Figure 14: Training Loss Curves

D More Results
|
https://arxiv.org/abs/2505.21996v1
|
Table 4: Ablation study of VRAG components for compounding error on long video generation. We compare the full model with variants that remove either the memory component (additional global state conditioning only) or the training component (in-context learning only).
Method | SSIM ↑ | PSNR ↑ | LPIPS ↓
VRAG | 0.349 | 12.039 | 0.654
VRAG (no training) | 0.218 | 11.588 | 0.712
VRAG (no memory) | 0.205 | 11.367 | 0.746

Figure 15: World coherence evaluation on all methods for PSNR (left) and LPIPS (right).
Figure 16: Compounding error evaluation on all methods for PSNR (left) and LPIPS (right).
Figure 17: Ablation study of VRAG components for world coherence (left) and compounding error (right), with the PSNR metric. We compare the full model with variants that remove either the memory component (additional global state conditioning only) or the training component (in-context learning only).
Figure 18: Ablation study of VRAG components for world coherence (left) and compounding error (right), with the LPIPS metric. We compare the full model with variants that remove either the memory component (additional global state conditioning only) or the training component (in-context learning only).
Figure 19: Ablation study of VRAG components for world coherence (left) and compounding error (right), with the SSIM metric. We compare the full model with variants that remove either the memory component (additional global state conditioning only) or the training component (in-context learning only).
|
https://arxiv.org/abs/2505.21996v1
|
arXiv:2505.22003v1 [cs.CL] 28 May 2025

LEGAL ASSIST AI: LEVERAGING TRANSFORMER-BASED MODEL FOR EFFECTIVE LEGAL ASSISTANCE∗

Jatin Gupta†, Akhil Sharma†, Saransh Singhania†, Ali Imam Abidi‡
Department of Computer Science and Engineering, Sharda University, Greater Noida, India
†Equal contribution  ‡Corresponding author: aliabidi4685@gmail.com

ABSTRACT

Pursuit of accessible legal assistance in India faces a critical gap, as many citizens struggle to leverage their legal rights due to limited awareness and access to relevant legal information. This paper introduces Legal Assist AI, a transformer-based model designed to bridge this gap by offering effective legal assistance through large language models (LLMs). The system retrieves relevant legal information from a curated database and generates accurate responses, enabling effective assistance for diverse users, including legal professionals, scholars, and the general public. The model was fine-tuned on extensive datasets from the Indian legal domain, including the Indian Constitution, Bharatiya Nyaya Sanhita (BNS), Bharatiya Nagarik Suraksha Sanhita (BNSS) and so forth, providing a robust understanding of the complexities of Indian law. By incorporating domain-specific legal datasets, the proposed model demonstrated remarkable efficiency and specialization in legal Question-Answering. The model was evaluated against state-of-the-art models such as GPT-3.5 Turbo and Mistral 7B, achieving a 60.08% score on the AIBE, outperforming its competitors in legal reasoning and accuracy. Unlike other models, Legal Assist AI avoided common issues such as hallucinations, making it highly reliable for practical legal applications. It showcases the model's applicability in real-world legal scenarios, with future iterations aiming to enhance performance and expand its dataset to cover a broader range of multilingual and case-specific queries as well.

Keywords: Legal Assist AI, Large Language Model, Text summarization, Legal Assistance, Mistral, Aalap

1 Introduction

Justice is a fundamental right in India protected by Articles 14 and 21 of the Indian Constitution [1]. The Right to Information (RTI) Act 2005 [2] allows citizens to seek knowledge and transparency with respect to legal proceedings. Despite these acts and articles enshrined in the Constitution of India, there is still a significant gap between legal rights and the public's awareness of this information. This prevents many people from taking the legal recourse they would have taken had they had easy access to adequate legal information [3]. This gap or disparity affects different groups differently, viz., the general public seeks this information to understand their rights, scholars aim to conduct in-depth research, and professionals need to fact-check information. Access to legal resources is thus crucial for ensuring that justice is not just a theoretical entity but could be exercised as a practical reality for all. In this context, one of the potential areas of change is the implementation of large language models (LLMs) in the legal system. Harnessing the power of natural language processing (NLP) with deep learning, a machine-learning-powered legal assistant can understand and generate human-like text for legal queries [4]. Transformer-based legal models could aid in optimizing processes and democratizing access to legal resources for the general population [5]. LLMs significantly reduce operational costs compared to traditional approaches, achieving an impressive 99.97 percent savings.
The use of LLMs
|
https://arxiv.org/abs/2505.22003v1
|
enables legal departments and law firms to significantly scale their contract review throughput without a proportional increase in costs [6]. This paper proposes "Legal Assist AI", which in concept and application is a robust legal assistant. The proposed model has been tested on two different unstructured lines of questioning, i.e., long form (subjective) and objective (four options).

∗Citation: Jatin Gupta, Akhil Sharma, Saransh Singhania, Ali Imam Abidi. Legal Assist AI: Leveraging Transformer-based Model for Effective Legal Assistance.

Table 1: Review of previous Related Works/Models
Paper/Year | Model | Params | Jurisdiction | Results
[8] 2023 | ChatGPT 3.5 | 175B | Canadian | Good for generalized legal info. Often hallucinates (incorrect info). Lacks accuracy.
[8] 2023 | JusticeBot | - | Canadian | More complete legal info. Less generalized.
[9] 2024 | Mistral 7B | 7B | Indian | Failed AIBE: 23.48%.
[9] 2024 | AALAP | 7B | Indian | Failed AIBE: 25.56%. Performed better than GPT-3.5.
[9] 2024 | GPT-3.5 | 175B | Indian | Passed AIBE: 58.72%. Performed well on LegalBench.
[10] 2024 | Paramanu | - | Indian | Scored 8+/10 in clarity, relevance, completeness. No hallucination measured. Potential for biased or factually incorrect info due to high creativity.

2 Literature Review

A study based on ChatGPT (GPT-3.5 Turbo) generated legal responses in plain English, avoiding sophisticated legal terms, yet at times produced verbose and repetitive responses. One of the biggest drawbacks of ChatGPT is that it hallucinates (generates incorrect information), which sometimes confuses the user, even though the scalability of the platform is good and it helps to generate any type of information [7, 8]. A study conducted by Tiwari et al. [9] investigated the application of GPT-3.5 Turbo as a base model for legal querying. The model was tested on only a few sets of data to check its capability in the legal and paralegal domains, but it turned out to be the only model so far to qualify the All India Bar Examination (AIBE). The same study explored Mistral 7B as the base model, tested on multiple parameters and datasets. This model performed quite well in resolving the core problem or legal question that needed prompt addressing, but it exhibited a significant lag in performance in comparison to GPT-3.5 Turbo, as it scored low on the LegalBench metrics and could not pass the AIBE exam. The paper also discussed the AALAP (AI Assistant for Legal and Paralegal Functions in India) model, which is a fine-tuned Mistral 7B trained on the AALAP dataset. AALAP was found to perform better than Mistral 7B on 85% of the test data, better than GPT-3.5 Turbo on 31% of the test data, and achieved an equivalent score on 34% of the test data evaluated by GPT-4. AALAP did poorly in the AIBE exam. The model also did not manage to score well on the LegalBench metrics as compared to GPT-3.5 Turbo. Another paper presents Paramanu, a family of generative large language models designed to efficiently handle legal language processing in 10 Indian languages across 5 scripts. These models outperformed larger counterparts in legal text analysis for Bangla, Hindi, and Sanskrit, making them accessible and efficient for legal applications without requiring high computational power
|
https://arxiv.org/abs/2505.22003v1
|
[10]. Evaluation of law-specific LLMs has to be done using benchmarks that stress legal tasks. Because the legal processes of different countries vary so significantly, it is very difficult to create a single global evaluation benchmark for legal LLMs. LegalBench is a collaboratively built legal reasoning benchmark, consisting of 162 tasks in the USA law context [11]. LawBench [12] is a similar benchmark in the context of China. Table 1 presents a comparative overview of existing model architectures.

3 Methodology

3.1 Data Preparation

Legal documents pertaining to the Indian Judicial System, including the Indian Constitution, Bharatiya Nyaya Sanhita (BNS), Bharatiya Nagarik Suraksha Sanhita (BNSS), Bharatiya Sakshya Adhiniyam (BSA), Labor Laws, State Legislatures, SEBI Guidelines, Judicial Judgements and so forth were used in this study. Multiple legal contracts, their formats, etc., were also included in the dataset. All the legal documents for the study were downloaded from governmental and public websites, including court decisions, legislative text, and legal guidelines. The dataset was then loaded using Directory Loader. This utility was used to effectively handle different tasks in natural language processing by aggregating and combining datasets across several directories. LangChain provides strong support for all such operations, ensuring that the data is prepared correctly and incorporated with ease into the model's training pipeline. Further, the documents were broken into manageable chunks. A chunk size of 1000 characters with an overlap of 20 characters across adjacent chunks was preferred to maintain continuity of context between chunks, which was important in maintaining fluency during legal document processing. This made the segmentation process optimal and improved the treatment of long and complex legal texts. Embeddings are dense vector representations of text that encode its semantic meaning, making it possible to compare and search text by meaning rather than surface form. These vectors transform text into a numerical format that can be utilized for classification and similarity searches. The system uses embeddings to compare and analyze content in terms of meaning. For generating the embeddings of each text chunk, HuggingFaceEmbeddings was used. The sentence-transformers/all-MiniLM-L6-v2 model was chosen for its effective and quick embeddings. This provided a solid basis for indexing and retrieval in a legal assistant model. After generating the embeddings, they were indexed using FAISS (Facebook AI Similarity Search), a high-speed library for similarity search and clustering of dense vectors. FAISS indexes made it possible to conduct fast and scalable searches by indexing the embeddings into a vector store optimized for rapid retrieval of similar vectors. This is important for comparing and finding similar text chunks, which many downstream steps in legal document analysis require. After indexing, the vector store was saved locally for easy access in subsequent queries and analysis. The embeddings were kept in a local vector store to allow effective information retrieval, boosting performance and accuracy in legal document processing.
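A minimal sketch of this ingestion pipeline is shown below; it assumes a recent LangChain installation (import paths vary across versions), and the corpus directory, glob pattern, and index name are illustrative placeholders rather than the authors' actual configuration.

```python
# Sketch of the ingestion pipeline: load legal documents, split them into
# 1000-character chunks with 20-character overlap, embed each chunk with
# all-MiniLM-L6-v2, and index the vectors in a locally saved FAISS store.
# Paths and the glob pattern are illustrative; import paths may differ by
# LangChain version.
from langchain_community.document_loaders import DirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

documents = DirectoryLoader("legal_corpus/", glob="**/*.txt").load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
chunks = splitter.split_documents(documents)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_documents(chunks, embeddings)
vector_store.save_local("faiss_legal_index")   # persisted for later retrieval
```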
|
https://arxiv.org/abs/2505.22003v1
|
The process described can be represented as the following equation:

$D_{legal} \xrightarrow{\text{LangChain Loader}} D_{chunks} \xrightarrow{\text{HuggingFace Embeddings}} E_{vectors} \xrightarrow{\text{FAISS Indexing}} V_{store} \xrightarrow{\text{Storage}} V_{saved}$  (1)

where:
- $D_{legal}$ is the collection of legal documents
- $D_{chunks}$ are the processed document chunks
- $E_{vectors}$ are the generated vector representations
- $V_{store}$ is the indexed vector store
- $V_{saved}$ is the saved vector store for retrieval

Equation (1) represents the data flow from document collection to processing, embedding generation, indexing, and final storage.

3.2 Proposed Framework

3.2.1 Model Selection and Architecture

Table 2 compares different large language models across a variety of tasks and benchmarks, where Llama 3.1 8B is found to be the best performing across the majority of benchmarks. Therefore, Llama 3.1 8B [13] was selected as the base model for this study. The model is characterized by high performance in multilingual tasks, a maximum context length of 128K tokens, and enhanced reasoning capabilities. With 8.03 billion parameters, the model strikes an optimal balance between computational efficiency and accuracy that suits the specific application needs of this research [14]. The Transformer-based model, shown in Figure 1, integrates Grouped Query Attention (GQA), which enhances memory efficiency during autoregressive decoding by using a single key-value head for multiple queries. Rotary Positional Embeddings (RoPE) are employed to capture the relative positional information between tokens, improving token interaction. The model utilizes RMS normalization before the self-attention and feedforward blocks to stabilize gradients and enhance training efficiency. Additionally, the SwiGLU activation function is used in the feedforward layers, combining the Swish function to boost performance [15].

Table 2: Comparison of Transformer Models
Category | Benchmark | Llama 3.1 8B | Gemma 2 9B IT | Mistral 7B Instruct | GPT-3.5 Turbo 175B
General | MMLU (0-shot, CoT) | 73.0 | 72.3 (5-shot, non-CoT) | 60.5 | 69.8
General | MMLU PRO (5-shot, CoT) | 48.3 | 37.6 | 36.9 | 49.2
Code | IFEval | 80.4 | 73.6 | 67.0 | 69.9
Code | HumanEval (0-shot) | 72.6 | 54.3 | 40.2 | 68.0
Math | GSM8K (8-shot, CoT) | 84.5 | 76.7 | 53.2 | 81.6
Math | MATH (0-shot, CoT) | 51.9 | 44.3 | 13 | 43.1
Reasoning | ARC Challenge (0-shot) | 83.4 | 87.6 | 74.2 | 78.4
Reasoning | GPQA (0-shot, CoT) | 32.8 | - | 28.8 | 30.8
Tool Use | BFCL | 76.1 | - | 60.4 | 85.9
Tool Use | Nexus | 38.5 | 32.0 | 24.7 | 37.2
Long Context | ZeroSCROLLS/QuALITY | 81 | - | - | -
Long Context | InfiniteBench | 65.1 | - | - | -
Long Context | NIH/Multi-needle | 98.8 | - | - | -
Multilingual | Multilingual MGSM (0-shot) | 68.9 | 53.2 | 29.9 | 51.4

Figure 1: Architecture of Base Model Used

The proposed Legal Assist AI architecture integrates a Retrieval QA Chain and a Large Language Model (LLM) to efficiently handle legal queries. The model processes input text tokens (queries) by retrieving relevant information from a vector store using semantic search, which fetches contextual data to enhance the query. This enriched query is then passed to the LLM, which generates a legal response based on the query and its context. The response is finally converted into output text tokens. By combining retrieval-based techniques with LLM capabilities, this architecture ensures accurate and context-aware answers to complex legal questions.
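To illustrate how such a retrieval QA chain can be assembled on top of the FAISS store built earlier, here is a minimal sketch; the Ollama-served quantized Llama 3.1 8B, the model tag, the index path, and the retrieval depth are our assumptions for illustration, not the authors' exact setup.

```python
# Sketch of the retrieval QA chain: the FAISS store from the previous step
# serves as the retriever, and a locally served quantized Llama 3.1 8B acts
# as the LLM. The Ollama backend, model tag, and index path are assumptions;
# import paths may differ by LangChain version.
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.load_local("faiss_legal_index", embeddings,
                                allow_dangerous_deserialization=True)

qa_chain = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3.1:8b"),              # quantized base model served locally
    retriever=vector_store.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,                 # expose retrieved chunks for inspection
)

result = qa_chain.invoke({"query": "What does Article 21 of the Indian Constitution guarantee?"})
print(result["result"])
```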
|
https://arxiv.org/abs/2505.22003v1
|
Specifically, the process involves parsing the input $X$ into a query using a retrieval QA chain (RQAC), retrieving contextual data from a vector store via semantic search ($\mathrm{search}(f(X), \mathrm{VectorStore})$), merging the retrieved information into a final prompt, and then utilizing the LLM to generate the response $Y$:

$Y = \mathrm{LLM}(\mathrm{RQAC}(f(X), \mathrm{search}(f(X), \mathrm{VectorStore})))$  (2)

Equation (2) encapsulates the entire workflow, ensuring efficient and accurate legal assistance. Figure 2 illustrates the overall architecture of the Legal Assist AI system, depicting the flow from query processing to response generation using retrieval-augmented techniques and an LLM.

Figure 2: Proposed Legal Assist AI Architecture

3.2.2 Prompt

Figure 3: System Prompt

This prompt, as shown in Figure 3, serves as a structured framework for a legal expert assistant, delineating the expert's role, outlining constraints that govern response formulation, and incorporating categorical labels to define the context, retrieval sources, and question format. This ensures precision, adherence to legal accuracy, and alignment with specified requirements.

3.2.3 Training

The quantized version of the transformer model was used, with Q4_0 as the quantization method. This brought the model size down to circa 4.7 GB, which significantly improved inference time and efficiency without a loss in accuracy. The choice of quantization was informed by the need to run the model in a very resource-constrained environment without sacrificing performance [13]. The original base model was pretrained on more than 15 trillion tokens from publicly available sources. The model was then fine-tuned using the prepared embedded vector store by adjusting the model's parameters. This process focused on maximizing the model's performance on legal question answering while making use of the detailed nuances and complex structures in legal texts, to ensure that the model's understanding of and responses to a broad spectrum of legal questions were accurate and relevant.

4 Model Evaluations

To evaluate the effectiveness and robustness of the legal model, testing was done with two sets of prompts over two separate datasets. Each type of prompt was designed to stress a different aspect of the model, helping to obtain a more complete estimate of its performance. The following datasets were used:
1. Lawyer_GPT_dataset [16]
2. AIBE dataset [17]

4.1 Evaluation using Lawyer_GPT_India dataset

This test was conducted to assess the model's performance on subjective reasoning. A dataset with different kinds of subjective questions was chosen. The BERT score was also calculated to gauge the model's understanding and paraphrasing ability, and to check whether it generated answers in the right context.

4.1.1 Dataset

The dataset used here was sourced from HuggingFace. It has 150 pairs of questions within the domain of Indian Polity. The aim is to generate answers to a very wide range of questions related to the Indian Constitution, its judiciary, the legislature, and other prevailing socio-political issues of the country.
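As a pointer to how the semantic-similarity check in this evaluation can be computed, here is a minimal sketch using the bert-score package; the example strings and the averaging step are illustrative, not the authors' exact evaluation script.

```python
# Sketch of scoring generated answers against reference answers with
# BERTScore. The bert-score package downloads a default English model;
# the lists below are placeholders for the 150 Lawyer_GPT_India pairs.
from bert_score import score

generated = ["The key issue is the landlord's liability for unsafe wiring."]
reference = ["The key issue is whether the landlord is liable for negligence."]

precision, recall, f1 = score(generated, reference, lang="en", verbose=False)
print(f"BERTScore F1: {f1.mean().item():.4f}")   # averaged over all question pairs
```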
|
https://arxiv.org/abs/2505.22003v1
|
This dataset will be quite handy for a student, researcher, or AI system intending to understand or provide responses to questions on Indian Polity.

4.1.2 Observations

i. The model was iterated over all 150 questions with the given prompt template. Each generated answer and the actual answer were then checked for semantic similarity using BERT Score. Refer to Appendix B for a sample of responses generated by the model.

Figure 4: Distribution of BERT scores across the 150 questions

Figure 4 shows the distribution of BERT scores across the 150 questions from the Lawyer_GPT_dataset. The average BERT score was found to be 76.90%.

ii. This test also revealed that the proposed model has reduced hallucinations; when asked a question it did not know, the model replied, "I don't know". Refer to Appendix B for such responses. In contrast, GPT-3.5-powered ChatGPT often hallucinated, providing inaccurate or fabricated information [7, 8].

4.2 Evaluation using All India Bar Examination dataset

This test was conducted to assess the model's knowledge domain and to evaluate its accuracy on objective questions. By leveraging a standardized legal examination, the evaluation aimed to determine how effectively the model could understand and respond to various legal scenarios.

4.2.1 Dataset

The All India Bar Examination is a test conducted by the Bar Council of India for law graduates who are interested in practicing as advocates. The candidate is provided with a certificate of practice on passing the examination and can appear in any court throughout India. The dataset is made up of questions drawn from the AIBE examinations running from AIBE 4 to AIBE 16, covering a time span of 12 years. There are a total of 1156 questions, all multiple-choice. The passing grade in the examination stands at 40%.

4.2.2 Observations

There were 22 questions for which the answers provided differed from the options specified. These questions were manually reviewed by experts, who found that the options' answers were no longer relevant. Conversely, manual verification of the generated answers revealed that the model's predictions for these 22 questions were accurate and up-to-date. To prevent skewing the outcome, the authors decided to exclude these 22 questions. Consequently, the Legal Assist AI model's accuracy was found to be 60.08%. Table 3 presents a comparative analysis of multiple models based on the AIBE score.

Table 3: Comparison of Models on the Basis of AIBE Score
Metrics | Mistral 7B | AALAP | GPT-3.5 Turbo | Legal Assist AI
AIBE Score | 23.48% | 25.56% | 58.72% | 60.08%
AIBE Results | Fail | Fail | Passed | Passed

5 Discussions

Previous legal assistance models, such as GPT-3.5 Turbo, Mistral 7B, and AALAP, have struggled to achieve better performance due to limitations in their training datasets. The scarcity of high-quality, domain-specific, and case-specific training data hindered the development of effective legal AI models. The proposed Legal Assist AI model outperformed current state-of-the-art models such as GPT-3.5 Turbo, Mistral 7B, and AALAP in delivering results and insights. The proposed Legal Assist AI model achieved a score of 60.08% on the
|
https://arxiv.org/abs/2505.22003v1
|
AIBE examination, outperforming all other models. Notably, unlike GPT-3.5 Turbo, the proposed model did not hallucinate. The superior performance of the Legal Assist AI model highlights the importance of high-quality training data in developing effective legal AI solutions. However, due to the lack of case-specific data, the Legal Assist AI model was unable to answer a few questions. Still, the model can be further generalized by enhancing the training corpus and adding more case-specific data, so that a possible increase or decrease in the performance of the model can be observed in future studies.

6 Conclusion

This paper introduced the development and evaluation of a fine-tuned large language model that is specifically designed for legal tasks and queries in the Indian judicial system. The capability of this model was quite impressive, as it generated accurate responses to legal queries in various languages and obtained high similarity scores. Unlike other models, for instance the ChatGPT model, this model has greatly reduced instances of hallucination, making the outputs reliable and trustworthy. Of particular mention is the performance of this Legal Assist AI model on the AIBE examination, where the model scored an impressive 60%, surpassing the performance of other models tested earlier. Notably, this was achieved despite the model using significantly fewer parameters, just 8 billion compared to the 175 billion parameters of GPT-3.5, yet still outperforming the larger model. This demonstrates the potential of the model to assist legal practitioners by carrying out tedious tasks, such as looking for relevant sections and articles pertaining to a specific case, and thus making valuable contributions to the field. The quality of the model's performance can be enhanced by supplementing quality training data with more expert reviews and additional case-specific data. However, it should be noted that increasing the amount of data may result in either significant improvements or potential decreases in accuracy. Continuous improvement will ensure that the Legal Assist AI model remains a useful tool for the general public, providing them accurate and efficient assistance.

References

[1] Mahak Jain. Access to justice in india: A critical analysis. SSRN Electronic Journal, 1 2021. [2] Virat Kumar. The evolution of the right to information act in india. Indian Journal of Law and Legal Research, pages 2582–8878, 2024. [3] Graham Greenleaf, Vivekanandan Anandan, Philip Chung, Andrew Mowbray, and Ranbir Singh. Challenges for free access to law in a multi-jurisdictional developing country: Building the legal information institute of india. SSRN Electronic Journal, 12 2011. [4] Shweta Pathania. Legal awareness in india: Need of the hour and strategy to spread legal awareness. Legal Service India, 2021. [5] Svea Klaus, Ria Van Hecke, Kaweh Djafari Naini, Ismail Sengor Altingovde, Juan Bernabé-Moreno, and Enrique Herrera-Viedma. Summarizing legal regulatory documents using transformers. SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2426–2430, 1 2022. [6] Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, and Rivindu Perera. Better call gpt, comparing large language models against lawyers. 1, 1 2024.
|
https://arxiv.org/abs/2505.22003v1
|
[7] Jinzhe Tan, H. Westermann, and Karim Benyekhlef. Chatgpt as an artificial lawyer? In AI4AJ@ICAIL, 2023. [8] Geoff Currie, Stephanie Robbie, and Peter Tually. Chatgpt and patient information in nuclear medicine: Gpt-3.5 versus gpt-4. Journal of Nuclear Medicine Technology, 51:307–313, 12 2023. [9] Aman Tiwari, Prathamesh Kalamkar, Atreyo Banerjee, Saurabh Karn, Varun Hemachandran, and Smita Gupta. Aalap: Ai assistant for legal & paralegal functions in india. 1 2024. [10] Mitodru Niyogi and Arnab Bhattacharya. Paramanu: A family of novel efficient generative foundation language models for indian languages. 1 2024. [11] Neel Guha, Julian Nyarko, Daniel E Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li, and Law Houston. Legalbench: A collaboratively built benchmark for measuring legal reasoning in large language models. SSRN Electronic Journal, 17:19, 8 2023. [12] Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Songyang Zhang, Kai Chen, Zongwen Shen, and Jidong Ge. Lawbench: Benchmarking legal knowledge of large language models. 9 2023. [13] Meta. llama3.1, 2024. [14] Aaron Grattafiori et al. The llama 3 herd of models. 7 2024. [15] Vignesh Yaadav. Exploring and building the llama 3 architecture: A deep dive into components, coding, and inference techniques | by vignesh yaadav | medium. Medium, 4 2024. [16] Nisaar Agharia. nisaar/lawyer_gpt_india · datasets at hugging face. Hugging Face, 2023. [17] opennyaiorg. opennyaiorg/aibe_dataset · datasets at hugging face. Hugging Face, 2023.

A Declaration of Generative AI

Authors have presented test samples from the dataset that were fed into the model "as such" in Tables 4 and 5 of Appendix B. Therefore, the questions and their corresponding answers have been put in quotes. The responses generated by the proposed LLM model (Legal Assist AI) have also been presented within the same tables in quotes.

B Sample of responses

Table 4: Sample of generated responses for questions asked over the Lawyer_GPT_dataset

Question: "What is the key issue in the legal scenario 'In Mehta v. Kapoor, Mehta, a tenant, suffered an electric shock under the Indian Rent Control Act for failure to provide safe electrical power to rented property. Mehta is seeking legal recourse against Kapoor. Advise Mehta.'?"
Answer: "The key issue is Mehta v. Kapoor is whether the landlord is legally liable for the harm to the tenant (Mehta) due to negligence in providing safe, amenities, specifically maintaining the electrical wiring."
Generated Answer: "The key issue in the legal scenario is the landlord's (Kapoor) liability for providing basic amenities and safety measures in the rented property, specifically regarding the tenant's (Mehta) complaint of a 'blackout' caused by faulty wiring."
BERT: 88%

Question: "What was the Supreme Court of India's
Question: "What was the Supreme Court of India's main argument of the Constitutional Bench judgment in Kesavananda Bharti v. State of Kerala?"
Answer: "In Kesavananda Bharti v. State of Kerala, the Supreme Court ruled that the basic structure of the Constitution is not amendable. Fundamental rights cannot be abrogated by the Parliament."
Generated answer: "The Supreme Court ruled by a 7-6 majority that the legislative power to amend the Constitution under Article 368 is subject to judicial review based on the doctrine of the 'basic structure' of the Constitution. Parliament cannot amend certain fundamental rights, such as property rights, in a way that alters the basic structure of the Constitution."
BERT similarity: 86%
Question: "Can you explain the legal concept of Public Interest Litigation (PIL) in simple language?"
Answer: "Public Interest Litigation (PIL) is a legal action initiated in a court of law for the protection of public interest where basic human rights are denied. It allows any member of the public with a bonafide interest to file a writ petition directly addressed to the court."
Generated answer: "In simple terms, Public Interest Litigation (PIL) is a way for citizens to file complaints in courts on behalf of victims who have been harmed by government actions or policies. It's designed to work like a 'public complaint' system where people can bring attention to injustices and seek justice on behalf of others."
BERT similarity: 85%
Question: "What is the meaning of the following provision in the Indian Penal Code: 'Whoever, intending to take dishonestly any movable property out of the possession of any person without that person's consent, moves that property, is said to commit theft'?"
Answer: "This provision describes theft. If a person intentionally and dishonestly takes movable property from someone else's possession without consent, and acts on this by moving the property, it is considered theft."
Generated answer: "The provision states that a person commits theft if they intend to take something (movable property) from someone else's possession without their permission (consent). This means that the act of taking or moving the property with the dishonest intention amounts to theft."
BERT similarity: 92%
Table 5: Sample of the model's responses to unfamiliar questions.
Question: "What are the legal implications of data localization requirements under the Personal Data Protection Bill?"
Response: "I don't know. The provided text does not mention data localization requirements..."
Question: "Explain the idea of a 'National Climate Court' and its potential role in India."
Response: "I don't know. The provided text does not mention the concept of a 'National Climate Court'..."
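The rightmost field in Table 4 reports a BERT-based similarity between the dataset answer and the generated answer. The exact metric is not specified in this excerpt; assuming a BERTScore-style comparison (our assumption, not a confirmed detail), a minimal sketch of such a computation is shown below, using shortened strings from the last row of Table 4.

```python
# Minimal sketch: semantic similarity between a reference answer and a
# generated answer. Assumes the "BERT" column was produced with BERTScore;
# this is an illustrative stand-in, not the authors' exact pipeline.
from bert_score import score

references = [
    "This provision describes theft: intentionally and dishonestly taking "
    "movable property from another's possession without consent, by moving it."
]
candidates = [
    "A person commits theft if they intend to take movable property from "
    "someone else's possession without consent and move that property."
]

# P, R, F1 are tensors with one entry per (candidate, reference) pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1[0].item():.2%}")
```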
arXiv:2505.22006v1 [cs.AI] 28 May 2025
EFFICIENTLY ENHANCING GENERAL AGENTS WITH HIERARCHICAL-CATEGORICAL MEMORY
Changze Qiao1, Mingming Lu1,*
1School of Computer Science and Engineering, Central South University
*Corresponding author
ABSTRACT
With large language models (LLMs) demonstrating remarkable capabilities, there has been a surge in research on leveraging LLMs to build general-purpose multi-modal agents. However, existing approaches either rely on computationally expensive end-to-end training using large-scale multi-modal data or adopt tool-use methods that lack the ability to continuously learn and adapt to new environments. In this paper, we introduce EHC, a general agent capable of learning without parameter updates. EHC consists of a Hierarchical Memory Retrieval (HMR) module and a Task-Category Oriented Experience Learning (TOEL) module. The HMR module facilitates rapid retrieval of relevant memories and continuously stores new information without being constrained by memory capacity. The TOEL module enhances the agent's comprehension of various task characteristics by classifying experiences and extracting patterns across different categories. Extensive experiments conducted on multiple standard datasets demonstrate that EHC outperforms existing methods, achieving state-of-the-art performance and underscoring its effectiveness as a general agent for handling complex multi-modal tasks.
Index Terms: Hierarchical Memory, General Agent, Large Language Models
1. INTRODUCTION
The field of artificial intelligence has long been dedicated to creating general-purpose intelligent assistants [1] capable of following multi-modal user instructions and efficiently accomplishing various real-world tasks. With large language models (LLMs) demonstrating remarkable capabilities, there has been a proliferation of research focused on leveraging LLMs to construct general-purpose multi-modal assistants.
Current approaches to building multi-modal agents primarily fall into two categories [1]. On the one hand, end-to-end training integrates LLMs directly with multi-modal data. Researchers collect large-scale image-text datasets and multi-modal instruction-following data to continually train LLMs, equipping them with visual information-processing abilities. Several models [2, 3, 4, 5] have demonstrated impressive capabilities in visual understanding and reasoning. However, these methods are computationally expensive, require direct access to LLM parameters, and may restrict the flexibility and generalization capabilities of the LLMs. On the other hand, tool-use approaches enable LLMs to invoke different tools to accomplish required (sub)tasks through carefully designed prompts without additional model training. Notable examples include VisProg [6], ViperGPT [7], and Visual ChatGPT [8]. Such methods efficiently perform a wide range of visual tasks using corresponding tools, offering a cost-effective integration into AI agents.
However, most existing tool-use methods do not enable agents to acquire new knowledge, thus limiting their continuous learning and adaptation to new environments. AssistGPT [9] updates LLMs post-deployment through in-context learning, and CLOVA [10] updates both LLMs and visual tools during reflection and learning phases. Prompt-based methods can enhance the sequential decision-making and planning capabilities of LLMs by providing a few in-context examples [11].
Nevertheless, due to the context-window limitations of LLMs [12], these agents cannot recall previously encountered information, restricting their learning capability beyond a limited number of examples. ExpeL [13] addresses this by proposing experiential learning, where agents autonomously collect experiences from training tasks through trial-and-error, distill natural-language insights from these experiences, and reuse successful experiences
as in-context examples at test time. Yet, as memory size grows and the need for semantic structuring increases, these systems encounter memory redundancy and overhead issues. Additionally, diverse task types stored in memory can interfere with the outputs of LLMs, making it essential for agents to focus specifically on relevant task types when handling various tasks. HAMMR [14] targets multiple types of visual question-answering (VQA) problems by constructing a hierarchical multi-modal system that allows a high-level agent to invoke low-level agents specialized in specific task types. Despite achieving promising results, HAMMR does not address a broader variety of task types in more general scenarios, and its multi-layer agent structure is relatively complex.
In this research, we present EHC, a novel general-purpose agent framework designed to address the aforementioned challenges. To mitigate memory redundancy and overhead, we introduce the Hierarchical Memory Retrieval (HMR) module. Inspired by traditional operating system caching mechanisms, HMR adopts a dual-pool architecture consisting of a Fast-Access Memory Pool (in RAM) and a Deep-Retrieval Memory Pool (in an external database). During agent-environment interactions, HMR employs a dynamic migration strategy to manage memory flow between the two pools. This design enables rapid retrieval of relevant memories while significantly reducing storage overhead, effectively supporting continuous learning in open environments. To handle diverse multi-modal tasks, we propose the Task-Oriented Experiential Learning (TOEL) module, which aims to balance flexibility and explainability in the learning process. TOEL continuously collects experiences from agent-environment interactions without requiring additional annotation. By combining predefined categories with LLM-based few-shot reasoning, TOEL classifies experiences, extracts category-specific patterns, and distills actionable knowledge to guide task execution. This allows agents to better understand the characteristics of various task types, formulate more effective solutions, and significantly enhance their adaptability and problem-solving capabilities across tasks. Moreover, clear experience classification improves system transparency and explainability, making the agent's decision-making process more understandable and easier to debug, which is crucial for the improvement and optimization of complex systems.
2. METHODOLOGY
The proposed EHC architecture, as illustrated in Fig. 1, comprises two core components: the Hierarchical Memory Retrieval (HMR) module and the Task-Type Oriented Experience Learning (TOEL) module. These modules work collaboratively to establish a well-structured and semantically enriched memory system, which not only facilitates efficient information storage and retrieval but also enables the agent to learn patterns across different memory categories.
2.1. Hierarchical Memory Retrieval Module
Formally, given a memory set M, frequently accessed or recently used memories M_s are maintained in the fast-access pool M_mem, which has a fixed capacity C, while other important memories M_r are stored in the scalable external database M_db. During agent-environment interactions, memory transfers between M_mem and M_db are governed by HMR's dynamic migration policy. When storing new observations, the system first attempts to cache them in M_mem.
If the memory exceeds the capacity threshold C, the system migrates the least recently used C/2 entries to M_db following a Least Recently Used (LRU) eviction policy.
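A minimal sketch of such a dual-pool memory is given below, covering both the LRU-style migration just described and the two-stage retrieval described next. The class name, the embedding function, and the plain dictionary standing in for the external database are our own illustrative choices, not details from the paper.

```python
# Illustrative sketch of HMR's dual-pool memory (not the authors' code).
# Fast-access pool: fixed capacity C, LRU order; deep-retrieval pool: a dict
# standing in for the scalable external database.
from collections import OrderedDict
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class HierarchicalMemory:
    def __init__(self, capacity, embed_fn, threshold=0.5):
        self.capacity = capacity        # C
        self.embed_fn = embed_fn        # text -> vector (assumed provided)
        self.threshold = threshold      # confidence cutoff (theta)
        self.fast_pool = OrderedDict()  # fast-access pool, kept in LRU order
        self.deep_pool = {}             # deep-retrieval pool stand-in

    def store(self, key, trajectory):
        self.fast_pool[key] = (self.embed_fn(trajectory), trajectory)
        self.fast_pool.move_to_end(key)
        if len(self.fast_pool) > self.capacity:
            # Migrate the C/2 least recently used entries to the deep pool.
            for _ in range(self.capacity // 2):
                old_key, value = self.fast_pool.popitem(last=False)
                self.deep_pool[old_key] = value

    def retrieve(self, query, k=3):
        q = self.embed_fn(query)
        scored = [(cosine(q, emb), traj) for emb, traj in self.fast_pool.values()]
        hits = [s for s in scored if s[0] >= self.threshold]
        hits.sort(key=lambda s: s[0], reverse=True)
        hits = hits[:k]
        if len(hits) < k:  # fall back to the deep-retrieval pool
            deep = [(cosine(q, emb), traj) for emb, traj in self.deep_pool.values()]
            deep.sort(key=lambda s: s[0], reverse=True)
            hits += deep[: k - len(hits)]
        return [traj for _, traj in hits]
```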
For memory retrieval, the system first queries the fast-access pool using a similarity metric to retrieve the top-k most relevant in-context examples (i.e., those with confidence scores greater than θ). If the number of suitable matches is insufficient, the remaining candidates are retrieved from the deep-retrieval pool via database queries. Finally, the most relevant trajectories are integrated into the current context to generate the final output.
Fig. 1. The framework of EHC.
2.2. Task-Category Oriented Experience Learning Module
Experience Collection. We build upon prior work by collecting experiences throughout the agent's operation, thereby eliminating the need for additional annotations. Each task is allotted a maximum of T attempts. For the i-th task with content c_i, historical memory h_i (retrieved from the hierarchical memory pool), current trajectory p_0, and initial reflections r_0 = ∅, the agent is prompted to generate a trajectory p_t through combinatory reasoning for the t-th attempt. If the attempt succeeds, p_t is stored in memory as a success experience. Otherwise, the agent performs self-reflection to obtain r_{t+1} = concat(r_t, LLM(· | p_t, c_i, h_i)). If the maximum number of attempts is reached, p_t and r_{t+1} are stored as a failure experience; otherwise, the process repeats.
Experience Classification. To maintain both flexibility and interpretability, we combine predefined task categories with LLM-based few-shot inference. Based on domain analysis and task objective decomposition, we define a set of mutually exclusive and collectively exhaustive categories C = {c_1, c_2, ..., c_K}. For each collected experience, the LLM generates a candidate label ĉ_i. To mitigate LLM prediction instability, we compute semantic embeddings ϕ(ĉ_i) using BERT and compare them with embeddings ϕ(c_k) for each predefined category. The final category is assigned based on the highest cosine similarity: c_i = arg max_{c_k ∈ C} sim(ϕ(ĉ_i), ϕ(c_k)), where sim(a, b) denotes cosine similarity. The memory pool is then organized into M = {M_{c_1}, M_{c_2}, ..., M_{c_K}}, where each M_{c_k} stores success and failure trajectories corresponding to category c_k.
Experience Learning. Our learning strategy focuses on identifying patterns across experience categories. For each category c_k, the memory M_{c_k} is split into intra-category success-failure pairs and cross-category comparison groups. Intra-category contrast pairs C^{fail/success}_{compare} are formed by extracting fixed-length segments from success trajectories M^{+}_{c_k}, aiding in the identification of key success factors and common failure modes. Cross-category comparison groups C^{category}_{compare} are formed by pairing success trajectories of c_k with those from other categories c_{non-k}, revealing inherent differences among task types. During insight generation, the insight set is initialized as ι̂ = ∅, and the LLM is iteratively prompted with memory data to generate new insights. These insights ι̂_{c_k} can be updated through operations such as ADD, EDIT, UPVOTE, and DOWNVOTE. Each newly added insight starts with an initial weight, which increases or decreases based on subsequent feedback. When an insight's weight drops to zero, it is removed from the pool. Finally, the EHC framework leverages the generated insight set ι̂ during the inference phase.
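As a concrete illustration of the Experience Classification step described above, the following sketch embeds the LLM's candidate label and each predefined category and assigns the category with the highest cosine similarity. The paper specifies BERT embeddings; here a sentence-transformer model stands in for ϕ(·), and the category names are the ones listed in the implementation details of Section 3.1.

```python
# Illustrative sketch of TOEL's category assignment (not the authors' code).
# The paper computes embeddings with BERT; a sentence-transformer is used
# here as a convenient stand-in for the embedding function phi(.).
from sentence_transformers import SentenceTransformer
import numpy as np

CATEGORIES = ["judgment", "counting", "recognition", "comparison",
              "addition", "removal", "replacement"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def assign_category(candidate_label: str) -> str:
    """Map a (possibly noisy) LLM-generated label to a predefined category."""
    vectors = encoder.encode([candidate_label] + CATEGORIES)
    query, cats = vectors[0], vectors[1:]
    sims = cats @ query / (np.linalg.norm(cats, axis=1) * np.linalg.norm(query) + 1e-8)
    return CATEGORIES[int(np.argmax(sims))]  # argmax over sim(phi(label), phi(category))

# Example: an unstable label such as "count the objects" maps to "counting".
print(assign_category("count the objects in the image"))
```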
2.3. Inference Based on Task Category
During the inference phase, EHC integrates an information-rich memory pool M, consisting of trajectories from completed tasks and insights obtained through experiential learning. For program generation, EHC adopts the compositional visual reasoning framework [6]. What distinguishes our approach is that, for each specific task, EHC retrieves insights of the same category from the memory pool and selects the top-k most relevant trajectories within that category as few-shot contextual examples. These, combined with a trajectory template, are provided to the LLM to guide generation. For qualitative results, refer to Section 3.4.
3. EXPERIMENT
3.1. Experimental Setup
Datasets and Evaluation Protocol. To evaluate EHC, we conducted experiments using standard benchmark datasets and widely adopted evaluation metrics. In alignment with recent state-of-the-art studies, we used the GQA [15] and NLVR2 [16] datasets for compositional visual question answering (VQA) and multi-image reasoning, reporting top-1 accuracy. For factual grounding and referring expression comprehension, we used RefCOCO, RefCOCO+, and RefCOCOg [17, 18], evaluated using Intersection over Union (IoU). For language-guided image editing, we adopted the MagicBrush [19] dataset, with performance measured by the CLIP-I score.
Implementation Details. We employed CLOVA [10] as the baseline and utilized its publicly available toolkit. We predefined several instruction categories (judgment, counting, recognition, comparison, addition, removal, and replacement) and initialized both internal and external memory pools with five contextual examples per category.
3.2. Main Results
To assess the performance of our model, we compared EHC with several competitive baselines, including LLM-based agents and end-to-end models. The results, summarized in Table 1, show that EHC significantly outperforms the baseline, achieving a 7.8% and 4.0% improvement in accuracy on the GQA and NLVR2 datasets, respectively. These gains underscore the effectiveness of the HMR and TOEL modules in managing complex multimodal tasks. The relatively smaller improvement observed on the image tagging task may be attributed to its simplicity and limited instruction diversity.
EHC also consistently outperforms other tool-use methods, further demonstrating its robustness. Compared to end-to-end models, EHC substantially narrows the performance gap. While end-to-end models typically rely on large-scale multimodal data for training and fine-tuning, constraining their flexibility and generalization, EHC's structured memory management and task-oriented learning approach allow it to process diverse multimodal tasks more efficiently, without incurring the heavy computational costs and resource demands associated with end-to-end training.
Table 1. Comparison of results with previous models.
| Type | Method | GQA | NLVR2 | Editing | Tagging |
|---|---|---|---|---|---|
| E2E | Otter [3] | 48.2 | 48.2 | - | - |
| E2E | MMICL [2] | 64.4 | 62.2 | - | - |
| E2E | CFR [4] | 72.1 | - | - | - |
| E2E | Qwen-vl-chat-7B [5] | 57.5 | 58.72 | - | 32.54 |
| TOOL | Visual ChatGPT [8] | 43.2 | 51.6 | - | - |
| TOOL | ViperGPT [7] | 47.2 | - | - | - |
| TOOL | VISPROG [6] | 50.5 | 62.4 | 90.82 | 27.28 |
| TOOL | HAMMR [14] | 60.2 | 63.8 | - | - |
| TOOL | ExoViP [20] | 61.49 | 67.96 | 91.27 | 31.50 |
| TOOL | CLOVA [10] | 60.2 | 63.8 | 92.16 | 31.86 |
| TOOL | Ours | 68.4 | 68.0 | 93.44 | 31.92 |
3.3. Ablation Study
We analyzed the contributions of different components within the model through ablation studies, with the results shown in Table 2. HMR significantly enhanced accuracy by managing the memory flow between the fast-access pool and the deep-retrieval pool, and continuously storing memories. TOEL further improved performance through empirical learning tailored to different task categories. By categorizing experiences and learning patterns across different categories using predefined classes and
few-shot reasoning from LLMs, TOEL strengthened the model's adaptability to diverse categories. When combined with HMR, TOEL achieved additional improvements in accuracy on both datasets. This indicates that TOEL can effectively identify and leverage category differences, thus significantly enhancing the accuracy and interpretability of the model's decision making. Additionally, the results in Table 3 further demonstrate that our approach can also significantly enhance the performance of other LLMs.
Table 2. Ablation on the GQA and NLVR2 datasets.
| Dataset | Method | LLaMA2-7B |
|---|---|---|
| GQA | Baseline | 60.2 |
| GQA | + HMR | 64.6 |
| GQA | + HMR + TOEL | 68.4 |
| NLVR2 | Baseline | 63.8 |
| NLVR2 | + HMR | 65.8 |
| NLVR2 | + HMR + TOEL | 68.0 |
Table 3. Comparison of results on two open-source LLMs.
| Method | GQA | NLVR2 | Editing | Tagging |
|---|---|---|---|---|
| LLaMA2-7B + CLOVA | 60.2 | 63.8 | 92.16 | 31.86 |
| LLaMA2-7B + Ours | 68.4 | 68.0 | 93.44 | 31.92 |
| Mistral-7B + CLOVA | 31.4 | 42.2 | 90.34 | 29.73 |
| Mistral-7B + Ours | 39.6 | 43.8 | 91.82 | 29.87 |
3.4. Qualitative Experiment
In Fig. 2, we visualized two representative examples. The unclassified agent, unable to distinguish between categories and susceptible to interference from diverse contexts, often generates incorrect outputs. In contrast, EHC effectively enhances category understanding and selectively retrieves relevant instances, thereby ensuring category-level consistency between answers and questions.
Fig. 2. Case study of EHC on two typical example tasks: (a) result of a compositional VQA task; (b) result of a multiple-image reasoning task.
4. CONCLUSION
In this paper, we introduced EHC, a general agent framework that enhances decision-making capabilities through hierarchical memory management and category-based memory pattern learning. EHC consists of two core modules: Hierarchical Memory Retrieval (HMR) and Task-Oriented Experiential Learning (TOEL). Together, they enable the agent to continuously store new experiences while preserving existing knowledge during continual learning, thereby mitigating catastrophic forgetting. Simultaneously, EHC empowers the agent to learn cross-category task patterns, resulting in more accurate and interpretable decision-making. Experimental results validate the effectiveness of EHC as a general-purpose agent and highlight its strong performance and lightweight design.
5. REFERENCES
[1] Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, Jianfeng Gao, et al., "Multimodal foundation models: From specialists to general-purpose assistants," Foundations and Trends® in Computer Graphics and Vision, vol. 16, no. 1-2, pp. 1–214, 2024.
[2] H. Zhao, Z. Cai, S. Si, X. Ma, K. An, L. Chen, and B. Chang, "Mmicl: Empowering vision-language model with multi-modal in-context learning," arXiv preprint, vol. arXiv:2309.07915, 2023.
[3] B. Li, Y. Zhang, L. Chen, J. Wang, J. Yang, and Z. Liu, "Otter: A multi-modal model with in-context instruction tuning," CoRR, vol. abs/2305.03726, 2023.
[4] Binh X Nguyen, Tuong Do, Huy Tran, Erman Tjiputra, Quang D Tran, and Anh Nguyen, "Coarse-to-fine reasoning for visual question answering," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4558–4566.
[5] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou, "Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond," vol. 2, 2023.
[6] T. Gupta and A. Kembhavi,
“Visual programming: Compositional visual reasoning without training,” in Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition , 2023, pp. 14953– 14962. [7] D. Sur ´ıs, S. Menon, and C. V ondrick, “Vipergpt: Visual inference via python execution for reasoning,” in Pro- ceedings of the IEEE/CVF International Conference on Computer Vision , 2023, pp. 11888–11898. [8] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan, “Visual chat- gpt: Talking, drawing and editing with visual foundation models,” arXiv preprint arXiv:2303.04671 , 2023. [9] D. Gao, L. Ji, L. Zhou, K. Q. Lin, J. Chen, Z. Fan, and M. Z. Shou, “Assistgpt: A general multi-modal assis- tant that can plan, execute, inspect, and learn,” arXiv preprint , vol. arXiv:2306.08640, 2023. [10] Z. Gao, Y . Du, X. Zhang, X. Ma, W. Han, S. C. Zhu, and Q. Li, “Clova: A closed-loop visual assistant with tool usage and update,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni- tion, 2024, pp. 13258–13268.[11] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu, “Reasoning with language model is planning with world model,” arXiv preprint arXiv:2305.14992 , 2023. [12] Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G Patil, Ion Stoica, and Joseph E Gonzalez, “Memgpt: Towards llms as operating systems,” arXiv preprint arXiv:2310.08560 , 2023. [13] Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang, “Expel: Llm agents are experiential learners,” in Proceedings of the AAAI Conference on Artificial Intelligence , 2024, vol. 38, pp. 19632–19642. [14] Lluis Castrejon, Thomas Mensink, Howard Zhou, Vitto- rio Ferrari, Andre Araujo, and Jasper Uijlings, “Hammr: Hierarchical multimodal react agents for generic vqa,” arXiv preprint arXiv:2404.05465 , 2024. [15] D. A. Hudson and C. D. Manning, “Gqa: A new dataset for real-world visual reasoning and compositional ques- tion answering,” in Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition , 2019, pp. 6700–6709. [16] A. Suhr, S. Zhou, A. Zhang, I. Zhang, H. Bai, and Y . Artzi, “A corpus for reasoning about natural lan- guage grounded in photographs,” arXiv preprint , vol. arXiv:1811.00491, 2018. [17] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg, “Modeling context in re- ferring expressions,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Nether- lands, October 11-14, 2016, Proceedings, Part II 14 . Springer, 2016, pp. 69–85. [18] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg, “Referitgame: Referring to objects in photographs of natural scenes,” in Proceedings of the 2014 conference on empirical methods in natural lan- guage processing (EMNLP) , 2014, pp. 787–798. [19] Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su, “Magicbrush: A manually annotated dataset for instruction-guided image editing,” Advances in Neu- ral Information Processing Systems , vol. 36, pp. 31428– 31449, 2023. [20] Yuxuan Wang, Alan Yuille, Zhuowan Li, and Zilong Zheng, “Exovip: Step-by-step verification and ex- ploration with exoskeleton modules for compositional visual reasoning,” arXiv preprint arXiv:2408.02210 , 2024.
Technical Report Qwen VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Re- inforcement Learning Qiuchen Wang1, Ruixue Ding2, Yu Zeng1, Zehui Chen1, Lin Chen1 Shihang Wang2, Pengjun Xie2, Fei Huang2, Feng Zhao1† 1MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC 2Tongyi Lab, Alibaba Group Abstract Effectively retrieving, reasoning and understanding visually rich information remains a challenge for traditional Retrieval-Augmented Generation (RAG) methods. On the one hand, traditional text-based methods cannot handle visual-related information. On the other hand, current vision-based RAG approaches are often limited by fixed pipelines and frequently struggle to reason effectively due to the insufficient activation of the fundamental capabilities of models. As reinforcement learning (RL) has been proven to be beneficial for model reasoning, we introduce VRAG-RL , a novel RL framework tailored for complex reasoning across visually rich information. With this framework, VLMs interact with search engines, autonomously sampling single-turn or multi-turn reasoning trajectories with the help of visual perception tokens and undergoing con- tinual optimization based on these samples. Our approach highlights key limitations of RL in RAG domains: (i) Prior Multi-modal RAG approaches tend to merely incor- porate images into the context, leading to insufficient reasoning token allocation and neglecting visual-specific perception; and (ii) When models interact with search en- gines, their queries often fail to retrieve relevant information due to the inability to articulate requirements, thereby leading to suboptimal performance. To address these challenges, we define an action space tailored for visually rich inputs, with actions including cropping and scaling, allowing the model to gather information from a coarse- to-fine perspective. Furthermore, to bridge the gap between users’ original inquiries and the retriever, we employ a simple yet effective reward that integrates query rewrit- ing and retrieval performance with a model-based reward. Our VRAG-RL optimizes VLMs for RAG tasks using specially designed RL strategies, aligning the model with real-world applications. Extensive experiments on diverse and challenging benchmarks show that our VRAG-RL outperforms existing methods by 20% (Qwen2.5-VL-7B) and 30% (Qwen2.5-VL-3B), demonstrating the effectiveness of our approach. The code is available at https://github.com/Alibaba-NLP/VRAG. 1 Introduction Retrieval-Augmented Generation (RAG) Gao et al. (2023); Jin et al. (2024); Chen et al. (2025b) enables Language Models (LMs) to leverage external information to tackle problems. Due to the limitations of traditional textual RAG methods in handling visually rich information , efforts have been made to introduce RAG into the visual domain by integrating Vision-Language Models (VLMs) Bai et al. (2025); Chen et al. (2024d); OpenAI (2024); Jaech et al. (2024); Pichai et al. (2024) with search engines. However, current visual RAG methods still fall short in effectively reasoning with search engines and understanding complex visual information. 
Reinforcement Learning (RL) has been recognized as an †Corresponding author 1arXiv:2505.22019v1 [cs.CL] 28 May 2025 Technical Report Qwen LLM-based Evaluation…Stacked retrieval results fromRound 1 to N(b) Visual Perception Action Space Pre-defined Visual Perception Actions Region of Interest Select & Crop & Zoom(a) Reinforcement Learning Driven Visual Iterative Reasoning Framework (c) Fine-grained Retrieval-based Reward Retrieval-based RewardGolden ImageUnrelatedEvaluate retrieval performance using NDCGPattern RewardModel-based RewardPattern and Action VerificationRule-basedFunction Answer ExtractionOutcome
effective approach for optimizing VLMs in complex reasoning tasks Sutton et al. (1999); Kaelbling et al. (1996); Huang et al. (2025); Meng et al. (2025); Yu et al. (2025a). Therefore, RL offers a promising approach to address the challenges faced by visual RAG methods.
Figure 1: Overall framework of our reinforcement learning framework. (a) demonstrates the interaction process between the model and the external environment, as well as the implementation of the GRPO algorithm. (b) shows the proposed visual perception action space, which allows the model to extract information from a coarse-to-fine perspective. (c) is the specially designed reward for RAG, which combines outcome and retrieval performance across the entire sampling process.
Inspired by these advancements, we introduce VRAG-RL, a novel multimodal RL framework specifically designed for iterative reasoning in visually rich information RAG. Our approach is based on three critical observations: (i) Insufficient activation of reasoning capabilities with visual information. Existing methods underutilize the reasoning potential of VLMs when incorporating visual information. For instance, prior approaches tend to merely embed images into the context without adequately addressing visual-specific perception processes, resulting in insufficient reasoning token allocation and limiting the models' ability to fully leverage visual data for complex reasoning tasks. (ii) Inefficient and disjointed retrieval. In previous work, limited by the inability to articulate complex requirements, models struggled to retrieve relevant information efficiently, which may lead to repetitive and meaningless interactions, restricting the overall effectiveness. (iii) Inconsistent multi-turn reasoning and unstable training with VLMs. Current RL frameworks for LMs often struggle with maintaining stability and consistency during multi-turn reasoning. Handling complex, multi-step reasoning tasks can be particularly challenging, as models may encounter difficulties in maintaining effective reasoning across interactions with external environments, leading to inconsistent performance and suboptimal results. This challenge is further exacerbated for VLMs, which are limited by their instruction-following and reasoning capabilities.
Building upon these insights, VRAG-RL introduces improvements in various modules: (i) We propose a visual perception action space that includes selecting regions of interest and zooming into these areas. VLMs with visual perception tokens in the action space are capable of acquiring information from a coarse-to-fine perspective. As shown in Figure 1(b), when dealing with images or charts within documents, VLMs can give higher attention to information-dense areas through the proposed perception tokens. This allows the model to more effectively activate reasoning abilities within a limited context length, preventing the overlooking of details. (ii) Furthermore, rather than relying solely on a simple outcome-based reward, we factor in the effectiveness of the retrieval process as part of the reward structure.
In particular, during the interaction between the model and the search engine, retrieving pertinent images promptly enhances the model's ability to address questions effectively, whereas persistently retrieving irrelevant documents adds noise and hampers the reasoning process. As illustrated in Figure 1(c), by integrating retrieval performance into reward, we establish comprehensive guidance for retrieval-augmented generation frameworks. (iii)
Inspired by the current think-then-answer approach and the ReAct paradigm, we model the interaction between the VLMs and the search engine, along with the visual perception action space, as a process of iterative reasoning and tool invocation. Figure 1(a) illustrates our training pipeline, which supports automatic sampling and integrates the GRPO algorithm. To ensure stability in multi-turn sampling and training, we have carefully designed the sampling strategy including post-processing for each interaction, and model-based reward together with the retrieval reward mentioned above guides the model training. Additionally, we have re-annotated existing datasets of visually rich documents and developed a data construction pipeline to efficiently scale data for RL and SFT. Our major contributions are as follows: •We propose VRAG-RL, a novel reinforcement learning framework tailored for training VLMs to effectively reason, retrieve, and understand visually rich information. •We define a visual perception action space that includes selecting, cropping, and scaling regions of interest, allowing VLMs to gather information progressively from coarse-grained to fine-grained levels. This action space enhances the models’ ability to focus on information- dense areas and activates their vision-specific reasoning capabilities more effectively. •We introduce a comprehensive reward structure that integrates retrieval performance and model-based outcome reward. This reward mechanism aligns the model more closely with real-world applications, bridging the gap between users’ original intentions and the retriever. •Extensive experiments demonstrate the effectiveness of our method. VRAG-RL significantly outperforms strong baselines, achieving over 20% improvement on various benchmarks. 2 VRAG-RL In this section, drawing on insights and foundational ideas, we present a comprehensive description of our VRAG-RL framework. We start with the formulation of the problem (§2.1), then introduce the action space designed for visual perception (§2.2) and the fine-grained reward specifically defined for the RAG task (§2.3). Finally, we illustrate the model interaction process in the rollout module and the reinforcement learning training implementation of our framework (§2.4). 2.1 Problem Formulation Given a query denoted as q, we have a huge collection of images C={I1,I2,. . .,IN}, consisting of Nimages. Each image contains a variety of visually rich elements, such as flowcharts, charts, tables, and diverse layouts, derived from real-world documents across multiple domains, including slides and reports. Our goal is to efficiently reason, accurately retrieve the most relevant images, extract valuable information from the complex visual data, and generate the final answer ato the query q. 2.2 Visual Perception Action Integration for Understanding Information-Dense Regions Previous works merely involved migrating textual RAG to the multi-modal domain, which simply meant inserting images into the context and then reasoning and responding. However, these efforts overlooked the characteristics of image data, where the efficiency of visual perception is closely related to image resolution, visual element layouts, information density, and other visually related factors. Motivated by these findings, we introduce a dynamic novel visual perception paradigm into VLMs that involves region selection and re-encoding at the token level, as illustrated in Figure 2. Definition of Visual Perception Actions. 
We define the visual perception action space for VLMs by taking into account the specific characteristics of visual information. This enables the model to
select regions with high information density or regions relevant to the query for a detailed view, acquiring information from a coarse-to-fine perspective. We integrate search queries, answer summaries, and visually specific actions into a unified action space to align with the model's pre-training domain.
Figure 2: Comparison between our VRAG-RL and traditional RAG in terms of perception methods. (a) Traditional methods lack effective perception, which easily leads to repetitive and ineffective retrieval calls and suboptimal outcomes. (b) Our VRAG-RL is efficient and accurate, enabling the model to perceive information-dense regions from a coarse-to-fine perspective.
The policy model π_θ interacts with the environment in the Thought-Action-Observation (T, A, O) paradigm. In each interaction, the model generates the next action A_t ∼ π_θ(· | H_{t−1}) based on the trajectory H_{t−1} from step t−1 and earlier. A role-based function is used to extract visual perception tokens <region> and </region>, whose main purpose is to select, crop, and zoom in on the region of interest of an image that has already been retrieved into the context:

A_t × O_k → O_t,  k ∈ {1, 2, ..., t−1},  (1)

Given a w × h image as an observation O_k, a bounding box [x_min, y_min, x_max, y_max] within perception tokens can precisely delineate the position of region R, where (x_min, y_min) and (x_max, y_max) represent the coordinates of the top-left and bottom-right pixels of region R. Some current models' pre-training domains for grounding tasks normalize the coordinates to [0, δ], resulting in actual coordinates of (x × w/δ, y × h/δ), while other models, such as Qwen2.5-VL, directly use the original coordinates without normalization. Then we map the selected region R from the image tokens in context to the w_raw × h_raw raw image, and crop this raw image to obtain R̂:

R̂ = Crop(I_raw, [x_min × w_raw/w_encoder, y_min × h_raw/h_encoder, x_max × w_raw/w_encoder, y_max × h_raw/h_encoder]),  (2)

where (w_raw, h_raw) is the shape of the original image I_raw, and (w_encoder, h_encoder) is determined by the vision encoder such that w_encoder × h_encoder = Pixels_max. Finally, R̂ is integrated into the context as an observation: R̂ → O_t. The image tokens embedded in the context do not represent the original size of the image: the maximum pixel budget Pixels_max of the vision encoder is often considerably smaller than the resolution of visually rich documents found in real-world applications. This is why a region cropped from the original image and re-scaled within the vision encoder has a higher density of vision tokens. This simple yet effective "crop and re-input" strategy enhances visual perception performance by directly increasing perceptual resolution Yu et al. (2025b); Liu et al. (2024); Shao et al. (2024).
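The coordinate mapping and crop in Eq. (2) can be made concrete with a few lines of image code. The sketch below is ours, not the released implementation; it assumes the bounding box is expressed in the encoder's pixel space (w_encoder × h_encoder) and maps it back onto the raw document image before cropping.

```python
# Illustrative sketch of the "crop and re-input" step in Eq. (2); not the
# authors' released code. The bbox comes from the <region>...</region> tokens
# and is assumed to be expressed in the vision encoder's pixel space.
from PIL import Image

def crop_region(raw_image_path, bbox, encoder_size):
    """Map a bbox from encoder space back to the raw image and crop it."""
    x_min, y_min, x_max, y_max = bbox
    w_enc, h_enc = encoder_size              # (w_encoder, h_encoder)
    raw = Image.open(raw_image_path)
    w_raw, h_raw = raw.size                  # (w_raw, h_raw)
    scaled = (
        x_min * w_raw / w_enc, y_min * h_raw / h_enc,
        x_max * w_raw / w_enc, y_max * h_raw / h_enc,
    )
    region = raw.crop(tuple(int(round(v)) for v in scaled))
    return region  # re-encoded by the VLM at the full pixel budget

# Hypothetical example: zoom into a chart detected at (120, 300)-(480, 620)
# in encoder space of a 672x896 encoded page.
# detail = crop_region("page_17.png", (120, 300, 480, 620), encoder_size=(672, 896))
```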
Trajectory Data Scaling-Up Based on Multi-Expert Sampling. To effectively train the model, especially smaller-scale models, to learn the utilization of Visual Perception Tokens while retaining their foundational capabilities, we need to train them with high-quality data through Supervised Fine-Tuning before applying RL. We
propose a multi-expert sampling strategy to scale up the trajectory data, aiming to sample diverse interactions within the same reasoning trajectory for each data point.
The core idea is to utilize large-scale models π_LM to effectively guide the reasoning process and tool selections within a trajectory, while smaller expert models π_EM annotate coordinates under the guidance of the large-scale models. At the t-th interaction between the model and the environment:

H_t = {T_1, A_1, O_1, ..., O_{t−1}, T_t, A_t, O_t},  (3)

where H_t is the trajectory, representing the sequence of past observations and actions leading up to the current step. The π_LM, equipped with extensive capacities for understanding and processing complex multi-modal interactions, acts as a pioneer in determining the overarching reasoning pathway:

{T_t, A_t} = π_LM(· | H_{t−1}),  (4)

We use a rule-based function to extract the action and thought. If the action is search, the engine returns the original image as O_t. Otherwise, each time a visual perception token is output, we employ grounding-specific expert models to re-locate the coordinates of regions of interest:

Â_t = π_EM(· | H_{t−1}; T_t),  (5)

where the expert models π_EM benefit from the guidance provided by the large model's thought T_t, leveraging these insights to enhance their precision in region localization. The newly generated coordinates of the region of interest Â_t replace the old visual perception tokens A_t generated by π_LM, and the re-encoded image serves as observation Ô_t:

Ô_t = P_V(O_{t−1}, Â_t),  (6)

where P_V represents the visual processing function; the selected region undergoes cropping, zooming in, and re-encoding before being inserted into the context.
2.3 Fine-Grained Reward Function Tailored for Enhancing the RAG Framework
Figure 3: Experiments on the impact of context length on model performance (accuracy vs. number of reference images; series: inference with noise and upper bound).
Unlike traditional RL methods that focus only on output results, VRAG-RL emphasizes optimizing retrieval in RAG, as retrieval quality directly affects overall performance. We designed a reward function with three components: pattern reward, retrieval efficiency reward, and model-based outcome reward, guiding the model to efficiently retrieve information and generate high-quality answers.
Retrieval Efficiency Reward. As shown in Figure 3, when the information is sufficient, an excessively long context can interfere with the model. Therefore, the earlier and more comprehensively relevant information is retrieved, the better the model can construct a coherent and informative context for generating high-quality answers. Inspired by Normalized Discounted Cumulative Gain (NDCG), and using our predefined relevance of the recalled images, we define:

DCG(D_trj) = Σ_{i=1}^{|D_trj|} (2^{s_i} − 1) / log2(i + 1),  where s_i = 1 if d_i ∈ D_rel and s_i = 0 if d_i ∉ D_rel,  (7)

where d_i ∈ D_trj represents the stacked retrieved images within the trajectory, D_rel is the collection of relevant golden images, and s_i is the predefined relevance score. Performance is optimal when all relevant documents are retrieved first, so the ideal DCG is defined as:

IDCG(D_rel) = Σ_{i=1}^{|D_rel|} (2^{s_rel} − 1) / log2(i + 1) + Σ_{i=|D_rel|+1}^{n} (2^{s_unrel} − 1) / log2(i + 1) = Σ_{i=1}^{|D_rel|} 1 / log2(i + 1),  (8)

where s_rel = 1 and s_unrel = 0 respectively represent the relevance scores of ideally relevant and irrelevant documents. Our Retrieval Efficiency Reward is defined as:

r_Ret = DCG(D_trj, D_rel) / IDCG(D_rel),  (9)

where r_Ret, the modified NDCG, is directly used as the reward to reflect retrieval performance.
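A minimal sketch of the retrieval efficiency reward in Eqs. (7)-(9) is shown below; the function and variable names are ours. It scores the stacked retrieval order within a trajectory against the set of golden images.

```python
# Illustrative sketch of the retrieval efficiency reward r_Ret (Eqs. 7-9);
# not the authors' released implementation.
import math

def retrieval_reward(retrieved_ids, golden_ids):
    """NDCG-style reward: 1.0 when all golden images are retrieved first."""
    golden = set(golden_ids)
    dcg = sum(
        (2 ** (1 if doc in golden else 0) - 1) / math.log2(rank + 1)
        for rank, doc in enumerate(retrieved_ids, start=1)
    )
    idcg = sum(1.0 / math.log2(rank + 1) for rank in range(1, len(golden) + 1))
    return dcg / idcg if idcg > 0 else 0.0

# Example: two golden pages, one retrieved first and one third.
print(retrieval_reward(["p7", "p2", "p9"], ["p7", "p9"]))  # ~0.92
```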
Pattern Consistency and Model-Based Outcome Rewards. The rule-based
pattern reward is designed to encourage the model to follow the reasoning patterns during the interaction process:

r_Pat ∼ Parse(H),  (10)

where H is the generated trajectory. Parse(·) employs action tokens such as <search> and </search> to extract predefined actions in the action space. This is crucial for a reasoning agent with a predefined action space, as it helps effectively extract actions and thoughts. Regarding the outcome reward, unlike rule-based methods that are prone to falling into local optima, we adopt a model-based reward:

r_Ans ∼ π_RM(· | Q, A_golden, A_pred),  (11)

where Q represents the input query, A_golden is the reference golden answer, and A_pred is the answer generated by the VLM. Based on these inputs, the evaluation model π_RM assesses the correctness of the final answer.
Integrated Reward Function. The final reward function is a weighted combination of the three components described above, with weights used to balance the contributions of each component:

r_ϕ = α · r_Ret + β · r_Ans + γ · r_Pat,  (12)

where α + β + γ = 1. In practice, we usually set γ = 0, as the model can effectively learn the pattern after SFT. We set γ = 0.1 when performing RL with a cold start to help the model learn the predefined pattern. By integrating these three components into the reward function, our VRAG-RL provides a comprehensive and fine-grained evaluation mechanism that guides the model in optimizing its reasoning and retrieval capabilities in a way that aligns closely with real-world applications.
2.4 Reinforcement Learning Framework with Iterative Reasoning
We apply RL to multimodal RAG agent tasks to enhance the capability of VLMs in retrieving and reasoning. Our RL framework is primarily divided into two parts: the rollout process for the multimodal agent and the reinforcement learning training strategy for multi-turn interactions.
Multi-Round Generation with Search Engine and Visual Perception Actions. As shown in Algorithm 1, the model interacts with the external environment in multiple turns, where the observation (an image) is inserted into the trajectory in the role of the user. This is necessary to align with the model's pre-training domain, where only the user role can carry image tokens.
Algorithm 1: Interaction of VLM with the External Environment through Iterative Reasoning
Input: input query x, policy model π_θ, external environment V, maximum iterations T.
Output: final trajectory y.
1: Initialize rollout sequence y ← ∅ and action count t ← 0
2: while t < T do
3:   Generate VLM response sequence y_t ∼ π_θ(· | x, y)
4:   Concatenate y_t to the sequence y with the role of assistant: y ← y + y_t
5:   if <search> </search> detected in y_t then
6:     Extract search query q ← Parse(y_t) and retrieve the related image I_t = Ret(q)
7:   else if <region> </region> detected in y_t then
8:     Extract visual perception tokens loc ← Parse(y_t) and process the image I_t = P_V(loc, y)
9:   else if <answer> </answer> detected in y_t then
10:    return final generated trajectory y
11:  end if
12:  Concatenate vision tokens I_t to the sequence y with the role of user: y ← y + I_t
13:  Increment action count t ← t + 1
14: end while
15: return final generated trajectory y
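The following is a compact Python sketch of the interaction loop in Algorithm 1. The policy call, search engine, and visual-perception function are placeholders for whatever VLM generation call, retriever, and crop routine are available; the tag parsing mirrors the <search>, <region>, and <answer> tokens of the action space. It is an illustration under those assumptions, not the released implementation.

```python
# Illustrative sketch of Algorithm 1's rollout loop; policy(), search() and
# perceive() are placeholders for the VLM call, the search engine, and the
# visual-perception (crop/zoom) function.
import re

def rollout(query, policy, search, perceive, max_turns=8):
    messages = [{"role": "user", "content": query}]
    for _ in range(max_turns):
        response = policy(messages)                       # y_t ~ pi_theta(. | x, y)
        messages.append({"role": "assistant", "content": response})

        if re.search(r"<answer>(.*?)</answer>", response, re.S):
            return messages                               # trajectory ends with an answer

        if (m := re.search(r"<search>(.*?)</search>", response, re.S)):
            observation = search(m.group(1).strip())      # retrieved image(s)
        elif (m := re.search(r"<region>(.*?)</region>", response, re.S)):
            observation = perceive(m.group(1), messages)  # cropped, re-encoded region
        else:
            break                                         # invalid action: stop early

        # Observations (images) are inserted with the user role, matching the
        # pre-training format in which only user turns carry image tokens.
        messages.append({"role": "user", "content": observation})
    return messages
```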
Table 1: Main results. The best performance is marked in bold. SlideVQA and ViDoSeek mainly focus on reasoning type, while MMLongBench focuses on the visual type of the reference content. OCR-based RAG and purely visual RAG are evaluated with the same prompt and setting. (Column groups: SlideVQA: Single-hop, Multi-hop; ViDoSeek: Extraction, Logic; MMLongBench: Text, Table, Chart, Figure, Layout.)
| Method | Single-hop | Multi-hop | Extraction | Logic | Text | Table | Chart | Figure | Layout | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL-3B-Instruct | | | | | | | | | | |
| Vanilla RAG (OCR) | 15.1 | 12.1 | 8.8 | 14.3 | 3.9 | 5.1 | 1.7 | 3.1 | 2.5 | 11.2 |
| ReAct (OCR) | 11.8 | 9.9 | 5.3 | 7.4 | 6.5 | 3.7 | 3.9 | 5.2 | 2.5 | 8.4 |
| Search-R1 (OCR) | 17.5 | 13.8 | 13.3 | 20.7 | 3.4 | 3.2 | 4.5 | 4.1 | 6.8 | 14.1 |
| Vanilla RAG (visual) | 19.4 | 12.2 | 10.1 | 17.3 | 2.2 | 4.1 | 5.2 | 4.7 | 4.3 | 13.2 |
| ReAct (visual) | 15.7 | 10.9 | 6.7 | 14.2 | 2.7 | 3.6 | 3.4 | 3.1 | 5.1 | 10.9 |
| Search-R1-VL (visual) | 26.3 | 20.1 | 20.1 | 29.8 | 8.5 | 7.8 | 7.9 | 9.3 | 7.6 | 21.3 |
| VRAG-RL (visual) | 65.3 | 38.6 | 63.1 | 73.8 | 22.7 | 16.1 | 21.9 | 21.4 | 19.5 | 53.5 |
| Qwen2.5-VL-7B-Instruct | | | | | | | | | | |
| Vanilla RAG (OCR) | 26.1 | 10.6 | 24.7 | 30.9 | 8.5 | 5.4 | 11.7 | 4.4 | 3.3 | 20.9 |
| ReAct (OCR) | 21.2 | 13.3 | 14.3 | 21.3 | 5.9 | 5.1 | 7.3 | 5.5 | 1.7 | 15.8 |
| Search-R1 (OCR) | 28.4 | 19.7 | 20.8 | 30.6 | 9.9 | 6.0 | 7.9 | 10.1 | 5.9 | 22.2 |
| Vanilla RAG (visual) | 29.1 | 17.4 | 26.4 | 41.3 | 13.1 | 14.7 | 15.9 | 4.3 | 7.6 | 24.2 |
| ReAct (visual) | 34.8 | 20.4 | 27.5 | 42.1 | 10.1 | 12.4 | 10.2 | 6.2 | 7.1 | 26.9 |
| Search-R1-VL (visual) | 48.3 | 42.3 | 40.5 | 50.3 | 19.9 | 13.4 | 12.9 | 11.4 | 10.2 | 37.4 |
| VRAG-RL (visual) | 69.3 | 43.1 | 60.6 | 74.8 | 26.1 | 26.3 | 24.8 | 25.9 | 21.2 | 57.1 |
Training Strategy for Reinforcement Learning in Multi-Step Interactions. We propose an RL framework that enables VLMs to learn how to interact with search engines and gather visually rich information from a coarse-to-fine perspective. The optimization objective is formulated as:

max_{π_θ} E_{x∼D, y∼π_θ(·|x;V)} [ r_ϕ(x, y) ] − β D_KL[ π_θ(y|x;V) || π_ref(y|x;V) ],  (13)

where π_θ is the policy model, π_ref is the reference model, D_KL is the KL-divergence, and y ∼ π_θ(· | x; V) = π_θ(· | x) ⊗ V is the rollout process. Our approach implements Group Relative Policy Optimization (GRPO) Guo et al. (2025), which optimizes the model's retrieval-augmented reasoning capability with group-sampled role-play trajectories.
3 Experiments
3.1 Experimental Settings
Datasets, Metric and Baselines. To evaluate the effectiveness of VRAG-RL, we compare our method with text-based and vision-based baselines: (1) Vanilla RAG Faysse et al. (2024) uses the original question as a query for the search engine, then VLMs perform direct inference. (2) ReAct Yao et al. (2023): the model performs rewriting, retrieving, and reasoning in the think-then-act paradigm. (3) Search-R1(-VL) is the baseline adapted from Search-R1 Jin et al. (2025), and the settings are aligned across all experiments to ensure fairness. We evaluate our method on three challenging, visually rich benchmarks: ViDoSeek Wang et al. (2025a), SlideVQA Tanaka et al. (2023) and MMLongBench Ma et al. (2024). The model-based evaluation metric is binary (0 or 1), indicating the accuracy of the model's responses.
Training and Inference Setups. We conducted SFT and RL on llama-factory Zheng et al. (2024) and verl Sheng et al. (2024), respectively. We use full-parameter fine-tuning and a cosine learning-rate scheduler with a warmup ratio of 0.1 during SFT. When training with the GRPO algorithm, we set the group size to 5; the coefficient for the KL loss is typically set to 0.01, but when performing a cold start we set it to 0 to disable the KL constraint on the model. During training and inference, we built a search engine from a database of approximately 70k visual documents.
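To make the group-relative optimization of Section 2.4 and Eq. (13) concrete, the following is a minimal sketch of the standard GRPO-style normalization of rewards within one sampled group (the setup above uses a group size of 5). The exact computation inside verl may differ in detail, so treat this as an illustration rather than the training code.

```python
# Illustrative sketch of GRPO-style group-relative advantages; the actual
# computation in the verl training framework may differ in detail.
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize per-trajectory rewards r_1..r_G within one sampled group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: five trajectories sampled for one query, each scored by r_phi (Eq. 12).
print(group_relative_advantages([0.9, 0.4, 0.4, 0.1, 0.7]))
```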
3.2 Results
Main Results. As shown in Table 1, compared to purely visual methods, OCR-based methods exhibit significant limitations on visually intensive benchmarks. On the one hand, visual information inherently contains elements that cannot be represented by text, such as element positions, layout, and color. On the other hand, the perceptual capabilities of OCR models are considerably inferior to those of current advanced VLMs, which restricts the overall performance ceiling of the framework. Vision-based methods have proven to be a more elegant solution than OCR-based methods, especially in tasks related to visual understanding. Among the prompt-based baselines in the vision domain, Vanilla RAG and ReAct exhibit poor performance, far behind the RL-based baselines and our method on various benchmarks. The 7B model, compared to the 3B model, possesses superior perception and understanding capabilities, exhibiting strong performance across various datasets. Among the RL-based baselines, our method also performs better than Search-R1-VL on both Qwen2.5-VL-7B-Instruct (34.7 → 57.1) and Qwen2.5-VL-3B-Instruct (21.3 → 53.5). The evaluation results on SlideVQA and ViDoSeek demonstrate our model's significant improvement in reasoning capabilities across various reasoning tasks. Furthermore, since MMLongBench covers multiple visual element types, the results there indicate the model's improvement in visual perception capabilities, which is related to our proposed visual perception action space. The results across the benchmarks prove the effectiveness and generalization of our method in the retrieval and reasoning of visually rich information.
Table 2: Ablation study on three benchmarks.
| Reward: Vanilla | Reward: RAG-Specific | Action space: Search | Action space: Visual-Perception | Accuracy |
|---|---|---|---|---|
| ✓ | | ✓ | | 47.2 |
| ✓ | | ✓ | ✓ | 49.3 |
| | ✓ | ✓ | | 54.9 |
| | ✓ | ✓ | ✓ | 57.1 |
Approach Ablations. As shown in Table 2, taking Qwen2.5-VL-7B-Instruct as an example, we decompose the key components of VRAG-RL to examine the impact of the different rewards and the action space on performance separately. At a macro level, removing each module results in a clear drop in accuracy, which validates the power of our RAG-specific reward and visual-perception action space. The action space module we defined shows a certain degree of improvement on different bases, which demonstrates the effectiveness of the visual-perception-based strategy. Consistent with the findings on MMLongBench in Figure 5, the visual perception action space we introduced generally enhances the framework's performance, particularly on high-density visual information. Furthermore, ablation experiments on the reward model further demonstrate that retrieving relevant information is a prerequisite for high-quality generation, highlighting the role of high-quality retrieval in RAG and the importance of our RAG-specific reward. Comparisons and analyses across different settings collectively demonstrate the effectiveness and generalization of our modules, and their combination comprehensively enhances end-to-end performance from various perspectives.
3.3 Analysis
Figure 4: Retrieval performance (Recall@1) of our approach, ReAct, and direct retrieval on SlideVQA, ViDoSeek, and MMLongBench.
Better retrieval facilitates high-quality generation. Our VRAG-RL framework significantly enhances retrieval efficiency, which is crucial for constructing a coherent and informative context for high-quality generation. As demonstrated in Figure 3, the context length has a substantial impact on model performance.
When the context is too long, it can introduce noise and interfere with the model’s ability to generate accurate answers. In contrast, when relevant information is retrieved early and comprehensively, the model can build a more focused and
informative context. As shown in Figure 4, our model is more effective at retrieving relevant information than traditional prompt-based rewrite methods. Our approach provides the vision model with a better context for generating high-quality answers.
Figure 5: Relative performance on MMLongBench across the Text, Table, Chart, Figure, and Layout categories for the vanilla baseline and our method with and without visual perception.
Visual perception action space provides a fine-grained perspective. The visual perception action space introduced in our framework further enhances understanding by allowing the model to focus on information-dense regions of images. Figure 5 illustrates the relative performance comparison between our approach with the visual perception action space and various baselines, from which we can observe that VRAG-RL not only performs well on textual tasks but also shows noticeable improvements on tasks requiring visual perception abilities, particularly Layout, Chart, and Figure. This is particularly important given the current limitations in computational resources, especially considering that VLMs are highly memory-intensive. Using this dynamic resolution strategy, the model can achieve more detailed perception within the constraints of limited computational resources, rather than simply maximizing the resolution of the original image. Our method achieves an improvement in perceptual abilities while optimizing resource utilization. Perhaps this human-like way of thinking and acting is the key to AGI.
Table 3: Average Finish Rate (%) and Average Invalid Action Rate (%).
| Method | Invalid Action Rate ↓ | Finish Rate ↑ |
|---|---|---|
| SFT | 9.4 | 84.2 |
| + RL | 5.1 | 97.1 |
Reinforcement learning helps the model perform multi-step reasoning effectively. One major challenge of prompt-based methods is that as the number of interactions increases, the model's capability to follow instructions weakens. Pre-training with SFT helps the model reason in a predefined pattern compared to a cold start, but it also impacts the model's inherent foundational capabilities to some extent. To further explore how RL activates multi-turn reasoning abilities in models, we compared the iterative reasoning performance of models with and without RL, as shown in Table 3. For our method with an action space, effective actions are crucial for interacting with the external environment. The Invalid Action Rate indicates incorrect action responses, which include not only pattern errors but also hallucinations caused by wrong cropping, answering before retrieval, and so on. Inefficient reasoning often includes repeated, meaningless searches, leading to a decrease in the finish rate. Our method with RL effectively reduces the invalid action rate and increases the finish rate. It guides the model to make optimal decisions at each step of the reasoning process, enabling it to flexibly adjust strategies when faced with different types of out-of-domain visual information, thereby better completing complex reasoning tasks.
Model-based reward offers more stable training than rule-based reward. Previous works often use exact match (EM) as the reward, which is too strict. Unlike short answers to data-related questions, it is difficult for the model's responses to exactly match the golden answer, resulting in inefficient training. However, using recall as a reward may lead to misjudgments and cause models to hack the reward function, resulting in repetitive responses that destabilize training. In contrast, a model-based reward leverages
|
https://arxiv.org/abs/2505.22019v1
|
an evaluation model to assess the quality and relevance of generated responses in a more flexible manner. This approach not only aligns better with real-world applications but also provides a more stable and effective training signal, as demonstrated in Appendix A. The model-based reward thus enables VRAG-RL to achieve more robust performance across visual reasoning tasks.

Figure 6: Latency analysis on generation (average counts of Answer, Search, and Perception actions for Vanilla RAG, ReAct RAG, and VRAG-RL).

Time efficiency. As shown in Figure 6, our method's multi-turn interaction with external environments can lead to increased latency. The latency of vanilla RAG remains consistent, as it only performs a single search and provides an answer. ReAct RAG, a prompt-based method, also demonstrates multi-turn interaction capabilities due to the fundamental reasoning abilities of the model. However, it is limited to only two defined actions: answer and search. Due to the lack of sufficient perception capabilities, it often falls into repetitive search loops. Our approach equips the model with a visual perception space that can effectively understand visually rich images. The model can quickly extract answers after retrieval, thus avoiding ineffective searches. Despite the increase in latency, the overall performance improves due to the higher quality of generated answers, making the trade-off between latency and accuracy highly beneficial for visually rich retrieval and understanding tasks.

Case Study. In Figures 7 and 8 (Appendix H), we list the trajectories of our VRAG-RL to illustrate how our model reasons and interacts with the environment. These cases highlight two challenges in visually rich information RAG: (1) accurately retrieving relevant images, and (2) the reference information often requires higher-resolution perception. In Figure 7, we can observe that the model demonstrates reflective capability and eventually identifies subtle clues in the relevant images. Moreover, as shown in Figure 8, the model engages in visual perception actions only when required, showcasing human-like reasoning instead of simply replicating patterns from its training data.

4 Related Work

Vision-based Retrieval-augmented Generation. RAG demonstrates significant advantages in addressing knowledge-intensive problems Lewis et al. (2020); Gao et al. (2023); Chen et al. (2024a). Traditional text-based RAG methods typically involve designing different agents to interact with search engines Wu et al. (2025b); Chen et al. (2024b;c); Wu et al. (2025a); Li et al. (2023); Moreira et al. (2024); Lee et al. (2024). However, with the widespread adoption of electronic documents, knowledge is no longer confined to text. Recently, there has been an increasing amount of research on OCR-free retrieval methods that directly align textual queries with images Yu et al. (2024); Faysse et al. (2024). Furthermore, more and more work is focusing on multimodal RAG agents Wang et al. (2025a); Cho et al. (2024); Jiang et al. (2024); Li et al. (2024); Xia et al. (2024), enabling more accurate retrieval and extraction of visual information. Our work builds upon these developments by incorporating visual perception actions into visual-based RAG, effectively activating the reasoning and understanding capabilities of VLMs.

Reinforcement Learning with Large Models. Reasoning capabilities are crucial for models to effectively address complex problems, and RL has been proven to be a powerful
|
https://arxiv.org/abs/2505.22019v1
|
approach to enhance these capabilitiesGuo et al. (2025); Jaech et al. (2024). Previous work applied RL in the training of LLMs Meng et al. (2024); Williams (1992); Rafailov et al. (2023); Schulman et al. (2017); Guo et al. (2025). Additionally, more and more works aim to use RL to enhance the reasoning capabilities of VLMs Chen et al. (2025a); Meng et al. (2025); Liu et al. (2025). Recent advancements have seen RL being widely applied to the training of large model-driven agents Wang et al. (2025b). These agents, especially RAG agents, require robust multi-step reasoning capabilities to interact effectively with external environments Jiang et al. (2025); Li et al. (2025). However, there is still a scarcity of RL frameworks specifically tailored for multimodal iterative reasoning, which is essential for handling visually rich information. Our work aims to fill this gap by introducing a novel RL framework that enables VLMs to perform iterative reasoning with visual perception actions, thereby enhancing their reasoning capabilities in complex, multi-modal retrieval-augmented reasoning tasks. 5 Conclusion and Future Work In this paper, we introduce VRAG-RL, a novel reinforcement learning framework tailored for com- plex reasoning across visually rich information. Our approach enables Vision Language Models to interact with search engines more effectively, significantly enhancing their reasoning and retrieval ca- pabilities. Extensive evaluations on various benchmarks have demonstrated significant advantages in visual information reasoning, retrieval, and understanding with our model. For future work, we plan to introduce more actions that mimic how humans handle complex information, allowing the model to focus more on deep thinking. Additionally, we aim to reduce hallucinations by leveraging more advanced models, further improving the accuracy and reliability of our framework. 10 Technical Report Qwen References Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5-vl technical report. arXiv preprint arXiv:2502.13923 , 2025. Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence , pp. 17754–17762, 2024a. Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V , 2025a. Accessed: 2025-02-02. Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z Pan, Wen Zhang, Huajun Chen, Fan Yang, et al. Research: Learning to reason with search for llms via reinforcement learning. arXiv preprint arXiv:2503.19470 , 2025b. Zehui Chen, Kuikun Liu, Qiuchen Wang, Jiangning Liu, Wenwei Zhang, Kai Chen, and Feng Zhao. Mindsearch: Mimicking human minds elicits deep ai searcher. arXiv preprint arXiv:2407.20183 , 2024b. Zehui Chen, Kuikun Liu, Qiuchen Wang, Wenwei Zhang, Jiangning Liu, Dahua Lin, Kai Chen, and Feng Zhao. Agent-flan: Designing data and methods of effective agent tuning for large language models. arXiv preprint arXiv:2403.12881 , 2024c. Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271 , 2024d.
|
https://arxiv.org/abs/2505.22019v1
|
Jaemin Cho, Debanjan Mahata, Ozan Irsoy, Yujie He, and Mohit Bansal. M3docrag: Multi- modal retrieval is what you need for multi-page multi-document understanding. arXiv preprint arXiv:2411.04952 , 2024. Yuning Du, Chenxia Li, Ruoyu Guo, Xiaoting Yin, Weiwei Liu, Jun Zhou, Yifan Bai, Zilin Yu, Yehua Yang, Qingqing Dang, and Haoshuang Wang. Pp-ocr: A practical ultra lightweight ocr system, 2020. URL https://arxiv.org/abs/2009.09941 . Manuel Faysse, Hugues Sibille, Tony Wu, Bilel Omrani, Gautier Viaud, Céline Hudelot, and Pierre Colombo. Colpali: Efficient document retrieval with vision language models. In The Thirteenth International Conference on Learning Representations , 2024. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 , 2:1, 2023. Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749 , 2025. Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanmin Wu, Jiayi Lei, Pengshuo Qiu, Pan Lu, Zehui Chen, Chaoyou Fu, Guanglu Song, et al. Mmsearch: Benchmarking the potential of large models as multi-modal search engines. arXiv preprint arXiv:2409.12959 , 2024. 11 Technical Report Qwen Pengcheng Jiang, Jiacheng Lin, Lang Cao, Runchu Tian, SeongKu Kang, Zifeng Wang, Jimeng Sun, and Jiawei Han. Deepretrieval: Hacking real search engines and retrievers with large language models via reinforcement learning. arXiv preprint arXiv: 2503.00223 , 2025. URL https: //arxiv.org/abs/2503.00223 . Bowen Jin, Jinsung Yoon, Jiawei Han, and Sercan O Arik. Long-context llms meet rag: Overcoming challenges for long inputs in rag. arXiv preprint arXiv:2410.05983 , 2024. Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516 , 2025. Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of artificial intelligence research , 4:237–285, 1996. Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. Nv-embed: Improved techniques for training llms as generalist embedding models. arXiv preprint arXiv:2405.17428 , 2024. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented gen- eration for knowledge-intensive nlp tasks. Advances in neural information processing systems , 33: 9459–9474, 2020. Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. Search-o1: Agentic search-enhanced large reasoning models. arXiv preprint arXiv:2501.05366 , 2025. 
Yangning Li, Yinghui Li, Xinyu Wang, Yong Jiang, Zhen Zhang, Xinran Zheng, Hui Wang, Hai-Tao Zheng, Fei Huang, Jingren Zhou,
|
https://arxiv.org/abs/2505.22019v1
|
et al. Benchmarking multimodal retrieval augmented generation with dynamic vqa dataset and self-adaptive planning agent. arXiv preprint arXiv:2411.02937 , 2024. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281 , 2023. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pp. 26296–26306, 2024. Jerry Liu. LlamaIndex, 11 2022. URL https://github.com/jerryjliu/llama_index . Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785 , 2025. Yubo Ma, Yuhang Zang, Liangyu Chen, Meiqi Chen, Yizhu Jiao, Xinze Li, Xinyuan Lu, Ziyu Liu, Yan Ma, Xiaoyi Dong, et al. Mmlongbench-doc: Benchmarking long-context document understanding with visualizations. arXiv preprint arXiv:2407.01523 , 2024. Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365 , 2025. Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference- free reward. Advances in Neural Information Processing Systems , 37:124198–124235, 2024. Gabriel de Souza P Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge. Nv-retriever: Improving text embedding models with effective hard-negative mining. arXiv preprint arXiv:2407.15831 , 2024. 12 Technical Report Qwen OpenAI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/ , 2024. Sundar Pichai, D Hassabis, and K Kavukcuoglu. Introducing gemini 2.0: our new ai model for the agentic era, 2024. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems , 36:53728–53741, 2023. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, and Hongsheng Li. Visual cot: Advancing multi-modal language models with a comprehensive dataset and benchmark for chain-of-thought reasoning. Advances in Neural Information Processing Systems , 37:8612–8642, 2024. Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. arXiv preprint arXiv: 2409.19256 , 2024. Richard S Sutton, Andrew G Barto, et al. Reinforcement learning. Journal of Cognitive Neuroscience , 11(1):126–134, 1999. Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, and Kuniko Saito. Slidevqa: A dataset for document visual question answering on multiple images. In Proceedings of the AAAI Conference on Artificial Intelligence , pp. 13636–13645, 2023. Qiuchen Wang, Ruixue Ding, Zehui Chen, Weiqi Wu, Shihang Wang, Pengjun Xie, and Feng Zhao. Vidorag: Visual document retrieval-augmented generation via dynamic iterative reasoning agents. arXiv preprint arXiv:2502.18017 , 2025a. 
Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Xing Jin, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, Yiping Lu, Kyunghyun Cho, Jiajun Wu, Li Fei-Fei, Lijuan Wang, Yejin
|
https://arxiv.org/abs/2505.22019v1
|
Choi, and Manling Li. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning, 2025b. URL https://arxiv.org/abs/2504.20073 . Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning , 8:229–256, 1992. Jialong Wu, Wenbiao Yin, Yong Jiang, Zhenglin Wang, Zekun Xi, Runnan Fang, Linhai Zhang, Yulan He, Deyu Zhou, Pengjun Xie, et al. Webwalker: Benchmarking llms in web traversal. arXiv preprint arXiv:2501.07572 , 2025a. Weiqi Wu, Shen Huang, Yong Jiang, Pengjun Xie, Fei Huang, and Hai Zhao. Unfolding the head- line: Iterative self-questioning for news retrieval and timeline summarization. arXiv preprint arXiv:2501.00888 , 2025b. Peng Xia, Kangyu Zhu, Haoran Li, Hongtu Zhu, Yun Li, Gang Li, Linjun Zhang, and Huaxiu Yao. Rule: Reliable multimodal rag for factuality in medical vision language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing , pp. 1081–1093, 2024. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115 , 2024. 13 Technical Report Qwen Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR) , 2023. En Yu, Kangheng Lin, Liang Zhao, Jisheng Yin, Yana Wei, Yuang Peng, Haoran Wei, Jianjian Sun, Chunrui Han, Zheng Ge, et al. Perception-r1: Pioneering perception policy with reinforcement learning. arXiv preprint arXiv:2504.07954 , 2025a. Runpeng Yu, Xinyin Ma, and Xinchao Wang. Introducing visual perception token into multimodal large language model. arXiv preprint arXiv:2502.17425 , 2025b. Shi Yu, Chaoyue Tang, Bokai Xu, Junbo Cui, Junhao Ran, Yukun Yan, Zhenghao Liu, Shuo Wang, Xu Han, Zhiyuan Liu, et al. Visrag: Vision-based retrieval-augmented generation on multi- modality documents. arXiv preprint arXiv:2410.10594 , 2024. Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations) , Bangkok, Thailand, 2024. Association for Computational Linguistics. URL http://arxiv.org/abs/2403.13372 . 14 Technical Report Qwen A Model-Based Reward We employ a model-based reward to evaluate the quality and relevance of generated responses. Specifically, we utilize Qwen2.5-7B-Instruct Yang et al. (2024) as our reward model. This model is deployed on 4 NVIDIA A100 GPUs to enable efficient batch evaluation. The prompt used for the reward model is illustrated in Figure 12. Given the input query, reference answer, and generated response, the reward model assesses the correctness of the generated response and outputs a binary value (0 or 1) to represent the accuracy of the answer. Compared to the rule-based reward like exact match
|
https://arxiv.org/abs/2505.22019v1
|
(EM) or Recall, used in previous work Jin et al. (2025); Chen et al. (2025a), our model-based reward provides a more flexible and comprehensive evaluation of the generated response. This leads to higher training efficiency and better generalization to diverse datasets.

B The Implementation of the Search Engine

To effectively support the retrieval-augmented generation tasks in our VRAG-RL framework, we implemented OCR-based and vision-based pipelines separately. The vision-based retriever is built upon the state-of-the-art embedding model ColPali Faysse et al. (2024), which is specifically designed for aligning textual queries with images. For the textual retrieval pipeline, we employ PP-OCR Du et al. (2020) to extract text from images. We utilize Llama-Index Liu (2022) to ensure an efficient indexing and querying mechanism for large-scale image datasets. In our experiments, we deployed the search engine on a single NVIDIA A100 80G GPU, allowing us to handle large-scale queries efficiently. The use of batch querying further optimizes the retrieval speed, making it suitable for real-time applications.

C Reinforcement Learning Framework with GRPO

Our framework implements Group Relative Policy Optimization (GRPO), which leverages the average reward of multiple sampled outputs as a baseline rather than relying on a learned value function. The policy model is optimized by maximizing the following objective function:

$$
\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, \{y_i\}_{i=1}^{G} \sim \pi_{\mathrm{old}}(\cdot \mid x; \mathcal{V})} \Bigg[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{\sum_{t=1}^{|y_i|} \mathbb{I}(y_{i,t})} \sum_{t=1:\, \mathbb{I}(y_{i,t})=1}^{|y_i|} \min\!\Bigg( \frac{\pi_{\theta}(y_{i,t} \mid x, y_{i,<t}; \mathcal{V})}{\pi_{\mathrm{old}}(y_{i,t} \mid x, y_{i,<t}; \mathcal{V})} \hat{A}_{i,t},\ \mathrm{clip}\!\Bigg( \frac{\pi_{\theta}(y_{i,t} \mid x, y_{i,<t}; \mathcal{V})}{\pi_{\mathrm{old}}(y_{i,t} \mid x, y_{i,<t}; \mathcal{V})},\ 1-\epsilon,\ 1+\epsilon \Bigg) \hat{A}_{i,t} \Bigg) \Bigg] - \beta\, D_{\mathrm{KL}}[\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}]
$$

where the rollout module samples a group of trajectories $\{y_1, y_2, \dots, y_G\}$ from the old policy $\pi_{\mathrm{old}}$ for each input question $x$ by interacting with the external environment $\mathcal{V}$, and $\hat{A}_{i,t}$ represents the advantage, computed based on the relative rewards of outputs within each group.

D Expert Trajectories Collection

Data Collection. To train our model effectively, we collected expert trajectories using Qwen-VL-max-latest for prompt-based data collection. Specifically, we utilized a ReAct-based prompt to gather data, ensuring that the model could perform complex reasoning tasks. During the data collection process, whenever grounding was required to focus on specific regions of interest within images, we employed Qwen2.5-VL-72B to perform the grounding tasks. This was done under the guidance of the historical trajectories.

Data Proportions. To ensure that our model could perform diverse multi-step reasoning during Reinforcement Learning (RL), we carefully balanced the training data. Specifically, we balanced the trajectories based on the number of steps (2-6) and the types of actions involved (search and perception). This approach ensured that the model was exposed to a wide range of reasoning tasks and could learn to handle different types of interactions with the environment effectively.

E Dataset Information

We evaluate our method on three visually rich document datasets: SlideVQA, ViDoSeek, and MMLongbench.

1. SlideVQA Tanaka et al. (2023) is a dataset for document visual question answering focused on understanding slides. It contains over 2,600 slide decks with more than 52,000 slide images and 14,500 questions that require complex reasoning skills such as single-hop, multi-hop, and numerical reasoning.
The dataset is designed to support various reasoning types and includes annotated arithmetic expressions for numerical questions to enhance reasoning capabilities.

2. ViDoSeek Wang et al. (2025a) is a dataset specifically
|
https://arxiv.org/abs/2505.22019v1
|
designed for visually rich document retrieval-reason-answer tasks. It aims to evaluate the performance of RAG systems on large-scale document collections. Unlike traditional VQA datasets that focus on single images or documents, ViDoSeek contains queries with unique answers across a collection of approximately 6,000 images, covering diverse content types such as text, charts, tables, and layouts. This dataset provides a more comprehensive and challenging benchmark for evaluating the retrieval and reasoning capabilities of RAG models in real-world scenarios.

3. MMLongbench Ma et al. (2024) is a dataset designed to evaluate the document understanding capabilities of VLMs with an emphasis on long-context, multi-modal documents composed of text, images, charts, tables, and layout structures.

F Compared Baselines

Here we introduce in detail the baselines we compare against and our reproduction details.

1. Vanilla RAG. There are two types of Vanilla RAG: text-based and visual-based. Text-based Vanilla RAG uses text as the retrieval corpus, which is reflected in text search engines and text-modality generation. During the retrieval phase, it directly uses the original question to search for relevant text, which is then inserted into the context to answer the question. Visual-based Vanilla RAG uses images as the corpus. During the retrieval phase, it directly uses the original question to search for relevant images, which are then inserted into the context to answer the question.

2. ReAct RAG Yao et al. (2023). The method incorporates Chain-of-Thought (CoT) prompting in RAG agent tasks with the format of a Thought-Action-Observation loop. The main difference between the text-based and visual-based variants lies in the retrieval corpus of the search engine and the modality of the information inserted.

3. Search-R1 Jin et al. (2025). The method introduces multi-turn reasoning RL into text RAG. We reproduced it within our framework, which includes multi-turn interactions and rule-based rewards.

4. Search-R1-VL. This is a vision-based baseline implemented in our framework based on Search-R1. We used the same reward and post-processing methods and trained models from a cold start with the same dataset as VRAG-RL.

G Hyperparameters

The detailed hyperparameters we use during training are shown in Table 4 and Table 5. We employ identical hyperparameters for different models.

Table 4: Key hyperparameters for SFT.

| Name | Value |
|---|---|
| Finetuning type | Full |
| Freeze vision tower | True |
| Freeze multi-modal projector | True |
| Freeze language model | False |
| Cutoff len | 16384 |
| Epochs | 3 |
| Batch size | 16 |
| Gradient accumulation steps | 2 |
| Learning rate | 1.0e-5 |
| LR scheduler type | cosine |
| Warmup ratio | 0.1 |

Table 5: Key hyperparameters for RL.

| Name | Value |
|---|---|
| Number of agent groups | 5 |
| Warmup steps ratio | 0.285 |
| Mini batch size | 64 |
| Micro batch size per GPU | 2 |
| Learning rate (Actor) | 1.0e-6 |
| KL loss coefficient | 0.01 (optional) |
| Tensor model parallel size | 4 |
| Total epochs | 1 |
| Max prompt length | 8192 |
| Max response length | 2048 |
| GPU memory utilization | 0.6 |

H Case Study

In Figures 7 and 8, we list the trajectories of our VRAG-RL to illustrate how our model reasons and interacts with the environment. These cases highlight two challenges in visually rich information RAG: (1) accurately retrieving relevant images, and (2) the reference information often requires higher-resolution perception. In Figure 7,
|
https://arxiv.org/abs/2505.22019v1
|
we can observe that the model demonstrates reflective capability and eventually identifies subtle clues in the relevant images. Moreover, as shown in Figure 8, the model engages in visual perception actions only when required, showcasing human-like reasoning instead of simply replicating patterns from its training data.

I Prompts

In this section, we illustrate all the prompts used in our paper. Part of our prompts are taken from Search-R1 Jin et al. (2025).

I.1 Vanilla RAG Prompt
See Figure 11.

I.2 Search-R1 Prompt
See Figure 10.

I.3 ReAct RAG Prompt
ReAct RAG uses the same prompt as Search-R1, as shown in Figure 10.

I.4 VRAG-RL Prompt
See Figure 9.

I.5 Model-based Reward Prompt
See Figure 12.

Figure 7: Case 1 for VRAG-RL (trajectory for the question of how many more Dugwells than Shallow Tubewells appear in India's Energy Divide map prepared by IWMI based on the Minor Irrigation Census; the model searches, crops and zooms into the table region listing Dugwells at 9.20 million and Shallow Tubewells at 9.10 million, and answers 0.10 million more Dugwells).

Figure 8: Case 2 for VRAG-RL (trajectory for the question of which H2 antagonist has a lower bioavailability, the drug that inhibits several CYP-450 isoenzymes and reduces hepatic blood flow or Famotidine; the model identifies Cimetidine as that drug, retrieves the bioavailability table, and answers Famotidine at 40% versus 80% for Cimetidine).

VRAG-RL Prompt.
System Prompt: Answer the given question. You must conduct reasoning inside <think> and </think> first every time you get new information. After reasoning, if you find you lack some knowledge, you can call a search engine by <search> query </search> and the user will return the searched results. Every time you retrieve an image, you have the option to crop it to obtain a clearer view; the format for coordinates is <bbox>[x1, y1, x2, y2]</bbox>. You can search as many times as you want. If you find no further external knowledge is needed, you can directly provide the answer inside <answer> and </answer>, without detailed illustrations. For example, <answer> Beijing </answer>.
User Prompt: Query: {Query Description}
Figure 9: Prompt of VRAG-RL.

Search-R1(-VL) Prompt.
System Prompt: Answer the given question. You must conduct reasoning inside <think> and </think> first every time you get new information. After reasoning, if you find you lack some knowledge, you can call a search engine by <search> query </search> and the user will return the searched results. You can search as many times as you want. If you find no further external knowledge is needed, you can directly provide the answer inside <answer> and </answer>, without detailed illustrations. For example, <answer> Beijing </answer>.
User Prompt: Query: {Query Description}
Figure 10: Prompt of Search-R1(-VL) and ReAct RAG.

Vanilla RAG Prompt.
System Prompt: Answer the given question. You must conduct reasoning inside <think> and </think> first every time you get new information. After reasoning, you should directly provide the answer inside <answer> and </answer>, without detailed illustrations. For example, <answer> Beijing </answer>.
User Prompt: Query: {Query Description}
Reference: {Retrieved Images / Text Tokens}
Figure 11: Prompt of Vanilla RAG.

Reward Model Prompt.
System Prompt: Character Introduction. You are an expert evaluation system for a question answering chatbot. You are given the following information:
- the query
- a generated answer
- a reference answer
Your task is to evaluate the correctness of the generated answer.
Response Format. Your response should be formatted as follows: <judge>True or False</judge> If the generated answer
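As a rough, hypothetical sketch of how the judge-based reward described in Appendix A and Figure 12 (truncated above in the source) could be wired into training, the snippet below queries a judge model such as Qwen2.5-7B-Instruct through an OpenAI-compatible endpoint and maps its <judge>True/False</judge> verdict to a binary reward. The endpoint URL, model name, and JUDGE_PROMPT template are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a model-based binary reward. The endpoint URL,
# model name, and JUDGE_PROMPT wording below are illustrative assumptions.
import re
from openai import OpenAI

JUDGE_PROMPT = (
    "You are an expert evaluation system for a question answering chatbot.\n"
    "Query: {query}\nReference answer: {reference}\nGenerated answer: {generated}\n"
    "Reply with <judge>True</judge> if the generated answer is correct, "
    "otherwise reply with <judge>False</judge>."
)

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def model_based_reward(query: str, reference: str, generated: str) -> float:
    """Return 1.0 if the judge model deems the generated answer correct, else 0.0."""
    prompt = JUDGE_PROMPT.format(query=query, reference=reference, generated=generated)
    completion = client.chat.completions.create(
        model="Qwen2.5-7B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    text = completion.choices[0].message.content or ""
    match = re.search(r"<judge>\s*(True|False)\s*</judge>", text, re.IGNORECASE)
    # Unparseable judgements are conservatively treated as incorrect (reward 0.0).
    return 1.0 if match and match.group(1).lower() == "true" else 0.0
```

A binary signal of this kind can then be used wherever a rule-based EM or recall reward would otherwise be computed.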
|
https://arxiv.org/abs/2505.22019v1
|
GL-PGENet: A Parameterized Generation Framework for Robust Document Image Enhancement

Zhihong Tang, Yang Li* (QQ Browser R&D Team, Tencent CSIG) harryzhtang@tencent.com
*Corresponding author (Project Leader)

Abstract

Document Image Enhancement (DIE) serves as a critical component in Document AI systems, where its performance substantially determines the effectiveness of downstream tasks. To address the limitations of existing methods confined to single-degradation restoration or grayscale image processing, we present the Global with Local Parametric Generation Enhancement Network (GL-PGENet), a novel architecture designed for multi-degraded color document images, ensuring both efficiency and robustness in real-world scenarios. Our solution incorporates three key innovations: First, a hierarchical enhancement framework that integrates global appearance correction with local refinement, enabling coarse-to-fine quality improvement. Second, a Dual-Branch Local-Refine Network with parametric generation mechanisms that replaces conventional direct prediction, producing enhanced outputs through learned intermediate parametric representations rather than pixel-wise mapping. This approach enhances local consistency while improving model generalization. Finally, a modified NestUNet architecture incorporating dense blocks to effectively fuse low-level pixel features and high-level semantic features, specifically adapted for document image characteristics. In addition, to enhance generalization performance, we adopt a two-stage training strategy: large-scale pretraining on a synthetic dataset of 500,000+ samples followed by task-specific fine-tuning. Extensive experiments demonstrate the superiority of GL-PGENet, achieving state-of-the-art SSIM scores of 0.7721 on DocUNet and 0.9480 on RealDAE. The model also exhibits remarkable cross-domain adaptability and maintains computational efficiency for high-resolution images without performance degradation, confirming its practical utility in real-world scenarios.

1 Introduction

The proliferation of smartphone and tablet technologies has revolutionized the digitization of printed materials, encompassing diverse media ranging from bound publications to ephemeral handwritten notes. Unlike conventional document scanning systems, the quality of document images captured by handheld mobile devices is often adversely affected by environmental and technical factors, including various lighting conditions, camera angles, and paper quality. These factors may cause the degradation of document images (e.g., shadows, blurs, wrinkles, and color casts), which negatively affects the readability and usability of digitized documents (Zamora-Martínez, España-Boquera, and Castro-Bleda 2007). Moreover, such deficiencies can also hinder the performance of document AI systems, particularly optical character recognition (OCR), which rely on high-quality images to produce accurate results (Souibgui and Kessentini 2020; Das et al. 2019). To address these challenges, researchers have developed various techniques for DIE (Document Image Enhancement).

Early document image enhancement (DIE) methodologies predominantly employed conventional image processing techniques, including histogram equalization and contrast stretching (Xiong et al. 2018).
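To make this classical style of enhancement concrete, the following minimal sketch applies local histogram equalization (CLAHE) and simple percentile-based contrast stretching with OpenCV; the function name, file paths, and parameter values are illustrative assumptions and are not part of GL-PGENet.

```python
# Minimal sketch of classical document enhancement: CLAHE-based histogram
# equalization followed by percentile contrast stretching. Parameter values
# (clipLimit, tileGridSize, percentiles) are illustrative assumptions.
import cv2
import numpy as np

def classical_enhance(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Local histogram equalization evens out uneven illumination and shadows.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # Contrast stretching: map the 2nd..98th percentile range to [0, 255].
    lo, hi = np.percentile(equalized, (2, 98))
    stretched = np.clip((equalized - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return stretched.astype(np.uint8)

# Example usage (hypothetical file names):
# enhanced = classical_enhance("degraded_document.png")
# cv2.imwrite("enhanced_document.png", enhanced)
```

Such per-pixel intensity remapping has no notion of document content or degradation type, which is precisely the limitation discussed next.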
While effective for basic quality improvements, these approaches demonstrate limited capability in handling complex multi-degradation scenarios prevalent in real-world color document images. The advent of deep learning has catalyzed significant advancements in DIE through convolutional neural networks (CNNs) that learn nonlinear mappings between degraded inputs and enhanced outputs (Souibgui and Kessentini 2020; Zhang et al. 2023b; Feng et
|
https://arxiv.org/abs/2505.22021v1
|
al. 2021). Contemporary image restoration research predominantly focuses on specialized models targeting individual degradation types, exemplified by BEDSR-Net (Lin, Chen, and Chuang 2020) for shadow removal and DeepDeblur (Mei et al. 2019) for deblurring. Recent efforts to address multi-degradation challenges in color document images, such as the two-stage GCDRNet (Zhang et al. 2023b) and transformer-based DocStormer (Liu et al. 2023), reveal two critical limitations: (1) high computational complexity that impedes high-resolution processing, and (2) suboptimal enhancement quality for real-world color document images. While unsupervised approaches like UDoc-GAN (Wang et al. 2022) attempt to mitigate data scarcity issues, their performance on color document image enhancement remains unsatisfactory. These challenges motivate our investigation into efficient multi-degradation enhancement solutions.

Figure 1: Overview of the proposed GL-PGENet framework. The architecture follows a coarse-to-fine two-stage paradigm. (a) The GPPNet first estimates global enhancement parameters for brightness, contrast, and saturation transformations to generate globally enhanced images with illumination consistency. (b) The DB-LRNet refines detail features: one branch employs convolutional operations for image smoothing, while the other utilizes a dense block-integrated NestUNet to learn linear transformation parameters. The final enhanced image is synthesized through the fusion of dual-branch outputs, achieving a balance between high-frequency detail preservation and local contextual consistency adaptation in color document image enhancement.

To address these critical gaps, we present GL-PGENet, a novel framework that achieves state-of-the-art performance in multi-degradation color document enhancement while maintaining computational efficiency. Our principal contributions include:

• A novel two-stage enhancement architecture featuring a streamlined global processing stage with a parametric regression mechanism, significantly reducing computational overhead while preserving global consistency.
• A Dual-Branch Local-Refine Network that innovatively predicts linear transformation parameters for final enhancement, departing from conventional direct-prediction paradigms.
• A modified nested UNet (Zhou et al. 2018) with dense blocks (Huang et al. 2017) inside the Dual-Branch Local-Refine Network, effectively preserving high-frequency details while enhancing semantic feature extraction.
• Demonstrated computational scalability, achieving about 75% inference time reduction for high-resolution images without model retraining or significant quality degradation.

The proposed GL-PGENet advances practical deployment of document enhancement systems through its balanced optimization of enhancement quality and computational efficiency. Our framework addresses the critical industry need for real-time processing of high-resolution color documents under complex degradation conditions.

2 Related Works

2.1 Document Image Enhancement

Document image enhancement plays an important role in document analysis systems, forming the foundational stage of intelligent document processing pipelines to improve both readability and visual quality (Liu et al. 2023).
Before the emergence of deep learning, traditional document enhancement methodologies primarily relied on thresholding-based approaches, achieving foreground-background separation through intensity differentiation for image quality improvement. The FAIR framework (Lelore and Bouchara 2013) implements a double-threshold edge detection algorithm designed to balance detail preservation and noise suppression. Alternative methodologies include a fuzzy expert system integrated with adaptive pixel-counting mechanisms for global threshold
|
https://arxiv.org/abs/2505.22021v1
|
optimization (Annabestani and Saadatmand-Tarzjan 2019). And a spatial segmentation approach proposed by Xiong et al (Xiong et al. 2018) par- titions the document image into non-overlapping w×w sub-regions for localized contrast enhancement, followed by SVM-based threshold selection. However, empirical analy- ses reveal that these methods are highly sensitive to the doc- ument condition. The evolution of deep learning has revolutionized doc- ument analysis research. In particularly, Generative Ad- versarial Networks (GANs) (Goodfellow et al. 2020) have demonstrated remarkable success in image generation and translation tasks (Isola et al. 2017; Zhang et al. 2023b). Initial research focused on single-degradation scenarios. For shadow removal, BEDSR-Net (Lin, Chen, and Chuang 2020) presents an innovative deep learning framework in- tegrating a dedicated background estimation module that significantly enhances visual quality and text legibility through global background color. Zhang et al. (Zhang et al. 2023c) further advance this direction with a color-aware background extraction network (CBENet) and background- guided shadow removal network (BGShadowNet). Fre- quency domain decomposition approaches, such as those proposed by Li et al. (Li et al. 2023), enable effective learn- ing of both low-level pixel features and high-level seman- tic features. In the document image deblurring, DeepDe- blur (Mei et al. 2019) employs a 20-layer network with Sequential Highway Connections(SHC) to address conver- gence challenges. Recent work by DocDiff (Yang et al. 2023) presents the first diffusion-based framework for docu- ment deblurring, combining coarse low-frequency restora- tion with high-frequency residual refinement. DE-GAN (Souibgui and Kessentini 2022) demonstrates an end-to-end framework using conditional GANs for multi-task restora- tion including enhancement and watermark removal. How- ever, these approaches demonstrate limited effectiveness in real-world scenarios characterized by concurrent degrada- tions. Recent efforts addressing multiple degradations in color documents remain limited. Both DocProj (Li et al. 2019) and DocTr (Feng et al. 2021) incorporate enhancement modules after geometric correction, but their effectiveness is con- strained by synthetic training data with insufficient scale. DocStormer (Liu et al. 2023) introduces a Perceive-then- Restore paradigm with reinforced transformer blocks restor- ing multi-degration color document images to pristine PDF quality. Due to constraints in computational resources, the processing of large resolution images poses significant chal- lenges. DocRes (Zhang et al. 2024) proposes a novel multi- task restoration transformer (Vaswani 2017) framework by taking different condition input, which is called Dynamic Task-Specific Prompt (DTSPrompt), a novel visual prompt approach that provides distinct prior features for different tasks. It’s also implemented for the DIE task, but faces simi- lar resource constraints. GCDRNet (Zhang et al. 2023b) ad- vances multi-degradation color document image through a two stage architecture: GC-Net performs global contextual modeling and DR-Net conducts multi-scale detail restora- tion, integrated with a comprehensive multi-loss training strategy. While UDoc-GAN (Wang et al. 2022) introduces unpaired training through modified cycle consistency con- straints, though with limited success on color documents. 
Current DIE methods exhibit critical limitations in preserving essential information when handling complex multi-degradation scenarios. This challenge proves particularly acute in color document enhancement, where chromatic information carries significant semantic content. These limitations highlight the need for improved DIE algorithms to better
|
https://arxiv.org/abs/2505.22021v1
|
address multi-degradation in real-world scenarios while maintaining critical document information. 2.2 Generative Models Generative Adversarial Networks (GANs) (Goodfellow et al. 2020) represent a fundamental advancement in genera- tive modeling through implicit density estimation. The GAN framework employs a dual-network architecture, compris- ing a generator and a discriminator, that is adversarially op- timized based on game-theoretic principles. Initial imple- mentations faced notable limitations in training stability and mode collapse phenomena, constraining output diversity and quality. To address these limitations, Wasserstein GANs (Ar- jovsky, Chintala, and Bottou 2017) were proposed, introduc- ing the Wasserstein distance as an alternative loss function. Building on these advancements (Brock, Donahue, and Si- monyan 2018; Gulrajani et al. 2017), subsequent research has significantly expanded the application of GANs across various domains, including image synthesis (Mirza 2014; Karras, Laine, and Aila 2019), video generation (Tulyakov et al. 2018), and domain adaptation (Souibgui and Kessen- tini 2022; Isola et al. 2017). The recent emergence of diffusion models has signifi- cantly advanced generative model research. The Denoising Diffusion Probabilistic Model (Ho, Jain, and Abbeel 2020) establishes a theoretical foundation by decomposing the im- age formation process through sequential denoising autoen- coders guided by principles from non-quilibrium thermo- dynamics, achieving remarkable results in high-quality im- age synthesis. The framework was extended by Dhariwal and Nichol (Dhariwal and Nichol 2021) through classifier- guided sampling with Denoising Diffusion Implicit Models (Song, Meng, and Ermon 2020). Despite these innovations demonstrate significant potential, conventional DM imple- mentations face substantial computational challenges. To al- leviate this limitation, Latent Diffusion Model (Rombach et al. 2022) proposes an efficient adaptation by operating in the compressed latent space of pretrained autoencoders. This ar- chitectural innovation not only reduces computational over- head but also enhances model generalization capabilities. The evolution of diffusion architectures continues with Dif- fusion Transformers (Peebles and Xie 2023), which replaces traditional U-Net structures. However, the inherent iterative sampling process remains a persistent challenge for real- world applications. 2.3 Natural Image Enhancement Natural image enhancement can be divided into white-box and black-box methods. Here we mainly introduce white- box algorithm. For more information, please refer to the pa- per (Qi et al. 2021). The Harmonizer (Ke et al. 2022) formu- lates image enhancement as a parameter regression task for fundamental image filters, where neural networks predict in- terpretable transformation parameters (e.g., brightness, con- trast) that are subsequently applied through image process- ing operations. The RSFNet (Ouyang et al. 2023) is a white- box framework that employs parallel region-specific color filters to achieve fine-grained enhancements, mirroring pro- fessional colorists’ divide-and-conquer methodology. Fur- thermore, Bianco et al. (Bianco et al. 
2019) propose an efficient two-stage architecture that decouples parameter prediction from enhancement execution: transformation parameters are initially predicted from downsampled inputs, followed by full-resolution color adjustments, thereby achieving professional-grade retouching quality with optimized computational efficiency.

2.4 Datasets

Real datasets. Existing document enhancement datasets predominantly focus on single degradation types, such as the RDD dataset (Zhang et al. 2023c) for shadow removal and the TDD dataset (Souibgui and Kessentini 2022) for deblurring tasks. However, it
|
https://arxiv.org/abs/2505.22021v1
|
is common that multiple degradations often occur together in real-world scenarios. The DocUNet dataset (Ma et al. 2018) is a valuable resource for DIE, featuring a variety of degradations such as shadows, wrinkles, and bleed-through. But this dataset is limited by two key constraints: its small size (130 images) proves insufficient for model training, and its monolingual English documents restrict cross-linguistic applicability. The RealDAE dataset represents a high-quality collection of image pairs acquired through physical capture and professional post-processing. Specifically, the dataset was constructed by engaging expert photographers to meticulously retouch the raw degraded images using Adobe Photoshop (https://www.adobe.com) (Zhang et al. 2023b). Meanwhile, we incorporate an additional dataset from Baidu AI Studio (https://aistudio.baidu.com/datasetdetail/126294) for generalization evaluation. Notably, this collection extends beyond the aforementioned degradation patterns by incorporating moire degradation, while primarily consisting of Chinese-language document samples. In particular, the dataset encompasses a diverse range of degradation types, offering comprehensive coverage of real-world document degradation scenarios. The availability of real-world datasets for this task remains significantly limited. To address this data scarcity, we further augment training through binary document datasets from the (H)-DIBCO series (Gatos, Ntirogiannis, and Pratikakis 2009; Ntirogiannis, Gatos, and Pratikakis 2014; Pratikakis, Gatos, and Ntirogiannis 2010; Pratikakis et al. 2017).

Synthetic datasets. To address the aforementioned limitations of real-world datasets, researchers (Lin, Chen, and Chuang 2020; Das et al. 2020) have employed rendering engines such as Blender (https://www.blender.org) to generate synthetic datasets. These datasets offer enhanced diversity, larger scale, and more accurate ground truths, and can simulate complex lighting conditions, paper shapes, environmental backgrounds, and camera positions. Despite recent advancements, the diversity and authenticity of synthetic datasets require further improvement. While Doc3D (Das et al. 2019) and Doc3dShade (Das et al. 2020) demonstrate enhanced authenticity and degradation diversity compared to earlier benchmarks like DocProj (Li et al. 2019), they are primarily optimized for predicting low-frequency components. Moreover, the complex synthetic pipelines employed in these datasets pose significant implementation challenges, particularly in case-specific scenarios. To address these limitations, we implement a streamlined degradation synthesis pipeline using the Augraphy framework (https://github.com/sparkfish/augraphy) during pre-training. Empirical analysis reveals a non-monotonic correlation between degradation intensity and model efficacy, with optimal performance achieved at intermediate intensity levels.

Figure 2: Two-Stage Image Enhancement Process Visualization. (a) Original degraded images; (b) ground-truth reference images; (c) global enhancement results $I_g$ from GPPNet with optimized brightness, contrast, and saturation parameters; (d) final refined outputs $I_e$ generated by DB-LRNet, demonstrating preserved high-frequency details alongside illumination consistency.

3 Proposed Approach

3.1 Overview

We propose GL-PGENet, a novel end-to-end deep neural network framework designed to address multiple-degradation restoration in color document images. As depicted in Fig.
1, the architecture follows a coarse-to-fine two-stage paradigm: (1) global perception parameter estimation through the Global Perception Parameter Network (GPPNet), followed by (2) local feature refinement via the Dual-Branch Local-Refine Network (DB-LRNet). Initially, a parameter regression model is employed to estimate enhancement coefficients for brightness, contrast, and saturation transformations (Ke
|
https://arxiv.org/abs/2505.22021v1
|
et al. 2022), enabling efficient image-level adaptation through parallel application of these fundamental operations. This lightweight parameterization scheme ensures rapid model convergence while producing globally enhanced images $I_g$ with improved illumination consistency. Subsequently, the DB-LRNet employs a dual-branch network for local feature enhancement: one branch is aimed at smoothing the image via convolutional networks, while the other learns linear transformations through a dense-block-integrated NestUNet (Zhou et al. 2018; Huang et al. 2017). The enhanced image $I_e$ is synthesized by fusing the dual outputs, preserving high-frequency details which are critical for document analysis while reinforcing local contextual consistency through adaptive parameter learning.

Figure 3: Qualitative Comparison with State-of-the-Art DIE Methods. (a) Original degraded images; (b) ground-truth reference images; (c) DocProj (Li et al. 2019); (d) DocRes (Zhang et al. 2024); (e) DocTr (Feng et al. 2021); (f) GCDRNet (Zhang et al. 2023b); (g) proposed GL-PGENet. Quantitative evaluation demonstrates the superior performance of our method in both structural preservation (particularly document detail enhancement and semantic legibility as shown in Row 5) and color processing (effective restoration and balanced color reproduction observed in Rows 3-4). Comparative results indicate that GL-PGENet achieves comprehensive improvements over existing benchmark methods across multiple perceptual criteria.

3.2 Global Perception Parameter Network

Building upon the Lambertian reflectance assumption in GCDRNet (Zhang et al. 2023b), where source images can be decomposed as $I = R \otimes S$ (with $R$ denoting reflectance maps and $S$ representing shadow maps), we observe that the original GC-Net's UNet architecture for pixel-wise shadow map estimation incurs substantial computational overhead. Drawing inspiration from parameter-efficient enhancement frameworks (Ke et al. 2022), we propose GPPNet, which predicts fundamental image operation parameters for size-agnostic processing.

Our network architecture employs three parallel processing branches for brightness, contrast, and saturation adjustments, followed by feature concatenation. This design differs from conventional cascaded approaches to ensure training stability. The implementation adopts a lightweight 15-layer convolutional backbone operating on 224×224 resolution inputs, maintaining computational efficiency while preserving enhancement quality. As demonstrated in Fig. 2, GPPNet produces visually compatible results $I_g$ with improved brightness, contrast, and saturation, improving computational efficiency compared to pixel-wise estimation methods.

3.3 Dual-Branch Local-Refine Network

Document image enhancement presents dual challenges: preserving fine-grained features while ensuring semantic awareness. DB-LRNet addresses this through a modified Nest UNet (Zhou et al. 2018) integrated with dense blocks (Huang et al. 2017), specifically designed to preserve critical edge features and texture patterns during semantic processing. To optimize computational efficiency, we introduce a pixel-unshuffle (Shi et al. 2016) operation at the input stage, achieving 2× spatial reduction through learned sub-pixel reorganization while maintaining information completeness. This preprocessing strategy preserves original content fidelity with significantly reduced computational complexity. As illustrated in Fig.
1, our framework introduces a novel dual-branch paradigm that diverges from conventional generative approaches where enhanced images $I_e$ are directly synthesized through a network $f(\cdot)$ as in Eq. 1. The first branch employs three convolutional layers for smoothing the image, while the second branch predicts linear transformation parameters through the modified Nest UNet (Zhou et
|
https://arxiv.org/abs/2505.22021v1
|
al. 2018) with dense blocks. The enhanced output is formulated through Eqs. 2-4, combining these two branches. This architecture provides two principal advantages: (1) enhanced local consistency, achieved by constraining the linear transformation parameters through Eq. 6, which generates smoother output through parameter regularization, and (2) improved computational efficiency for high-resolution processing with negligible quality degradation. These technical advancements, including implementation details and comprehensive evaluations, will be elaborated in Sec. 4.6.

$$I_L = f(I_O) \quad (1)$$
$$\mathrm{feat} = f(I_O) \quad (2)$$
$$\alpha = g_1(\mathrm{feat}), \quad \beta = g_2(\mathrm{feat}) \quad (3)$$
$$I_L = \alpha \, h(I_O) + \beta \quad (4)$$

4 Experiments

4.1 Implementation details

The proposed framework is implemented in PyTorch (Paszke et al. 2017) and trained on a single NVIDIA A100-40G GPU with a batch size of 16. To ensure training stability, we adopt a two-stage optimization strategy: (a) GPPNet undergoes independent pre-training to establish reliable $I_g$ estimation before initiating joint optimization with the DB-LRNet. (b) During joint training, gradient backpropagation to GPPNet remains disabled to preserve parameter stability. The Adam optimizer with initial learning rate 0.0001 ($\beta_1 = 0.9$, $\beta_2 = 0.99$) drives the optimization process. The architecture accepts dual-resolution inputs: GPPNet processes 224×224 images while DB-LRNet operates at 512×512 resolution.

During pre-training, both networks are trained on a dataset of 500,000+ synthetic samples by minimizing a composite loss function which integrates adversarial loss (Goodfellow et al. 2020), L1 loss, structural similarity (SSIM) (Wang et al. 2004), and total variation (TV) loss (Aly and Dubois 2005):

$$\mathcal{L} = \lambda_1 \mathcal{L}_1 + \lambda_2 \mathcal{L}_{SSIM} + \lambda_3 \mathcal{L}_{TV} + \lambda_4 \mathcal{L}_{GAN} + \lambda_5 \mathcal{L}_r \quad (5)$$

Here we set $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, and $\lambda_5$ to 1, 0.5, 0.01, 0.05, and 0.01, respectively. Notably, the adversarial loss is excluded during fine-tuning to emphasize perceptual quality preservation, while introducing an additional regularization term:

$$\mathcal{L}_r = \|\nabla\alpha\|_2 + \|\nabla\beta\|_2 \quad (6)$$

This multi-stage training paradigm significantly enhances model performance on document image enhancement tasks. All reported results adhere to the standardized evaluation protocol using the RealDAE test set unless otherwise specified.

4.2 Objective Optimization

Our comprehensive experimental analysis validates the critical importance of Structural Similarity (SSIM) Loss (Wang et al. 2004) for maintaining fine-grained visual details in document image processing. The method demonstrates consistent effectiveness across tasks: super-resolution reconstruction, shadow removal, and image dewarping. To further enhance local structural consistency, we integrate Total Variation (TV) regularization (Aly and Dubois 2005) as a complementary constraint. As evidenced by the systematic ablation study presented in Tab. 1, the progressive integration of SSIM loss and TV regularization yields measurable improvements in both SSIM and PSNR metrics.

Figure 4: Visual Comparison: Baseline vs. Our Efficient Method. Visual comparison of inference approaches with the baseline method (top row) versus our efficient implementation (bottom row), demonstrating negligible quality degradation despite 3× acceleration. Quantitative analysis reveals only a 1.06% relative SSIM reduction (0.9480 vs. 0.9379) while maintaining superior performance over prior art.

Table 1: Ablation study on objective functions for DIE.
Figure 4: Visual Comparison: Baseline vs. Our Efficient Method. Visual comparison of inference approaches with the baseline method (top row) versus our efficient implementation (bottom row), demonstrating negligible quality degradation despite 3× acceleration. Quantitative analysis reveals only 1.06% relative SSIM reduction (0.9480 vs. 0.9379) while maintaining superior performance over prior arts.

Table 1: Ablation study on objective functions for DIE.

SSIM Loss | TV Loss | SSIM/PSNR
–         | –       | 0.9077/22.78
✓         | –       | 0.9193/23.56
✓         | ✓       | 0.9197/23.62

4.3 Architectural Exploration of DB-LRNet

We conducted systematic investigations into neural architecture design for the DIE task. While RRDBNet (Wang et al. 2021), employing a columnar architecture, achieves state-of-the-art performance in super-resolution of natural images, our frequency domain analysis (Fig. 5)
reveals fundamental limitations in its direct application to document image enhancement. Document images exhibit richer high-frequency components compared to natural images, necessitating specialized architectural considerations.

Our analysis identifies two critical design requirements: (1) enhanced feature reuse mechanisms for high-frequency detail preservation, and (2) sufficient receptive fields for robustly distinguishing foreground from background. To address these requirements, we introduce a modified nested UNet (Zhou et al. 2018) with dense blocks (Huang et al. 2017), establishing a dense-in-dense structure. This design is motivated by two key insights: First, the dense connections facilitate improved gradient flow and feature propagation, reducing high-frequency information loss. Second, the multi-scale encoder-decoder structure enables contextual feature aggregation, which is crucial for handling complex document backgrounds.

Despite these improvements, conventional end-to-end generation approaches suffer from two inherent limitations: (1) compromised generalization performance on out-of-distribution samples, characterized by foreground degradation and regional artifacts; (2) unstable parameter convergence during gradient-based optimization. To address these challenges, we propose a dual-branch parametric transformation framework that replaces direct pixel prediction with smoothness-constrained coefficient learning. As detailed in Sec. 3.3, DB-LRNet integrates an image smoothing branch and a linear transformation branch, generating enhanced outputs through the learned parameters (α, β) and the smoothed image.

As quantified in Tab. 2, our nested architecture achieves comparable performance to RRDBNet while significantly improving computational efficiency. The proposed NestUNet-Denseblock variant reduces parameter count by 51% and computational complexity by 79% compared to RRDBNet, establishing a new efficiency-performance frontier. Moreover, the proposed DB-LRNet achieves significant performance gains with minimal computational overhead, elevating SSIM from 0.9193 to 0.9385 while maintaining comparable efficiency (Tab. 2). At the same time, this architecture demonstrates superior scalability for high-resolution document processing without performance degradation, as detailed in Sec. 4.6.
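A minimal sketch of this dual-branch computation (Eq. 2-4) and of the smoothness regularizer in Eq. 6 is given below. It is illustrative only: the three-layer smoothing branch mirrors the description above, but the small convolutional parameter predictor is a stand-in for the actual NestUNet-with-dense-blocks backbone, and layer widths are arbitrary.

```python
import torch
import torch.nn as nn

class DualBranchSketch(nn.Module):
    """Toy version of the dual-branch idea: a smoothing branch h(.) and a
    parameter branch predicting per-pixel alpha/beta maps (Eq. 2-4)."""

    def __init__(self, ch=3, hidden=32):
        super().__init__()
        # Branch 1: lightweight image smoothing, h(I_O)
        self.smooth = nn.Sequential(
            nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )
        # Branch 2: stand-in parameter predictor producing alpha and beta maps
        self.params = nn.Sequential(
            nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * ch, 3, padding=1),
        )

    def forward(self, x):
        smooth = self.smooth(x)                       # h(I_O)
        alpha, beta = self.params(x).chunk(2, dim=1)  # Eq. 3
        out = alpha * smooth + beta                   # Eq. 4
        return out, alpha, beta

def smoothness_reg(alpha, beta):
    """Eq. 6: penalize spatial gradients of the coefficient maps."""
    def grad_sq(t):
        dh = t[..., 1:, :] - t[..., :-1, :]
        dw = t[..., :, 1:] - t[..., :, :-1]
        return (dh ** 2).mean() + (dw ** 2).mean()
    return grad_sq(alpha) + grad_sq(beta)
```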
4.4 Two-stage Enhancement Paradigm

Our architectural refinements demonstrate substantial improvements in enhancement quality and cross-domain generalization. Nevertheless, achieving comprehensive consistency and robust generalization across heterogeneous document images remains challenging.

Figure 5: Frequency Analysis of Natural Images and Document Images (panels: document image, document frequency, nature image, nature frequency). There are obvious high-frequency components in the horizontal and vertical directions in the document image, whereas the energy in the nature image is mainly concentrated in the low-frequency part.

Inspired by GCDRNet (Zhang et al. 2023b), we adopt a two-stage enhancement framework that systematically addresses document image enhancement. Unlike their UNet-based image transformation approach, our architecture implements a parametric regression network to optimize global enhancement through learned brightness, contrast, and saturation parameters. This global correction from GPPNet is subsequently refined by our proposed DB-LRNet (Sec. 3.3) following a coarse-to-fine optimization strategy.

Two critical implementation issues merit discussion: 1) feature integration of the three parametric operations in GPPNet, and 2) the optimal combination of global and local enhancement stages.
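To make the global stage concrete before turning to those comparisons, the following sketch applies scalar brightness, contrast, and saturation parameters of the kind GPPNet regresses to an input image. The exact parameterization used by GPPNet is not spelled out here, so this follows common definitions and should be read as an assumption-laden illustration, not the paper's implementation.

```python
import torch

def apply_global_params(img, brightness, contrast, saturation):
    """Apply scalar brightness/contrast/saturation adjustments to a (B, 3, H, W)
    image in [0, 1]. One common parameterization; GPPNet's actual formulation
    may differ. Each parameter tensor has shape (B, 1, 1, 1)."""
    out = img * brightness                              # brightness gain
    mean = out.mean(dim=(2, 3), keepdim=True)
    out = (out - mean) * contrast + mean                # contrast about the mean
    gray = out.mean(dim=1, keepdim=True)                # luminance proxy
    out = (out - gray) * saturation + gray              # saturation about gray
    return out.clamp(0.0, 1.0)
```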
Tab. 3 presents systematic comparisons of different architectural configurations. For GPPNet's operation fusion (Tab. 3a), concatenation achieves superior performance compared to cascading and additive fusion. Regarding stage combination (Tab. 3b), the Global + Local cascade paradigm yields optimal results, outperforming both Local + Global cascading and standalone global processing. These systematic evaluations validate our design choices for operation fusion and multi-stage coordination. The final GL-PGENet architecture, illustrated in Fig. 1, integrates these optimal configurations.

Figure 6: Architecture of DB-LRNet's efficient inference framework. The parameter prediction branch employs resolution reduction (0.5×) followed by coefficient map upsampling (2×), achieving 75% computational complexity reduction while maintaining performance parity. This optimization requires no additional training.

Table 2: Comparative Analysis of Network Architectures on Image Quality and Computational Efficiency. The RRDBNet11 architecture is configured with 11 residual blocks, while the proposed NestUNet variant employs two distinct building blocks: VGG-style (Simonyan 2014) and Dense Block (Huang et al. 2017) modules. Notably, batch normalization layers are intentionally omitted from all architectures.

Architecture   | SSIM/PSNR    | Params(M) ↓ | GFLOPs ↓
RRDBNet11      | 0.9210/22.02 | 8.09        | 38.25
NestUNet-VGG   | 0.9145/23.13 | 9.21        | 3.71
NestUNet-Dense | 0.9193/23.56 | 3.99        | 7.93
DB-LRNet       | 0.9385/23.99 | 4.05        | 8.12
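The efficient inference path summarized in Fig. 6 and detailed in Sec. 4.6 amounts to predicting the coefficient maps at reduced resolution and upsampling them before the linear transform. The sketch below is a hedged illustration of that idea: `smooth_branch` and `param_branch` are placeholder callables, not the paper's actual modules, and the 0.5× factor follows the figure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def efficient_inference(img, smooth_branch, param_branch, scale=0.5):
    """Run the (expensive) parameter branch at reduced resolution, then
    bilinearly upsample the alpha/beta maps back to the input size before
    applying the linear transform. `param_branch` is assumed to return a
    2C-channel (alpha, beta) map; `smooth_branch` returns the smoothed image."""
    h, w = img.shape[-2:]
    small = F.interpolate(img, scale_factor=scale, mode="bilinear", align_corners=False)
    coeffs = param_branch(small)                                   # low-resolution prediction
    coeffs = F.interpolate(coeffs, size=(h, w), mode="bilinear", align_corners=False)
    alpha, beta = coeffs.chunk(2, dim=1)
    return alpha * smooth_branch(img) + beta                       # full-resolution output
```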
4.5 Comparison with State-of-the-Art Methods

We perform comprehensive evaluations of GL-PGENet against current state-of-the-art approaches on two benchmark datasets: RealDAE (Zhang et al. 2023b) and DocUNet (Ma et al. 2018). The RealDAE dataset, containing diverse real-world degradations, serves as a rigorous benchmark for evaluating robustness under complex practical conditions. For DocUNet evaluation, we adopt the standardized protocol from GCDRNet (Zhang et al. 2023b), using geometrically aligned document images from the paper (Zhang et al. 2023a) as degraded inputs to ensure pixel-wise correspondence with ground-truth images.

Our proposed GL-PGENet achieves state-of-the-art performance on both SSIM and PSNR metrics, as shown in Tab. 4, demonstrating consistent superiority in structural preservation. We argue that SSIM better correlates with human perceptual quality in document restoration tasks due to its sensitivity to structural distortions. On the DocUNet dataset, GL-PGENet obtains an SSIM of 0.7721, outperforming DocRes (Zhang et al. 2024) (0.7598) and GCDRNet (Zhang et al. 2023b) (0.7658). The significant SSIM improvement confirms enhanced capability in maintaining document structure and document readability. Our method also establishes a new state-of-the-art SSIM of 0.9480 while maintaining competitive PSNR performance on the RealDAE dataset.

Figure 7: Cross-domain Visualization Comparison (panels: (a) source, (b) gt, (c) GCDRNet, (d) ours). Qualitative comparison with GCDRNet demonstrates our method's superior performance in cross-domain scenarios. The visual evidence substantiates our approach's exceptional generalization capability, particularly in handling domain shift challenges.

We argue that SSIM provides a more reliable indicator of human perceptual quality in document restoration tasks compared to pixel-wise metrics, owing to its enhanced sensitivity to structural distortions that critically affect document readability. Notably, our experimental results demonstrate superior performance in terms of SSIM compared to existing methods, thereby validating the effectiveness of our approach. Furthermore, qualitative comparisons provide visual evidence supporting this claim. As illustrated in Fig. 3, our qualitative comparisons between GL-PGENet and state-of-the-art methods demonstrate its superior performance in enhancing local consistency while preserving fine-grained features. These comprehensive experimental results validate the effectiveness of our approach in addressing the challenging task of DIE.

Table 3: Comparative Analysis of Architectural Configurations. Global module: GPPNet; Local module: DB-LRNet.
(a) Operation fusion strategies
Fusion Strategy | SSIM/PSNR
Cascading       | 0.5781/12.36
Additive        | 0.8717/18.55
Concatenation   | 0.9111/21.92

(b) Stage integration strategies
Integration Strategy | SSIM/PSNR
Local + Global       | 0.9231/23.44
Global + Local       | 0.9480/24.10
Global               | 0.8823/20.12

4.6 Efficient High-resolution Image Inference

Our architecture demonstrates superior computational efficiency for high-resolution image processing through two complementary mechanisms. As depicted in Fig. 1, GPPNet employs fixed-resolution processing throughout its inference pipeline, ensuring computational stability regardless of input size. DB-LRNet employs a resolution reduction and coefficient map upsampling strategy for parameter prediction, achieving nearly 3× acceleration for high-resolution images (Fig. 6). The lightweight smoothing branch maintains the original resolution with minimal computational overhead.

Fig. 4 provides visual evidence of our method's efficacy, where the baseline approach (top row) and our efficient implementation (bottom row) exhibit negligible performance degradation. Quantitative evaluations on the test set confirm this observation, showing only a 1.06% relative decrease in SSIM (0.9480 vs. 0.9379) while maintaining significant superiority over DocRes (Zhang et al. 2024). This marginal performance trade-off enables substantial computational savings, making our approach particularly suitable for practical high-resolution image processing applications.

4.7 Cross-domain Generalization

The proposed method demonstrates strong cross-domain generalization capabilities, attributable to innovations in pre-training strategies and architectural design. As illustrated in Fig. 7, comparative cross-domain evaluations between GL-PGENet and GCDRNet reveal significant performance advantages of our approach. The benchmark dataset encompasses multiple degradation types including shadows, blurs, and moire degradation, with a training set of 2,684 images and a validation set of 123 images. For rigorous generalization assessment, we exclusively utilize the test set without any fine-tuning or additional training. The experimental results indicate that GL-PGENet exhibits significantly enhanced restoration performance, achieving an SSIM score of 0.8418 and a PSNR value of 18.47, which represents a notable improvement over GCDRNet's corresponding metrics of 0.8266 (SSIM) and 18.10 (PSNR).

Table 4: Quantitative comparison with state-of-the-art methods. Bold values indicate best performance.

(a) Performance comparison on the DocUNet dataset
Venue   | Method                       | SSIM   | PSNR
TOG'19  | DocProj (Li et al. 2019)     | 0.7098 | 14.71
BMVC'20 | Das et al. (Das et al. 2020) | 0.7276 | 16.42
MM'21   | DocTr (Feng et al. 2021)     | 0.7067 | 15.78
MM'22   | UDoc-GAN (Wang et al. 2022)  | 0.6833 | 14.29
TAI'23  | GCDRNet (Zhang et al. 2023b) | 0.7658 | 17.09
CVPR'24 | DocRes (Zhang et al. 2024)   | 0.7598 | 17.60
–       | GL-PGENet (ours)             | 0.7721 | 16.89

(b) Performance comparison on the RealDAE dataset
Venue   | Method                       | SSIM   | PSNR
TOG'19  | DocProj (Li et al. 2019)     | 0.8684 | 19.35
BMVC'20 | Das et al. (Das et al. 2020) | 0.8633 | 19.87
MM'21   | DocTr (Feng et al. 2021)     | 0.7925 | 18.62
MM'22   | UDoc-GAN (Wang et al. 2022)  | 0.7558 | 16.43
TAI'23  | GCDRNet (Zhang et al. 2023b) | 0.9423 | 24.42
CVPR'24 | DocRes (Zhang et al. 2024)   | 0.9219 | 24.65
–       | GL-PGENet (ours)             | 0.9480 | 24.10
This marked per- formance gap substantiates the enhanced generalization ca- pacity of our method across diverse degradation scenarios, suggesting better preservation of structural and photometric consistency in cross-domain dataset. 5 Conclusion This paper introduces GL-PGENet, a novel framework de- signed to address the critical challenge of multi-degradation enhancement in color document images. Our approach makes four key
contributions to Document AI research: 1) A hierarchical coarse-to-fine enhancement architecture that efficiently balances global consistency with local detail preservation; 2) A lightweight Global Perception Parame- ter Network that replaces computationally intensive pixel- wise estimation with efficient parametric regression for brightness, contrast, and saturation adjustments; 3) A Dual- Branch Local-Refine Network employing parameter gener- ation mechanisms rather than direct pixel prediction, sig- nificantly improving generalization capabilities while main- taining local consistency; 4) A modified NestUNet architec- ture with integrated dense blocks specifically optimized for document image characteristics, effectively preserving high- frequency textual details crucial for downstream OCR tasks. Extensive experiments validate the effectiveness of our approach, achieving state-of-the-art SSIM scores of 0.7721 on DocUNet and 0.9480 on RealDAE, confirming supe- rior structural preservation particularly critical for document readability. The proposed framework also demonstrates ex- ceptional computational efficiency, reducing inference time by approximately 75% for high-resolution documents with- out significant quality degradation, addressing a key lim- itation of existing transformer-based approaches. Further- more, our model exhibits remarkable cross-domain adapt- ability, maintaining high performance (0.8418 SSIM) on un- seen datasets without fine-tuning. Looking forward, future research could extend this para- metric generation paradigm to address additional document- specific degradations such as moire patterns and ink bleed- ing, while exploring self-supervised and weakly-supervised learning strategies to further reduce dependency on paired training data. By effectively balancing enhancement quality with computational efficiency, GL-PGENet provides a prac-tical solution for real-world document digitization systems, serving as a robust foundation for downstream Document AI applications including OCR, layout analysis, and infor- mation extraction. References Aly, H. A., and Dubois, E. 2005. Image up-sampling using total-variation regularization with a new observation model. IEEE Transactions on Image Processing 14(10):1647–1659. Annabestani, M., and Saadatmand-Tarzjan, M. 2019. A new threshold selection method based on fuzzy expert sys- tems for separating text from the background of document images. Iranian journal of science and technology, transac- tions of electrical engineering 43:219–231. Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein gan. Bianco, S.; Cusano, C.; Piccoli, F.; and Schettini, R. 2019. Learning parametric functions for color image enhancement. InInternational Workshop on Computational Color Imag- ing, 209–220. Springer. Brock, A.; Donahue, J.; and Simonyan, K. 2018. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 . Das, S.; Ma, K.; Shu, Z.; Samaras, D.; and Shilkrot, R. 2019. Dewarpnet: Single-image document unwarping with stacked 3d and 2d regression networks. In Proceedings of the IEEE/CVF international conference on computer vision , 131–140. Das, S.; Sial, H. A.; Ma, K.; Baldrich, R.; Vanrell, M.; and Samaras, D. 2020. Intrinsic decomposition of document images in-the-wild. arXiv preprint arXiv:2011.14447 . Dhariwal, P., and Nichol, A. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34:8780–8794. 
Feng, H.; Wang, Y .; Zhou, W.; Deng, J.; and Li, H. 2021. Doctr: Document image transformer for geomet- ric unwarping and illumination correction. arXiv preprint arXiv:2110.12942 . Gatos, B.; Ntirogiannis, K.; and Pratikakis, I. 2009. Icdar 2009 document image binarization contest (dibco 2009). In 2009 10th International conference on document analysis and recognition , 1375–1382.
IEEE. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y . 2020. Generative adversarial networks. Communications of the ACM 63(11):139–144. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V .; and Courville, A. C. 2017. Improved training of wasserstein gans. Advances in neural information processing systems 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information pro- cessing systems 33:6840–6851. Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , 4700–4708. Isola, P.; Zhu, J.-Y .; Zhou, T.; and Efros, A. A. 2017. Image- to-image translation with conditional adversarial networks. InProceedings of the IEEE conference on computer vision and pattern recognition , 1125–1134. Karras, T.; Laine, S.; and Aila, T. 2019. A style-based gen- erator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vi- sion and pattern recognition , 4401–4410. Ke, Z.; Sun, C.; Zhu, L.; Xu, K.; and Lau, R. W. 2022. Har- monizer: Learning to perform white-box image and video harmonization. In European Conference on Computer Vi- sion, 690–706. Springer. Lelore, T., and Bouchara, F. 2013. Fair: a fast algorithm for document image restoration. IEEE transactions on pattern analysis and machine intelligence 35(8):2039–2048. Li, X.; Zhang, B.; Liao, J.; and Sander, P. V . 2019. Docu- ment rectification and illumination correction using a patch- based cnn. ACM Transactions on Graphics (TOG) 38(6):1– 11. Li, Z.; Chen, X.; Pun, C.-M.; and Cun, X. 2023. High- resolution document shadow removal via a large-scale real- world dataset and a frequency-aware shadow erasing net. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) , 12449–12458. Lin, Y .-H.; Chen, W.-C.; and Chuang, Y .-Y . 2020. Bedsr- net: A deep shadow removal network from a single docu- ment image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 12905–12914. Liu, C.; Li, J.; Teng, Y .; Wang, C.; Xu, N.; Wu, J.; and Tu, D. 2023. Docstormer: Revitalizing multi-degraded colored document images to pristine pdf. Ma, K.; Shu, Z.; Bai, X.; Wang, J.; and Samaras, D. 2018. Docunet: Document image unwarping via a stacked u-net. InProceedings of the IEEE conference on computer vision and pattern recognition , 4700–4709. Mei, J.; Wu, Z.; Chen, X.; Qiao, Y .; Ding, H.; and Jiang, X. 2019. Deepdeblur: text image recovery from blur to sharp. Multimedia tools and applications 78:18869–18885. Mirza, M. 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 .Ntirogiannis, K.; Gatos, B.; and Pratikakis, I. 2014. Icfhr2014 competition on handwritten document image bi- narization (h-dibco 2014). In 2014 14th International con- ference on frontiers in handwriting recognition , 809–813. IEEE. Ouyang, W.; Dong, Y .; Kang, X.; Ren, P.; Xu, X.; and Xie, X. 2023. Rsfnet: A white-box image retouching approach using region-specific color filters. In Proceedings of the IEEE/CVF International Conference on Computer Vision , 12160–12169. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.;
Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch. Peebles, W., and Xie, S. 2023. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF Inter- national Conference on Computer Vision , 4195–4205. Pratikakis, I.; Zagoris, K.; Barlas, G.; and Gatos, B. 2017. Icdar2017 competition on document image binarization (dibco 2017). In 2017 14Th IAPR international conference on document analysis and recognition (ICDAR) , volume 1, 1395–1403. IEEE. Pratikakis, I.; Gatos, B.; and Ntirogiannis, K. 2010. H- dibco 2010-handwritten document image binarization com- petition. In 2010 12th International Conference on Frontiers in Handwriting Recognition , 727–732. IEEE. Qi, Y .; Yang, Z.; Sun, W.; Lou, M.; Lian, J.; Zhao, W.; Deng, X.; and Ma, Y . 2021. A comprehensive overview of image enhancement techniques. Archives of Computational Meth- ods in Engineering 1–25. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Om- mer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF confer- ence on computer vision and pattern recognition , 10684– 10695. Shi, W.; Caballero, J.; Husz ´ar, F.; Totz, J.; Aitken, A. P.; Bishop, R.; Rueckert, D.; and Wang, Z. 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recog- nition , 1874–1883. Simonyan, K. 2014. Very deep convolutional net- works for large-scale image recognition. arXiv preprint arXiv:1409.1556 . Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffu- sion implicit models. arXiv preprint arXiv:2010.02502 . Souibgui, M. A., and Kessentini, Y . 2020. De-gan: A condi- tional generative adversarial network for document enhance- ment. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(3):1180–1191. Souibgui, M. A., and Kessentini, Y . 2022. De-gan: A condi- tional generative adversarial network for document enhance- ment. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(3):1180–1191. Tulyakov, S.; Liu, M.-Y .; Yang, X.; and Kautz, J. 2018. Mocogan: Decomposing motion and content for video gen- eration. In Proceedings of the IEEE conference on computer vision and pattern recognition , 1526–1535. Vaswani, A. 2017. Attention is all you need. Advances in Neural Information Processing Systems . Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image process- ing13(4):600–612. Wang, X.; Xie, L.; Dong, C.; and Shan, Y . 2021. Real- esrgan: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF interna- tional conference on computer vision , 1905–1914. Wang, Y .; Zhou, W.; Lu, Z.; and Li, H. 2022. Udoc-gan: Un- paired document illumination correction with background light prior. In Proceedings of the 30th ACM International Conference on Multimedia , 5074–5082. Xiong, W.; Xu, J.; Xiong, Z.; Wang, J.; and Liu, M. 2018. Degraded historical document image binarization using lo- cal features and support vector machine (svm). Optik 164:218–223. Yang, Z.; Liu, B.; Xxiong, Y .; Yi, L.; Wu, G.; Tang, X.; Liu, Z.; Zhou, J.; and Zhang, X. 2023. Docdiff: Document en- hancement via residual diffusion models. In Proceedings of
the 31st ACM international conference on multimedia , 2795–2806. Zamora-Mart ´ınez, F.; Espa ˜na-Boquera, S.; and Castro- Bleda, M. 2007. Behaviour-based clustering of neural net- works applied to document enhancement. In International Work-Conference on Artificial Neural Networks , 144–151. Springer. Zhang, J.; Chen, B.; Cheng, H.; Guo, F.; Ding, K.; and Jin, L. 2023a. Docaligner: Annotating real-world photographic document images by simply taking pictures. arXiv preprint arXiv:2306.05749 . Zhang, J.; Liang, L.; Ding, K.; Guo, F.; and Jin, L. 2023b. Appearance enhancement for camera-captured document images in the wild. IEEE Transactions on Artificial Intel- ligence . Zhang, L.; He, Y .; Zhang, Q.; Liu, Z.; Zhang, X.; and Xiao, C. 2023c. Document image shadow removal guided by color-aware background. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , 1818–1827. Zhang, J.; Peng, D.; Liu, C.; Zhang, P.; and Jin, L. 2024. Docres: A generalist model toward unifying document im- age restoration tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , 15654–15664. Zhou, Z.; Rahman Siddiquee, M. M.; Tajbakhsh, N.; and Liang, J. 2018. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical De- cision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, Septem- ber 20, 2018, Proceedings 4 , 3–11. Springer.
Improving Respiratory Sound Classification with Architecture-Agnostic Knowledge Distillation from Ensembles

Miika Toikkanen1, June-Woo Kim†1,2
1RSC LAB, MODULABS, Republic of Korea
2Department of Psychiatry, Wonkwang University Hospital, Republic of Korea
miika.toikkanen.2@gmail.com, kaen2891@gmail.com
†Corresponding author.

Abstract

Respiratory sound datasets are limited in size and quality, making high performance difficult to achieve. Ensemble models help but inevitably increase compute cost at inference time. Soft label training distills knowledge efficiently with extra cost only at training. In this study, we explore soft labels for respiratory sound classification as an architecture-agnostic approach to distill an ensemble of teacher models into a student model. We examine different variations of our approach and find that even a single teacher, identical to the student, considerably improves performance beyond its own capability, with optimal gains achieved using only a few teachers. We achieve the new state-of-the-art Score of 64.39 on ICBHI, surpassing the previous best by 0.85 and improving average Scores across architectures by more than 1.16. Our results highlight the effectiveness of knowledge distillation with soft labels for respiratory sound classification, regardless of size or architecture.

Index Terms: respiratory sound classification, knowledge distillation, ensembles, architecture-agnostic, lightweight distillation

1. Introduction

Respiratory sound classification (RSC) has been an active research area due to its potential to aid in diagnosing respiratory diseases. Previous works primarily focused on CNN architectures [1–8], such as ResNet [9], EfficientNet [10], and CNN6 [11]. More recently, pretrained Audio Spectrogram Transformer (AST) [12, 13] models trained on large-scale datasets like ImageNet [14] and AudioSet [15] have demonstrated the advantages of self-attention-based mechanisms. Building on AST, techniques such as Patch-Mix contrastive learning [13], adversarial methods for synthetic sample inconsistencies [16], stethoscope bias mitigation [17], and large-scale pretraining methods [18, 19] have further advanced the field. Bridging the text and audio modalities (BTS) [20] to leverage textual metadata prompts has also notably boosted RSC performance, achieving a state-of-the-art score of 63.54% on the ICBHI dataset [21].

However, high-quality open-source respiratory sound datasets are a scarce resource, which makes it difficult to train strong models. Ensemble models offer a simple way of boosting model performance, but at the cost of increased computation at inference time [22]. Knowledge distillation [23–25] is a widely adopted technique for compressing deep neural network ensembles and transferring knowledge from complex or large-scale teacher models to more compact student models, thereby boosting their performance. One of the key mechanisms in knowledge distillation is employing soft labels [26–28], which represent the probabilistic outputs of the teacher model. These soft labels provide a richer representation than hard labels, enabling the student model to learn better representations of the data [26, 27]. Despite the success of distillation in various domains, its application to the RSC task remains relatively underexplored. We present a systematic evaluation of soft-label distillation in RSC, covering multiple ensemble configurations and scales of models.
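The soft-label mechanism described above can be summarized in a few lines of PyTorch. The sketch below is illustrative rather than the authors' released code: tensor shapes and helper names are assumptions, and the temperature-free softmax simply mirrors the formulation described later in the paper.

```python
import torch
import torch.nn.functional as F

def mean_teacher_soft_labels(teacher_logits: torch.Tensor) -> torch.Tensor:
    """teacher_logits: (k, batch, num_classes) stacked logits from k teachers.
    Soft labels are the softmax of the mean teacher logits."""
    return F.softmax(teacher_logits.mean(dim=0), dim=-1)

def random_teacher_soft_labels(teacher_logits: torch.Tensor) -> torch.Tensor:
    """Sample one teacher uniformly per call and soften its logits."""
    i = torch.randint(teacher_logits.shape[0], (1,)).item()
    return F.softmax(teacher_logits[i], dim=-1)

def soft_label_ce(student_logits: torch.Tensor, soft_labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against soft targets: H(p, y_hat) = -sum_c p_c log softmax(y_hat)_c."""
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(soft_labels * log_probs).sum(dim=-1).mean()
```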
In this study, we address this gap by investigating the effectiveness of soft label distillation in the context of RSC as an
architecture-agnostic approach to distill ensemble knowledge into individual models. Our main contributions are summarized as follows: • We demonstrate the strength of soft label distillation from ensembles on RSC data and set the new SOTA score 64.39 on the ICBHI dataset. • We explore methods for creating the soft labels and teacher ensembles. We find that even a single teacher is able to boost the performance of identical students, and just a few are enough for optimal performance, with additional benefits gained by curating the ensemble. • We also investigate compressing a second-generation ensem- ble from a first-generation teacher ensemble and find it to pro- duce considerably stronger predictor, increasing the ICBHI score from 64.34 to 65.45. • We release our code to support reproducibility and further research at https://github.com/RSC-Toolkit/ rsc-ensemble-kd . 2. Preliminaries 2.1. Dataset Description The ICBHI respiratory sound dataset [21] is a widely recog- nized benchmark for RSC tasks. It consists of approximately 5.5 hours of respiratory sound recordings, comprising a total of 6,898 breathing cycles. The dataset is officially divided into training (60%) and testing (40%) subsets at the breathing cycle level, ensuring no patient overlap between splits. Specifically, the training set contains 4,142 cycles, while the test set includes 2,756 cycles, categorized into four distinct classes: normal , crackle ,wheeze , and both (crackle and wheeze). We binarized the age groups into adults (over 18 years old) and pediatrics (18 years old or younger), following the approach in previous stud- ies [20]. Other metadata attributes, including sex, recording lo- cation, and recording device, are maintained as per the original ICBHI annotations.arXiv:2505.22027v1 [cs.SD] 28 May 2025 2.2. Training Details We applied data pre-processing to extract the respiratory cycles from the waveform samples, ensuring each cycle was standard- ized to 8 seconds, as described in [3,13,16,20,29]. All samples were resampled to 16 kHz, except when employing the BTS model, which operates at 48 kHz. For the BTS model experi- ments, we adopted the same experimental settings as described in [20]. The Adam optimizer was employed with a learning rate of 5e–5, cosine scheduling, and a batch size of 8 for fine-tuning the transformer based models (BTS, CLAP [30] and AST) un- til 50 epochs. For other architectures, a larger learning rate of 1e–3 and a batch size of 128 was used, with training conducted for 200 epochs. SpecAugment [31] is applied to all the archi- tectures except for BTS and CLAP. 2.3. Evaluation Metrics We employ Sensitivity (Se),Specificity (Sp), and their arith- metic mean, referred to as ICBHI Score metrics, by the standard definitions provided in [21]. SeandSprepresent the propor- tion of actual respiratory abnormality and healthy cases that are correctly classified, respectively. We report both the mean and variance of Sp,Se, and Score across multiple independent runs, each initialized with different seeds {1,2,3,4,5}. In case of ensembles, the metrics are reported from single run. 3. Method We employ a response-based knowledge distillation approach, where the teacher is an ensemble of predictors as depicted in Figure 1, and the student model learns to mimic the teacher’s responses. The teacher
ensemble creates soft labels by computing the mean of the logits from all predictors, or by sampling the logits of a single predictor from the ensemble. The following sections explain the process in detail.

3.1. BTS Model

The BTS (Bridging Text and Sound) [20] model is a multimodal framework designed to integrate respiratory sound with textual metadata prompts to boost respiratory sound classification performance. Derived from the pretrained LAION-CLAP [30] model, BTS employs an acoustic encoder to extract robust respiratory sound features, while its text encoder processes metadata as textual prompts with details such as patient age group, sex, recording stethoscope, and recording location. By aligning these modalities within a shared latent space, this multimodal approach achieves state-of-the-art results on the ICBHI dataset, but has a higher compute cost in comparison to the other RSC models, making it a good choice as a teacher for knowledge distillation.

3.2. BTS++: Ensemble of Strong Models

We train 30 BTS models with different seeds. The mean ICBHI score of all 30 individual models of the ensemble is 63.41 ± 0.77. At test time, ensemble predictions are computed by averaging logits and selecting the class corresponding to the averaged logit with the highest magnitude. For each sample in the data, we denote the logits as z ∈ R^(N×C), where N is the number of models and C the number of classes, and the i-th logits in z are represented as z_i. We use the suffix “++” to denote the ensembled version of a model and k to denote the number of predictors, so the BTS ensemble with 5 predictors is denoted as BTS++[k = 5].

Figure 1: Diagram of the overall method (a teacher ensemble of k models; soft labels created by (a) mean or (b) random sampling; student trained with cross-entropy against the soft labels). We create soft labels a) by taking the mean, or b) by sampling the label from a random predictor. The soft label replaces the hard label as the training target for the cross-entropy loss. Hard labels exist only in the source data and are not used during the process.

3.3. BTS-d: Soft Label Distillation

Following response-based knowledge distillation principles, we generate soft labels p ∈ R^C from teacher logits z using two approaches, mean teacher and random teacher. We construct the labels separately for each iteration, such that random sampling results differ for each epoch.

Mean Teacher. In this case, we choose all models in range {1, . . . , k}, compute the mean logits and apply the softmax function to obtain p_µ. This represents the mean opinion of all k possible teachers as a probability distribution.

p_µ = softmax( (1/k) Σ_{i=1}^{k} z_i )    (1)

Random Teacher. In this case we uniformly sample a teacher index i from {1, . . . , k} and apply the softmax function to obtain the random teacher label p_r. This label represents the probability distribution corresponding to one of the k possible teachers.

p_r = softmax(z_i),  i ∼ Uniform({1, . . . , k})    (2)

Knowledge Distillation. Given the student model predictions ŷ, we optimize its parameters using the cross-entropy loss L_CE with
soft labels from the teacher model instead of the hard labels y. In this process, the hard labels are not used at all. The resulting knowledge distillation losses are Lµfor the mean teacher, and Lrfor the random teacher. LCE=H(y,ˆy) =−nX i=1yilog (ˆyi) (3) Lµ=H(pµ,ˆy) (4) Lr=H(pr,ˆy) (5) We use the suffix “-d” to denote the distilled version of a model and kto denote the number of teachers, so the BTS stu- dent with 5 teachers is denoted as BTS-d[ k= 5]. 3.4. BTS-d++: Ensemble of Even Stronger Models To further improve ensemble model performance, we create a second-generation ensemble by combining the BTS-d models distilled from the BTS++ teacher. This compresses the teacher ensemble into a student ensemble BTS-d++, that comprises of even stronger individual predictors. Table 1: Main results comparing models on the ICBHI dataset with the official train-test split. Pretraining Data column, IN, AS, and LA refer to ImageNet [14], AudioSet [15], and LAION-Audio-630K [30], respectively. ∗denotes the previous state-of-the-art ICBHI Score. The Best and second best non-ensemble results are highlighted by the bold characters and underlines. The suffixes ”-d” and ”++” indicate distillation and ensembling, respectively. Method Backbone Pretraining Data Venue Sp(%) Se(%) Score (%)4-class eval.SE+SA [1] ResNet18 - INTERSPEECH‘20 81.25 17.84 49.55 LungRN+NL [2] ResNet-NL - INTERSPEECH‘20 63.20 41.32 52.26 RespireNet [3] (CBA+BRC+FT) ResNet34 IN EMBC‘21 72.30 40.10 56.20 Chang et al. [4] CNN8-dilated - INTERSPEECH‘22 69.92 35.85 52.89 Renet al. [5] CNN8-Pt - ICASSP‘22 72.96 27.78 50.37 Wang et al. [6] (Splice) ResNeSt IN ICASSP‘22 70.40 40.20 55.30 Nguyen et al. [7] (StochNorm) ResNet50 IN TBME‘22 78.86 36.40 57.63 Nguyen et al. [7] (CoTuning) ResNet50 IN TBME‘22 79.34 37.24 58.29 Moummad et al. [8] (SCL) CNN6 AS WASPAA‘23 75.95 39.15 57.55 Baeet al. [13] (Fine-tuning) AST IN + AS INTERSPEECH‘23 77.14 41.97 59.55 Baeet al. [13] (Patch-Mix CL) AST IN + AS INTERSPEECH‘23 81.66 43.07 62.37 Kim et al. [16] (AFT on Mixed-500) AST IN + AS NeurIPSW‘23 80.72 42.86 61.79 Kim et al. [17] (SG-SCL) AST IN + AS ICASSP‘24 79.87 43.55 61.71 Kim et al. [29] (RepAugment) AST IN + AS EMBC‘24 82.47 40.55 61.51 Daisuke et al. [18] (M2D-X/0.7) M2D ViT AS TASLP‘24 81.51 45.08 63.29 Kim et al. [20] (Audio-CLAP) CLAP LA INTERSPEECH‘24 80.85 44.67 62.56 Kim et al. [20] (BTS) CLAP LA INTERSPEECH‘24 81.40 45.67 63.54*Distill.AST-d[ k= 5] (mean teacher) [ours] AST IN + AS - 79.24 ±3.51 42.92 ±4.04 61.08 ±1.26 Audio-CLAP-d[ k= 5] (mean teacher) [ours] CLAP LA - 82.82 ±2.14 44.44 ±2.35 63.63 ±0.60 BTS-d[ k= 5] (mean teacher) [ours] CLAP LA - 84.93 ±2.25 43.82 ±2.25 64.38±0.36 BTS-d[ k= 15 ] (random teacher) [ours] CLAP LA - 82.89±2.14 45.90 ±1.89 64.39 ±0.42Ensemble.BTS++[ k= 5] [ours] CLAP LA - 85.18 43.50 64.34 BTS-d++[ k= 5] (Second Generation) [ours] CLAP LA - 88.09 42.82 65.45 BTS++[ k= 15 ] [ours] CLAP LA - 88.28 41.21 64.75 BTS++[ k= 30 ] [ours] CLAP LA - 89.49 41.89 65.69 4. Experiments 4.1. Main Results Table 1 presents the main results compared to previous RSC work on the ICBHI dataset. For a fair
comparison, we sep- arate the ensemble models (labeled with “++”) from the non- ensemble models as they multiply the test-time compute cost. Without additional test-time compute cost, the distilled BTS- d outperforms all previous approaches. Highest score was achieved with random teacher k= 15 , increasing the previous state-of-the-art score by 0.85 from 63.54 to 64.39. The mean teacher with k= 5is nearly as strong in terms of score, reach- ing 64.38. Note that the optimal kvalues for each teacher method were chosen based on empirical results in Figure 2. Considering the ensembles, BTS++[ k= 5] reaches the ICBHI Score of 64.34, but also multiplies the test-time compute cost by 5. The second generation ensemble BTS-d++[ k= 5] reaches 65.45 with the same compute cost, highlighting the value of good quality teachers. Increasing kto 15 increases the score to 64.75 and further increasing kto 30 for the full available ensem- ble raises the score to 65.69, but with a massive 30x compute cost over the baseline BTS. Based on these results, for practical real-time inference, our soft-label distilled results BTS-d and the second generation ensemble BTS-d++ strike a good balance between performance and inference cost. 4.2. Effectiveness of Distillation on Lightweight Architec- tures As the method is architecture-agnostic, we investigate the effec- tiveness of soft-label distillation on various architectures, some of which are much smaller than the teacher models. We se- lect some of the models commonly used for RSC in previous work [20, 29] and apply soft label training with BTS++[ k= 5] as the mean teacher. Table 2 compares standard cross-entropy with hard label (one-hot) against the soft label distillation on ICBHI. Each of the models, regardless of architecture or size benefit from soft-label distillation. While on average the scoreincreased by 1.16, generally the specificity increased much more, and the sensitivity decreased slightly, indicating a trade- off between reducing false positives and a slight increase in false negatives. This suggests that soft-label distillation helps the models better distinguish normal from abnormal cases. From these results we also see that, compact models with lower inference cost, such as ResNet18 [9], EfficientNet [10], CNN6 [11], and Audio-CLAP [20, 30] can approach the per- formance of larger, more computationally expensive teacher model. 4.3. Ablation Study We perform an ablation study to verify the method and report the results in Table 3. First, we compare teacher labels against random noise. Noised Label (var = 0.1) adds a Gaussian noise vector with variance of 0.1 across training target label. Noised label (teacher var) models the variance of mean teacher logits on the training dataset and adds the equivalent noise vector to the training target label. The noised labels perform worse than the baseline, indicating that the softness of the labels is not the rea- son for increased performance, but modeling the teacher vari- ance does shift the sensitivity closer to the teacher’s sensitivity value. Using a single teacher to create the soft labels boosts the score notably (63.54% to 63.90%), and further using a teacher ensemble yields the best performance, 64.38% and 64.39% for mean
teacher at k= 5 and random teacher at k= 15 , respec- tively. Selecting the best models as teachers is also beneficial. We picked the 5 highest score checkpoints as the curated mean teacher ensemble and further raised the score to 64.61%. Since the soft label is only applied during training, this can be consid- ered a valid approach and does not cause leakage of test data. We also remove the softmax from the soft labels and instead train using the raw teacher ensemble logits as labels, but the model fails to converge properly with cross-entropy loss. These results clearly indicate that the teacher ensemble is useful for creating the soft labels as formulated in this work. Table 2: Comparison of hard and soft labels for different architectures on the ICBHI dataset for the official 60–40% train–test split task of respiratory sound classification. Model # Params Hard Label Soft Label Gain Sp(%) Se(%) Score (%) Sp(%) Se(%) Score (%) Score (%) ResNet18 11.7M 76.70 ±5.65 33.47±4.03 55.09 ±0.82 81.41 ±2.78 31.11 ±3.05 56.26 ±0.85 1.17 EfficientNet 5.3M 78.21 ±3.56 34.44 ±2.84 56.33 ±0.43 79.26 ±3.31 36.09 ±3.39 57.68 ±1.48 1.35 CNN6 4.8M 77.00 ±3.27 37.35 ±3.15 57.17 ±0.81 85.57 ±2.56 30.77 ±1.97 58.17 ±0.60 1.00 AST 87.7M 77.14 ±3.35 43.07 ±2.80 59.55 ±0.88 79.24 ±3.51 42.92 ±4.04 61.08 ±1.26 1.53 Audio-CLAP 28M 80.85 ±3.33 44.67 ±3.77 62.56 ±0.37 82.82 ±2.14 44.44 ±2.35 63.63 ±0.60 1.07 BTS∗153M 81.40 ±2.57 45.67 ±2.66 63.54 ±0.80 82.89 ±2.14 45.90 ±1.89 64.39 ±0.42 0.85 Average - 78.55 ±3.62 39.78 ±3.21 59.04 ±0.69 81.87 ±2.74 38.54 ±2.78 60.20 ±0.87 1.16 Table 3: Ablation study of BTS-d performance comparing dif- ferent variations of the soft labels. Method Sp(%) Se(%) Score (%) Baseline 81.40 ±2.57 45.67 ±2.66 63.54 ±0.80 Noised Label (var=0.1) 81.33 ±2.90 44.10 ±3.53 62.71 ±0.47 Noised Label (teacher var) 76.53 ±3.78 47.46 ±2.36 62.00 ±1.06 Single Teacher 83.18 ±1.59 44.62 ±1.56 63.90 ±0.15 Mean Teacher Ensemble ( k= 5) 84.93 ±2.25 43.82 ±2.25 64.38 ±0.36 Random Teacher Ensemble ( k= 15 ) 82.89 ±2.14 45.90 ±1.89 64.39 ±0.42 Curated Teacher Ensemble ( k= 5) 84.28 ±2.58 44.95 ±2.57 64.61 ±0.75 Remove softmax Failed to converge Figure 2: The diminishing beneficial effect of increasing teacher count k. 4.4. Diminishing Returns of Growing Teacher Ensemble Size In this section, we perform a series of experiments to study the optimal size for the teacher ensemble. We train and evaluate a student model with kset at different multiples of 5 from 0 teachers to 30 teachers, and include k= 1 andk= 3 to better view the effect at the lower end of the scale. The results are plotted in Figure 2. Each step is the mean Score of student model trained on 5 seeds (1-5). Both the random and mean teacher methods benefit early and begin tapering off after their peaks. Mean teacher reaches the best score at 5 teachers, while random teacher peaks later at 15. The random teacher reached a slightly higher value than mean teacher, but we have used the mean teacher with k= 5as the default soft
label setting for our experiments because random teacher requires larger kvalue to reach similar performance. We also evaluated the ensembled model BTS++ at each k∈[1...30]. Figure 3 compares the ICBHI score and validation loss of teacher ensemble BTS++, as well as the ICBHI score of distilled student model BTS-d plotted against increasing k. Growing the ensemble size yields increasing scores leveling off Figure 3: Teacher model BTS++ validation loss and ICBHI score compared to the student BTS-d ICBHI score as kin- creases. at around 65.7, while the test-time compute cost grows linearly askincreases. Although the score of the ensemble keeps in- creasing throughout the range of k, the score of student model tapers off after the teacher validation loss converges. At k= 5, the distilled model performs similarly to the teacher ensemble, despite being only fifth of the size. The reduced score could indicate that beyond this point, the student begins to overfit to teacher label distribution on training set, and cannot generalize more to the real ground truth distribution on test set. Therefore, under this setting, the optimal teacher ensemble is smaller in size than the full ensemble. 5. Conclusion In this study, we applied architecture-agnostic knowledge dis- tillation for RSC using soft label training to extract knowledge from teacher ensembles. Our approach effectively transferred knowledge from the ensemble of teacher models to lightweight student models, achieving state-of-the-art performance on the ICBHI dataset. We demonstrated that even a single teacher model can significantly boost student model performance, with further gains observed when employing multiple teachers. We also found that curated and second-generation ensembles fur- ther improved model performance. While we focus on RSC, the results are applicable to other classification tasks where data scarcity and inference constraints are critical, such as heart sound, ECG, or pathological speech classification. Future re- search could explore diverse model architectures, as well as re- ducing the gap between ensemble and distilled model perfor- mance. To encourage further research and support reproducibil- ity, we release our code to the research community. 6. Acknowledgement This research was supported by Brian Impact Foundation, a non-profit organization dedicated to the advancement of science and technology for all. 7. References [1] Z. Yang, S. Liu, M. Song, E. Parada-Cabaleiro, and B. W. Schuller, “Adventitious respiratory classification using attentive residual neural networks,” in Interspeech , 2020. [2] Y . Ma, X. Xu, and Y . Li, “Lungrn+ nl: An improved adventitious lung sound classification using non-local block resnet neural net- work with mixup data augmentation.” in Interspeech , 2020, pp. 2902–2906. [3] S. Gairola, F. Tom, N. Kwatra, and M. Jain, “Respirenet: A deep neural network for accurately detecting abnormal lung sounds in limited data setting,” 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) , 2021. [4] Y . Chang, Z. Ren, T. T. Nguyen, W. Nejdl, and B. W. Schuller, “Example-based Explanations with Adversarial Attacks for Res- piratory Sound Analysis,” in Proc. Interspeech 2022 , 2022, pp. 4003–4007. [5] Z. Ren, T. T. Nguyen, and W. Nejdl, “Prototype learning for inter- pretable respiratory
sound analysis,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP) . IEEE, 2022, pp. 9087–9091. [6] Z. Wang and Z. Wang, “A domain transfer based data augmenta- tion method for automated respiratory classification,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2022, pp. 9017–9021. [7] T. Nguyen and F. Pernkopf, “Lung sound classification using co-tuning and stochastic normalization,” IEEE Transactions on Biomedical Engineering , vol. 69, no. 9, pp. 2872–2882, 2022. [8] I. Moummad and N. Farrugia, “Pretraining respiratory sound rep- resentations using metadata and contrastive learning,” in 2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) . IEEE, 2023, pp. 1–5. [9] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition , 2016, pp. 770–778. [10] M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in International conference on machine learning . PMLR, 2019, pp. 6105–6114. [11] Q. Kong, Y . Cao, T. Iqbal, Y . Wang, W. Wang, and M. D. Plumb- ley, “Panns: Large-scale pretrained audio neural networks for audio pattern recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing , vol. 28, pp. 2880–2894, 2020. [12] Y . Gong, Y .-A. Chung, and J. Glass, “AST: Audio Spectrogram Transformer,” in Proc. Interspeech 2021 , 2021, pp. 571–575. [13] S. Bae, J.-W. Kim, W.-Y . Cho, H. Baek, S. Son, B. Lee, C. Ha, K. Tae, S. Kim, and S.-Y . Yun, “Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Clas- sification,” in Proc. INTERSPEECH 2023 , 2023, pp. 5436–5440. [14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Im- agenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition . Ieee, 2009, pp. 248–255. [15] J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio set: An ontology and human-labeled dataset for audio events,” in Proc. IEEE ICASSP 2017 , New Orleans, LA, 2017. [16] J.-W. Kim, C. Yoon, M. Toikkanen, S. Bae, and H.-Y . Jung, “Ad- versarial fine-tuning using generated respiratory sound to address class imbalance,” arXiv preprint arXiv:2311.06480 , 2023.[17] J.-W. Kim, S. Bae, W.-Y . Cho, B. Lee, and H.-Y . Jung, “Stethoscope-guided supervised contrastive learning for cross- domain adaptation on respiratory sound classification,” in ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2024, pp. 1431–1435. [18] D. Niizumi, D. Takeuchi, Y . Ohishi, N. Harada, and K. Kashino, “Masked modeling duo: Towards a universal audio pre-training framework,” IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing , 2024. [19] Y . Zhang, T. Xia, J. Han, Y . Wu, G. Rizos, Y . Liu, M. Mosuily, J. Chauhan, and C. Mascolo, “Towards open respiratory acoustic foundation models: Pretraining and benchmarking,” in The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track
, 2024. [Online]. Available: https://openreview.net/forum?id=vXnGXRbOfb [20] J.-W. Kim, M. Toikkanen, Y . Choi, S.-E. Moon, and H.-Y . Jung, “Bts: Bridging text and sound modalities for metadata-aided res- piratory sound classification,” in Interspeech 2024 , 2024, pp. 1690–1694. [21] B. Rocha, D. Filos, L. Mendes, I. V ogiatzis, E. Perantoni, E. Kaimakamis, P. Natsiavas, A. Oliveira, C. J ´acome, A. Marques et al. , “A respiratory sound database for the development of auto- mated classification,” in Precision Medicine Powered by pHealth and Connected Health: ICBHI 2017, Thessaloniki, Greece, 18-21 November 2017 . Springer, 2018, pp. 33–37. [22] Z. Li, K. Ren, Y . Yang, X. Jiang, Y . Yang, and D. Li, “Towards inference efficient deep ensemble learning,” in Proceedings of the AAAI Conference on Artificial Intelligence , vol. 37, no. 7, 2023, pp. 8711–8719. [23] G. Hinton, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531 , 2015. [24] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distilla- tion: A survey,” International Journal of Computer Vision , vol. 129, no. 6, pp. 1789–1819, 2021. [25] L. Wang and K.-J. Yoon, “Knowledge distillation and student- teacher learning for visual intelligence: A review and new out- looks,” IEEE transactions on pattern analysis and machine intel- ligence , vol. 44, no. 6, pp. 3048–3068, 2021. [26] H. Zhou, L. Song, J. Chen, Y . Zhou, G. Wang, J. Yuan, and Q. Zhang, “Rethinking soft labels for knowledge distillation: A bias–variance tradeoff perspective,” in International Conference on Learning Representations , 2021. [Online]. Available: https: //openreview.net/forum?id=gIHd-5X324 [27] Z. Yang, A. Zeng, Z. Li, T. Zhang, C. Yuan, and Y . Li, “From knowledge distillation to self-knowledge distillation: A unified approach with normalized loss and customized soft labels,” in Proceedings of the IEEE/CVF International Conference on Com- puter Vision , 2023, pp. 17 185–17 194. [28] D. Busbridge, A. Shidani, F. Weers, J. Ramapuram, E. Lit- twin, and R. Webb, “Distillation scaling laws,” arXiv preprint arXiv:2502.08606 , 2025. [29] J.-W. Kim, M. Toikkanen, S. Bae, M. Kim, and H.-Y . Jung, “Repaugment: Input-agnostic representation-level augmentation for respiratory sound classification,” in 2024 46th Annual Inter- national Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , 2024, pp. 1–6. [30] Y . Wu, K. Chen, T. Zhang, Y . Hui, T. Berg-Kirkpatrick, and S. Dubnov, “Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2023, pp. 1–5. [31] D. S. Park, W. Chan, Y . Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V . Le, “Specaugment: A simple data augmentation method for automatic speech recognition,” Interspeech 2019 , Sep 2019.
Analysis and Evaluation of Synthetic Data Generation in Speech Dysfluency Detection Jinming Zhang1, Xuanru Zhou1, Jiachen Lian2, Shuhe Li1, William Li2, Zoe Ezzes3, Rian Bogley3, Lisa Wauters3, Zachary Miller3, Jet Vonk3, Brittany Morin3, Maria Gorno-Tempini3, Gopala Anumanchipalli2 1Zhejiang University, China2UC Berkeley, USA3UCSF, USA pmhuan1212@gmail.com, jiachenlian@berkeley.edu, gopala@berkeley.edu Abstract Speech dysfluency detection is crucial for clinical diagnosis and language assessment, but existing methods are limited by the scarcity of high-quality annotated data. Although recent ad- vances in TTS model have enabled synthetic dysfluency gener- ation, existing synthetic datasets suffer from unnatural prosody and limited contextual diversity. To address these limitations, we propose LLM-Dys — the most comprehensive dysfluent speech corpus with LLM-enhanced dysfluency simulation. This dataset captures 11 dysfluency categories spanning both word and phoneme levels. Building upon this resource, we improve an end-to-end dysfluency detection framework. Experimental validation demonstrates state-of-the-art performance. All data, models, and code are open-sourced at https://github. com/Berkeley-Speech-Group/LLM-Dys . Index Terms : speech dysfluency, synthetic dataset 1. Introduction Speech dysfluency detection is an essential step for assisting in disordered speech diagnosis, language screening, or early prevention. For a long time, dysfluency or stutter detection has been treated simply as a classification problem, mostly bi- nary [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], among others. However, to better serve clinical needs, researchers have developed more so- phisticated methods [12, 13, 14] that can identify both the types and timing of dysfluencies. The development of robust dysfluency detectors requires large-scale, high-quality datasets. While public corpora like UCLASS [15] and SEP-28K [16] exist, they have limitations in both size and annotation quality. The segmentation and annotation in these datasets often fall short of the require- ments for training robust models. For instance, SEP-28K con- tains numerous partially pronounced words and lacks accurate ground truth transcriptions, making it challenging to develop reliable dysfluency detection systems. Some researchers have attempted to simulate dysfluent speech. For example, LibriS- tutter [17], VCTK++ [12], and [9] directly inject dysfluencies in the time or spectrogram domain. However, this approach has been shown to produce low audio quality [14]. Instead, [14] proposed VCTK-TTS, where dysfluencies are simulated only at the text level and then synthesized into speech using a TTS model [18]. VCTK-TTS demonstrates significantly bet- ter intelligibility and naturalness than all previous datasets. A subsequent study [19] introduced VCTK-Pro by incorporating co-dysfluencies. Libri-Dys [20] adopted the same technology but further scaled it up to LibriTTS [21], and then extended it to co-dysfluency [22]. Additionally, VCTK-Token [23]shares the same simulation pipeline as VCTK-TTS but includes token- level labels. A key shortcoming of these TTS-based simulatedcorpora is that the text simulation is purely rule-based, which may not accurately reflect human stuttering patterns . More- over, the diversity of text variations [24, 21] explored in these methods remains quite limited . Another issue is that there is no standard for dysfluent speech labeling. 
For traditional binary classification-based detection, the label is simply "stutter" or "not stutter". For advanced clinical-aware dysfluency modeling [12, 14], both the type and accurate timestamps of each dysfluency are usually required.
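As one way to picture what such clinical-aware labels contain, a minimal annotation record is sketched below; this schema is hypothetical and is not a format prescribed by the cited works.

```python
from dataclasses import dataclass

@dataclass
class DysfluencyLabel:
    """One dysfluency event: its type plus where it occurs in the audio."""
    dys_type: str     # e.g. "repetition", "prolongation", "pause", ...
    start_sec: float  # onset of the dysfluent region, in seconds
    end_sec: float    # offset of the dysfluent region, in seconds

# Binary stutter detection collapses all of this into a single yes/no flag,
# whereas clinical-aware detection needs the full list of typed, timed events.
labels = [
    DysfluencyLabel("repetition", 1.20, 1.85),
    DysfluencyLabel("pause", 3.40, 4.95),
]
```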
|
https://arxiv.org/abs/2505.22029v1
|
For normal ASR tasks, the dysfluency labels can just be limited to filler words [25]. The lack of a high-quality, text-diversified, naturalistic corpus with unified labels makes scaling efforts particularly challenging.
In this work, we propose leveraging Large Language Models (LLMs) to generate dysfluent text across a diverse textual corpus, capitalizing on their learned understanding of natural dysfluency patterns. The generated text is synthesized using a Text-to-Speech (TTS) model to create LLM-Dys, which constitutes the largest simulated dysfluency corpus to date, with over 10,000 hours of speech (as presented in Table 1). Recognizing that traditional binary stuttering detection can be viewed as a subset of the broader multi-class dysfluency localization problem, we focus on modeling eleven distinct dysfluency types: insertions, repetitions, pauses, deletions, and substitutions at both word and phoneme levels, as well as phonetic prolongations. This is the first comprehensive and open-sourced dataset covering all major types of dysfluencies, and it can be adapted to a wide range of dysfluency detection tasks.
Quantitative analysis [26] reveals that LLM-Dys achieves superior synthesis quality compared to other text-diversity-constrained simulation corpora and is even comparable to real fluent speech, as visualized in Fig 2. We perform dysfluency detection on both our simulated and real stuttered speech benchmarks, consistently achieving state-of-the-art performance. To further explore the limits of text-based simulation, we investigate scaling laws [27] with respect to data and report that simply increasing textual data or diversity may not yield additional performance improvements unless a high-quality acoustic (TTS) model is employed. Additional ablation studies demonstrate that our proposed benchmark is generalizable and robust, and we hope it will further facilitate research in the community.

2. Data Simulation
2.1. Dysfluent Text Generation
We build upon the most recent and naturalistic simulated corpus, VCTK-Token [23], leveraging Large Language Models instead of rule-based methods to generate authentic dysfluent texts. Through prompting, we obtain both dysfluent texts and corresponding labels, eliminating manual annotation needs (the label can be adjusted according to specific tasks).
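As a rough picture of this prompting step, the sketch below asks a chat LLM for a dysfluent rewrite of a clean utterance together with a label; the prompt wording, the JSON output convention, and the use of the Anthropic Python SDK are our assumptions for illustration, not the authors' released prompts (which are on their project page).

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Rewrite the sentence below so that it contains exactly one word-level "
    "{dys_type} dysfluency. Return JSON with keys 'dysfluent_text' and "
    "'label' marking the affected word.\n\nSentence: {text}"
)

def generate_dysfluent(text, dys_type="repetition"):
    """Ask an LLM for a dysfluent rewrite of a clean utterance plus its label."""
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=300,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(dys_type=dys_type, text=text)}],
    )
    return response.content[0].text  # raw model output (illustrative JSON)

print(generate_dysfluent("The conference will feature keynote speeches from leading experts."))
```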
|
https://arxiv.org/abs/2505.22029v1
|
Figure 1: Overview of our approach: We leverage Large Language Models (LLMs) to generate dysfluent text for TTS synthesis and corresponding labels for dysfluency transcriber training. After applying special processing for pauses and prolongations, we establish a large-scale dataset called LLM-Dys. By jointly feeding acoustic features and labels into the dysfluency transcriber for training, we achieve end-to-end dysfluent speech detection. (The figure also shows example word-level and phoneme-level dysfluent texts together with their labels, e.g. [INS], [REP], [DEL], [SUB], [PAU], [PRO].)
Our LLM implementation is based on claude-3-5-sonnet [28]. The prompts we used can be found on our open-sourced page. When generating phoneme-level utterances, we also provide the clean texts with their CMU and IPA sequences (via phonemizer [29]) as additional context, enabling LLMs to generate phonetically valid dysfluent sequences.
2.2. Dysfluent Speech Generation
We primarily adopt VITS [18] for dysfluent speech generation. Our experiments show that VITS is more reliable in generating dysfluent speech, particularly in preserving dysfluent segments rather than automatically omitting them. Furthermore, with specific modifications, VITS can directly accept IPA sequences as input, enabling phoneme-level dysfluency simulation. Its robust duration prediction model allows precise timestamp insertion for pauses and accurate prolongation of specified phonemes. However, VITS shows limitations in synthesizing fillers like "um" and "uh," which constitute a significant portion of inserted filler words. Therefore, we employ E2-TTS [30, 31] for word-level insertions. VITS includes 109 VCTK speakers, generating 109 samples per LLM-generated utterance. For E2-TTS, which
|
https://arxiv.org/abs/2505.22029v1
|
requires reference audios, we extract sample clips from each VCTK speaker. This allows us to generate an equivalent set of 109 variations per LLM-generated utterance, ensuring dataset consistency. Some examples of LLM-generated utterances are shown in Fig 1. We provide explanations of the pause and prolongation implementations:
• Pause: LLMs first generate <pause> markers in the dysfluent text. We then generate fluent speech and obtain timestamps for the phonemes adjacent to the <pause> marker. We then smoothly insert a silent segment of 0.8-3.5 s (word level) or 0.3-1.5 s (phoneme level) into the fluent speech (a code sketch of this splicing step is given further below).
• Prolongation: Using the <prolong> markers generated by LLMs, we identify the prolongation position index, which corresponds to the position of the target prolonged phoneme in the VITS duration matrix. During audio synthesis with VITS, we extend the duration of this phoneme by 0.17-0.8 s.

2.3. Statistics
As detailed in Table 1, we generate utterances for each dysfluency type using LLMs. The total duration of word-level and phoneme-level speech amounts to approximately 6,843 and 5,947 hours respectively, resulting in a substantial dataset of 12,790 hours.

Table 1: Utterance and duration statistics of our synthetic dataset LLM-Dys. '10028*109' means that 10,028 different utterances were generated by LLMs and each was synthesized with 109 speakers.
| Level | Statistic | Insertion | Repetition | Pause | Deletion | Substitution | Prolongation |
|---|---|---|---|---|---|---|---|
| Word (109 speakers) | Samples | 10028*109 | 14184*109 | 7667*109 | 10000*109 | 9876*109 | - |
| Word (109 speakers) | Hours | 1540 | 1916 | 1379 | 1140 | 868 | - |
| Phoneme (109 speakers) | Samples | 9298*109 | 9377*109 | 9396*109 | 8917*109 | 6858*109 | 9500*109 |
| Phoneme (109 speakers) | Hours | 1008 | 1021 | 1742 | 732 | 499 | 945 |

In Fig 4, we analyze POS patterns for the four word-level dysfluency types. The analysis reveals distinct LLM-generated patterns: (1) Substitutions: LLMs exchange words with similar pronunciations, particularly nouns and verbs; (2) Deletions: commonly occur with auxiliary verbs and conjunctions, which mirrors natural speech patterns where speakers often omit these elements for efficiency; (3) Repetitions: LLMs frequently duplicate pronouns and prepositions, reflecting natural hesitation points; (4) Insertions: primarily use filler words, simulating natural speech pauses and thinking time.

Figure 2: Comparison between different datasets (CE: Content Enjoyment, CU: Content Usefulness, PQ: Production Quality).
Figure 3: Comparative quality degradation analysis between LLM-Dys and VCTK-Token.
Figure 4: POS analysis on the four word-level subsets of LLM-Dys.

2.4. Dataset Evaluation
To comprehensively and objectively evaluate our dataset, particularly the naturalness of the speech, we employ Meta Audiobox Aesthetics [26] as our evaluation tool. This tool directly assesses input audio samples across multiple dimensions and provides four metrics: Content Enjoyment (CE), Content Usefulness (CU), Production Complexity (PC), and Production Quality (PQ). Among these metrics,
|
https://arxiv.org/abs/2505.22029v1
|
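Returning to the pause-insertion step described in Section 2.2 above: splicing a silent segment into already-synthesized speech at a phoneme boundary can be pictured with the minimal sketch below, using numpy and soundfile. This is an illustration under our own assumptions, not the authors' implementation; the boundary timestamp is assumed to come from the TTS duration model.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def insert_pause(wav_path, out_path, pause_at_sec, pause_dur_sec):
    """Insert pause_dur_sec of silence at pause_at_sec into an audio file.

    pause_at_sec would be the boundary between the phonemes adjacent to the
    <pause> marker, obtained from the TTS duration model.
    """
    audio, sr = sf.read(wav_path)
    cut = int(round(pause_at_sec * sr))
    n_pause = int(round(pause_dur_sec * sr))
    shape = (n_pause,) if audio.ndim == 1 else (n_pause, audio.shape[1])
    silence = np.zeros(shape, dtype=audio.dtype)
    dysfluent = np.concatenate([audio[:cut], silence, audio[cut:]], axis=0)
    sf.write(out_path, dysfluent, sr)

# Example: a 1.2 s word-level pause (the paper samples 0.8-3.5 s at word level
# and 0.3-1.5 s at phoneme level).
# insert_pause("fluent.wav", "dysfluent.wav", pause_at_sec=0.62, pause_dur_sec=1.2)
```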