learning process, so that the learner does not violate safety while exploring new actions. In post-shielding, the shield is deployed after the learning process has ended, so that the potentially dangerous actions of the learned agent can be corrected at runtime. Moreover, shields can be designed for either qualitative or quantitative safety specifications. Our dynamic shields consider qualitative safety specifications, which makes them usable either as a pre-shield or as a post-shield [20].

Early works proposed only statically designed shields, while recent literature has seen a surge of dynamic shielding frameworks, which keep up with the increasing uncertainties arising as shields are applied in wide-ranging real-world use cases. Some examples of dynamic shields follow. When the underlying model parameters, like environment probability distributions, are a priori unknown or only partially known, shields need to adapt dynamically to account for newly discovered model parameters [21,29,6]. When an exhaustive computation of the shield for all possible state-action pairs is computationally infeasible, shields can be computed dynamically at runtime by analyzing only the relatively small set of forward reachable states up to a given horizon [12]. When shields' objectives are quantitative, e.g., requiring some cost metric to stay below a given threshold, they may need to adapt to changing requirements, like changing cost thresholds under different conditions [10]. Surprisingly, none of the existing works considered our setting of changing qualitative safety specifications. This was an important gap, since runtime controller adaptation due to changing safety goals is an important topic in AI and control systems research [14,7,22,17].
Our synthesis algorithms are powered by abstraction-based control (ABC), which is a collection of model-based synthesis algorithms for formally verified controllers of nonlinear and hybrid dynamical systems [26,24,18]. Our work uses a particular ABC algorithm based on feedback refinement relations [24], whose strength is a fast refinement process that was crucial for the fast runtime deployment of our shields. Incidentally, to the best of our knowledge, our work is the first to use ABC for shield synthesis.

Efficient Dynamic Shielding for Parametric Safety Specifications

2 Preliminaries

Notation. Given an alphabet X, we will write X^* and X^ω to respectively denote the sets of finite and infinite sequences over X, and will write X^∞ to denote X^* ∪ X^ω. Given a set S ⊆ X^∞, we will write Pref(S) to denote the set of every finite prefix of S, i.e., Pref(S) := {w ∈ X^* | ∃w′ ∈ X^∞ . ww′ ∈ S}.

Control systems. We consider continuous-state, discrete-time control systems, described as tuples of the form (X, U, W, f), where X ⊂ R^n, U ⊂ R^m, and W ⊂ R^p are all compact sets, respectively called the state space, the control input space, and the disturbance input space, and the function f : X × U × W → X is called the transition function. The constants n, m, and p are all positive integers and are called the dimensions of the respective spaces. The semantics of the control system Σ = (X, U, W, f) is described using its transitions and trajectories. For any given state x ∈ X, control input u ∈ U, and disturbance input w ∈ W of Σ at a given time step, the new
https://arxiv.org/abs/2505.22104v1
state at the next time step is given by x′ = f(x, u, w), and we will express this as the transition x −u,w→ x′. The trajectory ξ of Σ starting at a given initial state x0 ∈ X and caused by control and disturbance input sequences u0, u1, . . . and w0, w1, . . . is a sequence of transitions x0 −u0,w0→ x1 −u1,w1→ x2 · · ·. The sequence of states x0, x1, . . . ∈ X^∞ appearing in the trajectory ξ will be called the path of ξ. Trajectories and paths can be either finitely or infinitely long.

Davide Corsi, Kaushik Mallik, Andoni Rodríguez, and César Sánchez

Controllers. Let Σ = (X, U, W, f) be a control system. A controller of Σ is a partial function of the form X^* → 2^U, which determines the set of allowed control inputs given the history of past states at each point in time. Every controller C of Σ produces a set of paths from a given initial state x0 ∈ X, defined as

Paths(Σ, C, x0) := { x0 x1 . . . ∈ X^∞ | ∃ w0 w1 . . . ∈ W^∞ . x0 −u0,w0→ x1 −u1,w1→ x2 · · · is a trajectory of Σ, where ui ∈ C(x0 . . . xi) for all i ≥ 0 }.

We will encounter the following three orthogonal subclasses of controllers, where each subclass can be combined with the others.

– A state-feedback (memoryless) controller is a controller C that only considers the current state while selecting control inputs, not the entire history. Formally, C(y) = C(y′) for every pair y, y′ ∈ X^* for which the last states are the same. We will represent state-feedback controllers as functions of the form C : X → 2^U, whose domain, written Dom(C), is defined as the set of every x ∈ X for which C(x) is defined.
– A deterministic controller is a controller C that selects a single control input at each step, i.e., has the form X^* → U.
– A nonblocking controller is a controller C for which every finite path generated by C has an infinite extension, i.e., Paths(Σ, C, x0) ∩ X^* ⊆ Pref(Paths(Σ, C, x0) ∩ X^ω). In other words, every nonblocking controller C must disallow every control input u for every finite path x0 . . .
xk if there exists a disturbance w ∈ W with xk+1 = f(xk, u, w) such that C(x0 . . . xk xk+1) is undefined.

Safety specifications. Let Σ = (X, U, W, f) be a control system, and let G ⊆ X be a set of states, designated as the set of safe states.3 The complement of the safe states will be called the unsafe states. The safety specification (with respect to Σ and G) is the set of every sequence of states of Σ that never leaves G, formally written as Safety_Σ(G) := {ρ = x0 x1 . . . ∈ X^∞ | ∀i ≥ 0 . xi ∈ G}. When the system is clear from the context, we will drop the subscript and write Safety(G).

Safety controllers. Let Σ = (X, U, W, f) be a control system and Safety(G) be a safety specification. A state-feedback controller C of Σ is called a safety controller for Safety(G) if, intuitively, C guarantees that all paths of the system stay forever inside G no matter what disturbance inputs are experienced; formally, C must fulfill Paths(Σ, C, x0) ⊆ Safety(G) for every x0 ∈ Dom(C). It is known that for fulfilling safety specifications of the form Safety(G), state-feedback controllers suffice [28]. It is also easy to see that Dom(C) ⊆ G. From now on, we will denote a safety controller for Safety(G) using C_G,
where the subscript "G" makes it explicit that C_G is attached to the particular specification Safety(G).

We add one final subclass of controllers to the list of subclasses presented earlier. For this, we say a controller C′ is a sub-controller of C, written C′ ⊑ C, if (a) Dom(C′) ⊆ Dom(C) and (b) for every state x ∈ Dom(C′), C′(x) ⊆ C(x). Equivalently, we say C is a super-controller of C′.

– For a given safety specification Safety(G), a maximally permissive safety controller is a safety controller C*_G such that every other safety controller C_G for Safety(G) is a sub-controller of C*_G, i.e., C_G ⊑ C*_G.

It is known that if safety specifications admit controllers, then they admit unique nonblocking, maximally permissive (and state-feedback) controllers [28], written n.m.p. controllers in short. Specifications other than safety (like liveness) lack this feature, even though workarounds exist that require significantly more sophisticated types of controllers [2].

3 Dynamic Shielding for Parametric Safety Specifications

3.1 Preliminaries: The Existing (Safety) Shielding Framework

Shielding is an emerging technology for safety assurance of autonomous systems. Most autonomous systems need to accomplish their assigned functional tasks, like navigation, while fulfilling a given set of safety constraints, like collision avoidance. Shielding helps us to create a separation between fulfilling functional tasks and fulfilling safety constraints. In particular, the functional tasks can be delegated to a learned4 state-feedback controller treated as a black box, while the safety constraints are enforced by the shield, which monitors the learned controller's decisions and overrides them if safety would be at risk otherwise.

3 The symbol "G" can be associated with the word "Globally," which is the word used to describe safety properties in linear temporal logic.

Definition 1 (Shields).
Suppose Σ = (X, U, W, f) is a control system and Safety(G) is a given safety specification. A shield is a partial function S_G : X × U → U′ with U′ ⊆ U, such that for every x ∈ X, S_G(x, u) is defined either for every u ∈ U or for none of them. The domain of the shield S_G is defined as Dom(S_G) := {x ∈ X | S_G(x, u) is defined for all u ∈ U}.

Suppose the shield S_G is deployed with the learned controller C : X^* → U. Let x be the current state at a given time point. First, C proposes the control input u = C(. . . x), and then the shield S_G takes into account the pair (x, u) and selects the control input u′ = S_G(x, u), which is possibly different from u. The set of resulting paths starting at a given initial state x0 ∈ X is given as

Paths(Σ, C, S_G, x0) := { x0 x1 . . . ∈ X^∞ | ∃ w0 w1 . . . ∈ W^∞ . x0 −S_G(x0, C(x0)), w0→ x1 −S_G(x1, C(x0 x1)), w1→ x2 · · · is a trajectory of Σ }.

The shield S_G guarantees safety under the learned controller C from the initial state x0 if Paths(Σ, C, S_G, x0) ⊆ Safety(G). Whenever the output u′ of the shield S_G is different from the output of the controller C, we say that S_G has intervened, and we want minimal interventions while fulfilling safety. Formally, a shield is said to be
minimally intervening if every intervention is a necessary intervention, i.e., without the intervention, disturbances could push the trajectory outside of the shield's domain, and therefore safety guarantees would be lost. Our definition of minimal intervention is adapted from the definition by Bloem et al. [4], which formalizes minimal intervention with respect to a generic intervention-penalizing cost metric.

Problem 1 (Minimally intervening shield synthesis).
Inputs: A control system Σ and a safety specification Safety(G).
Output: A shield S*_G such that for every learned controller C and for every x0 ∈ Dom(S*_G):
Safety: Paths(Σ, C, S*_G, x0) ⊆ Safety(G);
Minimal interventions: for every finite path x0 . . . xk ∈ Paths(Σ, C, S*_G, x0), if an intervention happens, i.e., if S*_G(xk, C(x0 . . . xk)) ≠ C(x0 . . . xk), then for every u ∈ U there exists a w ∈ W such that f(xk, u, w) ∉ Dom(S*_G).

The output of Problem 1 will be called the minimally intervening shield for Σ and Safety(G), and can be obtained from n.m.p. safety controllers.

4 The term "learned controller" is used as a convenient name. In reality, any unverified controller can be used.

Theorem 1. Let Σ = (X, U, W, f) be a control system, Safety(G) be a safety specification, and C*_G be the (unique) nonblocking, maximally permissive (n.m.p.) controller of Σ for Safety(G). Then, every minimally intervening shield S*_G for Σ and Safety(G) fulfills, for every (x, u) ∈ X × U:

S*_G(x, u) = u if u ∈ C*_G(x), and S*_G(x, u) = u′ for some u′ ∈ C*_G(x) otherwise. (1)

Proof. By virtue of the maximal permissiveness of C*_G, we can infer that every u ∉ C*_G(x) may potentially violate safety, since otherwise we could construct a safe super-controller of C*_G that would allow u from x and would otherwise mimic C*_G. This is not possible, since it would contradict the maximal permissiveness assumption of C*_G.
Since S*_G needs to guarantee safety with its choice of control inputs, it must select control inputs allowed by C*_G. Now, for the given x and u, if u ∈ C*_G(x) but S*_G(x, u) ≠ u, then the shield violates the minimal intervention requirement, since we know that selecting u instead would not lead to a violation of safety. ⊓⊔

Minimally intervening shields are not unique, since any u′ ∈ C*_G(x) can be selected when u ∉ C*_G(x) in Eqn. (1). We will use the heuristic of selecting the u′ that minimizes the Euclidean distance from the original input u. However, this does not provide any long-run optimality guarantees, and selecting the best intervening input is still an open problem in shield synthesis.

Remark 1. We consider the so-called post-shielding framework, where the shield operates alongside an already learned controller. In contrast, in the pre-shielding framework, shields are used already during the training phase of the controller to prevent safety violations. It is known that safety shields (and by extension our dynamic safety shields) can be used in both pre- and post-settings [20], though we will only use the post-shielding view for simplicity.

3.2 Problem Statement

A major drawback of traditional shielding is that the computed shield depends on the given safety specification, as can be seen from the statement of Problem 1.
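As an illustration, the construction of Eqn. (1), together with the Euclidean-distance heuristic for choosing the intervening input, can be sketched in a few lines of Python. The function `safe_inputs` stands in for C*_G, inputs are scalars for simplicity, and all concrete values are hypothetical, not taken from our implementation.

```python
# Toy sketch of Eqn. (1): wrap an n.m.p. safety controller into a
# minimally intervening shield. `safe_inputs` plays the role of C*_G and
# maps a state to its set of safe control inputs (hypothetical stand-in).

def make_shield(safe_inputs):
    def shield(x, u):
        allowed = safe_inputs(x)
        if not allowed:
            return None          # x lies outside Dom(S*_G)
        if u in allowed:
            return u             # u is safe: no intervention (first case of Eqn. (1))
        # Intervention: pick the safe input closest to u (Euclidean heuristic).
        return min(allowed, key=lambda v: abs(v - u))
    return shield

# Example: inputs from {-1, 0, 1} are assumed safe at x iff |x + u| <= 1.
shield = make_shield(lambda x: {u for u in (-1, 0, 1) if abs(x + u) <= 1})
shield(0, 1)   # -> 1 (no intervention)
shield(1, 1)   # -> 0 (intervenes with the nearest safe input)
```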
If the safety specification changes, then the shield needs to be redesigned. This is especially problematic if the precise safety specification is unknown a priori, and the shield needs to adapt as new safety requirements are discovered during runtime. We propose the dynamic shielding problem, where the actual safety objective to be encountered during deployment is unknown a priori, though it is known that it will belong to a parametric family of safety specifications.

We formalize parametric safety specifications. Suppose Σ = (X, U, W, f) is a control system, and R is a finite set of subsets of X, i.e., R = {G0, . . . , Gl} ⊂ 2^X, where R behaves like a set of parameters and is called the set of atomic safe sets. The parametric safety specification P-Safety(R) on R is the family of all safety specifications generated by the safe sets that are intersections of subsets of R, i.e., P-Safety(R) = {Safety(G) | ∃S ⊆ R . G = ∩_{S′ ∈ S} S′}. Clearly, the size of P-Safety(R) is 2^|R|.

A dynamic shield for R is a function mapping every safety specification Safety(G) ∈ P-Safety(R) to a regular, static shield for Safety(G). We assume that the set R is provided statically during the design of the dynamic shield, while the actual safety specification Safety(G) ∈ P-Safety(R) is chosen dynamically during runtime.

Problem 2 (Dynamic shield synthesis).
Inputs: A control system Σ = (X, U, W, f) and a finite set R ⊂ 2^X of atomic safe sets.
Output: A dynamic shield S*_R such that for every safety specification Safety(G) ∈ P-Safety(R), S*_R(G) is a minimally intervening (static) shield for Σ and Safety(G).

With the help of Theorem 1, Problem 2 boils down to the dynamic safety controller synthesis problem, where dynamic safety controllers are functions that map every safety specification Safety(G) ∈ P-Safety(R) to an n.m.p. safety controller for Safety(G).

Problem 3 (Dynamic safety controller synthesis).
Inputs: A control system Σ = (X, U, W, f) and a finite set R ⊂ 2^X of atomic safe sets.
Output: A dynamic safety controller C*_R such that for every safety specification Safety(G) ∈ P-Safety(R), C*_R(G) is a nonblocking, maximally permissive (static) safety controller for Σ and Safety(G).

Clearly, every solution to Problem 3 can be transformed via Eqn. (1) to obtain a solution to Problem 2. Therefore, in what follows, we shift our focus to solving Problem 3.

3.3 Efficient Dynamic Safety Controller Synthesis

In theory, Problem 3 can be solved using two different brute-force approaches. The first one is a pure offline algorithm, where we iterate over the set of all safety specifications in P-Safety(R), and for each of them compute an n.m.p. (static) safety controller. The second one is a pure online algorithm, where we compute a new n.m.p. (static) safety controller after observing the current safety specification at each time point during runtime. While the pure offline algorithm would be prohibitively expensive, owing to the exponential size of P-Safety(R), the pure online algorithm would be too expensive to deploy during runtime, especially for systems with fast dynamics.

Our dynamic algorithm strikes a balance between offline design and online adaptation, and proves to be significantly more efficient than both brute-force algorithms. Our new algorithm has an offline design phase and an online deployment phase. During the offline design phase, we compute the n.m.p. safety controller C*_G for each atomic safety specification Safety(G) with G ∈ R. These controllers may
be called the atomic safety controllers. During the online deployment phase, at each step the true safe set G = G′ ∩ G′′ ∩ . . . is revealed, where G′, G′′, . . . ∈ R, and the required safety controller for Safety(G) is obtained by dynamically composing the corresponding atomic safety controllers C*_G′, C*_G′′, . . .. The process of composing atomic safety controllers at runtime involves two steps, namely a controller product operation, followed by enforcing nonblockingness. We describe the two steps one by one in the following; an illustration is provided in Figure 1.

Definition 2 (Controller product). Let C and C′ be a pair of state-feedback controllers of a given control system Σ = (X, U, W, f). We define the product of C and C′ as the state-feedback controller C ⊗ C′ such that for every x ∈ Dom(C) ∩ Dom(C′), C ⊗ C′(x) = C(x) ∩ C′(x), and for every x ∉ Dom(C) ∩ Dom(C′), C ⊗ C′(x) is undefined.

Intuitively, the product controller C ⊗ C′ outputs only those control inputs that are safe for both C and C′, and suppresses those that are unsafe for at least one of them. This, however, does not guarantee the nonblockingness of C ⊗ C′ itself, as is illustrated in Figure 1b, where the product controller C*_G ⊗ C*_H blocks at the state e. Luckily, we will apply the product to the atomic safety controllers, which are n.m.p., and it follows that all nonblocking safety controllers for the overall safety specification will be sub-controllers of the product.

Theorem 2. Let Σ = (X, U, W, f) be a given control system, Safety(G) and Safety(H) be safety specifications, and C*_G and C*_H be the nonblocking, maximally permissive (n.m.p.) controllers for Safety(G) and Safety(H), respectively. A nonblocking controller is a safety controller for Safety(G ∩ H) if and only if it is a nonblocking sub-controller of C*_G ⊗ C*_H.

Proof. First, observe that Safety(G ∩ H) = Safety(G) ∩ Safety(H). This follows from the definition of safety specifications.

[If:] Suppose C is a nonblocking sub-controller of C*_G ⊗ C*_H.
Since C is a sub-controller of C*_G ⊗ C*_H, which outputs control inputs that are allowed by both C*_G and C*_H, it follows that C satisfies both Safety(G) and Safety(H), and therefore it satisfies Safety(G ∩ H). (Moreover, C is already assumed to be nonblocking.)

[Only if:] Suppose C is a nonblocking safety controller for Safety(G ∩ H). Therefore, C is a nonblocking safety controller for both Safety(G) and Safety(H) separately. That is, the control inputs selected by C fulfill both Safety(G) and Safety(H) simultaneously, and therefore they are also allowed by C*_G ⊗ C*_H. Therefore, C is a (nonblocking, by assumption) sub-controller of C*_G ⊗ C*_H. ⊓⊔

Theorem 2 dramatically narrows down the search space for the sought n.m.p. safety controller for Safety(G ∩ H). In particular, as the sought controller has to be nonblocking, it is now guaranteed to be a sub-controller of C*_G ⊗ C*_H.

[Figure 1, three panels of a seven-state automaton (states a–g, inputs u1, u2): (a) Dom(C*_G) is in red and Dom(C*_H) is in blue. (b) Product construction: Dom(C*_G ⊗ C*_H). (c) Ensuring nonblockingness: the domain of the largest nonblocking sub-controller of C*_G ⊗ C*_H.]

The maximal permissiveness, on the other hand, will be guaranteed by selecting the
specific nonblocking sub-controller of C*_G ⊗ C*_H that is the super-controller of all other nonblocking sub-controllers of C*_G ⊗ C*_H. We summarize this below.

Fig. 1: Illustration of the two steps involved in the online composition of atomic safety controllers. The automaton represents a finite-state control system with two control inputs u1, u2 and no disturbance inputs. The nodes are the states and the arrows represent the transition function. Suppose there are two safety specifications Safety(G) and Safety(H), where G = {a, b, c, d, e, f} and H = {a, b, c, d, e, g}. The colored regions represent the domains of the respective controllers, and each controller's output at a given state is the set of control inputs for which the next state is in the domain.

Corollary 1. The nonblocking, maximally permissive (n.m.p.) safety controller C*_{G∩H} for Safety(G ∩ H) fulfills: (a) C*_{G∩H} is nonblocking, (b) C*_{G∩H} ⊑ C*_G ⊗ C*_H, and (c) for every nonblocking C with C ⊑ C*_G ⊗ C*_H, C ⊑ C*_{G∩H}.

The nonblocking sub-controller of C*_G ⊗ C*_H fulfilling (b) and (c) in Corollary 1 will be referred to as the largest nonblocking sub-controller of C*_G ⊗ C*_H. The computation of n.m.p. safety controllers and largest nonblocking sub-controllers may or may not be decidable, depending on the nature of the state space (finite or infinite) and the nature of the transition function (linear or nonlinear) of the system [28]. We will present a sound but incomplete abstraction-based synthesis algorithm in the next section. For the moment, assuming largest nonblocking sub-controllers can be algorithmically computed, we summarize the overall dynamic safety controller synthesis algorithm in the following.
Inputs: Control system Σ, set R of atomic safe sets.
Output: Dynamic safety controller C*_R for the parametric safety specification P-Safety(R).
The algorithm:
(A) The offline design phase: Compute the set of atomic safety controllers {C*_{Gi}}_{Gi ∈ R}, where C*_{Gi} is the n.m.p. safety controller for Gi.
(B) The online deployment phase: Let x be the current state and G = G1 ∩ . . . ∩ Gk be the current safe set, where Gi ∈ R for all i ∈ [1; k]. We need to output C*_R(G)(x), which is obtained by composing the atomic safety controllers C*_{G1}, . . . , C*_{Gk} using the following two-step process:
1. Compute the product C := C*_{G1} ⊗ . . . ⊗ C*_{Gk}.
2. Compute the largest nonblocking sub-controller C′ of C, and output C*_R(G)(x) := C′(x).

4 Synthesis Algorithm using Abstraction-Based Control

In Section 3.3, we presented the theoretical steps for synthesizing dynamic safety controllers, which involve computing atomic safety controllers (Step A), the product operation (Step B1), and computing the largest nonblocking sub-controller of a given controller (Step B2). We now present a sound but incomplete algorithm for implementing these steps using grid-based finite abstractions of the given control system. Most of the results in this section are closely related to existing works from the literature, but are adapted to our setting.

4.1 Preliminaries: Abstraction-Based Control (ABC)

Abstraction-based control (ABC) is a collection of controller synthesis algorithms which use systematic grid-based abstractions of continuous control systems and perform synthesis over these abstractions using automata-based approaches. The strengths of ABC algorithms lie in their expressive power, namely, they support almost all widely used control system models alongside rich temporal logic
specifications [16,24]. Besides, ABC algorithms are usually implementable using efficient symbolic data structures, such as BDDs, helping us to devise efficient push-button controller synthesis algorithms in practice.

The typical workflow of an ABC algorithm has three stages, namely abstraction, synthesis, and (controller) refinement. We describe each stage one by one in the following, where we assume that we are given a control system Σ = (X, U, W, f) and a generic specification Φ ⊆ X^∞ as inputs, and the aim is to compute a controller C_Φ : X → 2^U whose domain is as large as possible such that Paths(Σ, C_Φ, x0) ⊆ Φ for all x0 ∈ Dom(C_Φ).

Abstraction. In the abstraction stage, the given control system Σ is approximated by a finite grid-based abstraction. There are many alternative approaches to construct the abstraction, and we use the one based on feedback refinement relations (FRR) [24]. In FRR, the abstraction is modeled as a separate control system Σ̂ = (X̂, Û, f̂) without disturbances, where X̂ and Û are finite sets, and f̂ is a nondeterministic transition function, i.e., has the form X̂ × Û → 2^X̂. The set X̂ is obtained as the collection of the finitely many grid cells created by partitioning the continuous state space X; therefore, every element of X̂ is a subset of X. The set Û is obtained as the collection of finitely many, usually equidistant, points selected from U, i.e., Û is a finite subset of U.

Suppose Q : x ↦ x̂ with x ∈ x̂ is the mapping that maps every continuous state of Σ to the (unique) cell of X̂ it belongs to. We will extend Q to map sets of states and sets of state sequences of Σ to their counterparts for Σ̂ in the obvious manner. We say Q is an FRR from Σ to Σ̂, written Σ ≼_Q Σ̂, if for every x ∈ X, for every û ∈ Û, and for every w ∈ W, there exists x̂′ ∈ f̂(Q(x), û) such that (f(x, û, w), x̂′) ∈ Q. We omit the details of how to construct f̂ such that Σ ≼_Q Σ̂ holds, and refer the reader to the original paper [24].
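To make the grid-cell mapping Q concrete, the following sketch shows one way a uniform quantizer could look in Python; the cell width `eta` and the lower corner `x_lb` are illustrative parameters, not values from our implementation.

```python
import math

# Illustrative uniform quantizer Q: maps a continuous state x in X ⊂ R^n
# to the (index of the) grid cell containing it. Parameters are hypothetical.

def make_quantizer(x_lb, eta):
    def Q(x):
        # One integer index per dimension identifies the cell.
        return tuple(math.floor((xi - lo) / e) for xi, lo, e in zip(x, x_lb, eta))
    return Q

Q = make_quantizer(x_lb=(0.0, 0.0), eta=(0.5, 0.5))
Q((0.2, 0.7))  # -> (0, 1)
Q((0.3, 0.6))  # -> (0, 1): same cell, hence same abstract state
```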
When Σ ≼_Q Σ̂, it is guaranteed that for every controller C : X → Û of the system Σ and for every initial state x0, Q(Paths(Σ, C, x0)) ⊆ Paths(Σ̂, C, Q(x0)). In other words, the paths of the abstraction Σ̂ conservatively over-approximate (with respect to the mapping Q) the paths of the control system Σ under the same controller C and for every sequence of disturbance inputs.

Synthesis. In the synthesis stage, first, the given specification Φ is conservatively abstracted to the specification Φ̂ for Σ̂ such that Φ̂ ⊆ Q(Φ). When Φ is a safety specification Safety(G) for some G ⊆ X, the abstract specification can be chosen as Φ̂ = Safety_Σ̂(Ĝ), where Ĝ := {x̂ ∈ X̂ | x̂ ⊆ G} is the set of cells lying entirely inside G, i.e., the paths of Σ̂ that at all times avoid every cell intersecting the unsafe set X \ G. Next, we treat the abstraction Σ̂ as a two-player, turn-based adversarial game arena, where the controller player chooses a control input û at each state x̂, while the environment player resolves the nondeterminism in f̂(x̂, û). The objective of the controller player is to come up with an abstract controller Ĉ_Φ̂ : X̂ → 2^Û such that no matter what the environment player does, the resulting sequence of states stays within the set Φ̂. In the next subsection, we will describe the algorithm for finding such abstract controllers for safety specifications.

Refinement. The (controller) refinement is the stage where the abstract controller Ĉ_Φ̂ of Σ̂ for Φ̂ is mapped back to a concrete controller C_Φ for the system Σ, which amounts to simply defining C_Φ(x) := Ĉ_Φ̂(Q(x)) for every
x ∈ Dom(C_Φ) = ∪_{x̂ ∈ Dom(Ĉ_Φ̂)} x̂. By virtue of the FRR Q between Σ and Σ̂, it is guaranteed that Paths(Σ, C_Φ, x0) ⊆ Φ for all x0 ∈ Dom(C_Φ); in other words, C_Φ is a sound controller of Σ. It is worthwhile to mention that such a simple refinement stage is one unique strength of FRR, since the other alternatives [26,19] usually require a significantly more involved refinement mechanism.

Remark 2. Even though ABC produces sound controllers, it lacks completeness. This means that sometimes it will not be able to find a controller even if one exists, and sometimes the domain of the computed controller will be strictly smaller than that of the controller with the largest possible domain that exists in reality. This is unavoidable if we are uncompromising with soundness, since temporal logic control of nonlinear control systems is undecidable in general [8]. The side-effect of using ABC to solve Problem 3 is that the maximal permissiveness guarantee can no longer be achieved, though our safety controllers will be maximally permissive with respect to the abstraction. One way to improve the permissiveness would be to reduce the discretization granularity in the abstraction, though this will increase the computational complexity due to the larger abstraction size.

4.2 ABC-Based Dynamic Safety Control

We now present ABC-based algorithms to solve the steps A, B1, and B2 of the dynamic safety controller synthesis algorithm (Section 3.3). For this, we fix the abstraction Σ̂ of the system Σ, assuming Σ ≼_Q Σ̂ for a given FRR Q, and present our algorithms on Σ̂. Using the standard refinement process of ABC, we then obtain a dynamic safety controller for Σ.

Algorithm 1 SafetyControl
Input: Σ̂ = (X̂, Û, f̂), Safety_Σ̂(Ĝi)
Output: Safety controller Ĉ_Ĝi of Σ̂
1: S ← X̂
2: do
3:   S_old ← S
4:   S ← CPre(S) ∩ Ĝi
5: while S ≠ S_old
6: ∀x̂ ∈ S .
Ĉ_Ĝi(x̂) ← {û ∈ Û | f̂(x̂, û) ⊆ S}
7: return Ĉ_Ĝi

Algorithm 2 NBControl
Input: Σ̂ = (X̂, Û, f̂), Ĉ : X̂ → 2^Û
Output: Largest nonblocking sub-controller Ĉ′ of Ĉ
1: X̂′ ← X̂ ∪ {⊥}
2: Define f̂′ : X̂′ × Û → 2^X̂′ as:
   ∀x̂ ∈ X̂ . ∀û ∈ Ĉ(x̂) . f̂′(x̂, û) ← f̂(x̂, û)
   ∀x̂ ∈ X̂ . ∀û ∉ Ĉ(x̂) . f̂′(x̂, û) ← {⊥}
   ∀û ∈ Û . f̂′(⊥, û) ← {⊥}
3: Σ̂′ ← (X̂′, Û, f̂′)
4: return SafetyControl(Σ̂′, Safety_Σ̂′(X̂′ \ {⊥}))

Step A: Computing Atomic Safety Controllers. Let Gi ⊆ X be an atomic safe set of states of Σ. As described above, the abstract atomic safety specification Safety_Σ̂(Ĝi) is the set of paths of Σ̂ that remain within the abstract safe set Ĝi := {x̂ ∈ X̂ | x̂ ⊆ Gi}, i.e., within the grid cells lying entirely inside Gi. The respective n.m.p. abstract controller Ĉ_Ĝi (n.m.p. with respect to Σ̂) can be computed using a standard iterative procedure from the literature [28], presented as the function SafetyControl in Algorithm 1. SafetyControl uses the set S as an over-approximation of the set of states from which the safety specification can be fulfilled (aka the controlled invariant set). Initially, S spans the entire state space X̂ of Σ̂ (Line 1). Afterwards, the over-approximation S is iteratively refined (the do-while loop), as states are discarded from the current S owing to the inability of fulfilling safety from them. This is implemented using the operator CPre : 2^X̂ → 2^X̂ defined as CPre(S) := {x̂ ∈ X̂ | ∃û ∈ Û . f̂(x̂, û) ⊆ S}. When no more refinement of S is possible, we stop the iteration and extract the safety controller Ĉ_Ĝi (Line 6) as the one that
keeps the abstract system inside S. It is guaranteed that Ĉ_Ĝi is an n.m.p. safety controller of Σ̂ for Safety_Σ̂(Ĝi), and that its refinement is a nonblocking safety controller for Safety_Σ(Gi). Unfortunately, maximal permissiveness is not guaranteed with respect to Σ, as explained in Remark 2.

Step B1: Computing the Product. Computing the product involves the straightforward application of Definition 2 to two abstract safety controllers.

Step B2: Computing Largest Nonblocking Sub-Controllers. The largest nonblocking sub-controller is computed using the function NBControl, presented in Algorithm 2. NBControl first modifies Σ̂ into a new system Σ̂′ by keeping those transitions that are allowed by Ĉ and redirecting the rest to a new sink state ⊥ (Line 2). With this modification, any safety controller of Σ̂′ that is nonblocking and avoids the unsafe state ⊥ is by construction a nonblocking sub-controller of Ĉ. If the sub-controller is in addition maximally permissive with respect to the unsafe state ⊥, then it follows that it is the largest nonblocking sub-controller of Ĉ. Therefore, the largest nonblocking sub-controller of Ĉ is obtained by invoking the subroutine SafetyControl with arguments Σ̂′ and Safety_Σ̂′(X̂).

In contrast, the pure online shielding algorithm would run SafetyControl from scratch on Σ̂ at each step, which is significantly slower than executing the steps B1 and B2 described above. This is because in B2 (which dominates B1), each invocation of CPre(·) in SafetyControl is significantly faster on Σ̂′ than on Σ̂ (the pure online case), as the complexity of CPre(·) is linear in the number of transitions of the abstract system, and this number effectively becomes small for Σ̂′, since we can ignore all the transitions that lead to "⊥" (surely unsafe transitions).
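For concreteness, Algorithms 1 and 2 can be prototyped over an explicitly represented finite abstraction as follows; the dict-based encoding of f̂ and the example values are illustrative, whereas our actual implementation is symbolic (Section 4.3).

```python
# Explicit-state sketch of Algorithms 1 (SafetyControl) and 2 (NBControl).
# The abstract transition function f is a dict (state, input) -> set of successors.

def cpre(states, inputs, f, S):
    """CPre(S): states from which some input keeps all successors inside S."""
    return {x for x in states if any(f[(x, u)] <= S for u in inputs)}

def safety_control(states, inputs, f, G):
    """Algorithm 1: n.m.p. safety controller for Safety(G) on the abstraction."""
    S = set(states)
    while True:
        S_new = cpre(states, inputs, f, S) & G
        if S_new == S:
            break
        S = S_new
    return {x: {u for u in inputs if f[(x, u)] <= S} for x in S}

def nb_control(states, inputs, f, C):
    """Algorithm 2: largest nonblocking sub-controller of C via a sink state."""
    bot = "BOT"  # stand-in for the fresh sink state, assumed not a state name
    f2 = {(x, u): (f[(x, u)] if u in C.get(x, set()) else {bot})
          for x in states for u in inputs}
    f2.update({(bot, u): {bot} for u in inputs})
    return safety_control(set(states) | {bot}, inputs, f2, set(states))

# Example: three abstract states, where input u2 leads toward the bad state 2.
f = {(0, 'u1'): {1}, (0, 'u2'): {2}, (1, 'u1'): {1}, (1, 'u2'): {2},
     (2, 'u1'): {2}, (2, 'u2'): {2}}
C = safety_control({0, 1, 2}, {'u1', 'u2'}, f, {0, 1})
# C == {0: {'u1'}, 1: {'u1'}}
```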
The smaller effective number of transitions also contributes to a smaller number of iterations of the while loop in SafetyControl, creating a compounding effect in reducing the overall complexity.

4.3 Symbolic Implementation

Our dynamic safety controller synthesis algorithm is implemented symbolically using BDDs: the states, inputs, and transitions of abstract control systems are modeled as Boolean formulas represented by BDDs, and all the steps of Algorithms 1 and 2, as well as the product operation, are implemented using logical operations over the BDDs together with existential and universal quantifications. The implementation details follow standard procedures used by ABC algorithms from the literature [25,15]. In particular, our tool is built upon the ABC tool Mascot-SDS [15], which supports efficient, parallelized BDD libraries like Sylvan [27]. These implementation details enabled us to create a prototype dynamic shielding tool whose offline computation stage takes a few minutes and, more importantly, whose online computations finish within just a few seconds on average at each step. More details on the experiments are included in Section 6.

Davide Corsi, Kaushik Mallik, Andoni Rodríguez, and César Sánchez

5 Dynamic Shields for Robot Navigation in Unknown Territories

We consider a mobile robot placed in an unknown world filled with static obstacles. The robot is controlled by an unknown AI controller with unknown motives. We want to design a shield whose safety objective is to avoid colliding with the obstacles at all times. We assume that the shield has only a limited observation of the world, and it can only observe obstacles that
are within a certain distance $d$ along each dimension of the X-Y coordinate axes. This creates a visible region that is a square with sides of length $2d$, centered around the current location of the robot at each time step. This is a realistic scenario experienced by many mobile agents, including self-driving cars and exploratory robots.

The dynamic shield assumes that the robot's state space spans only the size of the visible region. At design time, the shield assumes that obstacles can be arranged in all possible ways within this visible region at each step. At runtime, the shield observes the obstacles in the current snapshot of the visible region and performs its online computations to quickly deploy the suitable shield. This shield is deployed just for the current time step. In the next step, the obstacle arrangement may have shifted, because the robot and its visible region have moved, and therefore the shield must dynamically adapt and recompute the safe control inputs. And the process repeats.

Fig. 2: Illustration of dynamic shielding of the robot (blue dot) in an unknown environment. Dynamic shields are computed using ABC, but not over the entire state space, rather over the tiny visible region of the robot. The corners $(-1,-1)$ and $(1,1)$ of the visible region are in the robot's own reference coordinates.

We need to ensure a safe handover between two dynamically adapted shields at consecutive steps, i.e., every action that a shield allows must guarantee that the new state is in the domain of the shield at the next step. We take a conservative approach. We add artificial fences in the outer periphery of the visible region, to model the uncertainty that awaits in the unobservable parts. This is the worst possible scenario for which the shield must be prepared in the next step.
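To make the fence construction concrete, the following sketch marks which cells of a gridded visible region belong to the artificial fence. The function name, the cell-center membership test, and the uniform grid are our own illustrative assumptions, not the paper's implementation:

```python
import itertools

def grid_cells(d, eps, h):
    """Cells of a uniform grid over X = [-d-eps, d+eps]^2 (robot-centric
    coordinates); a cell counts as fence if its center lies outside the
    visible square [-d, d]^2. Assumed helper, for illustration only."""
    lo = -d - eps
    n = int(round(2 * (d + eps) / h))          # cells per dimension
    cells = []
    for i, j in itertools.product(range(n), repeat=2):
        cx = lo + (i + 0.5) * h                # cell-center coordinates
        cy = lo + (j + 0.5) * h
        cells.append(((i, j), abs(cx) > d or abs(cy) > d))
    return cells
```

With $d = 1$, $\epsilon = 0.5$, and cell size $h = 0.5$, the outermost ring of the resulting $6 \times 6$ grid is marked as fence; those fence cells are what every atomic unsafe set then includes.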
As the robot moves one step after the shield has acted, some states that were previously outside of the visible region may become visible, and some of the states that were previously assumed to be unsafe will turn out to actually be safe, thus guaranteeing a safe handover.

We summarize the setting of the dynamic shield synthesis problem, to be also used in Section 6. The state space of the robot spans the visible region extended with artificial fences of a given thickness $\epsilon > 0$, i.e., $X = [-d-\epsilon, d+\epsilon] \times [-d-\epsilon, d+\epsilon]$ with respect to the robot's own reference coordinate frame, in which the robot's initial state is always the origin. We use the grid-based abstraction $\hat{X}$ of $X$, and assume that each element of $\hat{X}$ is an atomic unsafe state set. This is because at runtime, no matter what obstacles are encountered within the visible region, they can be over-approximated as the union of the appropriate subset of $\hat{X}$. In addition, the fence is included in each atomic unsafe set. Therefore, the set of atomic safe sets is $\{X \setminus (\{\hat{x}\} \cup \mathit{fence}) \mid \hat{x} \in \hat{X}\}$, where $\mathit{fence} = X \setminus [-d,d] \times [-d,d]$.

6 Experiments

The Experimental Setup. The dynamics of the mobile robot is modeled using the discrete-time
Dubins vehicle model. The system has three state variables $x$, $y$, and $\theta$, where $x$ and $y$ represent the location in the X-Y coordinates and $\theta$ represents the heading angle in radians (measured counter-clockwise from the positive X axis); two control input variables $v$ and $a$, representing the forward velocity and the angular velocity of the steering; and three disturbance variables $w_1, w_2, w_3$, which affect the dynamics in the three individual states. The transitions are:

$x' = x + (v\cos\theta)\tau + w_1$, $\quad y' = y + (v\sin\theta)\tau + w_2$, $\quad \theta' = \theta + a\tau + w_3$,

where the primed variables on the left side represent the states at the next time step, and $\tau$ represents the sampling time. We use the following spaces for the states, control inputs, and disturbance inputs: $x \in [-1,1]$, $y \in [-1,1]$, $\theta \in [-\pi,\pi]$, $v \in \{-0.4, -0.2, \ldots, 0.2, 0.4\}$, $a \in \{-4, -3.5, \ldots, 3.5, 4\}$, $w_1 \in [-0.01, 0.01]$, $w_2 \in [-0.01, 0.01]$, and $w_3 \in [-0.02, 0.02]$. Furthermore, we fix $\tau = 0.1$ s and the thickness of the fence to $\epsilon = 0.3$. The underlying AI controller is generated using reinforcement learning (RL) with reach-avoid objectives. In our experiments, the RL controller is made aware of the entire map, even though the shield's visibility range is limited to a tiny region spanning $[-1,1] \times [-1,1]$ in its own reference frame.

Performance Evaluations. We report the offline and online computation times of our dynamic shields for three different levels of abstraction coarseness used in the ABC algorithms. The abstraction coarseness is measured as the (uniform) grid size used for discretizing the state and input spaces, which are respectively 3- and 2-dimensional vectors representing the dimension-wise side lengths of the square-shaped grid elements.

Table 1: Computation times of the offline phase (atomic shield synthesis) of dynamic shields.

  Grid size ($\hat{X}$)    Grid size ($\hat{U}$)   Abstraction time   Synthesis time
  [0.10, 0.10, 0.30]       [0.2, 0.5]              2m 12s             1m
  [0.08, 0.08, 0.25]       [0.2, 0.5]              4m 40s             2m 35s
  [0.06, 0.06, 0.20]       [0.2, 0.5]              9m 35s             9m 25s
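The transition function above translates directly into code. The following sketch is our own transcription of the disturbed Dubins update, with the disturbance sampled uniformly within its bounds purely for illustration (the paper treats it as an adversarial bounded input):

```python
import math
import random

def dubins_step(x, y, theta, v, a, tau=0.1,
                w_bounds=((-0.01, 0.01), (-0.01, 0.01), (-0.02, 0.02))):
    """One step of the discrete-time Dubins vehicle with disturbance:
    x' = x + (v cos th) tau + w1,  y' = y + (v sin th) tau + w2,
    th' = th + a tau + w3."""
    w1, w2, w3 = (random.uniform(lo, hi) for lo, hi in w_bounds)
    return (x + v * math.cos(theta) * tau + w1,
            y + v * math.sin(theta) * tau + w2,
            theta + a * tau + w3)
```

Setting the disturbance bounds to zero recovers the nominal dynamics, which is convenient for quick sanity checks.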
All synthesized shields are by construction safe and maximally permissive, and therefore these aspects are not reported. The code was run on a personal computer powered by an Intel Core Ultra 7 255U processor and 32 GB RAM.

We report the offline computation times for the three different abstractions in Table 1. As expected, as the abstraction gets finer (smaller grid sizes), the computation time increases. Nonetheless, all computations finished within a reasonable amount of time. In comparison, the pure offline baseline would time out even for the coarsest abstraction: with its X-Y state variables' grid sizes $[0.10, 0.10]$, it would create $20 \times 20 = 400$ grid cells in the domain $[-1,1] \times [-1,1]$, and since we choose the number of atomic safety specifications to be equal to the number of grid cells, we would need to solve $2^{400}$ instances of the safety controller synthesis problem! Although the pure online baseline takes zero time in the offline phase, it takes more time in the online phase, as we discuss next.

For each of the three abstraction classes, we deployed the pure online and dynamic shields alongside a learned controller and tested them on 70 randomly generated reach-avoid control problem instances. In each instance and for each shield, we measured the average computation time per step, and
report them in Figure 3. We observe that the dynamic shields are almost always faster than the pure online shields, and as the abstraction gets finer, the difference becomes more prominent. With the finest abstraction, the dynamic shield was up to five times faster! Furthermore, any efficiency improvement of the pure online shield would benefit the dynamic shield too, because both rely on the SafetyControl algorithm for their online computation phase.

Fig. 3: Average online computation times of the pure online algorithm (the baseline) and our dynamic algorithm. Each point in the scatter plots represents one randomly generated problem instance for the navigation task. The three plots correspond to three different abstraction granularities used in the ABC procedure, with the leftmost plot representing the coarsest (Row 1, Table 1) and the rightmost plot representing the finest (Row 3, Table 1) abstraction sizes.

7 Discussions and Future Work

We propose dynamic shields that can adapt to changing safety specifications at runtime. While the problem can be solved by brute force using pure offline or pure online approaches, our dynamic approach, which combines offline and online computations, is significantly more efficient. We presented concrete algorithms using the abstraction-based control framework, and demonstrated the effectiveness of dynamic shields on a robot motion planning problem.

Several future directions exist. Firstly, in our work, we use the atomic safe sets as they are given, and we will investigate whether these sets can be preprocessed in a way that benefits the online deployment phase of shield computation (e.g., by simplifying the nonblockingness procedure).
Secondly, in our simulations, the system would sometimes get stuck and fail to make progress. This is a known issue in shielding, and we will study how to eliminate it by taking inspiration from other works dealing with similar problems [12]. Thirdly, we proposed a conservative but simple approach to the safe handover problem, and more advanced procedures [17] will be incorporated in subsequent versions. Finally, extending to richer settings, like dynamic obstacles and quantitative safety specifications, would be a major step.

Acknowledgments. This work was funded in part by the DECO Project (PID2022-138072OB-I00) funded by MCIN/AEI/10.13039/501100011033 and by the ESF+.

References

1. Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., Topcu, U.: Safe reinforcement learning via shielding. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 32 (2018)
2. Anand, A., Nayak, S.P., Schmuck, A.K.: Synthesizing permissive winning strategy templates for parity games. In: International Conference on Computer Aided Verification. pp. 436–458. Springer (2023)
3. Bharadwaj, S., Bloem, R., Dimitrova, R., Konighofer, B., Topcu, U.: Synthesis of minimum-cost shields for multi-agent systems. In: 2019 American Control Conference (ACC). pp. 1048–1055. IEEE (2019)
4. Bloem, R., Könighofer, B., Könighofer, R., Wang, C.: Shield synthesis: Runtime
enforcement for reactive systems. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems. pp. 533–548. Springer (2015)
5. ElSayed-Aly, I., Bharadwaj, S., Amato, C., Ehlers, R., Topcu, U., Feng, L.: Safe multi-agent reinforcement learning via shielding. arXiv preprint arXiv:2101.11196 (2021)
6. Feng, Y., Zhu, J., Platzer, A., Laurent, J.: Adaptive shielding via parametric safety proofs. Proceedings of the ACM on Programming Languages 9(OOPSLA1), 816–843 (2025)
7. Fridovich-Keil, D., Herbert, S.L., Fisac, J.F., Deglurkar, S., Tomlin, C.J.: Planning, fast and slow: A framework for adaptive real-time safe trajectory planning. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). pp. 387–394. IEEE (2018)
8. Henzinger, T.A., Raskin, J.F.: Robust undecidability of timed and hybrid systems. In: International Workshop on Hybrid Systems: Computation and Control. pp. 145–159. Springer (2000)
9. Hu, H., Nakamura, K., Fisac, J.F.: SHARP: Shielding-aware robust planning for safe and efficient human-robot interaction. IEEE Robotics and Automation Letters 7(2), 5591–5598 (2022)
10. Jansen, N., Könighofer, B., Junges, S., Bloem, R.: Shielded decision-making in MDPs. arXiv preprint arXiv:1807.06096 (2018)
11. Könighofer, B., Alshiekh, M., Bloem, R., Humphrey, L., Könighofer, R., Topcu, U., Wang, C.: Shield synthesis. Formal Methods in System Design 51, 332–361 (2017)
12. Könighofer, B., Rudolf, J., Palmisano, A., Tappler, M., Bloem, R.: Online shielding for reinforcement learning. Innovations in Systems and Software Engineering 19(4), 379–394 (2023)
13. Li, S., Bastani, O.: Robust model predictive shielding for safe reinforcement learning with stochastic dynamics. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). pp. 7166–7172. IEEE (2020)
14. Majumdar, A., Tedrake, R.: Funnel libraries for real-time robust feedback motion planning.
The International Journal of Robotics Research 36(8), 947–982 (2017)
15. Majumdar, R., Mallik, K., Rychlicki, M., Schmuck, A.K., Soudjani, S.: A flexible toolchain for symbolic Rabin games under fair and stochastic uncertainties. In: International Conference on Computer Aided Verification. pp. 3–15. Springer (2023)
16. Majumdar, R., Mallik, K., Schmuck, A.K., Soudjani, S.: Symbolic control for stochastic systems via finite parity games. Nonlinear Analysis: Hybrid Systems 51, 101430 (2024)
17. Nayak, S.P., Egidio, L.N., Della Rossa, M., Schmuck, A.K., Jungers, R.M.: Context-triggered abstraction-based control design. IEEE Open Journal of Control Systems 2, 277–296 (2023)
18. Nilsson, P., Ozay, N., Liu, J.: Augmented finite transition systems as abstractions for control synthesis. Discrete Event Dynamic Systems 27(2), 301–340 (2017)
19. Pola, G., Tabuada, P.: Symbolic models for nonlinear control systems: Alternating approximate bisimulations. SIAM Journal on Control and Optimization 48(2), 719–733 (2009)
20. Pranger, S., Könighofer, B., Posch, L., Bloem, R.: Tempest: Synthesis tool for reactive systems and shields in probabilistic environments. In: Automated Technology for Verification and Analysis: 19th International Symposium, ATVA 2021, Gold Coast, QLD, Australia, October 18–22, 2021, Proceedings 19. pp. 222–228. Springer (2021)
21. Pranger, S., Könighofer, B., Tappler, M., Deixelberger, M., Jansen, N., Bloem, R.: Adaptive shielding under uncertainty. In: 2021 American Control Conference (ACC). pp. 3467–3474. IEEE (2021)
22. Quan, L., Zhang, Z., Zhong, X., Xu, C., Gao, F.: EVA-Planner: Environmental adaptive quadrotor planning. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). pp. 398–404. IEEE (2021)
23. Raeesi, H., Khosravi, A., Sarhadi, P.: Safe reinforcement learning by shielding based reachable zonotopes for autonomous vehicles. International Journal of Engineering 38(1), 21–34 (2025)
24.
Reissig, G., Weber, A., Rungger, M.: Feedback refinement relations for the synthesis of symbolic controllers. IEEE Transactions on Automatic Control 62(4), 1781–1796
(2016)
25. Rungger, M., Zamani, M.: SCOTS: A tool for the synthesis of symbolic controllers. In: Proceedings of the 19th International Conference on Hybrid Systems: Computation and Control. pp. 99–104 (2016)
26. Tabuada, P.: An approximate simulation approach to symbolic control. IEEE Transactions on Automatic Control 53(6), 1406–1418 (2008)
27. Van Dijk, T., Van De Pol, J.: Sylvan: Multi-core decision diagrams. In: Tools and Algorithms for the Construction and Analysis of Systems: 21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11–18, 2015, Proceedings 21. pp. 677–691. Springer (2015)
28. Vidal, R., Schaffert, S., Lygeros, J., Sastry, S.: Controlled invariance of discrete time systems. In: International Workshop on Hybrid Systems: Computation and Control. pp. 437–451. Springer (2000)
29. Waga, M., Castellano, E., Pruekprasert, S., Klikovits, S., Takisaka, T., Hasuo, I.: Dynamic shielding for reinforcement learning in black-box environments. In: International Symposium on Automated Technology for Verification and Analysis. pp. 25–41. Springer (2022)
arXiv:2505.22106v1 [cs.SD] 28 May 2025

AudioTurbo: Fast Text-to-Audio Generation with Rectified Diffusion

Junqi Zhao1, Jinzheng Zhao1, Haohe Liu1, Yun Chen1, Lu Han2, Xubo Liu1, Mark Plumbley1, Wenwu Wang1
1Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK
2Laboratory of Noise and Audio Research, Institute of Acoustics, Chinese Academy of Sciences, China
junqi.zhao@surrey.ac.uk, W.Wang@surrey.ac.uk

Abstract

Diffusion models have significantly improved the quality and diversity of audio generation but are hindered by slow inference speed. Rectified flow enhances inference speed by learning straight-line ordinary differential equation (ODE) paths. However, this approach requires training a flow-matching model from scratch and tends to perform suboptimally, or even poorly, at low step counts. To address the limitations of rectified flow while leveraging the advantages of advanced pre-trained diffusion models, this study integrates pre-trained models with the rectified diffusion method to improve the efficiency of text-to-audio (TTA) generation. Specifically, we propose AudioTurbo, which learns first-order ODE paths from deterministic noise-sample pairs generated by a pre-trained TTA model. Experiments on the AudioCaps dataset demonstrate that our model, with only 10 sampling steps, outperforms prior models and reduces inference to 3 steps compared to a flow-matching-based acceleration model.

Index Terms: audio generation, diffusion model, flow matching, rectified diffusion

1. Introduction

Text-to-audio (TTA) generation is a task that aims to generate audio samples based on natural language descriptions [1, 2, 3, 4, 5]. The audio generated by TTA systems is primarily categorized into three types: speech, sound effects, and music. These audio outputs have been widely applied in fields such as voice assistants [6, 7], game development [8], and media creation [9].
In previous research, TTA methods are typically classified into two main categories: language-model-based methods and diffusion-model-based methods. The first category quantizes audio into discrete tokens and uses either autoregressive (AR) or non-autoregressive (NAR) transformer architectures for modeling. AudioGen [1] is a notable AR generative model that leverages learned discrete audio tokens and a transformer decoder to synthesize audio based on text descriptions. UniAudio [2] leverages large language models (LLMs) to generate a wide range of audio types, including speech, music, singing, and sounds, conditioned on diverse input modalities. The second category applies diffusion models to continuous or discrete representations provided by autoencoders to synthesize audio. Diffsound [3], the first diffusion-based TTA system, generates discrete codes by quantizing audio mel-spectrograms using a vector quantized variational autoencoder (VQ-VAE) [10]. Inspired by Stable Diffusion [11], AudioLDM [4] is the first to employ a continuous latent diffusion model (LDM) to enhance the computational efficiency of diffusion models while maintaining their quality and versatility. To further enhance text understanding beyond AudioLDM and facilitate the learning of complex concepts in textual descriptions, Tango [5] proposes using an instruction-tuned LLM (FLAN-T5) as a text encoder instead of the contrastive language-audio pretraining (CLAP) model used by AudioLDM. Recently, Auffusion [12] tailors text-to-image LDM frameworks for the TTA task, effectively harnessing their inherent generative capabilities and accurate cross-modal alignment. Like AudioLDM and Auffusion,
https://arxiv.org/abs/2505.22106v1
other TTA models, including AudioLDM2 [13], Make-an-Audio [14], and Make-an-Audio2 [15], utilize latent variable generation in conjunction with a pre-trained VAE and vocoder for audio synthesis, delivering outstanding generation quality.

Although diffusion models have achieved substantial advancements in audio generation, the iterative sampling process of LDMs imposes high computational costs, resulting in slow inference speeds and restricted real-time applicability. To date, several methods [16, 17, 18] that achieve fast sampling while maintaining satisfactory generation quality have been proposed in the field of text-to-speech (TTS). However, in the broader domain of general sound generation (TTA), such acceleration methods are still relatively limited. Recently, Guan et al. introduced LAFMA [19], a model that incorporates flow matching [20] into the audio latent space to enable text-guided audio generation. LAFMA is capable of producing high-quality audio samples by utilizing a numerical ordinary differential equation (ODE) solver, effectively reducing the inference steps to around ten. A drawback of LAFMA is its reliance on flow matching, which prevents it from fully utilizing well-performing pre-trained TTA models and potentially achieving superior performance in fewer than 10 steps.

To this end, we propose AudioTurbo, which integrates rectified diffusion into pre-trained TTA models to enable efficient text-guided audio generation. Rectified diffusion [21] is a method that extends the design space of rectified flow to general diffusion models. Similar to rectification [22], rectified diffusion is a progressive retraining approach that transforms the random coupling of noise and real data used in diffusion training into a deterministic coupling of pre-sampled noise and generated data.
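The deterministic-coupling idea can be sketched as a small data-generation loop. In the sketch below, `teacher_sample` is a stand-in for a deterministic ODE-solver rollout of a pre-trained TTA model such as Auffusion; its name and signature are hypothetical, and NumPy arrays stand in for latent tensors:

```python
import numpy as np

def make_pairs(teacher_sample, captions, latent_shape, steps=50, seed=0):
    """Build deterministic (noise, sample) pairs for rectified-diffusion
    retraining: each pre-sampled noise eps is coupled to the sample the
    teacher's deterministic rollout produces from it."""
    rng = np.random.default_rng(seed)
    pairs = []
    for caption in captions:
        eps = rng.standard_normal(latent_shape)    # pre-sampled noise
        z0 = teacher_sample(eps, caption, steps)   # deterministic generation
        pairs.append((eps, z0, caption))           # fixed coupling, reused later
    return pairs
```

The key contrast with standard diffusion training is that the coupling `(eps, z0)` is fixed once and reused, instead of drawing fresh random noise for each training step.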
Consequently, the model's predictions remain consistent along the ODE trajectory and continue to follow the same trajectory after each inference step.

Our contributions are summarized as follows: 1) We introduce AudioTurbo, a fast TTA method based on rectified diffusion. To our knowledge, this is the first work to introduce rectified diffusion in the field of audio processing, including audio generation. 2) We conduct experiments with Auffusion, a state-of-the-art (SOTA) pre-trained diffusion model for TTA, and further investigate the application of classifier-free guidance [23] to direct AudioTurbo, achieving enhanced audio generation performance. 3) Our experiments demonstrate that AudioTurbo achieves superior performance with significantly fewer sampling steps. Specifically, compared to several advanced baseline models, AudioTurbo generates higher-quality audio and exhibits better text-audio alignment capabilities in just five steps.

Figure 1: An overview of the AudioTurbo architecture. Note that the trainable parameters are initialized using the pre-trained TTA model, Auffusion.

2. Rectified Diffusion

Diffusion models operate by gradually adding noise to transform a complex data distribution into a known prior distribution and then learning to reverse the process by progressively denoising the prior to reconstruct the original data distribution. The forward diffusion process can be modeled by a continuous-time stochastic differential equation (SDE) [24]:

$dx_t = f(x_t, t)\,dt + g(t)\,dw.$ (1)

Here,
$t$ is a continuous time variable with $t \in [0, T]$. The functions $f(\cdot)$ and $g(\cdot)$ denote the drift coefficient and diffusion coefficient, respectively, while $w$ represents Brownian motion. The reverse process has an equivalent deterministic process whose trajectories share the same marginal probability densities as those of the corresponding SDE. This deterministic process is governed by an ODE, referred to as the probability flow ODE [24]:

$dx_t = \left[ f(x_t, t) - \tfrac{1}{2} g(t)^2 \nabla_{x_t} \log p_t(x_t) \right] dt,$ (2)

where $\nabla_{x_t} \log p_t(x_t)$ is the score function. Numerical simulation of ODEs is generally simpler and faster compared to SDEs, making ODEs the preferred choice in most cases.

By using a noise prediction model $\epsilon_\theta(x_t, t)$ to approximate the score function, Song et al. [24] defined the following diffusion ODE:

$dx_t = \left[ f(x_t, t) + \frac{g(t)^2}{2 \sigma_t} \epsilon_\theta(x_t, t) \right] dt,$ (3)

where $x_T \sim \mathcal{N}(0, \sigma^2 I)$. Moreover, Lu et al. [25] provided an exact solution for the diffusion ODE:

$x_t = \frac{\alpha_t}{\alpha_s} x_s - \alpha_t \int_{\lambda_s}^{\lambda_t} e^{-\lambda}\, \epsilon_\theta\big(x_{t_\lambda(\lambda)}, t_\lambda(\lambda)\big)\, d\lambda.$ (4)

The function $\lambda_t = \log \frac{\alpha_t}{\sigma_t}$, where $\lambda_t = \lambda(t)$, is strictly decreasing with respect to $t$; therefore, it has an inverse function $t_\lambda$. Starting from $x_s$ at time $s$, Equation 4 shows that solving for the value at time $t$ is equivalent to approximating the exponentially weighted integral of $\epsilon_\theta$ over the range from $\lambda_s$ to $\lambda_t$. In first-order form, Equation 4 is equivalent to the following equation:

$x_t = \frac{\alpha_t}{\alpha_s} x_s - \alpha_t\, \epsilon_\theta(x_s, s) \left( \frac{\sigma_s}{\alpha_s} - \frac{\sigma_t}{\alpha_t} \right)$ (5)

if and only if $\epsilon_\theta(x_t, t)$ remains constant along the same ODE trajectory [21]. By substituting $s = 0$, $\alpha_0 = 1$, $\sigma_0 = 0$, and $\epsilon_\theta(x_s, s) = \epsilon$ into Equation 5, we obtain:

$x_t = \alpha_t x_0 + \sigma_t \epsilon.$ (6)

This follows the same general form as the forward diffusion process, but here, since the epsilon prediction is constant, the noisy data becomes a weighted interpolation of noise and samples that constitute a deterministic pair. This differs from standard diffusion model training, where noise and samples are paired randomly.
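The claim behind Equation 5 can be checked numerically: under a constant noise prediction, the first-order update lands on the same endpoint regardless of how many steps are taken. Below is a small sketch with a toy noise schedule of our own choosing (not the paper's):

```python
import numpy as np

def first_order_step(x_s, eps, a_s, s_s, a_t, s_t):
    # Equation 5 with a constant noise prediction eps
    return (a_t / a_s) * x_s - a_t * eps * (s_s / a_s - s_t / a_t)

# Toy schedule, assumed for illustration: alpha_t = cos(pi t/2), sigma_t = sin(pi t/2)
alpha = lambda t: np.cos(np.pi * t / 2)
sigma = lambda t: np.sin(np.pi * t / 2)

rng = np.random.default_rng(0)
x0, eps = rng.standard_normal(4), rng.standard_normal(4)
s = 0.9
x_s = alpha(s) * x0 + sigma(s) * eps              # noisy point on the trajectory

# One big step from t = 0.9 to t = 0
one = first_order_step(x_s, eps, alpha(s), sigma(s), alpha(0.0), sigma(0.0))

# Ten small steps along the same trajectory
x, ts = x_s, np.linspace(s, 0.0, 11)
for u, t in zip(ts[:-1], ts[1:]):
    x = first_order_step(x, eps, alpha(u), sigma(u), alpha(t), sigma(t))

assert np.allclose(one, x) and np.allclose(one, x0)
```

Both integrations recover the clean sample $x_0$ exactly, which is the step-count invariance that rectified diffusion tries to enforce by training $\epsilon_\theta$ toward constancy along each trajectory.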
If perfect coupling of deterministic noise-sample pairs is achieved during training, the model can make consistent predictions along a single trajectory. Theoretically, this enables the same generation results to be obtained with any number of inference steps.

3. Proposed Method

We propose AudioTurbo, a generative model based on rectified diffusion for efficient TTA generation. AudioTurbo consists of four key components: 1) a text encoder for generating text embeddings; 2) an LDM based on rectified diffusion for predicting audio latent representations; 3) a VAE decoder that reconstructs the mel-spectrograms; and 4) a vocoder for generating audio samples. The model architecture and the mechanism of rectified diffusion are shown in Figure 1.

3.1. Text Encoder

Unlike AudioLDM [4], which utilizes a CLAP [26] encoder, or Tango [5], which employs a FLAN-T5 model to encode text descriptions into text embeddings, our experiments use a Contrastive Language-Image Pretraining (CLIP) [27] encoder, inspired by the experimental results from Auffusion [12], where CLIP achieved a higher Inception Score than CLAP or FLAN-T5 in TTA.

3.2. Rectified Diffusion-based LDM

The goal of the LDM is to reconstruct the audio representation $z_0$ in the latent space, conditioned on the text embedding $\tau$ provided by a pre-trained text encoder. This is accomplished by reversing a forward
diffusion process that progressively transforms the clean data distribution into a standard normal distribution, following a predefined noise schedule:

$q(z_t \mid z_0) = \mathcal{N}(z_t; \alpha_t z_0, \sigma_t I).$ (7)

This forward process demonstrates that any latent variable $z_t$ can be directly sampled from $z_0$, and it represents a weighted interpolation of clean data and Gaussian noise. The generative (reverse) diffusion process can be conditioned on text embeddings by employing a deep neural network (DNN), $\epsilon_\theta(z_t, t, \tau)$, to predict noise, thereby reconstructing $z_0$. The loss function for the optimization process is given as follows:

$\mathcal{L}(\theta) = \|\epsilon - \epsilon_\theta(z_t, t, \tau)\|^2,$ (8)

where $\epsilon \sim \mathcal{N}(0, I)$ is Gaussian noise, and the noise prediction network $\epsilon_\theta$ is modeled using a UNet architecture [28], incorporating a cross-attention mechanism to integrate the text embedding $\tau$. Unlike the standard diffusion model training process, we first sample from a standard normal distribution and then use a SOTA pre-trained TTA model, i.e., Auffusion [12], to generate the audio representation $z_0$ from $\epsilon$. The generated data and the corresponding noise are subsequently paired to retrain the LDM.

To simplify the sampling process, we can employ numerical ODE solvers such as DDIM [29] or PNDM [30] to obtain the latent representation of audio samples. Subsequently, the VAE decoder reconstructs the mel-spectrogram, which is then fed into a pre-trained vocoder to generate the audio waveform.

3.3. Classifier-Free Guidance

Classifier-free guidance [23] is a useful technique for conditional generation that balances control strength and sample diversity. During training, we randomly drop a fixed proportion of the text embeddings $\tau$, such as 10%, to train both the unconditional diffusion model $\epsilon_\theta(z_t, t)$ and the conditional diffusion model $\epsilon_\theta(z_t, t, \tau)$. During sampling, we use a modified noise prediction:

$\hat{\epsilon}_\theta(z_t, t, \tau) = (1 - w) \cdot \epsilon_\theta(z_t, t) + w \cdot \epsilon_\theta(z_t, t, \tau)$ (9)

to guide the reverse process to produce audio samples.
Here, $w$ represents the guidance scale, which determines the extent to which the text input influences the noise prediction compared to the unconditional prediction.

4. Experiments

4.1. Experimental Setup

4.1.1. Dataset

Following previous works [5, 19], we conduct our experiments using the AudioCaps (AC) [31] dataset. It is a publicly available dataset that contains high-quality human-annotated captions. We first use Auffusion along with AudioCaps captions to generate 47,744 ten-second audio samples for training. For evaluation, we use the test subset of AudioCaps, which is a widely recognized in-the-wild standard audio benchmark for TTA. Our test set consists of 928 samples.

4.1.2. Model and Training Details

We train only the parameters of the LDM while freezing all other modules in Auffusion. The original Auffusion model is built on Stable Diffusion v1.5, including its VAE and UNet. We fine-tune the UNet using our generated paired data $\{(\epsilon, z_0)\}$ along with the corresponding text descriptions. Our proposed model is trained for 20,000 iterations with a batch size of 128 using the AdamW optimizer and a fixed learning rate of $5 \times 10^{-6}$.

4.1.3. Baseline Models

In our paper, we compare AudioTurbo with several existing TTA models, including diffusion-based models such as AudioLDM2 [13], Tango [5], and Auffusion [12], as well as the flow-matching-based model LAFMA [19]. Among
these, LAFMA is the SOTA accelerated model in the TTA domain and serves as our primary comparison target.

4.1.4. Evaluation Metrics

In this study, we evaluate our proposed TTA system utilizing both objective metrics and subjective measures to assess the fidelity and diversity of the generated audio clips.

For objective evaluation, we follow previous evaluation methods [4, 5], employing metrics such as Fréchet Distance (FD), Kullback-Leibler (KL) divergence, Inception Score (IS), and CLAP score. Similar to the Fréchet Inception Distance used in image synthesis, the FD score measures the distribution-level similarity between the embeddings of the generated audio clips and those of the target clips, without requiring paired reference audio samples. KL divergence is computed at the paired-sample level based on the label distribution. IS is an effective metric for evaluating the quality and diversity of generated samples. All three of these metrics are based on the SOTA audio classifier PANNs [32]. In addition, we utilize a pre-trained CLAP model to measure the similarity between the generated audio and the text prompt, providing an evaluation of text-audio alignment. The evaluation package from this project is used to compute all these metrics.

Following AudioLDM [4] and Tango [5], we conduct a subjective evaluation by asking six evaluators to assess two aspects of 50 randomly selected audio samples: overall quality (OVL) and relevance to the input text (REL). Each aspect is rated on a scale from 1 to 100.

4.2. Results and Analysis

4.2.1. Evaluation Setup and Main Results

We compare our proposed model, AudioTurbo, trained on a single generated AudioCaps dataset using the pre-trained TTA model Auffusion, with other baselines on the AudioCaps test set. Our main comparative results are presented in Table 1. As the baseline systems typically achieve better performance with more inference steps, we set the inference steps to a standard value of 200.
However, the results of our proposed AudioTurbo are achieved using step counts of 5 and 10.

^1 https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5
^2 https://huggingface.co/auffusion/auffusion
^3 https://github.com/gwh22/LAFMA
^4 https://github.com/haoheliu/audioldm_eval/

Table 1: The comparison between our AudioTurbo and baseline TTA models on the AudioCaps dataset.

Model         Params  Steps  Datasets     Text  FD↓    KL↓   IS↑   CLAP(%)↑  OVL↑   REL↑
Ground Truth  −       −      −            −     −      −     −     −         91.30  92.03
Tango         866M    200    AC           ✓     21.97  1.52  7.37  23.6      80.87  82.30
AudioLDM2     1.1B    200    AC+4 others  ✗     36.95  1.82  6.87  22.3      79.63  75.97
LAFMA         272M    200    AC           ✓     29.59  1.56  7.29  22.9      78.00  78.73
AudioTurbo    1.1B    5      AC captions  ✓     22.18  1.30  8.88  29.2      −      −
AudioTurbo    1.1B    10     AC captions  ✓     20.65  1.29  9.40  29.8      82.25  85.58

As shown in the table, our AudioTurbo achieves the best generation results in both objective and subjective metrics with just 10 inference steps, whereas other TTA baselines use 200 steps. This demonstrates that, compared to other baseline models, AudioTurbo achieves better audio quality with fewer inference steps, highlighting the enhanced performance of our proposed method in both audio generation capability and
inference efficiency. In addition, the proposed AudioTurbo achieves a CLAP score of 29.8 and a REL of 85.58 with 10 sampling steps, outperforming other baselines by a large margin. This indicates that our proposed model can generate audio that is more relevant to the given textual description, demonstrating superior audio-text alignment capability.

Additionally, our model maintains strong performance even with just 5 inference steps, without a significant decline. Overall, it still outperforms other models, except for the FD metric, where it scores 22.18, slightly behind Tango's 21.97.

Moreover, the proposed model is trained exclusively on AudioCaps text annotations and the corresponding audio clips generated by the pre-trained teacher model, Auffusion. Integrating the rectified diffusion method for TTA does not require training the model from scratch and can leverage SOTA pre-trained models to directly generate audio-text pairs, moderately reducing training costs.

4.2.2. Ablation Study on CFG Scale

Table 2: Impact on objective evaluation metrics with varying levels of classifier-free guidance.

Model       Steps  Guidance  FD↓    KL↓   IS↑   CLAP(%)↑
AudioTurbo  25     1.0       23.20  1.35  8.95  28.9
                   1.5       21.69  1.31  9.74  29.6
                   2.5       21.98  1.36  9.86  28.8
                   5.0       26.18  1.51  9.16  25.1
                   7.5       32.49  1.72  7.56  20.4

Since the classifier-free guidance scale plays an important role in generation quality and sampling diversity [4, 33], we investigated the impact of the CFG scale on AudioTurbo with the inference step count fixed at 25. The results are presented in Table 2. The first row corresponds to a guidance scale of 1.0, meaning that classifier-free guidance is not applied during inference; this configuration yields unremarkable results. We achieve the best performance across nearly all metrics for AudioTurbo with a guidance scale of 1.5, while the metrics deteriorate as the guidance scale increases further.
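Classifier-free guidance combines a conditional and an unconditional model prediction at every sampling step; the scale in the ablation above is the extrapolation weight. A schematic sketch of the standard combination rule (not AudioTurbo's actual code):

```python
import numpy as np


def cfg_combine(pred_uncond: np.ndarray, pred_cond: np.ndarray,
                scale: float) -> np.ndarray:
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the text-conditional one.

    scale == 1.0 reproduces the plain conditional prediction, i.e. no
    extra guidance is applied; larger scales push harder toward the
    text condition, trading diversity for prompt adherence.
    """
    return pred_uncond + scale * (pred_cond - pred_uncond)
```

This makes the ablation's first row concrete: a guidance scale of 1.0 collapses to the conditional prediction alone, while 1.5 applies a mild push that, per Table 2, works best here.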
Adjusting the guidance scale carefully allows us to improve the overall effectiveness of our approach. Therefore, in the other experiments of this paper, we fix the guidance scale of AudioTurbo at 1.5.

4.2.3. Comparative Study of TTA Models with Varying Inference Steps

Table 3: Objective metric values for LAFMA, Auffusion, and AudioTurbo models at different inference steps.

                         Inference Steps
Metric     Model        3      5      10     25     200
FD↓        LAFMA        91.53  53.46  31.26  27.37  29.59
           Auffusion    55.66  36.53  25.17  21.21  21.31
           AudioTurbo   29.96  22.18  20.65  21.69  21.75
KL↓        LAFMA        3.30   2.13   1.61   1.58   1.56
           Auffusion    2.75   1.98   1.65   1.27   1.30
           AudioTurbo   1.52   1.30   1.29   1.31   1.32
CLAP(%)↑   LAFMA        −1.8   9.7    19.8   22.9   22.9
           Auffusion    8.4    15.5   21.6   30.2   30.5
           AudioTurbo   25.4   29.2   29.8   29.6   29.6

We report the results of a comparative study on TTA models with varying inference steps in Table 3. It can be observed that for each model, increasing the number of sampling steps improves sample quality. However, once a sufficient number of steps is reached, the improvement from adding more sampling steps gradually saturates.
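The step counts compared in Table 3 correspond to how coarsely the learned sampling ODE is integrated: rectified approaches straighten the trajectories so that a few coarse Euler steps already land near the target. A toy illustration with a hypothetical velocity field (not the actual model):

```python
def euler_sample(velocity, x0: float, num_steps: int) -> float:
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with fixed-step
    Euler, the simplest few-step sampler: fewer steps means faster but
    coarser integration of the trajectory."""
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        x += dt * velocity(x, i * dt)
    return x
```

With a curved trajectory (e.g. velocity(x, t) = -x, whose exact endpoint from x0 = 1 is e^(-1)), the 3-step result is visibly off while 100 steps are nearly exact; the straighter the learned trajectory, the less this gap matters, which is the intuition behind the saturation seen above.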
With the same number of inference steps, our model achieves the best or near-best performance, with its advantage being especially prominent in the low-step regime. Furthermore, compared to the flow-matching-based TTA acceleration model (LAFMA), our model achieves comparable performance in just three steps, matching LAFMA's optimal performance, which is achieved at 25 steps, while significantly improving inference efficiency.

5. Conclusion

We introduced AudioTurbo, a rectified diffusion-based model for TTA, designed to leverage the strengths of SOTA pre-trained TTA models while enhancing inference efficiency. Experiments show that by combining SOTA TTA models with rectified diffusion, AudioTurbo surpasses baseline models in both objective and subjective evaluations. Moreover, it achieves comparable generation quality with only 3 steps compared to a flow-matching-based acceleration model with 25 steps, effectively reducing computational overhead. In future work, we will integrate distillation techniques to achieve one-step generation and extend this acceleration method to other tasks, such as target sound extraction [34, 35].

6. References

[1] F. Kreuk, G. Synnaeve, A. Polyak, U. Singer, A. Défossez, J. Copet, D. Parikh, Y. Taigman, and Y. Adi, “AudioGen: Textually guided audio generation,” in The Eleventh International Conference on Learning Representations.
[2] D. Yang, J. Tian, X. Tan, R. Huang, S. Liu, X. Chang, J. Shi, S. Zhao, J. Bian, X. Wu et al., “UniAudio: An audio foundation model toward universal audio generation,” arXiv preprint arXiv:2310.00704, 2023.
[3] D. Yang, J. Yu, H. Wang, W. Wang, C. Weng, Y. Zou, and D. Yu, “Diffsound: Discrete diffusion model for text-to-sound generation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 1720–1733, 2023.
[4] H. Liu, Z. Chen, Y. Yuan, X. Mei, X. Liu, D. Mandic, W. Wang, and M. D.
Plumbley, “AudioLDM: Text-to-audio generation with latent diffusion models,” in Proceedings of the 40th International Conference on Machine Learning, 2023, pp. 21450–21474.
[5] D. Ghosal, N. Majumder, A. Mehrish, and S. Poria, “Text-to-audio generation using instruction guided latent diffusion model,” in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 3590–3598.
[6] S. Janokar, S. Ratnaparkhi, M. Rathi, and A. Rathod, “Text-to-speech and speech-to-text converter—voice assistant,” in Inventive Systems and Control: Proceedings of ICISC 2023. Springer, 2023, pp. 653–664.
[7] M. Božić and M. Horvat, “A survey of deep learning audio generation methods,” arXiv preprint arXiv:2406.00146, 2024.
[8] T. Marrinan, P. Akram, O. Gurmessa, and A. Shishkin, “Leveraging AI to generate audio for user-generated content in video games,” arXiv preprint arXiv:2404.17018, 2024.
[9] A. Agostinelli, T. I. Denk, Z. Borsos, J. Engel, M. Verzetti, A. Caillon, Q. Huang, A. Jansen, A. Roberts, M. Tagliasacchi et al., “MusicLM: Generating music from text,” arXiv preprint arXiv:2301.11325, 2023.
[10] A. Van Den Oord, O. Vinyals et al., “Neural discrete representation learning,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[11] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684–10695.
[12] J. Xue, Y. Deng, Y. Gao, and Y. Li, “Auffusion: Leveraging the power of diffusion and large language models for text-to-audio generation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 32, pp. 4700–4712, 2024.
[13] H. Liu, Y. Yuan, X. Liu, X. Mei, Q. Kong, Q. Tian, Y. Wang, W. Wang, Y. Wang, and M. D. Plumbley, “AudioLDM 2: Learning holistic audio generation with self-supervised pretraining,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.
[14] R. Huang, J. Huang, D. Yang, Y. Ren, L. Liu, M. Li, Z. Ye, J. Liu, X. Yin, and Z. Zhao, “Make-an-Audio: Text-to-audio generation with prompt-enhanced diffusion models,” in International Conference on Machine Learning. PMLR, 2023, pp. 13916–13932.
[15] J. Huang, Y. Ren, R. Huang, D. Yang, Z. Ye, C. Zhang, J. Liu, X. Yin, Z. Ma, and Z. Zhao, “Make-an-Audio 2: Temporal-enhanced text-to-audio generation,” arXiv preprint arXiv:2305.18474, 2023.
[16] R. Huang, Z. Zhao, H. Liu, J. Liu, C. Cui, and Y. Ren, “ProDiff: Progressive fast diffusion model for high-quality text-to-speech,” in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 2595–2605.
[17] S. Mehta, R. Tu, J. Beskow, É. Székely, and G. E. Henter, “Matcha-TTS: A fast TTS architecture with conditional flow matching,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 11341–11345.
[18] Z. Ye, Z. Ju, H. Liu, X. Tan, J. Chen, Y. Lu, P. Sun, J. Pan, W. Bian, S. He et al., “FlashSpeech: Efficient zero-shot speech synthesis,” in Proceedings of the 32nd ACM International Conference on Multimedia, 2024, pp. 6998–7007.
[19] W. Guan, K. Wang, W. Zhou, Y. Wang, F. Deng, H. Wang, L. Li, Q. Hong, and Y. Qin, “LAFMA: A latent flow matching model for text-to-audio generation,” arXiv preprint arXiv:2406.08203, 2024.
[20] Y. Lipman, R.
T. Chen, H. Ben-Hamu, M. Nickel, and M. Le, “Flow matching for generative modeling,” in The Eleventh International Conference on Learning Representations.
[21] F.-Y. Wang, L. Yang, Z. Huang, M. Wang, and H. Li, “Rectified diffusion: Straightness is not your need in rectified flow,” arXiv preprint arXiv:2410.07303, 2024.
[22] X. Liu, C. Gong, and Q. Liu, “Flow straight and fast: Learning to generate and transfer data with rectified flow,” arXiv preprint arXiv:2209.03003, 2022.
[23] J. Ho and T. Salimans, “Classifier-free diffusion guidance,” arXiv preprint arXiv:2207.12598, 2022.
[24] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, “Score-based generative modeling through stochastic differential equations,” in International Conference on Learning Representations, 2021.
[25] C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu, “DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps,” Advances in Neural Information Processing Systems, vol. 35, pp. 5775–5787, 2022.
[26] Y. Wu,
K. Chen, T. Zhang, Y. Hui, T. Berg-Kirkpatrick, and S. Dubnov, “Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
[27] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
[28] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[29] J. Song, C. Meng, and S. Ermon, “Denoising diffusion implicit models,” in International Conference on Learning Representations, 2021.
[30] L. Liu, Y. Ren, Z. Lin, and Z. Zhao, “Pseudo numerical methods for diffusion models on manifolds,” in International Conference on Learning Representations, 2022.
[31] C. D. Kim, B. Kim, H. Lee, and G. Kim, “AudioCaps: Generating captions for audios in the wild,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 119–132.
[32] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley, “PANNs: Large-scale pretrained audio neural networks for audio pattern recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 2880–2894, 2020.
[33] P. Dhariwal and A. Nichol, “Diffusion models beat GANs on image synthesis,” Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794, 2021.
[34] J. Zhao, X. Liu, J. Zhao, Y. Yuan, Q. Kong, M. D. Plumbley, and W.
Wang, “Universal sound separation with self-supervised audio masked autoencoder,” in 32nd European Signal Processing Conference (EUSIPCO). IEEE, 2024, pp. 1–5.
[35] H. Wang, J. Hai, Y.-J. Lu, K. Thakkar, M. Elhilali, and N. Dehak, “SoloAudio: Target sound extraction with language-oriented audio diffusion transformer,” arXiv preprint arXiv:2409.08425, 2024.
arXiv:2505.22108v1 [cs.LG] 28 May 2025

Inclusive, Differentially Private Federated Learning for Clinical Data

Santhosh Parampottupadam^{1,2} [0009-0009-9401-887X], Melih Coşğun^3 [0009-0008-3596-8376], Sarthak Pati^5 [0000-0003-2243-8487], Maximilian Zenk^{1,2} [0000-0002-8933-5995], Saikat Roy^1 [0000-0002-0809-6524], Dimitrios Bounias^{1,2} [0000-0002-3361-1698], Benjamin Hamm^{1,2} [0009-0003-4818-8700], Sinem Sav^3 [0000-0001-9096-8768], Ralf Floca^1 [0000-0003-3218-3377], and Klaus Maier-Hein^{1,2,4} [0000-0002-6626-2463]

1 German Cancer Research Center (DKFZ), Heidelberg, Division of Medical Image Computing, Germany
2 Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
3 Department of Computer Engineering, Bilkent University
4 Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
5 School of Medicine, Indiana University

Abstract. Federated Learning (FL) offers a promising approach for training clinical AI models without centralizing sensitive patient data, yet its real-world adoption is hindered by challenges in privacy, resource constraints, and compliance. Existing differential privacy (DP) approaches often apply uniform noise, which disproportionately degrades model performance even among well-compliant institutions. In this work, we propose a novel compliance-aware FL framework that enhances DP by adaptively adjusting noise based on quantifiable client compliance scores. Additionally, we introduce a compliance scoring tool based on key healthcare and security standards to promote secure, inclusive, and equitable participation across diverse clinical settings. Extensive experiments on public datasets demonstrate that integrating under-resourced, less compliant clinics with highly regulated institutions yields accuracy improvements of up to 15% over traditional FL.
This work advances FL by balancing privacy, compliance, and performance, making it a viable solution for real-world clinical workflows in global healthcare.

Keywords: Compliance-Aware Clinical Federated Learning · Privacy-Preserving FL · Adaptive Compliance · Resource-Efficient DP

1 Introduction

Artificial Intelligence (AI) can advance healthcare through improved diagnostics and personalized treatments, but privacy concerns and regulatory constraints limit its adoption. Federated Learning (FL) [22] enables decentralized model training, preserving data privacy and security while supporting collaborative clinical AI development. Despite its potential, FL in healthcare [30] faces challenges in data security, privacy, and inclusivity. FL systems are vulnerable to reconstruction attacks, where model updates can reveal sensitive information [8,32]. Differential privacy (DP) has been integrated into FL to mitigate these risks, providing theoretical guarantees against data reconstruction and inference attacks [2,10]. However, DP introduces trade-offs by adding noise to model updates, often degrading performance [3]. Traditional DP methods apply noise uniformly across clients [21], overlooking disparities such as compliance and resources [28,23].

Healthcare FL faces significant challenges due to institutional heterogeneity, with DP imposing high computational demands that often require specialized hardware [6]. Clinical sites with lower patient loads struggle to participate due to resource constraints, compliance gaps, and coordination overhead [26,9,19]. Real-world FL studies [25,29] demonstrate feasibility but rely on trust-based federations, marginalizing smaller institutions. Balancing privacy and utility in DP requires clear trade-offs, as any DP implementation impacts model performance.
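The uniform-noise DP baseline discussed here typically clips each client update to a fixed norm and then adds the same Gaussian noise to every client, regardless of its circumstances. A schematic sketch of that Gaussian-mechanism-style sanitization (hypothetical clipping norm and noise scale, not any specific library's API):

```python
import numpy as np


def dp_sanitize(update: np.ndarray, clip_norm: float, noise_multiplier: float,
                rng: np.random.Generator) -> np.ndarray:
    """Gaussian-mechanism-style sanitization of one model update:
    clip to a fixed L2 norm, then add N(0, (noise_multiplier * clip_norm)^2)
    noise per coordinate. Uniform DP applies the same noise_multiplier to
    every client, which is exactly the one-size-fits-all behavior the
    compliance-aware approach replaces."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

With noise_multiplier set to zero the function reduces to pure norm clipping, which makes the privacy-utility trade-off explicit: all degradation beyond clipping comes from the shared noise scale.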
A review of 612 studies found only 5.2% involved real-world clinical applications, highlighting the need for FL frameworks that ensure privacy, inclusivity, and equitable participation
while addressing compliance and computational barriers [19,5,6].

This paper proposes a novel compliance-aware FL framework to enhance privacy in healthcare by dynamically integrating DP with client compliance scores. The framework introduces a customizable compliance scoring tool aligned with key healthcare standards to ensure privacy, security, and interoperability while maintaining inclusivity. It incorporates privacy concepts from various regulatory and best-practice frameworks such as patient consent management [15], anonymization practices [13,17], audit logs & network security [12], data encryption & secure infrastructure [24], ethical AI policies [1], interoperability [16], and data & model training quality. These standards collectively address privacy risks, enforce secure data handling, and promote equitable FL scalability in clinical environments.

To mitigate manipulation risks in untrusted client settings, our framework performs adaptive server-side DP, optimizing noise injection to balance privacy and utility [31]. By adapting noise levels to client compliance scores, it ensures robust performance in resource-constrained healthcare environments. The compliance scoring tool enables investigators to weigh regulatory adherence, data integrity, and security protocols, fostering tailored and trustworthy FL deployments. We evaluated our method on multiple public datasets [33] and aggregation methods [22,20,27], and quantified overall accuracy gains of 1% to 15%.

This manuscript's contributions are: i) a compliance-aware FL framework with adaptive DP, adjusting noise based on client compliance to enhance fairness and inclusivity, ii) a web-based compliance scoring tool aligned with healthcare and security standards to provide quantifiable compliance scores, and iii) implementation of adaptive server-side DP, enabling resource-constrained clinics to participate while balancing privacy and performance.
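The two quantities at the heart of the framework, the weighted compliance score of Equation 1 in Section 2 and the compliance-dependent noise multiplier, reduce to a few lines of arithmetic. A sketch with hypothetical factor weights and option scores:

```python
def compliance_score(factors):
    """Weighted compliance score S_c = sum(w_i * s_i) / sum(w_i) (Eq. 1),
    where each factor is a (weight, selected_option_score) pair."""
    total_weight = sum(w for w, _ in factors)
    return sum(w * s for w, s in factors) / total_weight


def noise_multiplier(score, min_noise=1e-10):
    """Adaptive DP noise: N_m = (1.0 - S_c) + min_noise, so lower
    compliance means more noise; min_noise keeps a privacy floor even
    for fully compliant clients."""
    return (1.0 - score) + min_noise


# Hypothetical client with two factors: (weight, selected option score).
client_factors = [(2.0, 1.0),   # e.g. anonymization: fully anonymized
                  (1.0, 0.5)]   # e.g. data quality: partially validated
```

A fully compliant client (score 1.0) receives only the baseline noise, while a client scoring 0.3 receives a multiplier near 0.7, matching the paper's intent that privacy protection scales inversely with demonstrated compliance.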
2 Methods

Fig. 1. (a) Existing FL with client-side DP uses uniform noise, requiring DP-compliant hardware, limiting less compliant, resource-constrained clinics. (b) Server-side DP adds uniform noise post-aggregation, reducing privacy-utility efficiency and further excluding less compliant clinics. (c) Our compliance-aware adaptive DP applies per-client noise before aggregation, enabling participation from low-resource, less compliant clinics while optimizing privacy and performance.

Compliance Scoring Mechanism. Our compliance scoring tool enables experiment organizers to assign weights to various factors (see Table 2 for an example) and configure corresponding options, offering flexible, customized evaluation for diverse clinical settings. The overall compliance score (S_c) for each client is determined by assessing all the factors and is calculated as follows:

    S_c = (Σ_{i=1}^{n} w_i · s_i) / (Σ_{i=1}^{n} w_i)    (1)

where n is the total number of compliance factors, w_i is the weight assigned to factor i, and s_i is the selected option score for factor i. For instance, the anonymization practices factor offers three options: ISO/TS 25237:2017 Fully Anonymized (score 1.0), Pseudonymized (Partial Anonymization) (score 0.7), and No Anonymization (score 0.5), with the tool defaulting to a 0.5 threshold, adjustable by experiment owners, including setting it to 0 if needed.

Algorithm 1: Adaptive Noise-Based Differential Privacy in Federated Learning
 1: Initialize GLOBAL_MODEL
 2: for round = 1 to FED_ROUNDS do
 3:   Client Training:
 4:   for each client i do
 5:     CLIENT_i ← Copy(GLOBAL_MODEL)
 6:     CLIENT_i ← Train(CLIENT_i, data_i, epochs = 1)
 7:   end for
 8:   Send {CLIENT_i} to aggregator
 9:   DP Processing:
10:   for each client i do
11:     DP_i ← Copy(CLIENT_i)
12:     DP_i ← DPTrain(DP_i, agg_data, η = AdaptiveNoise(c_i))
13:   end for
14:   Aggregation:
15:   GLOBAL_MODEL ← FedAvg({DP_i})   ⊳ Fed Median/Prox/Yogi/Adam
16:   Broadcast GLOBAL_MODEL to clients
17: end for
18: return GLOBAL_MODEL

Noise Multiplier Calculation. To implement DP adaptively, noise levels are dynamically adjusted based on client compliance scores. The noise multiplier (N_m) is computed as: N_m = (1.0 − S_c) + Min Noise Multiplier, where S_c denotes the client's compliance score, and Min Noise Multiplier (set to 1e-10 in this experiment) ensures baseline privacy. This approach ensures that clients with lower compliance scores receive higher noise levels. Noise can be tuned or clipped per FL aggregation strategy, protecting data while preserving model quality and ensuring secure FL participation.

Experimental Setup. Experiments were conducted with a batch size of 32, 50 FL training rounds, a learning rate of 0.001, and images resized to 128×128. Each FL round included 3 local epochs per client, followed by 1 epoch on the aggregator dataset (at the server) using noise-injected client updates before global aggregation. This allows the model to adapt to perturbed updates, improving stability and convergence (see Algorithm 1). A total of 61 experiments (Table 3) were performed, including an additional data quality experiment. The dataset was split into 16 client subsets, with one for aggregator training with DP and another for global evaluation. Vanilla FL used the same FL rounds and learning rate but excluded DP and compliance.

Data Quality Experiment. To simulate a realistic scenario and assess the "data quality" compliance factor, we degraded data for 12 clients by randomly cropping, resizing (80–100% of the original size), adding Gaussian noise (σ = 0.05), and reducing contrast to 80%.
These clients received a compliance score of 0.3, while 4 trusted clients retained a score of 1.0. Compared to Experiment 4 (only 4 trusted clients), this setup showed that incorporating lower-quality data, despite its lower compliance score, can still enhance overall model performance.

Table 1. Client participation per experiment, compliant/non-compliant clients, and DP settings. Non-compliant clients have compliance levels between 0.1 and 0.6. Experiment 1 includes 12 non-compliant clients, split into two groups of 6, each with compliance levels between 0.1 and 0.6. Experiment 2 has 6 non-compliant clients with the same compliance range. Exp. 1–4: individual compliance-based DP. Exp. 6: DP with uniform noise post-aggregation. Baseline noise is 1e−10.

Client Type            Exp. 1      Exp. 2     Exp. 3  Exp. 4  Exp. 5        Exp. 6
Compliant Clients      4           10         16      4       16 (Vanilla)  16
Non-Compliant Clients  12 clients  6 clients  None    None    None          None
Compliance Applied?    Yes         Yes        Yes     No      No            Yes
Minimum DP Applied?    Yes         Yes        Yes     Yes     No            Uniform

Implementation Details. The framework was implemented using Lightning [11], Flower [4], and ResNet-18 [14], and tested on an NVIDIA Tesla T4 GPU (16 GB), demonstrating its feasibility in resource-constrained clinical settings. Compliance scores for each client were pre-assigned using a customizable web-based compliance scoring tool, simulating the role of a Principal Investigator (PI) (Table 2). This tool, grounded in established healthcare and security standards, evaluated clients on 12 compliance factors with predefined options and weights (Equation 1). These scores determined the level of noise dynamically added to client contributions, ensuring baseline privacy with a minimum noise threshold applied across all clients. FL training began with the global model distributed to clients, who performed 3 epochs of training without DP on local datasets. The client contributions were then sent to the server, where noise proportional to compliance scores was added to each contribution. Before global aggregation, the server trained for one epoch on the noise-adjusted data using the aggregator dataset with DP [9]. The final aggregated model weights were computed using the selected FL strategy and redistributed to all clients. This iterative process was repeated for 50 FL training rounds, ensuring adaptive DP noise, robust aggregation, and inclusivity across clients with varying compliance levels. DP was integrated using Opacus [34], with the minimum noise level tested (1e−10). Noise distribution followed the compliance score distribution, where high-compliance clients received minimal noise to preserve model performance, while low-compliance clients had higher noise applied to maintain privacy.

3 Results

Table 1 summarizes six experimental configurations on two datasets, PneumoniaMNIST and BreastMNIST, using various FL strategies. In these experiments, compliance-aware DP was compared against Vanilla FL across 50 experimental settings (see Table 3), with different combinations of compliant and non-compliant client groups. For both datasets (PneumoniaMNIST and BreastMNIST), FedYogi achieved the highest accuracy in Experiment 1 (86.62% and 75.50%, respectively), FedAdam in Experiment 2 (85.55% and 71.49%), and FedAvg in Experiment 3 (85.64% and 73.68%). In Experiment 4 (compliant clients only), FedAvg performed best (81.28% and 65.85%). In the Vanilla FL configuration (Experiment 5), FedAdam achieved the highest accuracy for PneumoniaMNIST (86.96%), while FedYogi led for BreastMNIST (78.50%). The official AUC and ACC for PneumoniaMNIST (centralized training) are 95.6 and 86.40. For BreastMNIST, they are 89.10 and 83.30, respectively.

Table 2. Compliance factors and standards are customizable to fit study requirements.

Compliance Factor                 Standards/Options
Data Encryption Standards         AES-256 (NIST), AES-128 (Healthcare Minimum)
Ethical AI Policies               EU AI Act, FDA Guidelines
Privacy Regulations               HIPAA, GDPR
Data Quality                      DICOM Standard, Partially Validated Data
Anonymization Practices           ISO/TS 25237:2017, Pseudonymization
Interoperability Standards        HL7/FHIR Standards
Secure Network Infrastructure     NIST Cybersecurity Framework
Authentication and Authorization  MFA, RBAC
Audit Logs                        SOC 2 Type II Certification
Patient Consent Management        HL7 CDA Compliant
Trusted Execution Environments    Intel SGX, AMD SEV
Local Model Training Quality      High Accuracy (>95%), Moderate Accuracy (85–95%)

In addition to the experiments in Table 3, we conducted a Data Quality experiment and a realistic data-quality-based compliance score experiment (see 2). The global model was evaluated on the test set using accuracy, with results across different FL strategies as follows: dp_FedAvg achieved 72.68%, dp_FedYogi 71.62%, dp_FedAdam 69.55%, dp_FedMedian 66.23%,
and dp_FedProx 64.04%.

4 Discussion

In this manuscript, we have developed a novel compliance-aware FL framework which optimizes the privacy-utility trade-off by dynamically adjusting DP noise based on client compliance scores. We evaluated our method across multiple experiments using various aggregation strategies (FedAvg, FedProx, FedMedian, FedAdam, and FedYogi) and public datasets (PneumoniaMNIST and BreastMNIST). Notably, the experiment with 4 highly compliant and 12 less-compliant clients beat the 4-highly-compliant-only setup, gaining 1%–15% accuracy across strategies and outperforming uniform server DP as well. This highlights that incorporating lower-compliance clients can enhance overall model performance. However, FedMedian exhibited sensitivity to the compliance distribution.

Considering the experimental design (Section 2), in Experiment 1 (75% low-compliance clients), FedMedian achieved only 70.12% accuracy on PneumoniaMNIST and 50.01% on BreastMNIST (see Table 3), likely due to the median selection favoring noisy updates. In contrast, Experiment 2 (37% low-compliance clients) saw improved FedMedian accuracy (82.94% and 70.86%, respectively), nearing Vanilla FL performance. This suggests that FedMedian's effectiveness depends on the compliance distribution, making it less reliable in settings with a high proportion of low-compliance clients.

Performance gains mainly benefit the principal investigator, while high-compliance institutions access diverse, real-world data, improving model generalizability. FL ethically integrates data from less-compliant or resource-constrained clinics, preserving privacy with minimal DP protection for all, regardless of compliance. In rare disease studies, this collaboration is critical.
For instance, a glioblastoma study [25] across 71 sites (n=6,314) saw a 33% improvement in delineating surgically targetable tumors and a 23% gain for complete tumor extent, demonstrating how high-compliance institutions benefit from the inclusion of less regulated clinics (Asia, South America, Australia) by accessing rare and geographically diverse data that would otherwise be unavailable.

We have presented a compliance-aware DP framework in FL which promotes inclusivity and reduces resource constraints without specialized hardware. While DP offers theoretical privacy guarantees [9,26], it remains the most practical alternative to trusted execution environments (hardware-dependent) and homomorphic encryption (computationally intensive). Our method minimizes computational burdens on resource-limited clinics, enabling broader participation without enforcing DP-compliant hardware [9,6]. The compliance scoring tool allows experiment administrators to customize compliance factors, aligning with global healthcare standards [18,7] to foster secure, equitable FL participation. Unlike traditional server-side DP (see Exp. 6, Table 3), which applies uniform noise across all clients, our adaptive DP mechanism adjusts noise based on compliance scores, ensuring a balanced trade-off between privacy and utility. This effectively simulates client-side DP at the server level, allowing resource-constrained clinics to contribute without requiring DP-compliant infrastructure.

5 Limitations and Future Works

While our compliance-aware FL framework advances privacy, inclusivity, and performance, some limitations remain. One is the initial trust assumption, where first-round client updates lack DP, posing a minor risk if the server is curious. Later updates mitigate this with DP,
but adding minimal noise in the first round or using secure multi-party computation (SMPC) could enhance security. Additionally, the framework assumes accurate and honest compliance scores, which may not always hold. Future work could explore dynamic validation to ensure real-time compliance verification.

8 S. Parampottupadam et al.

This work brings "privacy" closer to clinical practice by validating the framework in controlled settings with defined resource constraints and compliance parameters. Expanding its evaluation to real-world clinical environments with diverse datasets and infrastructures will provide deeper insights into its scalability and robustness. Our approach separates privacy from hardware limits, enabling resource-constrained clinics to join a more inclusive FL ecosystem. Future work could refine adaptive aggregation by compliance, balance efficiency and privacy, boost global clinical FL use, and prevent inference attacks from untrusted clients.

Inclusive, Differentially Private Federated Learning for Clinical Data 9

Table 3. Results for all combinations of Compliant Clients, Strategies, and Minimum DP Noise. Batch size is fixed at 32, and FL rounds are set to 50. Irrespective of compliance, a baseline noise of 1e-10 is added to each model. Results for vanilla FL (no compliance, no DP) are included as a separate block. Detailed experiment configurations are provided in Table 1.

Experiment  Strategy    PneumoniaMNIST (Acc. Prec. Rec. F1)   BreastMNIST (Acc. Prec. Rec. F1)
1           FedAvg      82.43 89.39 82.43 84.30               66.98 84.40 66.98 69.69
            FedMedian   70.12 81.38 70.12 71.16               50.01 36.53 50.01 42.22
            FedYogi     86.62 91.68 86.62 88.26               75.50 81.51 75.70 77.64
            FedProx     84.01 89.93 84.01 85.76               71.61 81.60 71.11 74.34
            FedAdam     84.01 89.93 84.01 85.76               64.16 60.26 64.86 66.11
2           FedAvg      85.29 90.57 85.29 86.95               70.73 78.51 70.74 73.01
            FedMedian   82.94 89.39 82.94 84.75               70.86 82.65 70.86 73.76
            FedYogi     83.84 90.32 83.84 85.68               62.21 81.46 62.24 63.58
            FedProx     84.78 90.54 84.78 86.52               64.47 76.62 64.47 66.37
            FedAdam     85.55 91.15 85.55 87.28               71.49 77.93 71.49 73.56
3           FedAvg      85.64 90.97 85.64 87.31               73.68 68.65 73.68 67.29
            FedMedian   83.67 90.73 83.67 85.61               73.24 83.98 73.29 76.24
            FedYogi     84.27 90.52 84.27 86.09               66.22 86.73 66.21 68.87
            FedProx     85.04 91.13 85.04 86.85               71.36 75.20 71.40 72.79
            FedAdam     82.99 89.91 82.99 84.87               62.97 79.36 62.97 64.58
Impact of Experiment 1 with only compliant clients:
4           FedAvg      81.28 89.10 81.28 83.21               65.85 71.79 65.85 67.43
            FedMedian   79.44 87.96 79.44 81.35               62.84 73.62 62.74 64.33
            FedYogi     81.06 89.00 81.06 83.00               60.90 73.30 60.80 61.87
            FedProx     78.80 87.66 78.80 80.70               63.03 68.46 63.03 64.27
            FedAdam     79.65 88.06 79.65 81.56               54.76 57.50 54.96 51.55
Vanilla FL (no compliance score and no DP noise):
5           FedAvg      85.42 89.80 85.42 86.88               76.37 84.29 76.37 79.03
            FedMedian   85.34 89.96 85.34 86.85               75.81 79.79 75.81 77.38
            FedYogi     84.61 90.93 84.61 86.45               78.50 79.91 78.53 79.15
            FedProx     86.88 91.18 81.28 88.35               73.43 78.27 73.45 75.19
            FedAdam     86.96 90.10 87.00 88.12               75.18 77.89 83.65 75.18
DP with uniform noise post-weight aggregation:
6           FedAvg      75.89 87.66 75.89 77.74               68.04 79.30 68.04 70.51
            FedMedian   76.45 88.24 76.45 78.36               68.55 68.98 73.55 74.07
            FedYogi     77.16 88.13 77.50 78.50               72.10 76.11 75.89 76.80
            FedProx     79.53 89.18 79.60 81.56               63.72 70.80 63.72 65.51
            FedAdam     79.12 89.10 78.30 89.12               63.45 79.90 73.01 75.30

References

1. Act, E.A.I.: EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. https://artificialintelligenceact.eu/, [Accessed 11-01-2025]
2. Adnan, M., Kalra, S., Cresswell, J.C., Taylor, G.W., Tizhoosh, H.R.: Federated learning and differential privacy for medical image analysis. Scientific Reports 12(1), 1953 (2022)
3. Bagdasaryan, E., Shmatikov, V.: Differential privacy has disparate impact on model accuracy (2019), https://arxiv.org/abs/1905.12101
4. Beutel, D.J., Topal, T., Mathur, A., Qiu, X., Fernandez-Marques, J., Gao, Y., Sani, L., Kwing, H.L., Parcollet, T., Gusmão, P.P.d., Lane, N.D.: Flower: A friendly federated learning research framework. arXiv preprint arXiv:2007.14390 (2020)
5. Calvino, G., Peconi, C., Strafella, C., Trastulli, G., Megalizzi, D., Andreucci, S., Cascella, R., Caltagirone, C., Zampatti, S., Giardina, E.: Federated learning: Breaking down barriers in global genomic research. Genes 15(12), 1650 (2024)
6. Cummings, R., Desfontaines, D., Evans, D., Geambasu, R., Huang, Y., Jagielski, M., Kairouz, P., Kamath, G., Oh, S., Ohrimenko, O., Papernot, N., Rogers, R., Shen, M., Song, S., Su, W., Terzis, A., Thakurta, A., Vassilvitskii, S., Wang, Y.X., Xiong, L., Yekhanin, S., Yu, D., Zhang, H., Zhang, W.: Advancing differential privacy: Where we are now and future directions for real-world deployment (2024), https://arxiv.org/abs/2304.06929
7. Dankar, F.K., El Emam, K.: Practicing differential privacy in health care: A review. Trans. Data Privacy 6(1), 35–67 (Apr 2013)
8. Dimitrov, D.I., Balunović, M., Konstantinov, N., Vechev, M.: Data leakage in federated averaging (2022)
9. Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science 9(3–4), 211–407 (2014)
10.
El Ouadrhiri, A., Abdelhadi, A.: Differential privacy for deep and federated learning: A survey. IEEE Access 10, 22359–22380 (2022)
11. Falcon, W., The PyTorch Lightning team: PyTorch Lightning (Mar 2019). https://doi.org/10.5281/zenodo.3828935, https://github.com/Lightning-AI/lightning
12. Force, J.T.: NIST Special Publication (SP) 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations. https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final, [Accessed 11-01-2025]
13. gdpr.eu: General Data Protection Regulation (GDPR). https://gdpr.eu/tag/gdpr/, [Accessed 11-01-2025]
14. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
15. hhs.gov: The HIPAA privacy rule. https://www.hhs.gov/hipaa/for-professionals/privacy/index.html, [Accessed 11-01-2025]
16. hl7.org: Overview - FHIR v5.0.0. https://hl7.org/fhir/overview.html, [Accessed 11-01-2025]
17. iso.org: ISO 25237:2017. https://www.iso.org/standard/63553.html, [Accessed 11-01-2025]
18. Kaiser, J., Mueller, T., Kaissis, G.: Differential privacy in medical imaging applications. In: Trustworthy AI in Medical Imaging, pp. 411–424. Elsevier (2025)
19. Li, M., Xu, P., Hu, J., Tang, Z., Yang, G.: From challenges and pitfalls to recommendations and opportunities: Implementing federated learning in healthcare. arXiv preprint arXiv:2409.09727 (2024)
20. Li, T., Sahu, A.K., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated
optimization in heterogeneous networks. Proceedings of Machine Learning and Systems 2, 429–450 (2020)
21. Li, X., Zmigrod, R., Ma, Z., Liu, X., Zhu, X.: Fine-tuning language models with differential privacy through adaptive noise allocation (2024), https://arxiv.org/abs/2410.02912
22. McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics. pp. 1273–1282. PMLR (2017)
23. Nguyen, P., Silence, A., Darais, D., Near, J.P.: DuetSGX: Differential privacy with secure hardware. arXiv preprint arXiv:2010.10664 (2020)
24. NIST: Cybersecurity Framework. https://www.nist.gov/cyberframework, [Accessed 11-01-2025]
25. Pati, S., Baid, U., Edwards, B., Sheller, M., Wang, S.H., Reina, G.A., Foley, P., Gruzdev, A., Karkada, D., Davatzikos, C., et al.: Federated learning enables big data for rare cancer boundary detection. Nature Communications 13(1), 7346 (2022)
26. Pati, S., Kumar, S., Varma, A., Edwards, B., Lu, C., Qu, L., Wang, J.J., Lakshminarayanan, A., Wang, S.h., Sheller, M.J., et al.: Privacy preservation for federated learning in health care. Patterns 5(7) (2024)
27. Reddi, S.J., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S., McMahan, H.B.: Adaptive federated optimization. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=LkFG3lB13U5
28. Ren, X., Yang, S., Zhao, C., McCann, J., Xu, Z.: Belt and braces: When federated learning meets differential privacy (2024), https://arxiv.org/abs/2404.18814
29. Schmidt, K., Bearce, B., Chang, K., Coombs, L., Farahani, K., Elbatel, M., Mouheb, K., Marti, R., Zhang, R., Zhang, Y., et al.: Fair evaluation of federated learning algorithms for automated breast density classification: The results of the 2022 ACR-NCI-NVIDIA federated learning challenge. Medical Image Analysis 95, 103206 (2024)
30.
Sheller, M.J., Edwards, B., Reina, G.A., Martin, J., Pati, S., Kotrotsou, A., Milchenko, M., Xu, W., Marcus, D., Colen, R.R., et al.: Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports 10(1), 12598 (2020)
31. Wang, H., Zhao, Q., Wu, Q., Chopra, S., Khaitan, A., Wang, H.: Global and local differential privacy for collaborative bandits. In: Proceedings of the 14th ACM Conference on Recommender Systems. pp. 150–159 (2020)
32. Wen, Y., Geiping, J., Fowl, L., Goldblum, M., Goldstein, T.: Fishing for user data in large-batch federated learning via gradient magnification (2022)
33. Yang, J., Shi, R., Wei, D., Liu, Z., Zhao, L., Ke, B., Pfister, H., Ni, B.: MedMNIST v2 - a large-scale lightweight benchmark for 2D and 3D biomedical image classification. Scientific Data 10(1), 41 (2023)
34. Yousefpour, A., Shilov, I., Sablayrolles, A., Testuggine, D., Prasad, K., Malek, M., Nguyen, J., Ghosh, S., Bharadwaj, A., Zhao, J., Cormode, G., Mironov, I.: Opacus: User-friendly differential privacy library in PyTorch. arXiv preprint arXiv:2109.12298 (2021)
The quest for the GRAph Level autoEncoder (GRALE)

Paul Krzakala (LTCI & CMAP, Télécom Paris, IP Paris), Gabriel Melo (LTCI, Télécom Paris, IP Paris), Charlotte Laclau (LTCI, Télécom Paris, IP Paris), Florence d'Alché-Buc (LTCI, Télécom Paris, IP Paris), Rémi Flamary (CMAP, École Polytechnique, IP Paris)

Abstract

Although graph-based learning has attracted a lot of attention, graph representation learning is still a challenging task whose resolution may impact key application fields such as chemistry or biology. To this end, we introduce GRALE, a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. GRALE is trained using an Optimal Transport-inspired loss that compares the original and reconstructed graphs and leverages a differentiable node matching module, which is trained jointly with the encoder and decoder. The proposed attention-based architecture relies on Evoformer, the core component of AlphaFold, which we extend to support both graph encoding and decoding. We show, in numerical experiments on simulated and molecular data, that GRALE enables a highly general form of pre-training, applicable to a wide range of downstream tasks, from classification and regression to more complex tasks such as graph interpolation, editing, matching, and prediction.

1 Introduction

Graph representation learning. Graph-structured data are omnipresent in a wide variety of fields ranging from social sciences to chemistry, which has always motivated an intense interest in graph learning. Machine learning tasks related to graphs are mostly divided into two categories: node-level tasks such as node classification/regression, clustering, or edge prediction, and graph-level tasks such as graph classification/regression or graph generation/prediction [68,28,7,83].
This paper is devoted to graph-level representation learning, that is, unsupervised learning of a graph-level Euclidean representation that can be used directly or fine-tuned for later tasks [25]. From the large spectrum of existing representation learning methods, we focus on the AutoEncoder approach, as it natively features the possibility to decode a graph back from the embedding space. In this work, we illustrate that this enables leveraging the learned representation in a variety of downstream tasks, ranging from classification/regression to more involved tasks such as graph interpolation, editing, matching or prediction.

From node-level AutoEncoders... Scrutinizing the literature, we observe that most existing works on graph AutoEncoders provide node-level embeddings instead of one unique graph-level embedding (see Fig. 1, left). In the following, we refer to this class of models as Node-Level AutoEncoders. The most emblematic example is the celebrated VGAE model [36], where the encoder is a graph convolutional network (GCN) that returns node embeddings z_i, and the decoder reconstructs the adjacency matrix from a dot product A_{i,j} = σ(⟨z_i, z_j⟩). Many extensions have been proposed for this

Preprint. Under review. arXiv:2505.22109v1 [cs.LG] 28 May 2025

Figure 1: The different classes of graph AutoEncoders. (Left) Node-level AutoEncoders such as [36] provide node-level embeddings. (Center) Naive graph-level AutoEncoders such as [52] directly provide graph-level embeddings but rely on a graph matching solver (NP for the exact problem, P for approximations) to compute the training loss. (Right) Matching free
https://arxiv.org/abs/2505.22109v1
approaches, such as proposed in this work and in [67], use a learnable module to provide the matching.

model, such as adversarial regularization [47] or masking [55], and it has been shown to be efficient for many node-level tasks, such as node clustering [80,81,59] or node outlier detection [14]. Node-level models can also be used for graph generation, given that one knows the size (number of nodes) of the graph to generate; this includes GVAE [36] but also GraphGAN [60] and graph normalizing flows [43]. When a graph-level representation is needed, one can apply some pooling operation on the node embeddings, for instance z = Σ_i z_i. Yet, this strategy is not entirely satisfying, as it inevitably leads to information loss, and it becomes difficult to decode a graph back from this representation [58,67].

...to graph-level AutoEncoders. In contrast, very few works attempt to build graph-level AutoEncoders where the encoder embeds the full graph into a single vector and the decoder reconstructs the whole graph (including the number of nodes). The Graph Deconvolutional AutoEncoder [40] and Graph U-net [21] are close to this goal, but in both cases, the decoder takes advantage of the ground-truth information, including the graph size, in the decoding phase. Ultimately, we identify only two works that share a similar goal with this paper: GraphVAE [52] (not to be confused with GVAE [36]) and PIGVAE [67]. GraphVAE is a pioneering work, but it is heavily based on an expensive graph matching algorithm. In that regard, the main contribution of PIGVAE is the addition of a learnable module that predicts the matching instead. This was a major step forward, yet we argue that PIGVAE is still missing some key components: for instance, the ground-truth size of the graph needs to be provided to the decoder. We detail our positioning with respect to PIGVAE in Section 4.

Contributions.
We introduce a GRAph Level autoEncoder (GRALE) that encodes and decodes graphs of varying sizes into a shared Euclidean space. GRALE is trained with an Optimal Transport-inspired loss, without relying on expensive solvers, as the matching is provided by a learnable module trained end-to-end with the rest of the model. The proposed architecture leverages the Evoformer module [30] for encoding and introduces a novel "Evoformer Decoder" for graph reconstruction. GRALE enables general pretraining, applicable to a wide range of downstream tasks, including classification, regression, graph interpolation, editing, matching, and prediction. We demonstrate these capabilities through experiments on both synthetic benchmarks and large-scale molecular datasets.

2 Building a Graph-Level AutoEncoder

2.1 Problem Statement

The goal of this paper is to learn a graph-level AutoEncoder. Given an unsupervised graph dataset D = {x_i}_{i=1,...,n}, we aim at learning an encoder g : G → Z and a decoder f : Z → G, where Z is a Euclidean space and G is the space of graphs. To this end, the classic AutoEncoder approach is to minimize a reconstruction loss L(f∘g(x_i), x_i). However, the graph setting poses unique challenges compared to more
classical AutoEncoders (e.g., images):

1. The encoder g must be permutation invariant.
2. The decoder f must be able to map vectors of fixed dimension to graphs of various sizes.
3. The loss L must be permutation invariant, differentiable, and efficient to compute.

Permutation invariant encoder. The first challenge is a well-studied topic for which a variety of architectures have been proposed, most of which rely on message passing [82,83,72] or attention-based [46,49] mechanisms. In this work, we have chosen to use an attention-based architecture, the Evoformer module from AlphaFold [30]. As the numerical experiments show, this architecture is particularly powerful, enabling the encoding and updating of pairwise relationships between graph nodes. To maintain symmetry with the encoder, the decoder also uses a novel Evoformer Decoder module. Details are provided in Appendix C.

Padded graphs for multiple output sizes. The second challenge can be mitigated by using a classical padded representation [37,51]. We represent a graph G ∈ G as a triplet G = (h, F, C), where h ∈ [0,1]^N is a masking vector, F ∈ R^{N×n_f} is a node feature matrix, and C ∈ R^{N×N×n_c} is an edge feature tensor. N is a maximum graph size; all graphs are padded to this size. A node i exists if h_i = 1 and is a padding node if h_i = 0. Thus, original and reconstructed graphs of various sizes are represented with fixed-dimensional tensors that can be efficiently parametrized.

Permutation invariant loss. The third challenge is arguably the hardest to overcome, since any permutation-invariant loss L_PI between graphs G and Ĝ can be written as a matching problem

    L_PI(G, Ĝ) = min_{P ∈ σ_N} L_ALIGN(G, P[Ĝ]),   (1)

where σ_N is the set of permutation matrices and P[G] denotes the application of permutation P to graph G, i.e., P[G] = (Ph, PF, PCP^T). This can be seen as first aligning the two graphs, and then computing a loss L_ALIGN between them.
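To make the matching problem concrete, here is a minimal brute-force sketch of L_PI for tiny fixed-size graphs (h = 1), assuming squared-error ground losses and a single edge channel (n_c = 1); the helper names are ours, for illustration only.

```python
import itertools
import numpy as np

def align_loss(F, C, F_hat, C_hat):
    # L_ALIGN: squared error on node features F and edge features C,
    # computed on graphs that are already aligned node-for-node.
    return np.sum((F - F_hat) ** 2) + np.sum((C - C_hat) ** 2)

def l_pi_bruteforce(F, C, F_hat, C_hat):
    # L_PI(G, G_hat) = min over all N! permutations P of
    # L_ALIGN(G, P[G_hat]) -- only feasible for very small N.
    N = F.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(N)):
        P = np.eye(N)[list(perm)]        # permutation matrix
        best = min(best, align_loss(F, C, P @ F_hat, P @ C_hat @ P.T))
    return best
```

For N = 20 this enumeration already spans about 2.4 × 10^18 permutations, which is precisely what motivates the relaxation of Section 2.2.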
This is problematic because, for any nontrivial loss L_ALIGN such that L_ALIGN(x, y) = 0 ⟺ x = y, the optimization problem in (1) is NP-complete [20]. We discuss how to avoid this pitfall in the next section.

2.2 Matching-free reconstruction loss

A common approach to mitigate the computational complexity of a loss like (1) is to relax the discrete optimization problem into a more tractable one. For instance, Any2Graph [37] relaxes the graph matching problem into an Optimal Transport problem L_A2G(G, Ĝ) = min_{T ∈ π_N} L_OT(G, Ĝ, T), optimized over π_N = {T ∈ [0,1]^{N×N} | T1_N = 1_N, T^T 1_N = 1_N}, the set of bi-stochastic matrices, and

    L_OT(G, Ĝ, T) = Σ_{i,j} ℓ_h(h_i, ĥ_j) T_{i,j} + Σ_{i,j} h_i ℓ_F(F_i, F̂_j) T_{i,j} + Σ_{i,j,k,l} h_i h_k ℓ_C(C_{i,k}, Ĉ_{j,l}) T_{i,j} T_{k,l},   (2)

where G = (h, F, C), Ĝ = (ĥ, F̂, Ĉ), and ℓ_h, ℓ_F, ℓ_C are ground losses responsible for the correct prediction of node masking, node features, and edge features respectively. Despite the relaxation, this loss still satisfies key properties, as stated in Propositions 1 and 2 (proofs in Appendix F).

Proposition 1. If ℓ_C is a Bregman divergence, then L_OT(G, Ĝ, T) can be computed in O(N³).

Proposition 2. There exists T ∈ π_N such that L_OT(G, Ĝ, T) = 0 if and only if there exists P ∈ σ_N such that G = P[Ĝ] (i.e., G and Ĝ are isomorphic).

Unfortunately, the inner optimization problem w.r.t. T is still NP-complete, as it is quadratic but not convex. The authors suggest an approximate solution using conditional gradient descent, with a complexity
of O(k(N)N³), where k(N) is the number of iterations until convergence. However, there is no guarantee that the optimizer reaches a global optimum. Another approach, first proposed in PIGVAE [67], is to completely bypass the inner optimization problem by making the matching T a prediction of the model; that is, the model does not only reconstruct a graph Ĝ = f∘g(G), but also provides the matching T̂ between output and input graphs. We propose to combine the loss (2) from Any2Graph with the matching prediction strategy from PIGVAE. Thus, the loss that we minimize to train GRALE is

    L_GRALE(G) = L_OT(G, f∘g(G), T̂(G)).   (3)

Note that T̂(G) must be differentiable so that the model can learn T̂, f and g by backpropagation.

Figure 2: GRALE illustrated for an input of size n = 3 and a maximum output graph size N = 4.

3 Overview of GRALE architecture

3.1 Graph pre-processing and featurizer

The first step is to build the input (resp. target) graph G (resp. G*) from the datapoint x ∈ D:

    ϕ(x) = G,   ϕ*(x) = G*.   (4)

This module first extracts from the data¹ a simple graph representation with an adjacency matrix and node labels, then builds additional node and edge features by including higher-order interactions such as the shortest path matrix. Note that we consider different schemes for the input and target graphs (ϕ ≠ ϕ*), as it has been shown that breaking the symmetry between inputs and targets can be beneficial to AutoEncoders [73]. In particular, ϕ outputs slightly noisy node features while ϕ* is deterministic. This is crucial for breaking symmetries in the input graph, enabling the encoder to produce distinct node embeddings, which in turn facilitates matching.
We discuss this phenomenon in Appendix B.3.

3.2 Encoder g

The input graph G = (F, C) is passed through the encoder g, which returns both a graph-level embedding Z ∈ R^{K×D} and node-level embeddings X ∈ R^{N×d_n}:

    g_graph(G) = Z,   g_nodes(G) = X.   (5)

Embedding. We represent the graph embedding as a set of K tokens, each of dimension D, with both K and D fixed. For most downstream tasks, we simply flatten this representation into a single vector of dimension d = K × D. This token-based approach follows a broader trend of modeling data as sets of abstract units (tokens) beyond NLP [29,54,26]. In this context, K can be interpreted as the number of concepts used to represent the graph, though a qualitative analysis is beyond the scope of this paper. We discuss the choice of K and D quantitatively in Section 5.

Architecture. The main component of the encoder is an Evoformer module that provides hidden representations for the node feature matrix F and edge feature tensor C. Then, we use a transformer decoder as a pooling function to produce the graph-level representation Z, and apply a simple linear layer on F to get the node-level representations X.

¹For molecules, x is typically a SMILES string that can be converted into a graph using RDKit [38].

3.3 Decoder f

The decoder uses the graph-level
embedding Z to reconstruct a graph Ĝ and its node embeddings X̂, which will be used for matching as discussed in the next subsection:

    f_graph(Z) = Ĝ,   f_nodes(Z) = X̂.   (6)

The decoder architecture mirrors that of the encoder. First, a transformer encoder updates the graph representation Z, then a novel Evoformer decoder module reconstructs the output graph nodes and edges. This new module, detailed in Appendix C, is based on the original Evoformer module, augmented with cross-attention blocks to enable decoding. As before, X̂ is obtained by applying a simple linear layer to the last-layer hidden node representations.

3.4 Matcher m and loss

Finally, a matcher module leverages the node embeddings X and X̂ to predict the matching between input and output graphs:

    T̂ = m(X, X̂).   (7)

To enforce that T̂ ∈ π_N, we decompose the matcher in two steps: 1) we construct an affinity matrix K with K_{i,j} = Aff(X_i, X̂_j), where Aff is a learnable affinity function; 2) we apply the Sinkhorn algorithm to project K on π_N in a differentiable manner, i.e., T̂ = SINKHORN(K) [16,19,23].

GRALE loss. Omitting the preprocessing and denoting by ϕ, ψ and θ the parameters of the encoder, decoder, and matcher respectively, the loss function writes as follows:

    L_GRALE(G, ϕ, ψ, θ) = L_OT(G, f^ϕ_graph∘g^ψ_graph(G), m^θ(g^ψ_nodes(G), f^ϕ_nodes∘g^ψ_graph(G))),   (8)

where the second argument is the reconstruction Ĝ and the third is the predicted matching T̂. The detailed implementation of the modules mentioned in this section is provided in Appendix C, and the whole model is trained end-to-end with classic batch gradient descent, as described in Appendix A.

4 Related Works

4.1 Permutation Invariant Graph Variational AutoEncoder (PIGVAE)

We devote this section to highlighting the differences with the work of Winter et al. [67], who first proposed a graph-level AutoEncoder with a matching-free loss (PIGVAE).

Graph size. In PIGVAE, the decoder needs to be given the ground-truth size of the output graph, and the encoder is not trained to encode this critical information in the graph-level representation.
In comparison, GRALE is trained to predict the size of the graph through the padding vector h. In the following, we assume that graphs are of fixed size (h = ĥ = 1) to make the methods comparable.

Loss. Denoting T̂[Ĝ] = (T̂F̂, T̂ĈT̂^T) the reordering of a graph, the PIGVAE loss rewrites as

    L_PIGVAE(G, Ĝ, T̂) = L_ALIGN(G, T̂[Ĝ]),   (9)

where L_ALIGN(G, Ĝ) = Σ_{i=1}^N ℓ_F(F_i, F̂_i) + Σ_{i,j=1}^N ℓ_C(C_{i,j}, Ĉ_{i,j}). This reveals that L_PIGVAE and L_OT can be seen as two different relaxations of the same underlying matching problem min_{P∈σ_N} L_ALIGN(G, P[Ĝ]). We detail this relationship in Proposition 3. However, Proposition 4 highlights an important limitation of the relaxation chosen in PIGVAE, as the loss can be zero without a perfect graph reconstruction. All proofs are provided in Appendix F.

Proposition 3. For a permutation P ∈ σ_N, L_PIGVAE and L_OT are equivalent to a matching loss:

    L_PIGVAE(G, Ĝ, P) = L_OT(G, Ĝ, P) = L_ALIGN(G, P[Ĝ]).   (10)

If we relax to T̂ ∈ π_N, we only have an inequality instead: L_PIGVAE(G, Ĝ, T̂) ≤ L_OT(G, Ĝ, T̂).

Proposition 4. It can be that L_PIGVAE(G, Ĝ, T̂) = 0 while Ĝ and G are not isomorphic.

Table 1: Reconstruction performances of graph-level AutoEncoders. * indicates that the decoder relies on the ground-truth size of the graph. N.A. indicates that the
model is too expensive to train.

Model      COLORING                     PUBCHEM 16                   PUBCHEM 32
           Edit Dist. (↓)  GI Acc. (↑)  Edit Dist. (↓)  GI Acc. (↑)  Edit Dist. (↓)  GI Acc. (↑)
GraphVAE   2.13            35.90        3.72            07.8         N.A.            N.A.
PIGVAE*    0.09            85.30        1.69            41.0         2.53            24.91
GRALE      0.02            99.20        0.11            93.0         0.78            66.80

Architecture. The PIGVAE architecture is composed of three main blocks: encoder, decoder, and permuter. The role of the permuter is similar to that of our matcher, but it only relies on one-dimensional sorting of the input node embeddings X, while we leverage the Sinkhorn algorithm to compute a d-dimensional matching between X and X̂, which makes our implementation more expressive. Besides, training the permuter requires scheduling a temperature parameter. The scheduling scheme is not detailed in the PIGVAE paper, which makes it hard to reproduce the reported performances.

4.2 Attention-based architectures for graphs

The success of attention in NLP [57] has motivated many researchers to apply attention-based models to other types of data [32,66]. For graphs, attention mainly exists in two flavors: node-level and edge-level. In the case of node-level attention, each node can pay attention to the other nodes of the graph, resulting in an N×N attention matrix that is then biased or masked using structural information [78,76,49]. Similarly, edge-level attention results in an N²×N² attention matrix. To prevent these prohibitive costs, all edge-level attention models use a form of factorization, which typically results in O(N³) complexity instead [9,67,30]. Since the complexity of our loss and that of our matcher is also cubic², it seemed like a reasonable choice to use edge-level attention in our encoder and decoder. To this end, we select the Evoformer module [30], as it elegantly combines node- and edge-level attention using intertwined updates of node and edge features. Our main contribution is the design of a novel Evoformer Decoder that enables Evoformer to use cross-attention for graph reconstruction.
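As a concrete illustration of the matcher's projection step (Section 3.4), here is a minimal entropic Sinkhorn sketch in numpy; the temperature eps and iteration count are illustrative choices of ours, not the paper's configuration.

```python
import numpy as np

def sinkhorn(K, eps=0.1, n_iters=100):
    # Differentiable projection of an affinity matrix K onto the
    # bi-stochastic polytope pi_N: exponentiate, then alternately
    # normalize rows and columns. The result is approximately
    # bi-stochastic, becoming exact in the limit of many iterations.
    T = np.exp(K / eps)
    for _ in range(n_iters):
        T = T / T.sum(axis=1, keepdims=True)  # rows sum to 1
        T = T / T.sum(axis=0, keepdims=True)  # columns sum to 1
    return T
```

At test time the soft matching can be replaced by its discrete counterpart, the O(N³) Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment), to obtain a hard permutation.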
4.3 Graph Matching with Neural Networks

Our model relies on the prediction of the matching between input and output graphs. In that sense, we are part of a larger effort towards approximating graph matching with deep learning models in a data-driven fashion. However, it is important to note that most existing works treat graph matching as a regression problem, where the inputs are pairs of graphs (G₁, G₂) and the targets y are either the ground-truth matching [79,62,65,77,63] or the ground-truth matching cost (edit distance) [64,31,3,41,45], or both [48]. In contrast, we train our matcher without any ground truth, by simply minimizing the matching cost, which is an upper bound of the edit distance. Furthermore, our matcher is not a separate model; it is incorporated and trained end-to-end with the rest of the AutoEncoder.

5 Numerical experiments

Training datasets. We train GRALE on three datasets. First, COLORING is a synthetic graph dataset introduced in [37] where each instance is a connected graph whose node labels are colors that satisfy the four color theorem. Specifically, we train on the COLORING 20 variant, which is composed of 300k graphs of size less
than or equal to 20. Then, for molecular representation learning, we download and preprocess molecules from the PUBCHEM database [33]. We denote by PUBCHEM 32 and PUBCHEM 16 the subsets containing 84M and 14M molecules, respectively, with sizes up to 32 and 16 atoms. PUBCHEM 16 is used for training a lightweight version of GRALE in the ablation studies, while all downstream molecular tasks use the model trained on PUBCHEM 32. We refer to Appendix A.1 for more details on the datasets, and A.2 for the training parameters.

5.1 Reconstruction performances and ablation studies

In this section, we evaluate the performance of AutoEncoders based on the graph reconstruction quality, using the graph edit distance (Edit Dist.) [61] and Graph Isomorphism Accuracy (GI Acc.), i.e., the percentage of graphs perfectly reconstructed (see Appendix A.3 for details on the metrics). All models are trained on PUBCHEM 16, with a holdout set of 10,000 graphs for evaluation.

²Sinkhorn is O(N²), but at test time we use the Hungarian algorithm, its discrete, O(N³) counterpart.

Table 2: Ablation Studies (PUBCHEM 16). We report (avg ± std) reconstruction metrics over 5 runs.

Model component       Proposed      Replaced by                     Edit Dist.   GI Acc.
Loss                  L_OT          L_PIGVAE + regularization [67]  0.13 ± 0.16  91.8 ± 5.01
Featurizer ϕ          second order  first order                     0.24 ± 0.17  90.3 ± 6.46
Encoder g             Evoformer     GNN                             1.32 ± 0.16  53.2 ± 2.69
Decoder f             Evoformer     transformer decoder [37]        0.71 ± 0.09  66.6 ± 2.78
Matcher m             Sinkhorn      SoftSort [67]                   1.47 ± 0.36  49.7 ± 9.15
Disambiguation noise  with          without                         1.16 ± 0.03  64.4 ± 1.64
GRALE (no replacement)                                              0.11 ± 0.04  93.0 ± 0.18

Model performance and impact of components. First, we compare GRALE to the other graph-level AutoEncoders (PIGVAE [67], GraphVAE [52]). Table 1 shows that GRALE outperforms the other models by a large margin. Next, we conduct ablation studies to assess the impact of the individual components of our model.
For each component, we replace it with a baseline (detailed in Appendix E) and measure the effect on performance. Results are shown in Table 2. Overall, all new components contribute to performance improvements. We also report the variance of the reconstruction metrics over 5 training seeds, revealing that while the choice of loss function and featurizer has limited impact on average performance, it significantly improves training robustness.

Figure 3: G.I. accuracy vs (K, D). Both axes are in log-scale, so that the diagonals correspond to a given total dimension d = K × D.

Size of the embedding space. We now focus on the choice of embedding space and report reconstruction accuracy over a grid of values for K and D (Figure 3). As expected, accuracy increases with the total embedding dimension d = K × D. For a given fixed d, the performance is generally better when K > 1, with optimal results typically around K ≈ D. Interestingly, this choice is also computationally favorable, as the cost of
a transformer encoder scales with O(d·max(K, D)). These findings align with the broader hypothesis that many types of data benefit from being represented as tokens [29,54,26], a direction already explored in recent theoretical works [17].

5.2 Qualitative properties of GRALE

Capturing graph properties in the embedding. We randomly sample 10,000 molecules from two molecular datasets (PUBCHEM 32 and QM9 [69]), compute their GRALE embeddings, and apply dimension reduction techniques (PCA and t-SNE). The resulting 2D projections shown in Fig. 4 are colored by graph size, solubility (logP), and internal energy (available only for QM9). In all cases, the projections reveal a clear structure, illustrating GRALE's ability to capture both geometrical and chemical graph-level properties.

Figure 4: 2D projection of graph embeddings learned by GRALE; each point is a graph and the color represents important properties such as graph size, solubility (logP), and internal energy (u0).

Figure 5: Interpolating graphs (from COLORING) using GRALE's latent space. On the left we interpolate between two graphs, while on the right we compute the barycenter Ḡ of the whole dataset.

Figure 6: Latent-space editing of the size of a graph. Here, n̂ is a one-hidden-layer MLP trained to predict graph size, and we set ϵ = 0.01. Steps that did not produce any visible change are omitted.

Complex graph operations in the latent space. We now showcase that GRALE enables complex graph operations with simple vector manipulations in the embedding space. For instance, graph interpolation [6,18,27] is traditionally defined as the Fréchet mean with respect to some graph distance L, i.e., G_t = arg min_G (1−t)L(G₀, G) + tL(G₁, G), which is challenging to compute. Instead, we propose to compute interpolations at lightspeed using GRALE's embedding space via G_t = f((1−t)g(G₀) + t·g(G₁)).
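This interpolation reduces to a few lines once a trained encoder g and decoder f are available; in the sketch below they are opaque callables standing in for GRALE's trained modules (hypothetical stand-ins, for illustration).

```python
import numpy as np

def interpolate(g, f, G0, G1, ts):
    # G_t = f((1 - t) g(G0) + t g(G1)): linear interpolation in the
    # embedding space, decoded back to graph space.
    z0, z1 = g(G0), g(G1)
    return [f((1.0 - t) * z0 + t * z1) for t in ts]

def barycenter(g, f, graphs):
    # Surrogate Frechet mean: decode the average embedding of a dataset.
    return f(np.mean([g(G) for G in graphs], axis=0))
```

With toy stand-ins (e.g., g and f acting as the identity on vectors), interpolate recovers plain linear interpolation; with GRALE's modules, each call to f decodes an actual graph.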
This approach can also be used to compute the barycenter of more than 2 graphs, including an entire dataset, as illustrated in Figure 5. We can also perform property-guided graph editing. Given a property of interest p(G) ∈ R, we train a predictor p̂ : Z → R such that p(G) ≈ p̂(g(G)), and compute a perturbation u such that p̂(g(G) + u) ≥ p̂(g(G)). For instance, one can set u = ε∇p̂(g(G)). The edited graph is then decoded as G′ = f(g(G) + u). In Figure 6, we illustrate this on COLORING by setting p(G) = n(G), the size of the graph, and successfully increasing it.

Denoising properties. We explore GRALE's denoising capabilities by corrupting the COLORING test set through random modifications of node colors, potentially creating invalid graphs where adjacent nodes share the same color. We then plot, as a function of noise level, the probability that a noisy graph is valid versus the probability that its reconstruction is valid. As shown in Figure 7, reconstruction significantly increases the proportion of valid graphs, highlighting that GRALE's latent space effectively captures the underlying data structure.

[Figure 7: Left: percentage of valid colorings vs noise level. Right: example of a corrupted input and its reconstruction. The decoder consistently maps noisy inputs back toward valid COLORING graphs.]

5.3 Quantitative performances of GRALE models on downstream tasks

Graph classification/regression. We use the graph
representations obtained by GRALE as input for classification and regression tasks on the MoleculeNet benchmark [69]. We compare our method to several graph representation learning baselines, including graph AutoEncoders (PIGVAE [67], VGAE [36]) and contrastive learning methods (Infograph [53], Simgrace [70]). For a fair comparison, all models are pre-trained on PUBCHEM 32, and the same predictive head is used for the downstream tasks. Detailed settings are provided in Appendix A. Overall, GRALE outperforms the other graph AutoEncoders and performs similarly to, if not better than, the other baselines.

Table 3: Downstream-task performance of different graph representation learning methods pretrained on PUBCHEM 32. We report the mean ± std over 5 train/test splits.

             MLP REGRESSION (MAE ↓)      SVR REGRESSION (MAE ↓)                      SVC CLASSIF. (ROC-AUC ↑)
MODEL        QM9           QM40          ESOL          LIPO          FREESOLV        BBBP          BACE
SIMGRACE     0.110±0.003   0.025±0.005   0.293±0.020   0.534±0.023   0.374±0.008     0.745±0.072   0.866±0.050
INFOGRAPH    0.122±0.001   0.020±0.001   0.255±0.016   0.495±0.013   0.297±0.010     0.729±0.053   0.845±0.046
GVAE         0.765±0.005   0.328±0.005   0.306±0.027   0.668±0.014   0.344±0.022     0.705±0.060   0.771±0.049
PIGVAE       0.031±0.001   0.019±0.001   0.279±0.020   0.523±0.019   0.283±0.013     0.675±0.079   0.816±0.049
GRALE        0.015±0.001   0.018±0.003   0.274±0.014   0.511±0.022   0.272±0.017     0.731±0.025   0.821±0.051

Table 4: Performance of different methods on graph prediction benchmarks from [37].

                     EDIT DIST. (↓)                              GI ACC. (↑)
MODEL                COLORING 10   COLORING 15   QM9    GDB13    COLORING 10   COLORING 15   QM9     GDB13
FGWBARY              6.73          N.A.          2.84   N.A.     1.00          N.A.          28.95   N.A.
RELATIONFORMER       5.47          2.64          3.80   7.45     18.14         21.99         9.95    0.05
ANY2GRAPH            0.19          1.22          2.13   3.63     84.44         43.77         29.85   16.25
GRALE                0.39          0.67          3.62   4.43     89.66         86.02         30.77   32.02
GRALE + FINETUNING   0.27          0.45          2.61   2.25     90.04         88.87         35.27   53.22

Table 5: We report (avg ± std) edit distance and compute time over 1000 pairs of test graphs. All solvers use the default Pygmtools parameters except Greedy A∗, which is A∗ with beam width one.

            EDIT DIST. (↓)                 COMPUTE TIME IN SECONDS (↓)
MODEL       COLORING       PUBCHEM        COLORING       PUBCHEM
IPFP        21.10±10.83    40.66±12.16    0.006±0.013    0.073±0.052
RRWM        20.66±10.90    42.24±12.71    0.540±0.190    1.271±0.244
SM          20.90±10.91    43.20±12.86    0.002±0.001    0.076±0.022
GREEDY A∗   20.41±11.21    32.53±8.15     0.021±0.016    0.110±0.054
A∗          19.66±11.01    N.A.           3.487±1.928    >100
GRALE       8.92±5.61      32.77±11.67    0.005±0.001    0.008±0.002

Graph prediction. We now consider the challenging task of graph prediction, where the goal is to map an input x ∈ X to a target graph G∗. Following the surrogate regression strategy of [74,8,6], we train a surrogate predictor φ : X → Z to minimize the squared Euclidean loss ∥φ(x) − g(G∗)∥². At inference time, the predicted embedding is decoded into a graph using the pretrained decoder: Ĝ = f ◦ φ(x). We also consider a finetuning phase, where φ and f are jointly trained with the end-to-end loss L(f ◦ φ(x), G∗). We evaluate this approach on the Fingerprint2Graph and Image2Graph tasks introduced in [37], using the same model for φ. Results are reported in Table 4. Thanks to pre-training (on COLORING for Image2Graph and PUBCHEM for Fingerprint2Graph), GRALE significantly outperforms prior methods on most tasks.

Graph matching. Finally, we propose to use GRALE to match arbitrary graphs G1 and G2. To this end, we compute the node embeddings of both
G1 and G2 and plug them into the matcher

T(G1, G2) = m(gnodes(G1), gnodes(G2)),    (11)

where we enforce that T(G1, G2) ∈ σN by replacing the Sinkhorn algorithm with the Hungarian algorithm [15] inside the matcher. Then, we use this (potentially suboptimal) matching to compute an upper bound on the edit distance. In Table 5, we compare this approach to more classical methods as implemented in Pygmtools [61]. Pygmtools is fully GPU-compatible, which enables a fair comparison with the proposed approach in terms of time complexity. Note that the matcher operates out of distribution: it is trained to match input/output graphs that are expected to be similar (G1 ≈ G2 at convergence), whereas here we sample random pairs of graphs. Despite this, our model outperforms the classical solvers, as it returns better upper bounds on the edit distance and is typically orders of magnitude faster.

6 Conclusion, limitations and future works

We introduced GRALE, a novel graph-level AutoEncoder, and showed that its embeddings capture intrinsic properties of the input graphs, achieving SOTA performance and beyond across numerous tasks. Trained on large-scale molecule data, GRALE provides a strong foundation for solving graph-level problems through vectorial embeddings. Arguably, GRALE's main limitation is its computational complexity (mostly cubic), which led us to restrict graphs to N = 32. This could be mitigated in future work using approximate attention in the Evoformer-based modules [2,11,13] and accelerated Sinkhorn differentiation in the matcher [5,16]. Beyond this, GRALE opens up several unexplored directions. For instance, GRALE could serve as the basis for a new graph variational autoencoder for generative tasks. Finally, given its strong performance in graph prediction, applying GRALE to the flagship task of molecular elucidation [34,42] appears to be a promising next step.
Acknowledgments and Disclosure of Funding

This work was performed using HPC resources from GENCI-IDRIS (Grant 2025-AD011016098) and received funding from the European Union's Horizon Europe research and innovation programme under grant agreement 101120237 (ELIAS). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them. This research was also supported in part by the French National Research Agency (ANR) through the PEPR IA FOUNDRY project (ANR-23-PEIA-0003) and the MATTER project (ANR-23-ERCC-0006-01). Finally, it received funding from the Fondation de l'École polytechnique.

References

[1] Aflalo, Y., Bronstein, A., and Kimmel, R. (2015). On convex relaxation of graph isomorphism. Proceedings of the National Academy of Sciences, 112(10):2942–2947.

[2] Ahdritz, G., Bouatta, N., Floristean, C., Kadyan, S., Xia, Q., Gerecke, W., O'Donnell, T. J., Berenberg, D., Fisk, I., Zanichelli, N., et al. (2024). OpenFold: Retraining AlphaFold2 yields new insights into its learning mechanisms and capacity for generalization. Nature Methods, 21(8):1514–1524.

[3] Bai, Y., Ding, H., Bian, S., Chen, T., Sun, Y., and Wang, W. (2019). SimGNN: A neural network approach to fast graph similarity computation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 384–392.

[4] Blum, L. C. and Reymond, J.-L. (2009). 970 million druglike small molecules for virtual screening in the chemical
universe database GDB-13. Journal of the American Chemical Society, 131(25):8732–8733.

[5] Bolte, J., Pauwels, E., and Vaiter, S. (2023). One-step differentiation of iterative algorithms. Advances in Neural Information Processing Systems, 36:77089–77103.

[6] Brogat-Motte, L., Flamary, R., Brouard, C., Rousu, J., and d'Alché-Buc, F. (2022). Learning to predict graphs with fused Gromov-Wasserstein barycenters. In International Conference on Machine Learning, pages 2321–2335. PMLR.

[7] Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. (2017). Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42.

[8] Brouard, C., Shen, H., Dührkop, K., d'Alché-Buc, F., Böcker, S., and Rousu, J. (2016). Fast metabolite identification with input output kernel regression. Bioinformatics, 32(12):i28–i36.

[9] Buterez, D., Janet, J. P., Oglic, D., and Lio, P. (2024). Masked attention is all you need for graphs. arXiv preprint arXiv:2402.10793.

[10] Chen, D., Zhu, Y., Zhang, J., Du, Y., Li, Z., Liu, Q., Wu, S., and Wang, L. (2023). Uncovering neural scaling laws in molecular representation learning. Advances in Neural Information Processing Systems, 36:1452–1475.

[11] Cheng, S., Zhao, X., Lu, G., Fang, J., Yu, Z., Zheng, T., Wu, R., Zhang, X., Peng, J., and You, Y. (2022). FastFold: Reducing AlphaFold training time from 11 days to 67 hours. arXiv preprint arXiv:2203.00854.

[12] Cuturi, M. (2013). Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26.

[13] Dao, T., Fu, D., Ermon, S., Rudra, A., and Ré, C. (2022). FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344–16359.

[14] Du, X., Yu, J., Chu, Z., Jin, L., and Chen, J. (2022). Graph autoencoder-based unsupervised outlier detection. Information Sciences, 608:532–550.

[15] Edmonds, J. and Karp, R. M. (1972). Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248–264.

[16] Eisenberger, M., Toker, A., Leal-Taixé, L., Bernard, F., and Cremers, D. (2022). A unified framework for implicit Sinkhorn differentiation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 509–518.

[17] Erba, V., Troiani, E., Biggio, L., Maillard, A., and Zdeborová, L. (2024). Bilinear sequence regression: A model for learning from long sequences of high-dimensional tokens. arXiv preprint arXiv:2410.18858.

[18] Ferrer, M., Valveny, E., Serratosa, F., Riesen, K., and Bunke, H. (2010). Generalized median graph computation by means of graph embedding in vector spaces. Pattern Recognition, 43(4):1642–1655.

[19] Flamary, R., Cuturi, M., Courty, N., and Rakotomamonjy, A. (2018). Wasserstein discriminant analysis. Machine Learning, 107:1923–1945.

[20] Fortin, S. (1996). The graph isomorphism problem.

[21] Gao, H. and Ji, S. (2019). Graph U-Nets. In International Conference on Machine Learning, pages 2083–2092. PMLR.

[22] Gao, X., Xiao, B., Tao, D., and Li, X. (2010). A survey of graph edit distance. Pattern Analysis and Applications, 13:113–129.

[23] Genevay, A., Peyré, G., and Cuturi, M. (2018). Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pages 1608–1617. PMLR.

[24] Gold, S. and Rangarajan, A. (2002). A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4):377–388.

[25] Hamilton, W. L. (2020). Graph representation learning. Morgan & Claypool Publishers.

[26] He, X., Hooi,
B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. (2023). A generalization of ViT/MLP-Mixer to graphs. In International Conference on Machine Learning, pages 12724–12745. PMLR.

[27] Hlaoui, A. and Wang, S. (2006). Median graph computation for graph clustering. Soft Computing, 10:47–53.

[28] Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. (2020). Open Graph Benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133.

[29] Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., and Carreira, J. (2021). Perceiver: General perception with iterative attention. In International Conference on Machine Learning, pages 4651–4664. PMLR.

[30] Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583–589.

[31] Kaspar, R. (2017). Graph matching toolkit.

[32] Khan, S., Naseer, M., Hayat, M., Zamir, S. W., Khan, F. S., and Shah, M. (2022). Transformers in vision: A survey. ACM Computing Surveys (CSUR), 54(10s):1–41.

[33] Kim, S., Thiessen, P. A., Bolton, E. E., Chen, J., Fu, G., Gindulyte, A., Han, L., He, J., He, S., Shoemaker, B. A., et al. (2016). PubChem substance and compound databases. Nucleic Acids Research, 44(D1):D1202–D1213.

[34] Kind, T. and Fiehn, O. (2010). Advances in structure elucidation of small molecules using mass spectrometry. Bioanalytical Reviews, 2:23–60.

[35] Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

[36] Kipf, T. N. and Welling, M. (2016). Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.

[37] Krzakala, P., Yang, J., Flamary, R., d'Alché-Buc, F., Laclau, C., and Labeau, M. (2024). Any2Graph: Deep end-to-end supervised graph prediction with an optimal transport loss. Advances in Neural Information Processing Systems, 37:101552–101588.

[38] Landrum, G. (2013). RDKit documentation. Release, 1(1-79):4.

[39] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., and Teh, Y. W. (2019). Set Transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning, pages 3744–3753. PMLR.

[40] Li, J., Yu, T., Juan, D.-C., Gopalan, A., Cheng, H., and Tomkins, A. (2020). Graph autoencoders with deconvolutional networks. arXiv preprint arXiv:2012.11898.

[41] Ling, X., Wu, L., Wang, S., Ma, T., Xu, F., Liu, A. X., Wu, C., and Ji, S. (2021). Multilevel graph matching networks for deep graph similarity learning. IEEE Transactions on Neural Networks and Learning Systems, 34(2):799–813.

[42] Litsa, E. E., Chenthamarakshan, V., Das, P., and Kavraki, L. E. (2023). An end-to-end deep learning framework for translating mass spectra to de-novo molecules. Communications Chemistry, 6(1):132.

[43] Liu, J., Kumar, A., Ba, J., Kiros, J., and Swersky, K. (2019). Graph normalizing flows. Advances in Neural Information Processing Systems, 32.

[44] Lu, S., Gao, Z., He, D., Zhang, L., and Ke, G. (2024). Data-driven quantum chemical property prediction leveraging 3D conformations with Uni-Mol+. Nature Communications, 15(1):7104.

[45] Moscatelli, A., Piquenot, J., Bérar, M., Héroux, P., and Adam, S. (2024). Graph node matching for edit distance. Pattern Recognition Letters, 184:14–20.

[46] Müller, L., Galkin, M., Morris, C., and Rampášek, L. (2023). Attending to graph transformers.
arXiv preprint arXiv:2302.04181.

[47] Pan, S., Hu, R., Long, G., Jiang, J., Yao, L., and Zhang, C. (2018). Adversarially regularized graph autoencoder for graph embedding. arXiv preprint arXiv:1802.04407.

[48] Piao, C., Xu, T., Sun, X., Rong, Y., Zhao, K., and Cheng, H. (2023). Computing graph edit distance via neural graph matching. Proceedings of the VLDB Endowment, 16(8):1817–1829.

[49] Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. (2022). Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35:14501–14515.

[50] Sanfeliu, A. and Fu, K.-S. (1983). A distance measure between attributed relational graphs for pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, (3):353–362.

[51] Shit, S., Koner, R., Wittmann, B., Paetzold, J., Ezhov, I., Li, H., Pan, J., Sharifzadeh, S., Kaissis, G., Tresp, V., et al. (2022). Relationformer: A unified framework for image-to-graph generation. In European Conference on Computer Vision, pages 422–439. Springer.

[52] Simonovsky, M. and Komodakis, N. (2018). GraphVAE: Towards generation of small graphs using variational autoencoders. In Artificial Neural Networks and Machine Learning – ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27, pages 412–422. Springer.

[53] Sun, F.-Y., Hoffmann, J., Verma, V., and Tang, J. (2019). InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000.

[54] Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., et al. (2021). MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34:24261–24272.

[55] Tu, W., Liao, Q., Zhou, S., Peng, X., Ma, C., Liu, Z., Liu, X., Cai, Z., and He, K. (2023). RARE: Robust masked graph autoencoder. IEEE Transactions on Knowledge and Data Engineering, 36(10):5340–5353.

[56] Ucak, U. V., Ashyrmamatov, I., and Lee, J. (2023). Reconstruction of lossless molecular representations from fingerprints. Journal of Cheminformatics, 15(1):1–11.

[57] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

[58] Vinyals, O., Bengio, S., and Kudlur, M. (2015). Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391.

[59] Wang, C., Pan, S., Long, G., Zhu, X., and Jiang, J. (2017). MGAE: Marginalized graph autoencoder for graph clustering. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 889–898.

[60] Wang, H., Wang, J., Wang, J., Zhao, M., Zhang, W., Zhang, F., Xie, X., and Guo, M. (2018). GraphGAN: Graph representation learning with generative adversarial nets. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

[61] Wang, R., Guo, Z., Pan, W., Ma, J., Zhang, Y., Yang, N., Liu, Q., Wei, L., Zhang, H., Liu, C., et al. (2024). Pygmtools: A Python graph matching toolkit. Journal of Machine Learning Research, 25(33):1–7.

[62] Wang, R., Yan, J., and Yang, X. (2019). Learning combinatorial embedding networks for deep graph matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3056–3065.

[63] Wang, R., Yan, J., and Yang, X. (2020a). Combinatorial learning of robust deep graph matching: an embedding based approach. IEEE Transactions on Pattern Analysis and Machine Intelligence
, 45(6):6984–7000.

[64] Wang, R., Zhang, T., Yu, T., Yan, J., and Yang, X. (2021). Combinatorial learning of graph edit distance via dynamic embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5241–5250.

[65] Wang, T., Liu, H., Li, Y., Jin, Y., Hou, X., and Ling, H. (2020b). Learning combinatorial solver for graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7568–7577.

[66] Wen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., and Sun, L. (2022). Transformers in time series: A survey. arXiv preprint arXiv:2202.07125.

[67] Winter, R., Noé, F., and Clevert, D.-A. (2021). Permutation-invariant variational autoencoder for graph-level representation learning. Advances in Neural Information Processing Systems, 34:9559–9573.

[68] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Yu, P. S. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24.

[69] Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. (2018). MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9(2):513–530.

[70] Xia, J., Wu, L., Chen, J., Hu, B., and Li, S. Z. (2022). SimGRACE: A simple framework for graph contrastive learning without data augmentation. In Proceedings of the ACM Web Conference 2022, pages 1070–1079.

[71] Xiong, R., Yang, Y., He, D., Zheng, K., Zheng, S., Xing, C., Zhang, H., Lan, Y., Wang, L., and Liu, T. (2020). On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524–10533. PMLR.

[72] Xu, K., Hu, W., Leskovec, J., and Jegelka, S. (2018). How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.

[73] Yan, S., Yang, Z., Li, H., Song, C., Guan, L., Kang, H., Hua, G., and Huang, Q. (2023). Implicit autoencoder for point-cloud self-supervised representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14530–14542.

[74] Yang, J., Labeau, M., and d'Alché-Buc, F. (2024a). Learning differentiable surrogate losses for structured prediction. arXiv preprint arXiv:2411.11682.

[75] Yang, J., Labeau, M., and d'Alché-Buc, F. (2024b). Exploiting edge features in graph-based learning with fused network Gromov-Wasserstein distance. Transactions on Machine Learning Research.

[76] Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. (2021). Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888.

[77] Yu, T., Wang, R., Yan, J., and Li, B. (2019). Learning deep graph matching with channel-independent embedding and Hungarian attention. In International Conference on Learning Representations.

[78] Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. (2019). Graph transformer networks. Advances in Neural Information Processing Systems, 32.

[79] Zanfir, A. and Sminchisescu, C. (2018). Deep learning of graph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2684–2693.

[80] Zhang, H., Li, P., Zhang, R., and Li, X. (2022). Embedding graph auto-encoder for graph clustering. IEEE Transactions on Neural Networks and Learning Systems, 34(11):9352–9362.

[81] Zhang, X., Liu, H., Li, Q., and Wu, X.-M. (2019). Attributed graph clustering via adaptive graph convolution. arXiv preprint arXiv:1906.01210.

[82] Zhou, J., Cui, G.,
Hu, S., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C., and Sun, M. (2020). Graph neural networks: A review of methods and applications. AI Open, 1:57–81.

[83] Zhu, Y., Du, Y., Wang, Y., Xu, Y., Zhang, J., Liu, Q., and Wu, S. (2022). A survey on deep graph generation: Methods and applications. In Learning on Graphs Conference, pages 47–1. PMLR.

A Experimental setting

A.1 Datasets

COLORING. COLORING is a suite of synthetic datasets introduced in [37] for benchmarking graph prediction methods. Each sample consists of a pair (Image, Graph), as illustrated in Figure 8. Importantly, all COLORING graphs satisfy the 4-color theorem, that is, no adjacent nodes share the same color (label). Since it is a synthetic dataset, one can generate as many samples as needed to create the train/test sets. We refer to the original paper for more details on the dataset generation. Note that, unless specified otherwise, we ignore the images and consider COLORING as a pure graph dataset.

[Figure 8: Image/Graph pairs from the COLORING dataset (courtesy of [37]).]

Molecular Datasets. To go beyond synthetic data, we also consider molecular datasets. They represent a very interesting application of GRALE since 1) the graphs tend to be relatively small, 2) a very large amount of unsupervised data is available for training, and 3) the learned representation can be applied to many challenging downstream tasks. Most molecular datasets are stored as a list of SMILES strings. In all cases, we use the same preprocessing pipeline. First, we convert the SMILES strings to graphs using RDKit and discard the datapoint if the conversion fails. Then, we remove the hydrogen atoms and use the remaining atoms' atomic numbers as node labels and the bond types (none, single, double, triple, or aromatic) as edge labels. Finally, we remove all graphs with more than N = 32 atoms to save computation.

Training Datasets. We train GRALE on 3 different datasets.
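The size filter that ends the preprocessing pipeline above, together with the fixed-size padding used throughout the paper, can be sketched in a few lines. The representation of a graph as a node-label vector F and an edge-label matrix C follows the notation of Appendix A.3; using label 0 for padding entries is our assumption here, standing in for whatever "no node"/"no edge" label the model reserves:

```python
import numpy as np

N_MAX = 32  # maximum graph size used in the paper

def filter_and_pad(F, C, n_max=N_MAX):
    """Discard graphs larger than n_max, otherwise pad node labels F (n,)
    and edge labels C (n, n) to a fixed size n_max.

    Padding entries use label 0 (an assumption: a reserved 'empty' label)."""
    F = np.asarray(F)
    C = np.asarray(C)
    n = len(F)
    if n > n_max:
        return None  # graph discarded by the size filter
    F_pad = np.zeros(n_max, dtype=F.dtype)
    C_pad = np.zeros((n_max, n_max), dtype=C.dtype)
    F_pad[:n] = F
    C_pad[:n, :n] = C
    return F_pad, C_pad

# Example: a 3-atom chain C-C-O, node labels = atomic numbers,
# edge label 1 = single bond.
F = np.array([6, 6, 8])
C = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
padded = filter_and_pad(F, C)
```

After this step every surviving graph has the same shape (N_MAX node labels, an N_MAX × N_MAX edge-label matrix), which is what allows batching and the permutation-based losses of Appendix A.3.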
First of all, we train on COLORING 20, a variant of COLORING where we sample 300k graphs of size ranging from 5 to 20. This version of GRALE is applied to all downstream tasks related to COLORING. Then, we also train on PUBCHEM 32, a large molecular dataset that we obtained by downloading all molecules with up to 32 heavy atoms from the PUBCHEM database [33]. This version is applied to all downstream tasks related to molecules. For the ablation studies, we reduce computation by training a smaller version of the model on PUBCHEM 16, a truncated version of PUBCHEM 32 that retains only graphs with up to 16 nodes. Table 6 presents the main statistics for each dataset.

Table 6: Training Datasets.

DATASET       GRAPH SIZE: AVG±STD   GRAPH SIZE: MAX   NSAMPLES
COLORING 20   12.50±4.33            20                300K
PUBCHEM 16    13.82±2.17            16                14M
PUBCHEM 32    22.62±5.79            32                84M

Downstream Datasets: Classification/Regression. For classification and regression downstream tasks, we consider supervised molecular datasets from the MoleculeNet benchmark [69]. We choose datasets that cover a wide range of fields, from Quantum Mechanics (QM9, QM40) to Physical Chemistry (Esol, Lipo, Freesolv), to
Biophysics (BACE), to Physiology (BBBP). In all cases, we preprocess the molecules using the same pipeline as for PUBCHEM 32 to enable transfer learning. (Footnote: out of the many available regression targets, we focus only on internal energy.) In particular, we discard all graphs with more than N = 32 atoms, resulting in truncated versions of the datasets. This motivates us to sample new random train/test splits (90%/10%). For regression tasks, we also normalize the targets to zero mean and unit variance. We provide the main statistics of those datasets in Table 7.

Table 7: Downstream datasets (Classification/Regression). We also report the number of samples in the original dataset, before the truncation due to the preprocessing pipeline.

DATASET    GRAPH SIZE: AVG±STD   GRAPH SIZE: MAX   NSAMPLES   NSAMPLES ORIGINAL
QM9        8.80±0.51             9                 133885     133885
QM40       21.06±6.26            32                137381     162954
ESOL       13.06±6.40            32                1118       1118
LIPO       24.12±5.53            32                3187       4200
FREESOLV   8.72±4.19             24                642        642
BBBP       21.10±6.38            32                1686       2050
BACE       27.23±3.80            32                735        1513

Downstream Datasets: Graph Prediction. For graph prediction, we select two challenging tasks proposed in [37]. First, we consider the Image2Graph task, where the goal is to map the image representation of a COLORING instance to its graph representation, meaning that the inputs are the first row of Figure 8 and the targets are the second row. For this task, we use COLORING 10 and COLORING 15, which are referred to as COLORING medium and COLORING big in the original paper. Second, we consider a Fingerprint2Graph task. Here, the goal is to reconstruct a molecule from its fingerprint representation [56], that is, a list of substructures. Once again, we consider the same molecular datasets as proposed in the original article, namely QM9 and GDB13 [4]. Table 8 presents the main statistics of those datasets.
Table 8: Downstream datasets (Graph prediction).

DATASET       GRAPH SIZE: AVG±STD   GRAPH SIZE: MAX   NSAMPLES
COLORING 10   7.52±1.71             10                100K
COLORING 15   9.96±3.15             15                200K
QM9           8.79±0.51             9                 120K
GDB13         12.76±0.55            13                1.3M

A.2 Model and training hyperparameters

As detailed in the previous section, we train 3 variants of our models, respectively on COLORING 20, PUBCHEM 32, and PUBCHEM 16. We report the hyperparameters used in Table 9. Note that, to reduce the number of hyperparameters, we set the number of layers in all modules (Evoformer encoder, transformer decoder, transformer encoder, Evoformer decoder) to the same value L; likewise for the number of attention heads H and the node/edge hidden dimensions. In all cases, we trained with ADAM [35], a warm-up phase, and cosine annealing. We also report the number of GPUs and the total time required to train the models with these parameters.

A.3 Graph reconstruction metrics

To measure the error between a predicted graph Ĝ and a target graph G, we first consider the graph edit distance [22,50], that is, the number of modifications (edits) that should be applied to get
to Ĝ from G (and vice versa). The possible edits are node or edge addition, deletion, or modification, where node/edge modification stands for changing the label of a node/edge. In this paper, we set the cost of all edits to 1. It is well known that computing the edit distance is equivalent to solving a graph matching problem [1].

Table 9: For every dataset used to train GRALE, we report: 1) the architecture parameters, 2) the training parameters, 3) the computational resources required. Note that when the hidden dimension of the MLPs is set to "None", the MLPs are replaced by a linear layer plus a ReLU activation, which makes the model much more lightweight.

TRAINING DATASET                COLORING   PUBCHEM 16   PUBCHEM 32
MAXIMUM OUTPUT SIZE N           20         16           32
NUMBER OF TOKENS K              4          8            16
DIMENSION OF TOKENS D           32         32           32
TOTAL EMBEDDING DIM d=K×D       128        256          512
NUMBER OF LAYERS                5          5            7
NUMBER OF ATTENTION HEADS       4          4            8
NODE DIMENSIONS                 128        128          256
NODE HIDDEN DIMENSIONS (MLPS)   NONE       128          256
EDGE DIMENSIONS                 64         64           128
EDGE HIDDEN DIMENSIONS (MLPS)   NONE       NONE         NONE
TOTAL PARAMETER COUNT           2.0M       2.5M         11.7M
NUMBER OF GRADIENT STEPS        300K       700K         1.5M
BATCH SIZE                      64         128          256
EPOCHS                          64         5            5
NUMBER OF WARMUP STEPS          4K         8K           16K
BASE LEARNING RATE              0.0001     0.0001       0.0001
GRADIENT NORM CLIPPING          0.1        0.1          0.1
NUMBER OF GPUS (L40S)           1          1            2
TRAINING TIME                   8H         20H          100H

For instance, in the case of two graphs of the same size, G = (F, C) and Ĝ = (F̂, Ĉ), the graph edit distance is written as

Edit(G, Ĝ) = min_{P ∈ σN} L_ALIGN(G, P[Ĝ]),    (12)

where P[Ĝ] = (P F̂, P Ĉ Pᵀ) and

L_ALIGN(G, Ĝ) = Σ_i 1[F_i ≠ F̂_i] + Σ_{i,j} 1[C_{i,j} ≠ Ĉ_{i,j}].    (13)

Note that this rewrites as a Quadratic Assignment Problem (QAP) known as Lawler's QAP:

Edit(G, Ĝ) = min_{P ∈ σN} vec(P)ᵀ K vec(P),    (14)

for the proper choice of K ∈ R^(N²×N²). This formulation can be extended to cases where G and Ĝ have different sizes, up to the proper padding of K, which is equivalent to padding the graphs directly, as we do in this paper [24].
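For tiny graphs, the matching formulation of Eqs. (12)-(13) can be verified directly by brute force over all node permutations. This exponential-time check is only a sanity test of the definition, not one of the solvers compared in Table 5:

```python
import numpy as np
from itertools import permutations

def align_loss(F, C, F2, C2):
    """L_ALIGN of Eq. (13): label disagreements on nodes plus edges."""
    return int(np.sum(F != F2) + np.sum(C != C2))

def edit_distance_bruteforce(F, C, F2, C2):
    """Eq. (12): minimise the alignment loss over all permutations P of the
    second (padded, equal-size) graph, with P[G2] = (P F2, P C2 P^T)."""
    n = len(F)
    best = None
    for perm in permutations(range(n)):
        p = list(perm)
        # F2[p] permutes node labels; C2[np.ix_(p, p)] permutes rows and columns.
        loss = align_loss(F, C, F2[p], C2[np.ix_(p, p)])
        best = loss if best is None else min(best, loss)
    return best

# Two labelled triangles that differ only by a node relabelling: distance 0.
F1 = np.array([1, 2, 3]); C1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
F2 = np.array([3, 1, 2]); C2 = C1.copy()
d = edit_distance_bruteforce(F1, C1, F2, C2)  # -> 0
```

In practice, a polynomial-time matcher (e.g., Hungarian assignment on node embeddings, as in Section 5.3) replaces the permutation loop and yields an upper bound rather than the exact minimum.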
Since the average edit distance that we observe with GRALE is typically lower than 1, we also report a more challenging metric, the Graph Isomorphism Accuracy (GI Acc.), also known as top-1 accuracy or hit@1, which measures the percentage of samples with edit distance 0:

Acc(G, Ĝ) = 1[Edit(G, Ĝ) = 0].    (15)

B Additional experiments

B.1 Training complexity

When it comes to molecules, unsupervised data is massively available. At the time this paper was written, the PubChem database [33] contained more than 120 million compounds, and this number is increasing. In the context of this paper, this raises the question of how much of this data is necessary to train the model. To answer this, we propose to train GRALE on truncated datasets and observe the performance. Once again, we train on PUBCHEM 16 using the medium-sized model described in Table 9 to reduce computational cost. Note that we keep the size of the model fixed. For more in-depth
results on neural scaling laws in molecular representation learning, we refer to [10]. We propose two different performance measures. On the one hand, we report the quality of reconstruction (on a test set) against the size of the dataset (Figure 9, left). On the other hand, we report the results achieved when the learned representation is applied to a downstream task. More precisely, we report the MAE observed when the learned representation is used for regression on the QM9 dataset (Figure 9, right).

Figure 9: GRALE performance vs. pretraining dataset size (in log scale). Left: train/test reconstruction accuracy. Right: downstream performance on the QM9 regression task using the learned embeddings.

For the reconstruction accuracy, we observe overfitting for all datasets smaller than one million molecules. More interestingly, we observe that the performance on the downstream task is also highly dependent on the size of the pretraining dataset, as the performance keeps improving past one million samples. Overall, it appears that this version of GRALE is able to leverage a very large molecular dataset. Applying GRALE to a different field, where pretraining datasets are smaller, might require considering a different set of parameters or perhaps a more frugal architecture. For instance, the encoder and decoder could be replaced by the more lightweight baselines proposed in E.

B.2 Learning to AutoEncode as a pretraining task and the choice of the latent dimension

The original motivation for the AutoEncoder is dimensionality reduction: given an input x ∈ R^{d_x}, the goal is to find some equivalent representation z ∈ R^d such that d ≪ d_x. More generally, learning to encode/decode data in a small latent space is a challenging unsupervised task, well suited for pretraining. However, in the case of graphs, the original data x is not a vector, which makes it hard to estimate what a "small" latent dimension d is.
To explore this question, we propose to train GRALE for various values of d and report the corresponding performance. As in the previous experiment, we report both the reconstruction accuracy and the downstream performance on QM9 (Figure 10).

Figure 10: GRALE performance vs. total embedding dimension d (in log scale). Left: train/test reconstruction accuracy. Right: downstream performance on the QM9 regression task using the learned embeddings.

Figure 11: Two reconstruction failure cases (sampled in an adversarial fashion from PubChem 32). In both cases, the input graph exhibits many symmetries. For instance, in the second sample, 6 nodes are perfectly symmetric (in the sense of the Weisfeiler-Leman test). Node color represents the atomic number: black for carbon, blue for oxygen, etc. The different types of bonds are not represented, but we represent the predicted probability of edge existence by its width.

First, we observe that reconstruction performance and downstream task performance are highly correlated, which is consistent with the observation made in the previous experiment and with the results of this work in general. This confirms the intuition that learning to encode and decode a graph is a suitable pretraining task. We also observe that small embedding dimensions act as a regularizer, preventing overfitting in terms of graph reconstruction accuracy. Despite this,
we do not observe that the downstream performance deteriorates with higher embedding dimensions.⁴ This suggests that learning to encode/decode entire graphs into a Euclidean space remains a challenging task, even when the latent space is of large dimension.

⁴ Within this "reasonable" range of values.

B.3 Reconstruction failure case

We now highlight an interesting failure case of the AutoEncoder. To this end, we plot the pairs of inputs and outputs with the maximum reconstruction error. We observe that the hardest cases for the AutoEncoder to handle are those where the input graph exhibits many symmetries. When this happens, it becomes difficult for the matcher to predict the optimal matching between input and output. The main reason is that similar nodes have similar embeddings, and since the matcher is based on these hidden node features, it may easily confuse nodes with apparently similar positions in the graph. As a result, GRALE has difficulty converging in this region of the graph space. To illustrate this phenomenon, we select the two graphs with the highest reconstruction error out of 1000 random test samples, together with their reconstructions (Figure 11). For comparison, we also provide 3 random samples that are correctly reconstructed (Figure 12).

C GRALE architecture details

C.1 Featurizer

There are many ways to parametrize the featurizers ϕ and ϕ′. In this paper, we focus on simple choices that have already demonstrated good empirical performance [44,76,67,37]. From a datapoint x ∈ D, the featurizer first constructs a node label matrix F_0(x) ∈ R^{n×d_0}, an adjacency matrix A(x) ∈ {0,1}^{n×n} and a shortest path matrix SP(x) ∈ N^{n×n}, where n is the size of the graph. We then augment the node features with k-th order diffusion

F(x) = CONCAT[F_0(x), A(x) F_0(x), ..., A^k(x) F_0(x)]  (16)

where k is a hyperparameter set to k = 2 in our experiments. Then the edge features are defined as
Figure 12: Three random samples from the PubChem 32 dataset along with their GRALE reconstructions. Node color represents the atomic number: black for carbon, blue for oxygen, etc. The different types of bonds are not represented, but we represent the predicted probability of edge existence by its width.

C_{i,j}(x) = CONCAT[F_i(x), F_j(x), ONE-HOT(A_{i,j}(x)), PE(SP_{i,j}(x))]  (17)

where ONE-HOT denotes one-hot encoding and PE denotes sinusoidal positional encoding [57]. If edge labels are available, they are concatenated to C as well. Finally, the featurizers are defined as

ϕ(x) = (F(x) + Noise, C(x)),  ϕ′(x) = PADDING(F(x), C(x))  (18)

where Noise is a random noise matrix, and PADDING pads all graphs to the same size N > n as defined above. The noise component breaks the symmetries in the input graph, which enables the encoder to produce distinct embeddings for all the nodes. We demonstrate empirically that this is crucial for the performance of the matcher and provide a more qualitative explanation in B.3. We leave the exploration of more complex, and possibly more asymmetric, featurizers to future work.

C.2 Encoder

The encoder g takes as input a graph G = (F, C) and returns both the node-level embeddings g_nodes(G) = X and the graph-level embedding g_graph(G) = Z. The main component of g is a stack of L Evoformer Encoder layers [
30] that produces the hidden representation (F^L, C^L) of the input graph:

(F^{l+1}, C^{l+1}) = EvoformerEncoder(F^l, C^l)  (19)

where F^1 ∈ R^{n×d_F} (resp. C^1 ∈ R^{n×n×d_C}) is initialized by applying a node-wise (resp. edge-wise) linear layer to F (resp. C). The Evoformer Encoder layer used in GRALE is represented in Figure 13. Compared to the original implementation, we make two notable changes that make this version much more lightweight. First, we replace the MLP of the Feed-Forward Block (FFB) with a simple linear layer followed by an activation function. Then, of the 4 modules dedicated to the update of C, we keep only one, which we call the Triangular Self-Attention Block (tSAB) to highlight the symmetry with a Transformer Encoder. The definitions of all these blocks are provided in Appendix D. Once the hidden representation (F^L, C^L) has been computed, the node-level embeddings are simply derived from a linear operation

X_i = Linear(F^L_i) if i < n,  X_i = u otherwise,  (20)

where u is a learnable padding vector.

Figure 13: Architecture of the Evoformer Encoder layer (bSAB, tSAB, OPB, FFB blocks) used for encoding the input graph G = (F, C).

Finally, the graph-level representation is obtained by pooling C with a Transformer Decoder. More precisely, we first flatten C into an n²×d_C matrix, then we pass it to a standard L-layer Transformer Decoder to output the graph-level embedding Z = Z^L_Q ∈ R^{K×D}:

Z^{l+1}_Q = TransformerDecoder(Z^l_Q, C^L)  (21)

where Z^0_Q is a learnable query matrix in R^{K×D}. This module is illustrated in Figure 14.

Figure 14: Architecture of the Transformer Decoder (SAB, CAB, FLATTEN, FFB blocks) used for pooling C^L.

C.3 Decoder

The decoder takes as input the graph embedding Z and should output both the reconstruction Ĝ = (ĥ, F̂, Ĉ) and the node embeddings X̂ that are required to match input and output graphs. To this end, the proposed architecture mimics that of a Transformer Encoder-Decoder, except that we replace the usual Transformer Decoder with a novel Evoformer Decoder.
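The pooling step of equation (21) above can be sketched at the shape level with numpy: a single attention head, no value projection, and no FFB or normalization, so this is only an illustration of the FLATTEN-then-cross-attend idea, not the actual module.

```python
# Shape-level sketch of the graph pooling of equation (21): the hidden
# edge tensor C is flattened to an (n^2, D) matrix and pooled by
# cross-attention from K learnable query tokens. Single head, purely
# illustrative; projections and feed-forward blocks are omitted.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pool_edges(C, Z_query):
    # C: (n, n, D) hidden edge tensor; Z_query: (K, D) learnable queries.
    n, _, D = C.shape
    keys = C.reshape(n * n, D)                     # FLATTEN step
    attn = softmax(Z_query @ keys.T / np.sqrt(D))  # (K, n^2) attention
    return attn @ keys                             # (K, D) graph embedding Z

rng = np.random.default_rng(0)
C = rng.normal(size=(5, 5, 32))
Z0 = rng.normal(size=(8, 32))   # plays the role of the query matrix Z^0_Q
print(pool_edges(C, Z0).shape)  # (8, 32)
```

The output size (K, D) is fixed by the query matrix, independently of the graph size n, which is what makes the embedding dimension d = K × D in Table 9 a constant.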
More formally, the latent representation Z = Z^0 is first updated by a Transformer Encoder with L layers:

Z^{l+1} = TransformerEncoder(Z^l).  (22)

For completeness, we also recall the content of a Transformer Encoder layer in Figure 15.

Figure 15: Architecture of the Transformer Encoder (SAB, FFB blocks).

Then, similarly to a Transformer Decoder, we define a learnable graph query (F^0_Q, C^0_Q), where F^0_Q ∈ R^{N×d_F} and C^0_Q ∈ R^{N×N×d_C}, that we update using L layers of the novel Evoformer Decoder:

(F^{l+1}_Q, C^{l+1}_Q) = EvoformerDecoder(F^l_Q, C^l_Q, Z^L)  (23)

The proposed Evoformer Decoder is to the Evoformer Encoder what the Transformer Decoder is to the Transformer Encoder, i.e., it is the same, up to a few Cross-Attention Blocks (CAB) that enable conditioning on Z^L. See Figure 16 for the details of the proposed layer.

Figure 16: Similarly to a Transformer Decoder, the proposed Evoformer Decoder layer augments the Evoformer Encoder (bSAB, CAB, tSAB, OPB, FFB, FLATTEN/UNFLATTEN blocks) with 2 Cross-Attention Blocks (CAB) so that the output can be conditioned on some source Z. All the inner blocks are defined in D.

Finally, the reconstructed graphs are obtained with a few linear heads

ĥ = Sigmoid(Linear(F^L_Q)),  F̂ = Linear(F^L_Q),  Ĉ = Linear(C^L_Q)  (24)

where the linear layers are applied
node- and edge-wise. Similarly, the node-level embeddings of the output graphs are defined as

X̂ = Linear(F^L_Q)  (25)

C.4 Matcher

The matcher uses the node embeddings of the input graph X and target graph X̂ to compute the matching T̂ between the two graphs. The first step is to build an affinity matrix K ∈ R^{N×N} between the nodes. We propose to parametrize K using two one-hidden-layer MLPs, MLP_in and MLP_out:

K_{i,j} = exp(−|MLP_in(X_i) − MLP_out(X̂_j)|)  (26)

Note that K is positive but might not be a bistochastic matrix. To this end, we project K onto σ_N using Sinkhorn projections:

T̂ = SINKHORN(K)  (27)

We fix the number of Sinkhorn steps to 100, and to ensure stability, we perform the iterations in the log domain [12]. At train time, we backpropagate through Sinkhorn by unrolling these iterations [16]. At test time, we fully replace Sinkhorn with the Hungarian algorithm [15] to ensure that the matching T̂ is a discrete permutation matrix.

C.5 Loss

Recall that we use the loss originally introduced in Any2Graph [37]:

L_OT(G, Ĝ, T) = Σ_{i,j} ℓ_h(h_i, ĥ_j) T_{i,j} + Σ_{i,j} h_i ℓ_F(F_i, F̂_j) T_{i,j} + Σ_{i,j,k,l} h_i h_k ℓ_C(C_{i,k}, Ĉ_{j,l}) T_{i,j} T_{k,l},  (28)

where the ground losses ℓ_h, ℓ_F and ℓ_C still need to be defined. We decompose the node reconstruction loss ℓ_F into a part devoted to node labels (discrete), ℓ^d_F, and one devoted to node features (continuous), ℓ^c_F. Similarly, we decompose ℓ_C into ℓ^d_C and ℓ^c_C. Following Any2Graph [37], we use the cross-entropy loss for all discrete losses ℓ_h, ℓ^d_F, ℓ^d_C and the L2 loss for all continuous losses ℓ^c_F, ℓ^c_C. This makes a total of 5 terms that we balance using 5 hyperparameters α_h, α^d_F, α^c_F, α^d_C and α^c_C. Once again, we follow the guidelines derived in the original paper and set

α_h = 1/N,  α^d_F = 1/N,  α^c_F = 1/(2N),  α^d_C = 1/N²,  α^c_C = 1/(2N²).  (29)

D Definitions of the inner blocks of attention-based models

For completeness, we devote this section to the definition of all blocks that appear in the modules used in GRALE. For more details, we refer to specialized works such as Lee et al.
[39] for the Transformer and Jumper et al. [30] for the Evoformer.

From layers to blocks. We adopt the convention that a block is always made of a layer plus a normalization and a skip connection:

BLOCK(x) = NORM(x + LAYER(x))  (30)

Note that, in some works, the layer normalization is placed at the start of the block instead [71].

Parallelization. To lighten the notation, we write f[X] whenever a function f is applied to the last dimension of X. For instance, when X ∈ R^{N×D} is a feature matrix (N nodes with features of dimension D), we have f[X]_i = f(X_i). Similarly, when C ∈ R^{N×N×D} is an edge feature tensor, we have f[C]_{i,j} = f(C_{i,j}).

Subscript conventions. In the following, we use the subscripts i, j, k for node/token indexing, a, b, c for feature dimensions, and l for indexing the heads in multi-head attention.

D.1 Node level blocks

Feed-Forward Block (FFB). Given an input X ∈ R^{N×D}, the FF layer simply consists in applying the same MLP to all lines/tokens/nodes of X in parallel:

FFB(X) = NORM(X + MLP[X])  (31)

By construction, the FFB block is permutation equivariant w.r.t. X.

Dot-Product Attention. Given a query matrix Q ∈ R^{N×D}, key and value matrices K, V ∈ R^{M×D}
and a bias matrix B ∈ R^{N×M}, Dot-Product Attention writes as:

DPA(Q, K, V, B) = Softmax[Q K^T + B] V  (32)

More generally, for B ∈ R^{N×M×h}, Multi-Head Attention writes as

MHA(Q, K, V, B) = CONCAT(O_1, ..., O_h)  (33)

where

O_l = DPA(q_l[Q], k_l[K], v_l[V], b_l)  (34)

q_l, k_l and v_l are linear layers, and (b_l)_{i,j} = B_{i,j,l}.

Cross-Attention Block (CAB). Given X ∈ R^{N×D} and Y ∈ R^{M×D}, we define:

CA(X, Y) = MHA(q[X], k[Y], v[Y], 0)  (35)

where q, k, v : R^D → R^D are linear layers. The Cross-Attention Block is:

CAB(X, Y) = NORM(X + CA(X, Y))  (36)

The Cross-Attention layer is permutation equivariant with respect to X and invariant with respect to permutations of the context Y.

Self-Attention Block (SAB). Given X ∈ R^{N×D}, the Self-Attention Block writes as

SAB(X) = CAB(X, X)  (37)

The Self-Attention layer is permutation equivariant with respect to X.

D.2 Graph level blocks

Einstein notation. In the following, we adopt the Einstein summation convention for tensor operations. For instance, given matrices A ∈ R^{N×D} and B ∈ R^{D×M}, the matrix multiplication C = AB, defined as C_{i,j} = Σ_k A_{i,k} B_{k,j}, is denoted compactly as C_{i,j} = A_{i,k} B_{k,j}, where the unused indices are implicitly summed over.

Triangular Attention. Triangular Attention is the equivalent of self-attention for the edges. To reduce the size of the attention matrix, edge (i, j) can only attend to its neighbouring edges (i, k). Thus, for Q, K, V ∈ R^{N×N×D}, the triangular attention layer writes as:

TA(Q, K, V)_{i,j,a} = A_{i,j,k} V_{i,k,a}  (38)

where A_{i,j,k} is the attention between (i, j) and (i, k), defined as A_{i,j,k} = Softmax(Q_{i,j,a} K_{i,k,a}). Multi-head attention can be defined in the same way as for the self-attention layer. Note that the original Evoformer also includes 3 similar layers where (i, j) can only attend to (k, i), (k, j) and (j, k). For the sake of simplicity, we remove those layers in our implementation.

Triangular Self-Attention Block (tSAB). Denoting C ∈ R^{N×N×D}, we define:

tSA(C) = TA(q[C], k[C], v[C])  (39)

where q, k, v : R^D → R^D are linear layers.
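The triangular attention of equation (38) can be sketched in a single-head numpy version; the linear layers q, k, v, the multi-head machinery, and the scaling are omitted, and the einsum indices follow the paper's Einstein-notation convention.

```python
# Single-head sketch of triangular attention (equation (38)):
# edge (i, j) attends only to edges (i, k) sharing its first endpoint.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def triangular_attention(Q, K, V):
    # Q, K, V: (N, N, D) edge tensors.
    logits = np.einsum("ija,ika->ijk", Q, K)  # score between (i,j) and (i,k)
    A = softmax(logits, axis=-1)              # normalize over k
    return np.einsum("ijk,ika->ija", A, V)    # TA(Q, K, V)_{i,j,a}

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4, 4, 16))
print(triangular_attention(Q, K, V).shape)  # (4, 4, 16)
```

Restricting attention to edges (i, k) keeps the attention tensor at size N³ instead of the N⁴ a full edge-to-edge self-attention would require.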
The Triangular Self-Attention Block is defined as:

tSAB(C) = NORM(C + tSA(C))  (40)

The triangular self-attention layer satisfies second-order permutation equivariance with respect to C.

Outer Product Block (OPB). Given a node feature matrix X ∈ R^{N×D} and an edge feature tensor C ∈ R^{N×N×D}, the Outer Product layer enables information flow from the nodes to the edges:

OP(X)_{i,j,c} = X_{i,a} W_{a,b,c} X_{j,b}  (41)

where W ∈ R^{D×D×D} is a learnable weight tensor. The Outer Product Block is defined as:

OPB(C, X) = NORM(C + OP(X))  (42)

Biased Self-Attention Block (bSAB). Conversely, the biased self-attention layer enables information flow from the edges to the nodes. Given a node feature matrix X ∈ R^{N×D} and an edge feature tensor C ∈ R^{N×N×D}, we define:

bSA(X, C) = MHA(q[X], k[X], v[X], b[C])  (43)

where q, k, v : R^D → R^D and b : R^D → R^h are linear layers. Finally, the biased Self-Attention Block is:

bSAB(X, C) = NORM(X + bSA(X, C))  (44)

E Ablation studies details

In section 5.1, we conduct an extensive ablation study where we validate the choice of our model components by replacing them with a baseline. We now provide more precise details on the baselines used for this experiment.

Loss. Recall the expression of the loss we propose for GRALE:

L_OT(G, Ĝ, T̂) = Σ_{i,j} ℓ_h(h_i, ĥ_j) T̂_{i,j} + Σ_{i,j} h_i ℓ_F(F_i, F̂_j) T̂_{i,j} + Σ_{i,j,k,l} h_i h_k ℓ_C(C_{i,k}, Ĉ_{j,l}) T̂_{i,j} T̂_{k,l}  (45)

For the ablation study, we replace it with the one proposed to train PIGVAE [67]. Since the original PIGVAE
loss cannot take into account the node padding vector h, we propose the following extension:

L_PIGVAE+(G, Ĝ, T̂) = Σ_i ℓ_h(h_i, [T̂ ĥ]_i) + Σ_i h_i ℓ_F(F_i, [T̂ F̂]_i) + Σ_{i,j} h_i h_j ℓ_C(C_{i,j}, [T̂ Ĉ T̂^T]_{i,j}).  (46)

We also add a regularization term, as suggested in the original paper, and extend it to take the padding into account:

Ω_PIGVAE+(T̂) = −Σ_{i,j} T̂_{i,j} log(T̂_{i,j}) h_j.  (47)

Finally, we replace our loss with L_PIGVAE+(G, Ĝ, T̂) + λ Ω_PIGVAE+(T̂), and we report the results for λ = 10 (after a basic grid search over λ ∈ {0.1, 1, 10}).

Featurizer. As detailed in C, the proposed featurizer ϕ augments the graph representation with high-order properties such as the shortest path matrix. We check the importance of this preprocessing step by removing it entirely. More precisely, we change equation (16), which defines the node features, into

F(x) = F_0(x)  (48)

and equation (17), which defines the edge features, into

C_{i,j}(x) = CONCAT[F_i(x), F_j(x), ONE-HOT(A_{i,j}(x))]  (49)

Encoder. To assess the importance of the Evoformer Encoder module, we swap it with a graph neural network (GNN). More precisely, we change equation (19) into

F^{l+1} = GNN(F^l, A)  (50)

where A is the adjacency matrix. Since the GNN does not output hidden edge representations, we define them as C^L_{i,j} = CONCAT[F^L_i, F^L_j]. For this experiment, we use a 4-layer GIN [72].

Decoder. Similarly, we check the importance of the novel Evoformer Decoder by swapping it with a more classical Transformer Decoder. More precisely, we change equation (23) into

F^{l+1}_Q = TransformerDecoder(F^l_Q, Z^L)  (51)

Since the Transformer Decoder does not reconstruct any edges, we add an extra MLP: (C^L_Q)_{i,j} = MLP(CONCAT[(F^L_Q)_i, (F^L_Q)_j]).

Matcher. Since the role of our matcher is very similar to that of the permuter introduced for PIGVAE [67], we propose to plug the latter into our model instead.
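The log-domain Sinkhorn projection used by the GRALE matcher (equation (27) in C.4) can be sketched as follows. Only the positive affinity input and the 100-iteration budget come from the paper; the update order and this particular implementation are ours.

```python
# Sketch of the Sinkhorn projection of equation (27), iterated in the
# log domain for numerical stability: alternately normalize rows and
# columns of diag(u) K diag(v) until the result is (nearly) bistochastic.
import numpy as np

def log_sinkhorn(K, n_iters=100):
    log_K = np.log(K)
    log_v = np.zeros(K.shape[1])
    for _ in range(n_iters):
        # Row normalization: u_i = 1 / sum_j K_ij v_j, in log space.
        log_u = -np.logaddexp.reduce(log_K + log_v[None, :], axis=1)
        # Column normalization: v_j = 1 / sum_i u_i K_ij, in log space.
        log_v = -np.logaddexp.reduce(log_K + log_u[:, None], axis=0)
    return np.exp(log_u[:, None] + log_K + log_v[None, :])

rng = np.random.default_rng(0)
K = np.exp(rng.normal(size=(6, 6)))  # positive affinities, as in eq. (26)
T = log_sinkhorn(K)
print(np.allclose(T.sum(axis=0), 1.0))  # True
```

At test time the paper swaps this for the Hungarian algorithm; in Python that role could be played by an assignment solver such as `scipy.optimize.linear_sum_assignment`, which returns a hard permutation instead of a soft bistochastic matrix.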
For completeness, we recall the definition of the PIGVAE permuter:

m(X̂, X) = SoftSort(X U^T)  (52)

where U ∈ R^d is a learnable scoring vector and the SoftSort : R^N → R^{N×N} operator is defined as a relaxation of the ArgSort operator:

SoftSort(s)_{i,j} = softmax(−|s_i − sort(s)_j| / τ)  (53)

where τ > 0 is a fixed temperature parameter. Note that, compared to the GRALE matcher, this implementation does not leverage the node features of the output graphs. Instead, it assumes that the permutation between input and output can be seen as a sorting of some node scores s_i. Importantly, the original paper mentions that the permuter benefits from decaying the parameter τ during training. However, a detailed training schedule is not provided in the original article; we therefore report the best results from the grid search τ ∈ {1e-5, 1e-4, 1e-3, 1e-2}.

Disambiguation noise. Finally, we propose to remove the disambiguation noise added to the input features, that is,

ϕ(x) = (F(x), C(x)).  (54)

Recall that the expected role of this noise is to enable the model to produce distinct node embeddings for nodes that are otherwise indistinguishable (in the sense of the Weisfeiler-Lehman test).

F Proofs of the theoretical results

F.1 Loss properties

Proposition 1: Computational cost. This proposition is a trivial extension of Proposition 5 from Any2Graph [37]. The only difference is that, as proposed in [75], we consider an edge feature tensor C instead of an adjacency matrix A.
The proof remains the same. Note that the assumption made in the original paper is that there exist h_1, h_2, f_1, f_2 such that ℓ_C(a, b) = f_1(a) + f_2(b) − ⟨h_1(a), h_2(b)⟩. Instead, we make the slightly stronger (but arguably simpler) assumption that ℓ_C is a Bregman divergence. By definition, any Bregman divergence ℓ writes as

ℓ(a, b) = F(b) − F(a) − ⟨∇F(a), b − a⟩  (55)

and thus the original assumption is verified with f_1(a) = ⟨∇F(a), a⟩ − F(a), f_2(b) = F(b), h_1(a) = ∇F(a) and h_2(b) = b.

Proposition 2: Positivity. This is a direct extension to the case n_C > 1 of Proposition 3 from [37].

F.2 Positioning with respect to PIGVAE

In the following, we assume that the ground losses ℓ_F and ℓ_C are Bregman divergences as defined above. G = (F, C) and Ĝ = (F̂, Ĉ) are graphs of size N, and we omit the padding vectors, enabling a fair comparison with PIGVAE. In this context, the proposed loss rewrites as

L_OT(G, Ĝ, T̂) = Σ_{i,j} ℓ_F(F_i, F̂_j) T̂_{i,j} + Σ_{i,j,k,l} ℓ_C(C_{i,k}, Ĉ_{j,l}) T̂_{i,j} T̂_{k,l}.  (56)

We also recall the expression of PIGVAE's loss:

L_PIGVAE(G, Ĝ, T̂) = L_ALIGN(G, T̂[Ĝ]),  (57)

where L_ALIGN(G, Ĝ) = Σ_{i=1}^N ℓ_F(F_i, F̂_i) + Σ_{i,j=1}^N ℓ_C(C_{i,j}, Ĉ_{i,j}) and T̂[Ĝ] = (T̂ F̂, T̂ Ĉ T̂^T).

Proposition 3: Link between L_PIGVAE and L_OT. Let T̂ ∈ π_N be a bistochastic matrix. Since ℓ_F is a Bregman divergence, it is convex with respect to its second variable, and Jensen's inequality gives:

Σ_j ℓ_F(F_i, F̂_j) T̂_{i,j} ≥ ℓ_F(F_i, Σ_j F̂_j T̂_{i,j}) = ℓ_F(F_i, [T̂ F̂]_i).  (58)

Note that Jensen's inequality applies because, by definition of a bistochastic matrix, Σ_j T̂_{i,j} = 1. Applying the same reasoning twice, we get that for any i, k,

Σ_{j,l} ℓ_C(C_{i,k}, Ĉ_{j,l}) T̂_{i,j} T̂_{k,l} ≥ ℓ_C(C_{i,k}, Σ_{j,l} Ĉ_{j,l} T̂_{i,j} T̂_{k,l}) = ℓ_C(C_{i,k}, [T̂ Ĉ T̂^T]_{i,k}),  (59)

which allows us to conclude that

L_OT(G, Ĝ, T̂) ≥ L_PIGVAE(G, Ĝ, T̂),  (60)

with equality if and only if all the Jensen inequalities are equalities, that is, if and only if T̂ is a permutation matrix.

Proposition 4: Failure case of L_PIGVAE. PIGVAE's loss can be zero even if Ĝ and G are not isomorphic. This can be demonstrated with a very simple counterexample.
Let N = 2, C = Ĉ = 0 and

F = (0.5, 0.5)^T,  F̂ = (1, 0)^T.  (61)

While it is obvious that the two graphs are not isomorphic (their node labels differ), when we set the matching matrix to

T̂ = [[0.5, 0.5], [0.5, 0.5]]  (62)

we have T̂ F̂ = F, and thus L_PIGVAE(G, Ĝ, T̂) = 0. Therefore, we conclude that L_PIGVAE does not satisfy Proposition 2.
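The counterexample above can be checked numerically. With the squared L2 ground loss, the PIGVAE-style loss vanishes for the uniform soft matching although the graphs differ, while the OT loss of equation (56) stays strictly positive.

```python
# Numeric check of Proposition 4's counterexample (equations (61)-(62)),
# using the squared L2 ground loss on node features (C = C_hat = 0, so
# the edge terms of both losses vanish).
import numpy as np

F = np.array([0.5, 0.5])      # node features of G
F_hat = np.array([1.0, 0.0])  # node features of G_hat
T = np.full((2, 2), 0.5)      # the uniform bistochastic matching

l_pigvae = np.sum((F - T @ F_hat) ** 2)                  # aligns first, then compares
l_ot = np.sum(((F[:, None] - F_hat[None, :]) ** 2) * T)  # compares all pairs, weighted

print(l_pigvae, l_ot)  # 0.0 0.5
```

The gap illustrates Proposition 3: averaging F̂ through T̂ before comparing (PIGVAE) can only lower the loss relative to the pairwise OT formulation, with equality only at permutation matrices.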
arXiv:2505.22112v1 [cs.AI] 28 May 2025

Visual Large Language Models Exhibit Human-Level Cognitive Flexibility in the Wisconsin Card Sorting Test

Guangfu Hao (1,2), Frederic Alexandre (3), Shan Yu (1,2,*)

(1) Laboratory of Brain Atlas and Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
(2) School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
(3) Inria centre at the University of Bordeaux, Neurodegenerative Diseases Institute, Bordeaux, France

Abstract — Cognitive flexibility has been extensively studied in human cognition but remains relatively unexplored in the context of Visual Large Language Models (VLLMs). This study assesses the cognitive flexibility of state-of-the-art VLLMs (GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet) using the Wisconsin Card Sorting Test (WCST), a classic measure of set-shifting ability. Our results reveal that VLLMs achieve or surpass human-level set-shifting capabilities under chain-of-thought prompting with text-based inputs. However, their abilities are highly influenced by both input modality and prompting strategy. In addition, we find that, through role-playing, VLLMs can simulate various functional deficits aligned with patients having impairments in cognitive flexibility, suggesting that VLLMs may possess a cognitive architecture, at least regarding the ability of set-shifting, similar to the brain. This study reveals that VLLMs have already approached the human level on a key component underlying our higher cognition, and highlights the potential of using them to emulate complex brain processes.

Index Terms — Cognitive flexibility, Wisconsin Card Sorting Test, Visual large language models, Prefrontal cortex, Prompting strategy, Cognitive impairment simulation.

I. INTRODUCTION

Cognitive flexibility, a key component of executive function, is fundamental to human adaptability and problem-solving [1], [2].
This ability to shift between mental sets or strategies in response to changing environmental demands is crucial for everyday functioning and has been extensively studied in cognitive psychology research [3]. The prefrontal cortex (PFC) is known to be central to this cognitive process [4], [5], facilitating goal-directed behavior and controlled processing.

Recent advancements in artificial intelligence (AI), particularly in visual large language models (VLLMs) [6], have sparked a growing need to assess these systems' cognitive abilities using paradigms analogous to human cognitive assessment [7]–[9]. State-of-the-art VLLMs such as GPT-4o [10], Gemini-1.5 Pro [11], and Claude-3.5 Sonnet [12] have demonstrated remarkable capabilities in processing and interpreting both textual and visual information, excelling in tasks that demand complex reasoning and contextual understanding. Despite these achievements, the extent to which VLLMs exhibit cognitive flexibility, especially in tasks requiring set-shifting and adaptation to changing rules, remains largely unexplored. While these models have demonstrated impressive performance across diverse tasks, their ability to flexibly adapt to changing environmental demands has not been systematically evaluated. This gap in our understanding is particularly significant given the increasing integration of VLLMs into complex, dynamic real-world environments where adaptability is crucial.

*Corresponding author: shan.yu@nlpr.ia.ac.cn

The Wisconsin Card Sorting Test (WCST), developed in the 1940s [13] and refined over decades, has emerged as the gold standard for assessing cognitive flexibility in both clinical and research settings [14], [15]. Originally developed to evaluate PFC function, it requires participants to discover sorting rules based on feedback and then flexibly shift
to new rules when the criteria change (see Figure 1). The test's sensitivity to PFC function has been consistently demonstrated through lesion studies [16], neuroimaging research [17], and clinical observations, cementing its status as a crucial tool for understanding cognitive flexibility. While other measures of cognitive flexibility exist, the WCST's established validity makes it a core benchmark for evaluating this fundamental cognitive capacity.

This research aims to evaluate the cognitive flexibility of VLLMs using the WCST paradigm and to investigate how different input modalities (image-based vs. text-based), prompting strategies (direct vs. chain-of-thought reasoning) and rule description specificity affect their performance. Additionally, we explore the potential of VLLMs to simulate specific patterns of cognitive impairment through role-playing, which helps us understand the human cognitive architecture. By comparing VLLM performance across varied conditions, we aim to elucidate their cognitive flexibility and inherent limitations. This investigation not only advances our understanding of VLLMs but also offers insights into the nature of cognitive flexibility itself.

II. RELATED WORK

A. Cognitive Flexibility and Assessment Methods

Neuroimaging studies have consistently implicated the PFC in cognitive flexibility tasks. The dorsolateral prefrontal cortex (DLPFC) and anterior cingulate cortex (ACC) play critical roles in set-shifting, with the DLPFC maintaining and updating task rules, and the ACC involved in conflict monitoring and
error detection [18].

Fig. 1. WCST Procedure and Sample Stimuli. The WCST consists of matching response cards to four stimulus cards based on a sorting rule (color, shape, or number) that changes periodically (after 10 correct matches, with no explicit notification of rule changes). Participants receive feedback on the accuracy of each match but are not explicitly told the sorting rule or when it changes.

The fronto-parietal network, encompassing these regions, dynamically reconfigures during flexibility-demanding tasks [19]. Cognitive flexibility is closely interrelated with other executive functions: working memory maintains task-relevant information and goals [3], while inhibitory control suppresses previous cognitive sets when rules change [20].

Several tasks have been developed to assess cognitive flexibility in humans, with the WCST being a widely recognized measure of set-shifting ability [21]. The WCST's sensitivity to PFC dysfunction has been extensively validated [22]. Complementary assessments include the Dimensional Change Card Sort (DCCS) task for children [23] and the computerized Intra-Extra Dimensional Set Shift (IED) subtest of the Cambridge Neuropsychological Test Automated Battery (CANTAB) [24], offering targeted measures across different populations and modalities.

B. Multifaceted Evaluation of LLMs

Recent studies have employed diverse assessments to evaluate large language models (LLMs) and VLLMs across various domains and tasks.
Models like GPT-4 demonstrated human-level or superior performance on most theory of mind tests [25]. Similarly, research on
human creativity found that ChatGPT-assisted ideas were more creative than those generated without LLM assistance [26]. However, challenges persist in other areas. The Test of Time (ToT) benchmark exposed difficulties with complex temporal reasoning tasks, particularly those requiring multi-fact integration and intricate arithmetic operations [27]. Despite strong performance on high-level vision tasks, state-of-the-art VLLMs struggled with basic geometric tasks that are straightforward for humans [28]. A neuropsychological investigation revealed a discontinuous profile in ChatGPT's prefrontal functioning, with performance ranging from superior to impaired across different cognitive tasks [29]. To address the multifaceted nature of artificial intelligence, researchers have proposed new evaluation frameworks. A comprehensive framework for artificial general intelligence (AGI) tests inspired by cognitive science emphasizes the need for multidimensional intelligence assessment [9]. Additionally, the concept of Turing Experiments (TEs) was introduced as a method for evaluating LLMs' ability to simulate human behavior in experimental settings [30].

C. Prompting Strategies

Prompting strategies significantly influence the performance of LLMs [31]. The simplest approach, "Straight-to-Answer" (STA), directly queries the model without additional context. While effective for straightforward tasks, STA often falters on complex problems requiring multi-step reasoning. Chain-of-Thought (CoT) prompting encourages step-by-step reasoning [32], substantially improving performance on complex reasoning tasks [33]. Variations such as zero-shot CoT [34] and self-consistency CoT [35] have further refined this approach, adapting it to scenarios with limited or no task-specific examples. In multimodal contexts, visual CoT has extended these concepts to VLLMs [36], demonstrating the potential for improved reasoning in tasks that combine textual and visual information.
Other task-specific strategies, such as least-to-most prompting, address the challenge of easy-to-hard generalization [37], while meta-prompting and automatic prompt engineering techniques aim to optimize the prompts themselves [38], [39].

III. METHOD

A. Models and Experimental Procedure

This study focuses on three state-of-the-art VLLMs: GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet. These models represent the current pinnacle of multimodal LLM capabilities, demonstrating proficiency in processing both textual and visual inputs (see Table V for details). We employ a standard version of the WCST-64 [15] to assess the cognitive flexibility of VLLMs. Our experimental design incorporates a 2x2 factorial structure, manipulating input modality (Visual Input (VI) / Textual Input (TI)) and prompting strategy (Straight to Answer (STA) / Chain of Thought (CoT)) to comprehensively evaluate VLLM performance. This design results in four experimental conditions: STA-VI, STA-TI, CoT-VI, and CoT-TI. Each VLLM was tested independently across all four conditions, with 10 repetitions per condition. The arrangement of stimulus cards and the sequence of sorting rules were randomized for each repetition.

Algorithm 1 WCST for VLLMs
 1: Initialize r ∈ {color, shape, number}, c ← 0, s ← 0, t ← 0
 2: Inform model: "{Task description}"
 3: while t < 64 do
 4:   Present card and prompt for sorting (image/text for VI/TI, using STA/CoT)
 5:   Record and parse the model's response to extract the selection
 6:   if selection is correct then
 7:     c ← c + 1
 8:     if c = 10 then
 9:       Change active rule r
10:       s ← s + 1
11:       c ← 0
12:     end if
13:   else
14:     c ← 0
15:   end if
16:   Provide feedback on the correctness of the current selection
17:   t ← t +
https://arxiv.org/abs/2505.22112v1
1 18:end while Additionally, we collected data from 30 cognitively healthy human participants (aged 20-35) as a baseline for comparison. Human participants interacted with a web-based interfacedesigned to replicate the WCST experience while accom- modating human response patterns (supplementary Figure s- 5). The interface presented cards sequentially and allowed participants to indicate their sorting choices via button presses. The language used in instructions was carefully adapted to be more intuitive for human subjects while maintaining the essential structure of the task. We adapted the WCST for use with VLLMs while main- taining its core principles (see Algorithm 1 for the implemen- tation). The test consists of a series of virtual cards, each featuring shapes (circle, cross, triangle, or star) in varying colors (red, green, yellow, or blue) and quantities (one to four). The models are tasked with sorting these cards according to an undisclosed rule (color, shape, or number), which changes periodically without explicit notification. The sorting rule changed after ten consecutive correct categorizations. The assessment concluded after 64 trials. Detailed descriptions of the task instructions are provided in supplementary Figure s-2. Example stimuli for VI and TI conditions, and prompt templates for STA and CoT strategies are provided in supple- mentary Figure s-3. Data collection was fully automated using API calls to each VLLM. Model responses were recorded verbatim for each trial. Human participant data was collected through the web- based interface. All participants provided informed consent, and the study was approved by the institutional review board. To address potential concerns regarding model memoriza- tion of the classic WCST paradigm, we also designed a novel variant called the ALIEN Task that preserves the logical struc- ture of the WCST while using entirely different terminology, stimuli, and framing. 
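As a companion to Algorithm 1, the control loop can be sketched in Python. Here `query_model` is a hypothetical callback standing in for the VLLM API call, and card generation is simplified to random sampling; neither is part of the original implementation.

```python
import random

COLORS = ["red", "green", "yellow", "blue"]
SHAPES = ["circle", "cross", "triangle", "star"]
RULES = ["color", "shape", "number"]

# Stimulus cards: a fixed row of four reference cards, as in the WCST.
STIMULUS = [("red", "triangle", 1), ("green", "star", 2),
            ("yellow", "cross", 3), ("blue", "circle", 4)]

def run_wcst(query_model, n_trials=64, run_len=10, seed=0):
    """Sketch of Algorithm 1: present cards, score the model's choice against
    the hidden rule, and silently switch rules after 10 consecutive hits."""
    rng = random.Random(seed)
    rule = rng.choice(RULES)            # line 1: initialize active rule r
    consecutive, categories = 0, 0      # counters c and s
    feedback_log = []
    for _ in range(n_trials):           # line 3: while t < 64
        card = (rng.choice(COLORS), rng.choice(SHAPES), rng.randint(1, 4))
        choice = query_model(STIMULUS, card)   # lines 4-5: prompt and parse
        dim = RULES.index(rule)
        correct = STIMULUS[choice][dim] == card[dim]
        if correct:                      # lines 6-12
            consecutive += 1
            if consecutive == run_len:   # category completed: switch rule
                rule = rng.choice([r for r in RULES if r != rule])
                categories += 1
                consecutive = 0
        else:                            # lines 13-15
            consecutive = 0
        feedback_log.append(correct)     # line 16: feedback returned to model
    return categories, feedback_log
```

A real run would wrap an API call inside `query_model` and feed the accumulated feedback back into each prompt; here the callback interface is the point of the sketch, not the policy.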
In this variant, participants explore an alien civilization by categorizing extraterrestrial astronomical systems. The original WCST dimensions were replaced with thematically distinct alternatives: shape was replaced with planetary orbit types (spiral, elliptical, circular, Z-shaped); color was replaced with atmospheric composition (hydrogen/blue, helium/yellow, nitrogen/purple, oxygen/green); and count was replaced with number of moons (1, 2, 3, 4). The underlying logical structure and rule-switching mechanisms remained identical to the WCST. We implemented this variant under STA-TI and CoT-TI conditions to examine whether performance patterns would persist with entirely novel surface features. The specific instructions and example stimuli for the ALIEN Task are detailed in supplementary Figure s-4.

B. Evaluation Metrics

Performance was primarily assessed using the following metrics, which were chosen for their ability to quantify different aspects of cognitive flexibility:

Categories Completed (CC): The number of categories (sets of 10 consecutive correct sorts) completed.

CC = \sum_{i=1}^{n} I(c_i = 10)    (1)

where n is the total number of trials, c_i is the number of consecutive correct sorts at trial i, and I(·) is the indicator function.

Perseverative Errors (PE): The number of errors where the model persisted with a previously correct but currently incorrect rule.

PE = \sum_{i=1}^{n} I(r_i = r_{prev} \wedge r_i \neq r_{current})    (2)

where r_i is the rule used by the model at trial i, r_{prev} is the previously correct rule, and r_{current} is the current correct rule.

Non-Perseverative Errors (NPE): All errors that are not
perseverative.

NPE = Total Errors − PE    (3)

NPE captures non-perseverative errors, potentially indicating exploration or random mistakes.

Trials to First Category (TFC): The number of trials required to complete the first category, indicating how quickly the model can deduce and consistently apply the first sorting rule.

TFC = \min\{i : c_i = 10\}    (4)

where i is the trial number and c_i is as defined in CC.

Conceptual Level Responses (CLR): The percentage of responses occurring in runs of three or more correct sorts, indicating conceptual understanding.

CLR = \frac{\sum_{i=1}^{n} I(c_i \geq 3)}{n} \times 100\%    (5)

where I(c_i ≥ 3) is an indicator function that equals 1 if the number of consecutive correct sorts up to and including trial i is 3 or more, and 0 otherwise.

Failure to Maintain Set (FMS): The number of times the model makes an error after five or more consecutive correct sorts but before completing a category.

FMS = \sum_{i=1}^{n-1} I(5 \leq c_i < 10) \cdot I(c_{i+1} = 0)    (6)

where i is the trial number and c_i is as defined in CC.

These metrics collectively provide a comprehensive view of the VLLMs' cognitive flexibility [40], capturing various aspects such as rule learning, set-shifting, perseveration, and conceptual understanding.

IV. RESULTS

A. WCST Task Performance Across Models and Conditions

The WCST performance of GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet exhibited marked variations across the four conditions (Figure 2). Their cognitive flexibility was measured using the CC metric, standardized on a 0-1 scale, with the human baseline (µ = 0.95, σ = 0.09). The CoT-TI condition consistently yielded superior outcomes across all VLLMs, followed by CoT-VI, STA-TI, and STA-VI, respectively, underscoring the critical influence of both prompting strategies and input modalities on VLLMs' set-shifting capabilities. In the STA-VI condition, all VLLMs struggled significantly, with mean performances ranging from 0.02 to 0.04.
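The evaluation metrics defined in the previous section can be computed from a per-trial correctness log. This is a minimal sketch, with the run counter c_i resetting both on errors and after a completed category, as in Algorithm 1; the perseverative-error count additionally needs the rule each response was consistent with, and NPE (Eq. (3)) follows as total errors minus PE.

```python
def consecutive_counts(correct, run_len=10):
    """c_i: consecutive correct sorts up to trial i, resetting on errors and
    after a completed category of run_len correct sorts (Algorithm 1)."""
    counts, run = [], 0
    for ok in correct:
        run = run + 1 if ok else 0
        counts.append(run)
        if run == run_len:
            run = 0
    return counts

def categories_completed(correct, run_len=10):
    """CC, Eq. (1): number of runs that reach run_len."""
    return sum(c == run_len for c in consecutive_counts(correct, run_len))

def perseverative_errors(used, prev, current):
    """PE, Eq. (2): trials where the model applied the previously correct
    rule while it was no longer the current rule."""
    return sum(u == p and u != c for u, p, c in zip(used, prev, current))

def trials_to_first_category(correct, run_len=10):
    """TFC, Eq. (4): 1-based index of the trial completing the first category."""
    for i, c in enumerate(consecutive_counts(correct, run_len), start=1):
        if c == run_len:
            return i
    return None  # first category never completed

def conceptual_level_responses(correct, run_len=10):
    """CLR, Eq. (5): percentage of trials with c_i >= 3."""
    counts = consecutive_counts(correct, run_len)
    return 100.0 * sum(c >= 3 for c in counts) / len(counts)

def failures_to_maintain_set(correct, run_len=10):
    """FMS, Eq. (6): errors after 5-9 consecutive correct sorts."""
    counts = consecutive_counts(correct, run_len)
    return sum(1 for i in range(len(counts) - 1)
               if 5 <= counts[i] < run_len and counts[i + 1] == 0)
```

For example, a log with six correct sorts, one error, then ten correct sorts yields one completed category and one failure to maintain set.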
The transition to STA-TI yielded modest improvements, particularly for Claude-3.5 Sonnet (µ = 0.10, σ = 0.10). However, the introduction of CoT prompting precipitated a dramatic performance surge across all models. In the CoT-VI condition, Claude-3.5 Sonnet exhibited remarkable improvement (µ = 0.96, σ = 0.08), while Gemini-1.5 Pro and GPT-4o also showed substantial gains. This stark contrast between STA and CoT conditions illuminates the pivotal role of explicit reasoning in augmenting VLLMs' cognitive flexibility.

The CoT-TI condition elicited peak performances, with Claude-3.5 Sonnet achieving perfection (µ = 1.00, σ = 0.00), surpassing even the human baseline. Gemini-1.5 Pro (µ = 0.96, σ = 0.08) and GPT-4o (µ = 0.92, σ = 0.16) also demonstrated near-human or human-equivalent performance in this setting. Notably, the performance variability (σ) was generally higher in CoT conditions for Gemini-1.5 Pro and GPT-4o, indicating potential instability in their cognitive processes. The consistent superiority of TI over VI across all conditions suggests a potential advantage in processing textual over visual inputs.

The observed performance gradient, from near-chance levels in STA-VI to human-surpassing in CoT-TI, demonstrates the potential of VLLMs to exhibit human-like cognitive flexibility under appropriate conditions, while also highlighting the critical impact of prompting strategies and input modalities on their performance in tasks requiring set-shifting and rule adaptation.

To investigate whether VLLMs' performance might be influenced by memorization of similar tasks in their training data, we evaluated all three
models on our novel ALIEN Task variant under both STA-TI and CoT-TI conditions. The results, presented in Figure s-1 and Table t-1, demonstrate performance patterns remarkably consistent with those observed in the original WCST. Under the STA-TI condition, all models struggled to complete the ALIEN Task, while under the CoT-TI condition, all models demonstrated high performance levels that closely mirrored their achievements on the standard WCST. This consistent performance strongly suggests that the models' capabilities reflect genuine cognitive flexibility rather than memorization of specific task patterns.

B. Detailed Analysis by Evaluation Metric

To offer a comprehensive assessment of the VLLMs' performance on the WCST, we analyzed six key metrics outlined in the previous section. Table I presents the mean scores and standard deviations across all evaluation metrics for each VLLM and condition. This analysis reveals distinct patterns in cognitive flexibility and set-shifting abilities among the models.

PE were most prevalent in the STA-TI condition for all models, with Claude-3.5 Sonnet showing the highest number of errors in this condition (µ = 15.90, σ = 17.53). In the STA-VI condition, PE were relatively low for all models, as they largely failed to follow the rules at all. However, the transition to CoT conditions reduced PE, with Claude-3.5 Sonnet demonstrating the lowest number of PE in the CoT-TI condition (µ = 6.30, σ = 0.82). This suggests that Claude-3.5 Sonnet may surpass human performance in adapting to changing rules, especially when provided with explicit reasoning prompts and textual descriptions.
Fig. 2. WCST Task Performance Across Models and Conditions. The distribution of standardized Categories Completed (CC) scores for GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet under four experimental conditions: STA-VI (Straight to Answer - Visual Input), STA-TI (Straight to Answer - Textual Input), CoT-VI (Chain of Thought - Visual Input), and CoT-TI (Chain of Thought - Textual Input), with the human baseline performance.

TABLE I
WCST PERFORMANCE METRICS ACROSS MODELS AND EXPERIMENTAL CONDITIONS

Model              Condition  CC           PE             NPE            TFC           CLR (%)        FMS
Gemini-1.5 Pro     STA-VI     0.10 (0.32)  1.70 (5.38)    40.00 (9.51)   11.00 (-)     5.00 (6.15)    0.20 (0.42)
                   STA-TI     0.40 (0.52)  10.10 (13.99)  29.90 (17.90)  19.25 (-)     9.22 (7.88)    0.10 (0.32)
                   CoT-VI     2.90 (1.60)  7.40 (4.27)    12.30 (12.37)  19.11 (-)     45.00 (18.95)  0.50 (0.71)
                   CoT-TI     4.80 (0.42)  6.80 (1.55)    3.50 (1.72)    13.30 (1.95)  63.12 (4.67)   0.10 (0.32)
GPT-4o             STA-VI     0.20 (0.42)  6.20 (13.21)   32.70 (18.57)  19.00 (-)     10.31 (12.11)  0.80 (1.48)
                   STA-TI     0.30 (0.48)  11.60 (18.72)  28.70 (19.81)  12.00 (-)     8.28 (5.90)    0.30 (0.67)
                   CoT-VI     2.50 (1.43)  7.60 (5.10)    11.20 (10.03)  17.38 (-)     44.53 (15.51)  1.10 (0.88)
                   CoT-TI     4.60 (0.84)  7.60 (1.84)    2.10 (0.88)    12.60 (2.46)  63.28 (5.86)   0.10 (0.32)
Claude-3.5 Sonnet  STA-VI     0.10 (0.32)  3.10 (9.80)    24.50 (10.97)  17.00 (-)     20.47 (12.60)  1.60 (0.97)
                   STA-TI     0.50 (0.53)  15.90 (17.53)  22.90 (18.22)  15.80 (-)     8.90 (7.14)    0.10 (0.32)
                   CoT-VI     4.80 (0.42)  7.20 (2.82)    2.20 (1.40)    12.70 (1.57)  65.16 (5.32)   0.00 (0.00)
                   CoT-TI     5.00 (0.00)  6.30 (0.82)    2.00 (0.82)    12.00 (0.94)  67.50 (2.74)   0.00 (0.00)
Human              STA-VI     4.73 (0.45)  6.87 (1.63)    2.80 (1.69)    12.93 (1.62)  65.15 (4.35)   0.10 (0.31)

NPE showed a dramatic reduction from STA to CoT conditions across all models, with improvements observed in the transition from VI to TI inputs. In STA conditions, NPE were extremely high, indicating near-random performance. The near-elimination of NPE in CoT-TI (e.g., Claude-3.5 Sonnet: µ = 2.00, σ = 0.82) suggests that VLLMs can achieve a level of consistent rule application that exceeds human performance. This suggests that explicit reasoning prompts enable VLLMs to maintain a more consistent internal representation of the current sorting rule, reducing random errors.

All models required the fewest trials to complete the first category in the CoT-TI condition, with Claude-3.5 Sonnet performing best (µ = 12.00, σ = 0.94), followed closely by GPT-4o (µ = 12.60, σ = 2.46) and Gemini-1.5 Pro (µ = 13.30, σ = 1.95). Notably, Claude-3.5 Sonnet outperformed the human baseline (µ = 12.93, σ = 1.62).

CLR patterns showed substantial improvement from STA to CoT conditions for all VLLMs, with the highest percentages observed in the CoT-TI condition. Claude-3.5 Sonnet achieved the highest CLR in this condition (µ = 67.50%, σ = 2.74%), followed by GPT-4o (µ = 63.28%, σ = 5.86%) and Gemini-1.5 Pro (µ = 63.12%, σ = 4.67%). This indicates that under CoT-TI conditions, VLLMs can maintain conceptual understanding at a level comparable to or exceeding human capability.

FMS were generally low in STA conditions, but this reflects the models' overall poor performance rather than true set maintenance. The transition to CoT conditions led to increased FMS in the VI condition, suggesting that improved overall performance paradoxically led to more instances of set loss after initial successful rule application.
However, in the CoT-TI condition, Claude-3.5 Sonnet achieved perfect set maintenance (FMS = 0.00), outperforming the human baseline. This indicates that VLLMs can maintain exceptional consistency in rule application, potentially surpassing human capabilities in this aspect of cognitive flexibility.

These detailed metrics collectively reinforce the finding that CoT prompting, particularly when combined with textual inputs, substantially enhances VLLMs' cognitive flexibility as measured by WCST performance. While all models showed similar patterns across conditions, Claude-3.5 Sonnet consistently demonstrated superhuman cognitive flexibility in the CoT-TI condition. The consistent pattern across all six metrics highlights the robustness of the effects of prompting strategy and input modality, while also revealing subtle differences in the cognitive capabilities of these VLLMs.

C. Analysis of Input Modality

To investigate the performance difference between visual and textual input conditions, we conducted a detailed analysis of each model's ability to accurately perceive and interpret the WCST card features. This analysis aimed to determine whether the performance gap was due to limitations in visual processing or differences in cognitive flexibility across modalities. We evaluated the models' accuracy in identifying the three key features of WCST cards: color, shape, and number, comparing their descriptions against actual card features for each trial (detailed in Appendix E). The results indicate that all three models demonstrated high accuracy in
visual feature recognition (Table II). Claude-3.5 Sonnet demonstrated perfect accuracy across all features, while Gemini-1.5 Pro and GPT-4o showed a decline in visual capabilities, particularly when recognizing how many cards were present in the image and the number of shapes on each card. Notably, GPT-4o almost always misidentified 5 cards as 6 cards.

TABLE II
VISUAL FEATURE RECOGNITION ACCURACY (%)

Model              Count  Color  Shape  Number  Overall
Gemini-1.5 Pro     75     100    100    97.81   96.97
GPT-4o             0      100    100    96.56   89.55
Claude-3.5 Sonnet  100    100    100    100     100

These findings suggest that the performance gap between VI and TI conditions is not solely attributable to limitations in visual feature extraction, but rather to the cascading effects of occasional visual misinterpretations on higher-order cognitive processes. In the VI condition, visual recognition errors can disrupt the model's ability to consistently apply a rule, necessitating re-exploration of the problem space. This phenomenon explains the increased variance observed in model performance under the CoT-VI condition compared to CoT-TI. The textual input's inherent precision eliminates this source of variability, allowing models to demonstrate more consistent cognitive flexibility. These results reveal the complex interplay between visual perception and executive function in VLLMs, highlighting the need for more robust visual processing pipelines.

D. Impact of Explicit Rule Exclusivity

All results in previous analyses were obtained under conditions that included both a general rule statement specifying "The correct answer depends on a rule, which will be based solely on either the number of shapes, the color of the shapes, or the shape type itself" and an explicit rule exclusivity constraint stating "There will be no combination of these characteristics to define the rule".
To further investigate the robustness of VLLMs' cognitive flexibility, we conducted an additional study examining performance when explicit rule exclusivity was removed. We examined this under the CoT-TI condition, which had previously demonstrated near-human or superior cognitive flexibility for all models. The results are in Table III.

TABLE III
IMPACT OF EXPLICIT RULE EXCLUSIVITY ON CC

Model              Normal      w/o Constraints  Decline
Gemini-1.5 Pro     4.8 (0.42)  2.6 (2.01)       2.2
GPT-4o             4.6 (0.84)  3.5 (1.27)       1.1
Claude-3.5 Sonnet  5.0 (0.00)  4.7 (0.67)       0.3

Gemini-1.5 Pro exhibited the most pronounced sensitivity to the absence of explicit rule exclusivity, with mean CC decreasing from 4.8 (σ = 0.42) with the constraint to 2.6 (σ = 2.01) without it, representing a 2.2 decline. GPT-4o demonstrated moderate sensitivity, with performance dropping from 4.6 (σ = 0.84) to 3.5 (σ = 1.27) categories. Claude-3.5 Sonnet showed the most robust performance, maintaining high functionality even without the explicit exclusivity statement, with only a marginal decline from perfect performance (µ = 5.0, σ = 0.00) to near-perfect (µ = 4.7, σ = 0.67).

The observed increases in standard deviations across all models when the exclusivity constraint was removed indicate that explicit rule exclusivity not only enhances performance but also promotes more consistent cognitive flexibility. The differential declines observed among models reflect disparities in their ability to maintain simple rule structures in the absence of explicit constraints against more complex possibilities. Claude-3.5 Sonnet's robust performance suggests
a superior ability to infer and adhere to simpler rule structures, even when the possibility of more complex rules is not explicitly excluded.

E. Simulating Cognitive Impairment

To explore the potential of VLLMs in modeling human cognitive impairment without modifying the models, we employed role-playing prompts to simulate three key aspects of prefrontal cortex (PFC) function commonly impaired in various neurological conditions [41], [42]: goal maintenance, inhibitory control, and adaptive updating. This method leverages the models' ability to imagine and simulate different cognitive states, allowing us to study how they conceptualize and perform under various impairment conditions without modifying the underlying model architecture. We focused on the CoT-TI condition, as it consistently yielded the best performance across all models in our previous experiments. The specific role-playing prompts and analysis methods for this component are detailed in supplementary Figure s-6.

TABLE IV
WCST PERFORMANCE UNDER NORMAL AND SIMULATED IMPAIRMENT CONDITIONS (COT-TI)

Model              Condition         CC           PE             NPE            TFC            CLR (%)        FMS
Gemini-1.5 Pro     Normal            4.80 (0.42)  6.80 (1.55)    3.50 (1.72)    13.30 (1.95)   63.12 (4.67)   0.10 (0.32)
                   Goal Maint. (↓)   1.90 (1.60)  8.60 (7.83)    12.00 (12.38)  16.86 (-)      37.03 (22.39)  0.60 (0.70)
                   Inhib. Ctrl. (↓)  1.70 (1.49)  6.40 (5.66)    20.10 (14.04)  32.00 (-)      30.63 (18.91)  0.90 (0.99)
                   Adapt. Upd. (↓)   3.90 (0.74)  8.30 (2.71)    6.70 (4.57)    17.50 (6.20)   56.09 (9.53)   0.00 (0.00)
GPT-4o             Normal            4.60 (0.84)  7.60 (1.84)    2.10 (0.88)    12.60 (2.46)   63.28 (5.86)   0.10 (0.32)
                   Goal Maint. (↓)   3.50 (1.65)  9.80 (2.66)    4.50 (3.78)    18.10 (9.46)   52.34 (16.49)  0.80 (1.03)
                   Inhib. Ctrl. (↓)  4.20 (1.03)  10.10 (6.66)   3.70 (2.41)    14.00 (3.50)   57.97 (11.66)  0.30 (0.48)
                   Adapt. Upd. (↓)   4.30 (0.82)  8.30 (3.13)    2.80 (2.49)    13.10 (1.79)   61.56 (9.24)   0.10 (0.32)
Claude-3.5 Sonnet  Normal            5.00 (0.00)  6.30 (0.82)    2.00 (0.82)    12.00 (0.94)   67.50 (2.74)   0.00 (0.00)
                   Goal Maint. (↓)   3.20 (1.40)  12.50 (5.64)   5.50 (4.79)    17.50 (7.20)   47.19 (17.44)  0.60 (0.84)
                   Inhib. Ctrl. (↓)  1.50 (1.65)  12.80 (13.82)  18.70 (19.82)  18.83 (-)      23.59 (20.75)  0.40 (0.52)
                   Adapt. Upd. (↓)   3.60 (1.26)  8.60 (4.93)    7.50 (9.35)    20.60 (12.55)  51.56 (14.91)  0.00 (0.00)

The results, presented in Table IV, reveal that all three VLLMs demonstrated significant performance decrements under simulated impairment conditions, with patterns that align with neuropsychological observations of patients with prefrontal dysfunction.

Gemini-1.5 Pro exhibited the highest sensitivity to simulated impairments, with substantial declines in CC across all conditions. The most severe impact was observed under inhibitory control impairment (CC reduced from 4.80 to 1.70), accompanied by a marked increase in NPE from 3.50 to 20.10.

GPT-4o demonstrated greater resilience, maintaining relatively stable performance across impairment conditions. The model's CC decreased modestly from 4.60 to 3.50-4.30, with PE showing consistent increases across conditions. Notably, NPE remained stable, indicating a robust ability to maintain overall response consistency even under simulated impairments. This stability suggests that GPT-4o's decision-making processes may be more resistant to perturbation.

Claude-3.5 Sonnet, despite showing the highest baseline performance (CC = 5.00), exhibited significant vulnerability to simulated impairments. The model showed increases in both PE and NPE under impairment conditions, particularly
for inhibitory control (PE: 12.80, NPE: 18.70). This pattern suggests that Claude-3.5 Sonnet's high baseline performance may rely on finely tuned cognitive processes that are more susceptible to disruption when specific aspects of executive function are impaired.

Across all models, inhibitory control impairment consistently produced the most severe performance decrements and led to increased NPE and FMS, aligning with observations in patients with orbitofrontal damage [43]. Models frequently mentioned irrelevant card features, simulating distraction and impulsivity. Goal maintenance impairment primarily affected CLR and FMS, reflecting difficulties in consistently applying rules. This pattern is consistent with observations in patients with dorsolateral prefrontal cortex lesions [44]. Adaptive updating impairment had a more moderate impact, mainly affecting CC and CLR, while having less effect on FMS, consistent with difficulties in switching to new rules [45]. These distinct patterns of impairment across models suggest that while VLLMs can simulate aspects of cognitive dysfunction, the underlying mechanisms of their decision-making processes may differ.

V. DISCUSSION AND CONCLUSION

This study demonstrates that state-of-the-art VLLMs can achieve, and in some cases surpass, human-level cognitive flexibility as measured by the WCST, suggesting a potential for emulating and exceeding human set-shifting abilities in specific contexts. The observed performance gradient underscores the complex interplay between input modalities and prompting strategies. The consistent performance across both the standard WCST and our novel ALIEN Task variant provides compelling evidence that these models demonstrate genuine cognitive flexibility rather than relying on memorized patterns.
Despite the complete transformation of surface features, the models exhibited virtually identical performance patterns across conditions, suggesting engagement with the underlying logical structure of set-shifting tasks rather than merely recalling similar examples from their training data. The performance gap between VI and TI conditions indicates that current VLLMs may rely more heavily on language-based reasoning pathways, even when processing visual information. Explicit reasoning prompts enable VLLMs to maintain more stable internal representations of task rules.

Our analysis of explicit rule exclusivity reveals a critical dependence of VLLMs on precise task instructions. The significant performance decline observed when specific rule constraints were removed highlights the models' reliance on explicit information to guide their decision-making processes. This finding suggests that VLLMs' impressive performance in structured tasks may not fully generalize to more ambiguous real-world scenarios without careful consideration of instruction design.

The simulation of cognitive impairments through role-playing prompts demonstrates the potential of VLLMs to model complex behavioral patterns of executive dysfunction. The distinct performance profiles observed under simulated goal maintenance, inhibitory control, and adaptive updating impairments closely mirror behavioral patterns seen in human neuropsychological research. However, as recent research has shown that language models can form biased associations that mirror societal stereotypes, it is important to acknowledge that VLLMs' simulations of cognitive impairments may reflect stereotypical representations of clinical populations rather than authentic mechanisms underlying the disorders themselves.
Future research should focus on elucidating the underlying mechanisms that enable VLLMs to perform set-shifting tasks and investigating the generalizability of these abilities to other domains of executive function. Additionally, the development of more sophisticated visual processing
capabilities and the integration of multimodal information processing warrant further exploration. The potential of VLLMs to simulate specific patterns of cognitive impairment also opens up new possibilities for creating realistic models of neuropsychological conditions, which could have applications in both clinical research and AI safety. By analyzing VLLMs' internal representations during simulated impairments, we could potentially decode the computational principles underlying various cognitive functions, complementing traditional neuroscience methods.

In conclusion, this study provides insights into our understanding of cognitive flexibility in VLLMs, revealing capabilities that match or exceed human performance and important limitations that depend on task framing and input modalities. As these models continue to evolve, a deeper understanding of their cognitive processes will be crucial for harnessing their potential while addressing their constraints, ultimately leading to more adaptive and robust AI systems that can flexibly navigate complex, real-world environments.

MODEL DETAILS

This study employs three state-of-the-art visual language models: Gemini-1.5 Pro, GPT-4o, and Claude-3.5 Sonnet. Table V presents a comparative overview of these models' key characteristics.

A. Gemini-1.5 Pro

Gemini-1.5 Pro, created by Google, employs a Mixture of Experts architecture, allowing for efficient processing of both textual and visual inputs. While specific architectural details are proprietary, the model demonstrates strong performance across various tasks. For image processing, Gemini-1.5 Pro uses a standardized approach where each image is equivalent to 258 tokens, regardless of size. Large images are scaled down to a maximum of 3072x3072 pixels, while small images are scaled up to 768x768 pixels, both preserving aspect ratio.

B. GPT-4o

GPT-4o, developed by OpenAI, represents an advanced iteration of the GPT series.
It utilizes a transformer-based architecture and incorporates visual processing capabilities. GPT-4o offers adaptive image processing with low and high resolution modes, allowing for a balance between processing speed and detail level. In low resolution mode, it processes a 512px x 512px version of the image, representing it with 85 tokens. The high resolution mode initially processes a low-res image, then creates detailed 512px x 512px crops, each represented by 170 tokens.

C. Claude-3.5 Sonnet

Developed by Anthropic, Claude-3.5 Sonnet builds upon previous Claude models, incorporating enhanced visual understanding capabilities. The model utilizes a transformer-based architecture optimized for multimodal inputs. Claude-3.5 Sonnet balances multi-image processing by resizing images that exceed 1568 pixels on the long edge or approximately 1,600 tokens. It calculates token usage based on image dimensions (tokens = (width px * height px) / 750) and emphasizes image clarity and text legibility for optimal performance.

TABLE V
COMPARISON OF VISUAL LANGUAGE MODELS

Characteristic                   Gemini-1.5 Pro             GPT-4o               Claude-3.5 Sonnet
Multimodal Capabilities          Text, audio, image, video  Text, audio, image   Text, image
API Version                      gemini-1.5-pro             gpt-4o-2024-05-13    claude-3-5-sonnet-20240620
Access Method                    aistudio.google.com        platform.openai.com  console.anthropic.com
Context Window                   2M tokens                  128K tokens          200K tokens
Maximum Out-Tokens               8,192 tokens               4,096 tokens         4,096 tokens
Knowledge Cutoff                 November 2023              October 2023         April 2024
Release Date                     May 2024                   May 2024             June 2024
Model Ranking (till 2024-08-31)
  LMSYS                          #4                         #1                   #2
  OpenCompass                    #5                         #1                   #2
  Benchmarks                     #4                         #2                   #1

PROMPTS AND TOKEN USAGE

D. Detailed Prompts

The WCST setup consists of four stimulus cards, each featuring unique combinations of color (red, green, yellow, blue), shape (triangle, star, cross, circle), and number (one, two, three, four) of symbols. A series of 64 response cards is used, each sharing properties with the stimulus cards but in different combinations. The sorting rules are based on three possible categories: color, shape, or number.

In our implementation, each trial is presented to the VLLMs as a single image containing two rows. The top row displays the four stimulus cards, while the bottom row shows one response card. This format is consistent across all 64 trials, providing a standardized visual input for the models to process. For the text-based conditions (TI), detailed descriptions of these images are provided instead.

The test procedure begins with the VLLM being instructed to match each response card to one of the stimulus cards based on a rule that it must deduce from feedback. After each match attempt, the VLLM receives feedback (correct or incorrect) without explicit mention of the current sorting rule. The sorting rule changes after 10 consecutive correct matches, without notification to the VLLM. The test concludes after all 64 cards have been presented.

We implemented four distinct experimental conditions to assess the VLLMs' performance: STA-VI (Straight to Answer with Original Image input), STA-TI (Straight to Answer with Original Text description input), CoT-VI (Chain of Thought reasoning with Original Image input), and CoT-TI (Chain of Thought reasoning with Original Text description input). Supplementary Figure s-2 provides a visual representation of the WCST procedure and sample stimuli used in our study. The prompting strategies for each condition are illustrated in supplementary Figure s-3. For the CoT conditions, VLLMs were explicitly instructed to verbalize their reasoning process, including their observations, hypotheses about the current rule, and justification for their sorting decisions.
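The per-image token accounting described in the Model Details section can be sketched as follows. This is a simplified illustration under the figures stated there (flat 258 tokens per image for Gemini-1.5 Pro; 85 tokens plus 170 per 512x512 crop for GPT-4o in high-resolution mode; roughly width x height / 750 for Claude-3.5 Sonnet after long-edge resizing); it deliberately omits the providers' documented pre-scaling steps, so treat the numbers as estimates, not billing-accurate values.

```python
import math

def gemini_image_tokens(width, height):
    # Gemini-1.5 Pro: flat cost per image regardless of size (see above).
    return 258

def gpt4o_image_tokens(width, height, detail="high"):
    # GPT-4o: 85 tokens in low-res mode; in high-res mode, 85 base tokens
    # plus 170 per 512x512 crop (pre-scaling of large images omitted here).
    if detail == "low":
        return 85
    crops = math.ceil(width / 512) * math.ceil(height / 512)
    return 85 + 170 * crops

def claude_image_tokens(width, height):
    # Claude-3.5 Sonnet: resize if the long edge exceeds 1568 px, then
    # tokens ~= width * height / 750.
    long_edge = max(width, height)
    if long_edge > 1568:
        scale = 1568 / long_edge
        width, height = round(width * scale), round(height * scale)
    return math.ceil(width * height / 750)
```

For a 512x512 trial image, for example, the sketch estimates 258 tokens for Gemini-1.5 Pro, 255 for GPT-4o in high-resolution mode, and about 350 for Claude-3.5 Sonnet.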
Supplementary Figure s-5 presents the web-based interface developed for human participants. This interface was designed to closely mimic the experience of VLLMs while accommodating human interaction patterns. It features a clear presentation of stimulus and response cards, along with intuitive controls for participants to indicate their sorting choices. The interface also provides immediate feedback on sorting decisions, mirroring the feedback mechanism used with VLLMs.

To explore the VLLMs' capacity to simulate cognitive impairments, we introduced role-playing scenarios as described in supplementary Figure s-6. This figure outlines the specific instructions given to models for simulating various prefrontal cortex dysfunctions, including impaired goal maintenance, inhibitory control deficits, and adaptive updating impairments. These scenarios were carefully designed to mimic specific cognitive deficits commonly observed in neurological conditions, allowing us to assess the models' ability to flexibly adapt their behavior to simulate human-like cognitive impairments.

Supplementary Figure s-7 provides a detailed example of a typical VLLM interaction during the WCST. This example illustrates how models process the presented cards, articulate their reasoning (in CoT-TI conditions), and make decisions. Supplementary Figure s-8 showcases the visual processing capabilities of VLLMs.

E. Visual Accuracy Calculation

The visual accuracy of VLLMs was assessed using a comprehensive scoring system that evaluated their ability to correctly identify key features of the
https://arxiv.org/abs/2505.22112v1
WCST cards across 64 trials. The system encompassed five distinct measures: Card Count Accuracy, Color Accuracy, Shape Accuracy, Number Accuracy, and Overall Accuracy. For each trial, models were evaluated on their ability to correctly identify the presence of five cards and accurately describe the color, shape, and number of symbols on each card. Detailed descriptions of the visual instructions are provided in supplementary Figure s-3. Count Accuracy was calculated as the proportion of trials where the model correctly identified the presence of five cards:

$$\mathrm{ACC}_{\mathrm{count}} = \frac{\sum_{i=1}^{64} I(c_i = 5)}{64} \times 100\%, \quad (7)$$

where $I(c_i = 5)$ is an indicator function that equals 1 if the model correctly counted 5 cards in trial $i$, and 0 otherwise. Color Accuracy, Shape Accuracy, and Number Accuracy were calculated similarly, assessing the model's performance across all cards in all trials:

$$\mathrm{ACC}_{\mathrm{feature}} = \frac{\sum_{i=1}^{64} \sum_{j=1}^{5} I(f_{ij} = f^{*}_{ij})}{64 \times 5} \times 100\%, \quad (8)$$

where $\mathrm{feature} \in \{\mathrm{color}, \mathrm{shape}, \mathrm{number}\}$, $f_{ij}$ is the model's identification of the feature for card $j$ in trial $i$, and $f^{*}_{ij}$ is the correct feature. The Overall Accuracy ($\mathrm{ACC}_{\mathrm{overall}}$) was computed as a composite score, incorporating all correct identifications while applying a penalty for overcounting cards. First, we define a penalty function $P$ for overcounting:

$$P = \frac{\sum_{i=1}^{64} 0.5 \times \max(0,\, c_i - 5)}{64} \times 100\%, \quad (9)$$

where $c_i$ is the number of cards counted by the model in trial $i$. This penalty deducts 0.5 points for each card counted beyond the correct number of 5 in any given trial. The Overall Accuracy is then calculated as:

$$\mathrm{ACC}_{\mathrm{overall}} = \mathrm{ACC}_{\mathrm{count}} + \sum \mathrm{ACC}_{\mathrm{feature}} - P. \quad (10)$$

This assessment of the VLLMs' visual processing capabilities enables detailed comparisons across models and features.
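Equations (7)–(10) can be transcribed directly into a scoring function. The sketch below is a minimal implementation of the formulas as stated; the data-structure names (`counts`, `features`, `truth`) are illustrative, not taken from the paper's code.

```python
def visual_accuracy(counts, features, truth, n_trials=64, n_cards=5):
    """Score visual perception per Eqs. (7)-(10).

    counts[i]      -- number of cards the model reported in trial i
    features[i][j] -- dict of reported color/shape/number for card j of trial i
    truth[i][j]    -- the corresponding ground-truth dict
    """
    # Eq. (7): fraction of trials where exactly n_cards cards were counted
    acc_count = sum(c == n_cards for c in counts) / n_trials * 100

    # Eq. (8): per-feature accuracy over all cards in all trials
    acc_feature = {}
    for feat in ("color", "shape", "number"):
        hits = sum(
            features[i][j][feat] == truth[i][j][feat]
            for i in range(n_trials) for j in range(n_cards)
        )
        acc_feature[feat] = hits / (n_trials * n_cards) * 100

    # Eq. (9): 0.5-point penalty per over-counted card
    penalty = sum(0.5 * max(0, c - n_cards) for c in counts) / n_trials * 100

    # Eq. (10): composite score
    acc_overall = acc_count + sum(acc_feature.values()) - penalty
    return acc_count, acc_feature, acc_overall
```

With perfect identifications and no overcounting, the composite score reaches 400% (100% for the count plus 100% for each of the three features), which makes the scale of Eq. (10) explicit.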
By evaluating multiple aspects of visual perception, from basic counting to complex feature recognition, the system offered insights into the strengths and limitations of each model's visual cognition in the context of the WCST.

F. Token Usage

To provide insight into the computational resources required, we list token usage across models and conditions (Table VI). Across all models, VI conditions consistently required more tokens than TI conditions, reflecting the additional computational demand of processing visual information. CoT conditions consumed significantly more tokens than STA conditions, indicating the increased computational cost of explicit reasoning processes. Among the models, Claude-3.5 Sonnet showed the highest token usage across all conditions, suggesting a more computationally intensive approach to task processing. These token usage patterns provide valuable insights into the relative efficiency and resource requirements of different VLLMs and experimental conditions in cognitive flexibility tasks.

TABLE VI: TOKEN USAGE AND COST ANALYSIS

Model | Condition | Task | Last Token Avg | Session Token Avg | Session Price Avg | Total Tokens | Total Price
Gemini-1.5 Pro | STA-VI | WCST | 18,221 | 604,260 | $2.12 | 6,042,605 | $21.17
Gemini-1.5 Pro | STA-TI | WCST | 6,631 | 227,378 | $0.80 | 2,273,782 | $7.95
Gemini-1.5 Pro | CoT-VI | WCST | 19,898 | 658,489 | $2.32 | 6,584,887 | $23.18
Gemini-1.5 Pro | CoT-TI | WCST | 9,885 | 338,747 | $1.21 | 3,387,469 | $12.09
Gemini-1.5 Pro | CoT-TI | WCST w/o restriction | 12,795 | 421,199 | $1.52 | 4,211,991 | $15.19
Gemini-1.5 Pro | CoT-TI | WCST Goal Maint | 8,970 | 311,489 | $1.11 | 3,114,890 | $11.08
Gemini-1.5 Pro | CoT-TI | WCST Inhib Ctrl | 10,447 | 355,556 | $1.27 | 3,555,564 | $12.74
Gemini-1.5 Pro | CoT-TI | WCST Adapt Upd | 8,423 | 295,753 | $1.05 | 2,957,531 | $10.49
GPT-4o | STA-VI | WCST | 7,373 | 251,712 | $1.26 | 2,517,120 | $12.62
GPT-4o | STA-TI | WCST | 6,831 | 233,900 | $1.17 | 2,338,995 | $11.66
GPT-4o | CoT-VI | WCST | 20,210 | 670,972 | $3.49 | 6,709,718 | $34.87
GPT-4o | CoT-TI | WCST | 20,216 | 672,265 | $3.50 | 6,722,651 | $34.96
GPT-4o | CoT-TI | WCST w/o restriction | 23,093 | 749,301 | $3.91 | 7,493,007 | $39.12
GPT-4o | CoT-TI | WCST Goal Maint | 18,414 | 619,910 | $3.22 | 6,199,099 | $32.19
GPT-4o | CoT-TI | WCST Inhib Ctrl | 18,642 | 624,323 | $3.24 | 6,243,230 | $32.43
GPT-4o | CoT-TI | WCST Adapt Upd | 18,824 | 634,350 | $3.29 | 6,343,505 | $32.93
Claude-3.5 Sonnet | STA-VI | WCST | 27,404 | 903,104 | $2.72 | 9,031,040 | $27.20
Claude-3.5 Sonnet | STA-TI | WCST | 7,073 | 242,113 | $0.73 | 2,421,131 | $7.34
Claude-3.5 Sonnet | CoT-VI | WCST | 43,704 | 1,426,502 | $4.48 | 14,265,023 | $44.78
Claude-3.5 Sonnet | CoT-TI | WCST | 19,257 | 641,037 | $2.08 | 6,410,367 | $20.77
Claude-3.5 Sonnet | CoT-TI | WCST w/o restriction | 20,718 | 675,461 | $2.20 | 6,754,606 | $21.96
Claude-3.5 Sonnet | CoT-TI | WCST Goal Maint | 23,806 | 778,877 | $2.54 | 7,788,771 | $25.42
Claude-3.5 Sonnet | CoT-TI | WCST Inhib Ctrl | 24,550 | 802,087 | $2.62 | 8,020,867 | $26.18
Claude-3.5 Sonnet | CoT-TI | WCST Adapt Upd | 24,774 | 799,338 | $2.62 | 7,993,378 | $26.16

CODE AVAILABILITY

The complete code and experimental data are available at: https://drive.google.com/file/d/1uUyPn6fP4JDI50zRcdMeGJKwRaI-JTUs/view?usp=sharing.

ACKNOWLEDGMENTS

This work was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) (XDB1010302), the CAS Project for Young Scientists in Basic Research (Grant No. YSBR-041), and the International Partnership Program of the Chinese Academy of Sciences (CAS) (173211KYSB20200021).
Multimodal Forecasting of Sparse Intraoperative Hypotension Events Powered by Language Model

Jintao Zhang1, Zirui Liu1, Mingyue Cheng1, Shilong Zhang1, Tingyue Pan1, Qi Liu1∗, Yanhu Xie2
1University of Science and Technology of China
2The First Affiliated Hospital of University of Science and Technology of China
{zjttt, liuzirui}@mail.ustc.edu.cn, mycheng@ustc.edu.cn, {zhangshilong, pty12345}@mail.ustc.edu.cn, qiliuql@ustc.edu.cn, xyh200701@sina.cn

Abstract

Intraoperative hypotension (IOH) frequently occurs under general anesthesia and is strongly linked to adverse outcomes such as myocardial injury and increased mortality. Despite its significance, IOH prediction is hindered by event sparsity and the challenge of integrating static and dynamic data across diverse patients. In this paper, we propose IOHFuseLM, a multimodal language model framework. To accurately identify and differentiate sparse hypotensive events, we leverage a two-stage training strategy. The first stage involves domain adaptive pretraining on IOH physiological time series augmented through diffusion methods, thereby enhancing the model's sensitivity to patterns associated with hypotension. Subsequently, task fine-tuning is performed on the original clinical dataset to further enhance the ability to distinguish normotensive from hypotensive states. To enable multimodal fusion for each patient, we align structured clinical descriptions with the corresponding physiological time series at the token level. Such alignment enables the model to capture individualized temporal patterns alongside their corresponding clinical semantics. In addition, we convert static patient attributes into structured text to enrich personalized information. Experimental evaluations on two intraoperative datasets demonstrate that IOHFuseLM outperforms established baselines in accurately identifying IOH events, highlighting its applicability in clinical decision support scenarios.
Our code is publicly available to promote reproducibility at https://github.com/zjt-gpu/IOHFuseLM.

∗Corresponding author. Preprint. Under review. arXiv:2505.22116v1 [cs.CL] 28 May 2025

1 Introduction

Intraoperative hypotension (IOH) is a common complication during surgery and has been consistently associated with adverse postoperative outcomes [1,2], including myocardial injury [3] and increased mortality [4]. Given its high prevalence and substantial clinical implications [5], the development of accurate IOH prediction models has become a critical objective in perioperative monitoring [6]. Conventional approaches to IOH prediction primarily rely on physiological features such as arterial blood pressure [7], and increasingly incorporate deep learning models to capture temporal dependencies in time series data [8]. These methods typically leverage convolutional neural networks to extract local series patterns [9], or employ recurrent and attention-based architectures to model sequential dynamics [10], achieving moderate performance gains. However, most existing methods either focus exclusively on physiological time series [11,12] or adopt simple feature-level fusion strategies by concatenating static patient attributes [13], without fully modeling the semantic and contextual complexity of individual patients. Recent advances in deep learning for time series forecasting have led to notable progress across diverse domains, with models such as LSTM [14], Transformer-based [15-17], and MLP-based
architectures [18,19] demonstrating strong capabilities in modeling temporal dynamics. Beyond deterministic models, diffusion-based generative approaches are effective for time series analysis [20,21] and data augmentation via realistic sample synthesis [22]. More recently, language models [23,24] have expanded the field of time series forecasting by enabling cross-modal representation learning and effectively aligning textual and temporal features.

Figure 1: (a) IOH events are sparse and exhibit substantial inter-patient variability in onset time, duration, and waveform morphology. (b) MAP series vary significantly across static attributes including age groups, genders, and surgery types.

Despite substantial progress, predicting IOH remains challenging [25]. As shown in Figure 1 (a), IOH events are sparse, brief, and highly variable in onset time, waveform morphology, and temporal dynamics. Figure 1 (b) further shows that MAP fluctuations vary significantly across patients due to factors such as age and type of surgery, making it difficult for models to generalize across diverse populations. Effective IOH modeling thus requires capturing critical temporal patterns while jointly integrating static patient attributes [26] with dynamic physiological signals. To address the challenge posed by sparse IOH events, we propose a multimodal language model framework, IOHFuseLM, which integrates static patient attributes with dynamic physiological series. The training process consists of two stages. First, domain adaptive pretraining is conducted on a dataset augmented with a diffusion strategy to capture a diverse range of fine-grained patterns. This is followed by task fine-tuning using a customized loss function that improves the model's sensitivity to IOH-related abnormalities.
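As a rough sketch of how such a fine-tuning loss might up-weight hypotensive timesteps, the function below uses the paper's 65 mmHg IOH threshold; the weighting scheme itself and the `ioh_weight` value are our assumptions, since only a "customized loss function" that improves sensitivity to IOH-related abnormalities is stated.

```python
import numpy as np

def ioh_weighted_mse(pred, target, threshold=65.0, ioh_weight=5.0):
    """MSE over a predicted MAP series that up-weights timesteps whose
    ground-truth MAP is hypotensive (below the 65 mmHg IOH threshold).

    `ioh_weight` is an illustrative hyperparameter, not a value from the paper.
    """
    weights = np.where(target < threshold, ioh_weight, 1.0)
    return float(np.mean(weights * (pred - target) ** 2))
```

Under such a loss, errors on the rare hypotensive segments dominate the gradient, which is one plausible way to counteract the event sparsity described above.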
Static attributes are transformed into clinically informed descriptions, enabling cross-modal alignment through token-level interaction for precise semantic fusion. Our main contributions are summarized as follows:

•We propose IOHFuseLM, a novel multimodal language model framework for IOH prediction. The model is trained using a two-stage paradigm: domain adaptive pretraining on a diffusion-augmented physiological dataset, followed by task fine-tuning on real intraoperative records.
•We develop a clinically informed multimodal fusion strategy that aligns static patient context with temporal physiological series by converting patient attributes into structured clinical text and aligning it at the token level with physiological series.
•Experiments on two real-world intraoperative datasets demonstrate that our method consistently outperforms competitive baselines, including on a curated dataset based on raw intraoperative blood pressure recordings.

2 Related Work

Intraoperative Hypotension Forecasting. Modeling intraoperative arterial pressure has emerged as a key strategy for early prediction of intraoperative hypotension (IOH), enabling timely clinical interventions and improved patient safety. Early efforts primarily focused on high-fidelity arterial pressure series, leading to the development of the Hypotension Prediction Index [7]. Subsequent machine learning approaches, including ensemble methods [27] and gradient boosting techniques [28], integrated both preoperative and intraoperative variables. However, these models often treated each data point in isolation, overlooking the intrinsic temporal dependencies. To address this limitation, deep learning architectures, including recurrent neural networks [11] and attention-based models [13], were introduced to better capture sequential patterns. More recently, interpretable models [29,30] have improved clinical utility, although they still depend on predefined
features and structured inputs. Meanwhile, the frequency-domain perspective [31] has also been explored. While existing IOH prediction methods have made considerable progress, most are grounded in either biomarker identification or deep learning models, and they still lack the capacity to bridge multimodal disparities and to align patient-specific clinical narratives with the evolving, personalized temporal dynamics of physiological series.

Time Series Forecasting. Time series forecasting plays a pivotal role in domains such as healthcare, weather, and energy. Classical statistical models, including ARIMA [32], often struggle to capture the complex dynamics present in physiological series that exhibit high dimensionality and nonlinearity. Deep learning models, such as long short-term memory networks [14] and gated recurrent units [33], have demonstrated strong capabilities in modeling temporal dependencies over extended time horizons by leveraging gated mechanisms. In recent years, Transformer-based architectures [34-37] have achieved notable progress in time series forecasting. For instance, PatchTST [38] introduces patch-level embeddings, providing a principled approach to tokenizing time series. MLP-based models [18,39-41] have also shown competitive performance with reduced computational complexity, while effectively preserving temporal structures through simplified model designs. In addition to architectural advancements, generative modeling has emerged as a promising paradigm for time series forecasting, with diffusion-based approaches gaining increasing attention. Recent models [42-44] effectively capture complex temporal dynamics by iteratively denoising noise-perturbed series through learned reverse processes. At the same time, large language models have exhibited increasing potential in time series modeling [23,24,45,46].
Through pretraining and instruction tuning, LLMs are capable of generalizing forecasting capabilities across a wide range of tasks and domains, thereby enabling more flexible and adaptive series understanding. These advancements establish a solid foundation for developing unified and generalizable time series forecasting frameworks that combine high representational capacity with strong adaptability to the intricate dynamics characteristic of IOH prediction scenarios.

3 Preliminaries

Figure 2: Top: Temporal segmentation for IOH prediction. The MAP curve is divided into historical window (orange), warning window (purple), and monitoring window (red). Bottom: Patients are split by procedure into training, validation, and test sets to ensure subject independence and prevent data leakage.

Definition of Intraoperative Hypotension. Intraoperative hypotension (IOH) is defined according to clinically established thresholds. An IOH event is identified when the mean arterial pressure (MAP) remains below 65 mmHg for at least one continuous minute [47,48]. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) denote the maximum and minimum pressures within a cardiac cycle, respectively. The MAP [49], a critical indicator of cardiac output and systemic vascular resistance [50], is calculated as:

$$\mathrm{MAP} = \frac{\mathrm{SBP} + 2 \times \mathrm{DBP}}{3}. \quad (1)$$

Series Instance Construction. Given a historical window of
length $L$, the model predicts a future MAP series of length $T$, as illustrated in Fig. 2. To prevent label leakage and enable realistic forecasting, instances with historical windows overlapping IOH episodes are excluded. To mitigate class imbalance and capture temporal dynamics, we adopt an adaptive slicing strategy: negative instances are sampled at regular intervals $\Delta_{\mathrm{Normal}}$ to reduce redundancy, while positive instances linked to IOH are sampled more frequently at intervals $\Delta_{\mathrm{IOH}}$ to ensure adequate coverage of critical transitions.

Surgery Aware Subject Splitting. To ensure realistic generalization, we employ a subject-independent split by assigning each patient exclusively to the training, validation, or test set, as shown in Fig. 2. Patients are grouped according to their surgery type, and each group is assigned to only one data partition. This stratification helps maintain a balanced distribution of surgery types across splits, thus mitigating any distributional shifts caused by surgery-specific hypotension risks. This strategy also prevents the memorization of subject-specific patterns and reflects real-world deployment scenarios. Moreover, it enables a clear separation of each patient's static attributes and temporal waveform data across splits, which facilitates model generalization to unseen individuals.

IOH Event Evaluation. The ground-truth label for each timestamp is assigned based on whether the subsequent one-minute MAP series remains continuously below 65 mmHg. A predicted IOH event is assigned if more than 60% of the forecasted MAP values within the same one-minute window fall below this threshold. Model performance is evaluated using pointwise metrics to assess the accuracy of MAP forecasting at each timestep, and instance-level metrics to capture the model's effectiveness in detecting IOH events.

Problem Definition.
We define the multimodal dataset as $\mathcal{X} = \{(a_i, g_i, s_i, \mathbf{x}_i) \mid i \in [N]\}$, where each record corresponds to a perioperative patient $p_i \in \mathcal{P} = \{p_1, p_2, \ldots, p_N\}$. Here, $a_i$, $g_i$, and $s_i$ denote the patient's age, gender, and type of surgery, respectively, and $\mathbf{x}_i$ represents the associated MAP time series. To support early detection and timely intervention, we formulate IOH prediction as a future MAP forecasting task. For each patient $p_i$, given a MAP historical series $\mathbf{x}_{i,1:l}$ and static attributes $(a_i, g_i, s_i)$, the objective is to forecast the predicted series $\mathbf{x}_{i,l+1:l+t}$. The predicted series is partitioned into two segments: a two-minute warning window that captures rapid hemodynamic fluctuations for early identification of instability [51], followed by a monitoring window for confirming the occurrence of IOH events.

4 Methodology

In this section, we describe the proposed multimodal framework IOHFuseLM for intraoperative hypotension (IOH) prediction. As shown in Fig. 3, the framework consists of four components: personalized clinical description generation, multi-scale trend-residual diffusion augmentation, domain adaptive pretraining, and task fine-tuning. The model is built on the GPT-2 architecture [52].

4.1 Personalized Clinical Description Generation

To incorporate static features effectively, we propose a template-guided personalized clinical description generation (PCDG) framework that integrates multiple sources of medical knowledge, including physician recommendations, institutional expertise, and relevant literature. By leveraging hospital-specific data and retrospective studies [53,54], the framework generates individualized narratives enriched with personalized insights and contextualized by static IOH-related features. For
each patient $p_i$, the personalized clinical description is defined as:

$$d_i = \phi(a_i, g_i, s_i), \quad (2)$$

where $\phi$ represents GPT-4o, which generates the clinical description $d_i$ based on the static attributes $(a_i, g_i, s_i)$, following a predefined medical template. To enhance the clinical relevance of the language model, the tokenizer is extended with hormone- and surgery-related terms. The resulting dataset can be formally represented as:

$$\mathcal{X}_1 = \{(d_i, \mathbf{x}_i) \mid i \in [N]\}. \quad (3)$$

4.2 Multi-Scale Trend-Residual Diffusion Augmentation

To alleviate the challenge of scarce IOH cases, which hampers accurate modeling of hemodynamic series, we propose the Multi-Scale Trend-Residual Diffusion Augmentation (MTRDA) framework. This approach enhances the representation and generation of sparse MAP series, particularly those containing IOH events. MTRDA improves the model's ability to learn both broad temporal patterns and fine-grained variations within hypotensive intervals. Adaptive smoothing is initially performed on the MAP series defined over the historical window to extract underlying trends. Specifically, we employ a set of centered sliding-average filters with predefined odd-length window kernels $\mathcal{S} = \{w_1, w_2, \ldots, w_{|\mathcal{S}|}\}$. For each scale $w_s \in \mathcal{S}$, the smoothed series $\mathbf{x}^{\langle s \rangle}_i$ is computed as:

$$x^{\langle s \rangle}_{i,t} = \frac{1}{w_s} \sum_{\tau = -\lfloor w_s/2 \rfloor}^{\lfloor w_s/2 \rfloor} x_{i,t+\tau}, \quad (4)$$

Figure 3: Illustration of our framework.
MTRDA decomposes MAP series into multi-scale trend and residual components, and enhances IOH historical series via diffusion-based augmentation. IOH domain adaptive pretraining aligns the augmented IOH series and clinical descriptions through dual-masked cross-attention under a self-supervised objective. Task fine-tuning incorporates labeled normotensive and IOH series with an IOH-specific MSE loss to refine event detection.

where $\lfloor \cdot \rfloor$ denotes the floor operator. Boundary values are handled via symmetric padding. The final multi-scale trend estimate is obtained by averaging across all scales:

$$\mathbf{x}_{i,\mathrm{trend}} = \frac{1}{|\mathcal{S}|} \sum_{s=1}^{|\mathcal{S}|} \mathbf{x}^{\langle s \rangle}_i, \quad (5)$$

$$\mathbf{x}_{i,\mathrm{residual}} = \mathbf{x}_{i,1:l} - \mathbf{x}_{i,\mathrm{trend}}, \quad (6)$$

where $\mathbf{x}_{i,\mathrm{trend}}$ and $\mathbf{x}_{i,\mathrm{residual}}$ represent the trend and residual components of the series, respectively. Short windows capture rapid fluctuations indicative of oscillatory patterns, while long windows reveal sustained trends linked to patient status. This multi-scale smoothing strategy preserves structural patterns $\mathbf{x}_{i,\mathrm{trend}}$ across temporal levels and facilitates early IOH detection. The residual component $\mathbf{x}_{i,\mathrm{residual}}$ retains detailed variations reflecting subtle physiological dynamics. To enhance the residual component $\mathbf{x}_{i,\mathrm{residual}}$ by preserving the overall MAP trend while enriching fine-grained fluctuations, MTRDA incorporates a diffusion-based generative mechanism that learns to reconstruct and refine the residual series through iterative denoising:

$$\mathbf{x}^{(k)}_{i,\mathrm{residual}} = \sqrt{\bar{\alpha}_k}\, \mathbf{x}^{(0)}_{i,\mathrm{residual}} + \sqrt{1 - \bar{\alpha}_k}\, \boldsymbol{\epsilon}, \quad (7)$$

where $k$ denotes the diffusion step, $\bar{\alpha}_k$ denotes the cumulative product of the noise schedule coefficients, and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is standard Gaussian noise.

$$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{\mathbf{x}_{i,\mathrm{residual}},\, k}\left[ \left\| \mathbf{x}^{(0)}_{i,\mathrm{residual}} - f_\theta\!\left(\mathbf{x}^{(k)}_{i,\mathrm{residual}},\, k\right) \right\|^2 \right]. \quad (8)$$

The diffusion augmentation network $f_\theta$ comprises three modules. During training, the embedding module encodes the residual series into a high-dimensional latent space using multilayer perceptrons and learnable positional encodings, with the diffusion step $k$ integrated via sinusoidal encodings [55,56] and Adaptive Layer Normalization (AdaLN) [57].
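The decomposition in Eqs. (4)-(6) and the forward noising in Eq. (7) can be sketched compactly. This is an illustrative sketch: the kernel set `(3, 5, 9)` is an assumed example of $\mathcal{S}$ (the paper does not fix it here), and the noise schedule is supplied by the caller.

```python
import numpy as np

def trend_residual(x, kernels=(3, 5, 9)):
    """Eqs. (4)-(6): average of centered moving averages over odd-length
    windows (with symmetric boundary padding) gives the multi-scale trend;
    the residual is the series minus that trend."""
    trends = []
    for w in kernels:                              # each w is an odd window length
        pad = w // 2
        xp = np.pad(x, pad, mode="symmetric")      # symmetric boundary handling
        trends.append(np.convolve(xp, np.ones(w) / w, mode="valid"))
    trend = np.mean(trends, axis=0)                # Eq. (5): average across scales
    return trend, x - trend                        # Eq. (6): residual component

def diffuse(residual, k, alpha_bar, rng):
    """Eq. (7): forward noising of the residual at diffusion step k,
    given the cumulative noise-schedule products alpha_bar."""
    eps = rng.standard_normal(residual.shape)
    return np.sqrt(alpha_bar[k]) * residual + np.sqrt(1.0 - alpha_bar[k]) * eps
```

The denoiser $f_\theta$ is then trained to recover the clean residual from its noised version (Eq. (8)); adding the fixed trend back to a generated residual yields an augmented MAP series.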
The encoded features are passed
https://arxiv.org/abs/2505.22116v1
to a lightweight denoising decoder composed of stacked linear layers and normalization blocks, which iteratively refines the residual series while reducing computational overhead. A projection layer then maps the refined representation back to the residual space and combines it with the trend component x_{i,trend} to generate the output and compute the training loss.

For each original series, initial noise is sampled from a Gaussian distribution and passed through the trained network f_\theta together with the trend component x_{i,trend}, generating H augmented MAP series. These are denoted as X' = \{ x'^{(1)}, \ldots, x'^{(H)} \} and preserve both transient anomalies and subtle fluctuations. We then construct the extended dataset as:

X_2 = X_1 \cup \left\{ \big( d_i,\; x'^{(j)}_{i,1:l} \oplus x_{i,l+1:l+t} \big) \;\middle|\; i \in [N],\; j \in [H] \right\}.   (9)

This reconstruction process captures fine-grained residual patterns within the historical IOH window, thereby enhancing the fidelity and informativeness of sparse IOH series.

4.3 Domain Adaptive Pretraining

Pretraining has demonstrated remarkable effectiveness in time series analysis [58]. Our objective is to endow a language model with the capability to identify and comprehend temporal IOH dynamics, while enabling effective cross-modal fusion between physiological time series and patient-specific clinical text. To harness this potential in the context of IOH forecasting, we propose a domain adaptive pretraining strategy that aligns personalized clinical context with IOH physiological patterns.

Specifically, each input pair (\hat{x}_i, d_i) is sampled from X_2. To enable modality alignment, the MAP series \hat{x}_i is segmented into fixed-length patches of size p, which are linearly projected into MAP patch tokens. A random masking ratio R is applied to the resulting tokens to enhance representation learning. The corresponding clinical description d_i is tokenized using the expanded language model tokenizer, yielding the text tokens T_i with a maximum length of \eta.
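The patch segmentation and random masking described above can be sketched as below. The patch size p = 10 is a placeholder assumption; the masking ratio R = 0.2 follows the value reported in Appendix B. The learned linear projection into patch tokens is omitted.

```python
import numpy as np

def patch_and_mask(x, p=10, R=0.2, rng=None):
    """Segment a MAP series into fixed-length patches of size p and
    randomly mask a ratio R of them for self-supervised pretraining.
    Returns the patches, a boolean mask (True = masked), and a copy of
    the patches with masked positions zeroed out."""
    rng = rng or np.random.default_rng()
    n_patches = len(x) // p                       # (l + t) / p series tokens
    patches = x[: n_patches * p].reshape(n_patches, p)
    n_masked = int(round(R * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_idx] = True
    visible = patches.copy()
    visible[mask] = 0.0                           # hide masked patches
    return patches, mask, visible

x = np.linspace(60.0, 90.0, 240)                  # toy MAP series
patches, mask, visible = patch_and_mask(x, p=10, R=0.2,
                                        rng=np.random.default_rng(0))
```

The model is then trained to reconstruct the series values at the masked patch positions, mirroring the masked-MSE pretraining objective described in the next paragraphs.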
To enable selective fusion of patient-specific textual and physiological features, we construct a patient-specific attention mask M_i. Specifically, we define two binary vectors: the first is a vector 1_{(l+t)/p}, whose length matches the number of series tokens, with all elements set to active. The second is a binary vector m_i of length \eta, corresponding to the text tokens T_i, where active elements mark valid tokens and inactive elements mark padding positions. Additionally, we define an all-active vector 1_\eta of length \eta. These vectors are combined via an outer product, which acts as an elementwise logical conjunction of the broadcast vectors, to form the joint attention mask M_i:

M_i = 1_{(l+t)/p} \cdot (1_\eta - m_i)^\top,   (10)

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^\top}{\sqrt{d}} - \lambda \cdot M_i \right) V,   (11)

where \lambda is a large constant that suppresses attention to semantically misaligned regions. We adopt the token-level alignment mechanism [59], which aligns the text tokens T_i with the patch tokens of the series \hat{x}_i. The query matrix Q is derived from the tokenized and projected text tokens T_i, while the key and value matrices K and V are obtained from the corresponding MAP series tokens. The resulting representations are concatenated with the series tokens and processed by a pretrained language model, followed by a projection layer. The model is optimized to minimize the mean squared error (MSE) on the masked positions of the time-series tokens.

This pretraining strategy enables the model to learn semantically meaningful interactions between the IOH-related MAP series and the corresponding static patient features. It also facilitates modality alignment between IOH series and patient-specific
information, thereby providing a stronger language model foundation for subsequent hypotension prediction.

4.4 Task Fine-tuning

To adapt the pretrained model to the downstream IOH prediction task, the task fine-tuning stage further refines the representations learned during domain adaptive pretraining, thereby enhancing the model's ability to distinguish IOH physiological patterns from normotensive fluctuations. Each input pair (d_i, x_i) \in X_1 is first processed to derive token-level representations that capture instance-specific semantic and physiological characteristics. These representations are integrated with the corresponding series embeddings and then passed to the pretrained model. During task fine-tuning, the series embedding and output projection layers are reinitialized for task adaptation, while all other parameters are initialized from the domain adaptive pretraining stage and jointly optimized to improve temporal sensitivity to IOH dynamics.

To enhance sensitivity to IOH abnormalities, an additional loss term is introduced for the timestamps of IOH series, defined as the MSE on those timestamps and weighted by a hyperparameter \rho. The total IOH loss function is given by:

\mathrm{Loss} = \mathrm{MSE}_{\mathrm{normal}} + \rho \cdot \mathrm{MSE}_{\mathrm{IOH}},   (12)

where \mathrm{MSE}_{\mathrm{normal}} and \mathrm{MSE}_{\mathrm{IOH}} represent the mean squared errors computed over normotensive and hypotensive series, respectively. This task fine-tuning strategy encourages the model to attend to subtle temporal variations indicative of IOH, thereby enhancing predictive performance and facilitating timely clinical intervention.

5 Experiments

5.1 Experimental Setup

Datasets. We use two clinical intraoperative hypotension (IOH) datasets collected in real-world surgical settings. Clinical IOH Dataset. The dataset includes intraoperative records from 6,822 patients, featuring MAP time series resampled at 6 and 10 seconds from arterial blood pressure waveforms, together with patient attributes such as age, gender, and surgery type.
A total of 1,452 high-quality recordings were retained after preprocessing. VitalDB Dataset. This dataset originally consisted of 6,388 ABP recordings. After filtering out low-quality samples, 1,522 recordings were retained for downstream experiments. Both datasets are split into training, validation, and test sets in a 3:1:1 ratio with temporal consistency. We use a 15-minute historical window and predicted horizons of 5, 10, and 15 minutes, guided by clinical evidence on IOH predictability [60]. Detailed preprocessing steps and dataset statistics are provided in Appendix A.

Baselines. We compare our method against six representative time series forecasting models. These include the MLP-based DLinear [18], the Transformer-based PatchTST [38], and the frequency-domain enhanced Fredformer [61]. We also include HMF [12], a model specifically designed for intraoperative hypotension prediction, along with two language model-based approaches: GPT4TS [23], built on GPT-2, and TimeLLM [24], based on LLaMA2-7B [62].

Implementation. To assess IOH prediction performance, we report Mean Squared Error (MSE) and Mean Absolute Error (MAE) on hypotensive timestamps. Discriminative ability is measured by the Area Under the ROC Curve (AUC), and Recall reflects early-warning effectiveness. All models are trained with the Adam optimizer [63], using a batch size of 8 and an initial learning rate of 0.0001 with a decay factor of 0.75. Experiments are conducted on an NVIDIA RTX 4090 GPU and a server equipped with eight Tesla A100 GPUs. To ensure result reliability, we report averages over three independent runs.
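A minimal sketch of the hypotension-restricted error metrics and the \rho-weighted training objective of Eq. (12), with \rho = 10 as reported in Appendix B. The MAP < 65 mmHg rule used here to flag hypotensive timestamps is an assumed convention for illustration; the paper's exact clinical threshold definition is given in its appendix.

```python
import numpy as np

def ioh_metrics_and_loss(pred, target, rho=10.0, thresh=65.0):
    """MSE/MAE restricted to hypotensive timestamps, plus the total
    training loss of Eq. (12). The 65 mmHg cutoff is an assumption."""
    ioh = target < thresh                        # hypotensive timestamps
    se, ae = (pred - target) ** 2, np.abs(pred - target)
    mse_ioh = se[ioh].mean()
    mae_ioh = ae[ioh].mean()
    mse_normal = se[~ioh].mean()
    loss = mse_normal + rho * mse_ioh            # Eq. (12)
    return mse_ioh, mae_ioh, loss

target = np.array([80.0, 81.0, 60.0, 58.0])      # last two points are IOH
pred   = np.array([80.0, 80.0, 62.0, 58.0])
mse_ioh, mae_ioh, loss = ioh_metrics_and_loss(pred, target)
```

On this toy example the hypotensive timestamps are the two points below 65 mmHg, so the IOH terms are computed over those positions only and then reweighted by \rho in the total loss.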
Full training configurations are provided in Appendix B.

5.2 Results and Discussion

Main Results. We conduct comprehensive experiments on the Clinical IOH dataset and the VitalDB dataset. Results are summarized in Table 1. Experimental results highlight key differences among baseline models. DLinear's moderate performance reflects limitations in capturing complex temporal dynamics using simple linear decomposition. PatchTST excels in recall and AUC by segmenting series into semantically meaningful patches. Fredformer improves performance by reducing frequency bias but struggles with the high variability of IOH events. HMF extracts temporal features using sliding windows but lacks semantic modeling of baseline blood pressure associated with surgery type, leading to poor generalization across procedures. GPT4TS performs well on high-frequency data, capturing short physiological trends effectively. TimeLLM's fixed parameters limit its adaptability to distribution shifts in MAP series. IOHFuseLM outperforms the others in sparse and high-variability settings by aligning static patient attributes with MAP series and augmenting sparse data with realistic signals, effectively addressing IOH sparsity and variability.

Performance varies distinctly across datasets and sampling rates. VitalDB, with higher IOH event density, generally yields better metrics, particularly for IOHFuseLM, which excels at fine granularity, demonstrating strong temporal pattern extraction capabilities. Clinical IOH, characterized by sparser events, presents greater modeling challenges, yet IOHFuseLM consistently maintains strong performance across coarser sampling intervals. These results emphasize IOHFuseLM's adaptability and the effectiveness of its multimodal context integration and data augmentation strategies.

Ablation Results. The following ablated variants are evaluated:

• IOHFuseLM_1: Excludes the clinical description d_i, modeling only the MAP time series.
• IOHFuseLM_2: Utilizes the original GPT-2 tokenizer without vocabulary expansion.
• IOHFuseLM_3: Conducts domain adaptive pretraining exclusively on the original dataset X_1, omitting any diffusion-based augmentation.
• IOHFuseLM_4: Removes the domain adaptive pretraining stage and directly applies task fine-tuning on the downstream IOH prediction task.

Table 1: Performance comparison of different models on the Clinical IOH and VitalDB datasets under varying sampling rates. (IOHFuseLM achieves the best result for each metric in every setting.)

| Dataset | Sampling (s) | Historical Window | Model | MSE_IOH | MAE_IOH | Recall (%) | AUC |
|---|---|---|---|---|---|---|---|
| Clinical IOH | 6 | 150 | DLinear | 178.6592 | 10.9190 | 36.61 | 0.6406 |
| | | | PatchTST | 122.2292 | 8.5687 | 63.09 | 0.6948 |
| | | | Fredformer | 99.0389 | 7.7170 | 59.98 | 0.6985 |
| | | | HMF | 114.6592 | 8.3079 | 50.76 | 0.6737 |
| | | | GPT4TS | 119.0686 | 8.4467 | 59.37 | 0.6991 |
| | | | TimeLLM | 133.2503 | 9.2422 | 46.58 | 0.6687 |
| | | | IOHFuseLM | 88.9192 | 7.4921 | 74.00 | 0.7130 |
| Clinical IOH | 10 | 90 | DLinear | 127.0857 | 8.6489 | 51.49 | 0.6933 |
| | | | PatchTST | 125.5609 | 8.8750 | 56.84 | 0.7044 |
| | | | Fredformer | 103.5407 | 8.0945 | 53.17 | 0.6935 |
| | | | HMF | 121.1721 | 8.7806 | 51.40 | 0.6853 |
| | | | GPT4TS | 91.4255 | 7.4369 | 62.68 | 0.7309 |
| | | | TimeLLM | 118.6838 | 8.7864 | 50.45 | 0.6913 |
| | | | IOHFuseLM | 87.6147 | 7.3933 | 74.46 | 0.7425 |
| VitalDB | 3 | 300 | DLinear | 92.2800 | 7.3100 | 33.48 | 0.6300 |
| | | | PatchTST | 99.3965 | 7.6942 | 52.62 | 0.6443 |
| | | | Fredformer | 69.5776 | 6.0244 | 49.94 | 0.6640 |
| | | | HMF | 76.5757 | 6.5151 | 49.73 | 0.6501 |
| | | | GPT4TS | 61.7742 | 5.5824 | 57.39 | 0.6885 |
| | | | TimeLLM | 82.3817 | 7.0517 | 30.36 | 0.6068 |
| | | | IOHFuseLM | 58.3511 | 5.1251 | 70.10 | 0.7086 |

We conduct a detailed ablation study on the Clinical IOH dataset sampled at 10-second intervals, using a historical window of 15 minutes and predicted horizons of 5, 10, and 15 minutes. Table 2 summarizes the impact of removing each key component from IOHFuseLM. The results confirm that every component is essential for addressing the challenges discussed above. Removing static attributes degrades performance by eliminating personalized priors that help
identify MAP trends and variability, reducing the ability to detect abnormal patterns and generalize across populations and surgery types. Excluding the expanded tokenizer weakens the model's ability to associate clinical terminology with physiological patterns, diminishing cross-modal representation learning. Using only the original dataset X_1 for domain adaptive pretraining, instead of the MTRDA-augmented dataset, reduces sensitivity to rare and short-duration IOH episodes, demonstrating the benefit of synthetic variability in mitigating data sparsity. Finally, omitting the domain adaptive pretraining stage leads to consistent performance degradation across all metrics, confirming that prior exposure to IOH patterns enhances generalization under limited supervision.

Table 2: Ablation study of model components on the Clinical IOH dataset.

| Model Variant | MSE_IOH | MAE_IOH | Recall | AUC |
|---|---|---|---|---|
| IOHFuseLM | 87.6147 | 7.3933 | 74.46% | 0.7425 |
| IOHFuseLM_1 | 87.6824 | 7.4604 | 68.62% | 0.7215 |
| IOHFuseLM_2 | 95.4979 | 7.8967 | 69.42% | 0.7287 |
| IOHFuseLM_3 | 107.7359 | 8.3523 | 67.78% | 0.7213 |
| IOHFuseLM_4 | 98.1118 | 7.9061 | 67.85% | 0.7192 |

Transfer Results. To evaluate the model's generalization ability, we conducted transfer learning experiments on a newly curated cohort of patients with surgical durations between 600 and 1,000 seconds, sampled every 6 seconds. The historical and predicted windows were set to 3 and 5 minutes, respectively. The model was pretrained on the Clinical IOH dataset and tested under these settings, focusing on adult patients aged 18-65 years. As shown in Table 3, transfer learning substantially improved performance across all metrics, particularly in recall and AUC. These gains indicate enhanced sensitivity to IOH events and better discrimination under demographic and procedural variability. The results highlight the effectiveness of domain adaptive pretraining and personalized context integration in enabling effective generalization across diverse clinical scenarios.
Table 3: Performance comparison with and without transfer learning on the Clinical IOH dataset.

| Transfer Setting | Historical Window | Predicted Window | MSE_IOH | MAE_IOH | Recall | AUC |
|---|---|---|---|---|---|---|
| Without Transfer Learning | 30 | 50 | 216.4261 | 14.2554 | 0.00% | 0.5000 |
| Transfer Learning | 30 | 50 | 97.9322 | 9.0675 | 17.65% | 0.5805 |

Visual Evidence for the Impact of Domain Adaptive Pretraining. To qualitatively evaluate the effect of domain adaptive pretraining, we visualize MAP forecasts under two representative IOH patterns: one with a gradual decline and the other with a rapid decline. As shown in Figure 4, we compare models trained with and without pretraining using the same total series length l + t, where l is set to 50, 100, and 150, respectively. In both patterns, the models with pretraining produce predictions that more closely follow the ground truth, especially in their ability to capture downward trends in blood pressure. This advantage is particularly clear in the rapid-decline scenario, where models without pretraining tend to respond more slowly and deviate further from actual MAP values. When the historical length l is sufficiently long, for example 150, the pretrained model produces stable and accurate forecasts, benefiting from richer temporal context and more reliable trend estimation. This suggests that the proposed pretraining method enhances the model's sensitivity to temporal changes and improves its ability to recognize IOH-specific patterns.

Figure 4: Qualitative comparison of models with and without domain adaptive pretraining under two representative IOH patterns: (a) gradually declining MAP; (b) rapidly declining MAP.

Deployability Evaluation. To evaluate the
practical deployability of our framework, we compare its training and inference efficiency with HMF [12], a baseline for IOH prediction. As shown in Figure 5, IOHFuseLM consistently achieves higher runtime efficiency across configurations from the VitalDB and Clinical IOH datasets. The improvements in both training and inference highlight the model's computational advantage and its suitability for deployment in clinical environments where timely response is critical.

Figure 5: Comparison of training and inference speed between IOHFuseLM and HMF.

6 Conclusion

In this work, we introduced IOHFuseLM, a multimodal framework for sparse intraoperative hypotension (IOH) prediction that integrated static patient attributes with dynamic physiological time series data. Evaluations on two real-world intraoperative datasets showed that IOHFuseLM consistently outperformed competitive baselines, with particularly strong performance under coarse sampling and sparse event conditions. The proposed approach achieved higher recall and area under the curve (AUC) by effectively capturing patient-specific variability and extracting features with rich temporal and semantic content. Compared to previous methods that relied solely on physiological series or used simple feature concatenation, IOHFuseLM demonstrated better flexibility and generalization through data augmentation during pretraining and structured integration of patient attributes. These results suggest that the model can support personalized and timely clinical monitoring, which may help facilitate early intervention, improve hemodynamic stability during surgery, and reduce the risk of postoperative complications.

References

[1] Karim Kouz, Phillip Hoppe, Luisa Briesenick, and Bernd Saugel. Intraoperative hypotension: Pathophysiology, clinical relevance, and therapeutic approaches. Indian journal of anaesthesia, 64(2):90-96, 2020.
[2] Jianghui Cai, Mi Tang, Huaye Wu, Jing Yuan, Hua Liang, Xuan Wu, Shasha Xing, Xiao Yang, and Xiao-Dong Duan. Association of intraoperative hypotension and severe postoperative complications during non-cardiac surgery in adult patients: A systematic review and meta-analysis. Heliyon, 9(5), 2023.

[3] Judith AR Van Waes, Wilton A Van Klei, Duminda N Wijeysundera, Leo Van Wolfswinkel, Thomas F Lindsay, and W Scott Beattie. Association between intraoperative hypotension and myocardial injury after vascular surgery. Survey of anesthesiology, 60(5):212, 2016.

[4] M Wijnberge, J Schenk, E Bulle, AP Vlaar, K Maheshwari, MW Hollmann, JM Binnekade, BF Geerts, and DP Veelo. Association of intraoperative hypotension with postoperative morbidity and mortality: systematic review and meta-analysis. BJS open, 5(1):zraa018, 2021.

[5] EM Wesselink, TH Kappen, HM Torn, AJC Slooter, and WA Van Klei. Intraoperative hypotension and the risk of postoperative adverse outcomes: a systematic review. British journal of anaesthesia, 121(4):706-721, 2018.

[6] Wael Saasouh, Navid Manafi, Asifa Manzoor, and George McKelvey. Mitigating intraoperative hypotension: a review and update on recent advances. Advances in Anesthesia, 42(1):67-84, 2024.

[7] Feras Hatib, Zhongping Jian, Sai Buddi, Christine Lee, Jos Settels, Karen Sibert, Joseph Rinehart, and Maxime Cannesson. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology, 129(4):663-674, 2018.

[8] Solam Lee, Hyung-Chul Lee, Yu Seong Chu, Seung Woo Song, Gyo Jin Ahn, Hunju Lee, Sejung Yang, and Sang Baek Koh. Deep learning models for the prediction of intraoperative hypotension. British journal of anaesthesia, 126(4):808-817, 2021.

[9] Heejoon Jeong, Donghee Kim, Dong Won Kim, Seungho Baek, Hyung-Chul Lee, Yusung Kim, and Hyun Joo Ahn. Prediction of intraoperative
hypotension using deep learning models based on non-invasive monitoring devices. Journal of Clinical Monitoring and Computing, pages 1-9, 2024.

[10] Joon-myoung Kwon, Youngnam Lee, Yeha Lee, Seungwoo Lee, and Jinsik Park. An algorithm based on deep learning for predicting in-hospital cardiac arrest. Journal of the American Heart Association, 7(13):e008678, 2018.

[11] Young-Seob Jeong, Ah Reum Kang, Woohyun Jung, So Jeong Lee, Seunghyeon Lee, Misoon Lee, Yang Hoon Chung, Bon Sung Koo, and Sang Hyun Kim. Prediction of blood pressure after induction of anesthesia using deep learning: A feasibility study. Applied Sciences, 9(23):5135, 2019.

[12] Mingyue Cheng, Jintao Zhang, Zhiding Liu, Chunli Liu, and Yanhu Xie. HMF: A hybrid multi-factor framework for dynamic intraoperative hypotension prediction. arXiv preprint arXiv:2409.11064, 2024.

[13] Feng Lu, Wei Li, Zhiqiang Zhou, Cheng Song, Yifei Sun, Yuwei Zhang, Yufei Ren, Xiaofei Liao, Hai Jin, Ailin Luo, et al. A composite multi-attention framework for intraoperative hypotension early warning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14374-14381, 2023.

[14] Alex Graves. Long short-term memory. Supervised sequence labelling with recurrent neural networks, pages 37-45, 2012.

[15] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations, 2023.

[16] Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625, 2023.

[17] Yuxuan Wang, Haixu Wu, Jiaxiang Dong, Guo Qin, Haoran Zhang, Yong Liu, Yunzhong Qiu, Jianmin Wang, and Mingsheng Long. TimeXer: Empowering transformers for time series forecasting with exogenous variables. arXiv preprint arXiv:2402.19072, 2024.
[18] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 11121-11128, 2023.

[19] Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, and Zhendong Niu. Frequency-domain MLPs are more effective learners in time series forecasting. Advances in Neural Information Processing Systems, 36:76656-76679, 2023.

[20] Xinyu Yuan and Yan Qiao. Diffusion-TS: Interpretable diffusion for general time series generation. arXiv preprint arXiv:2403.01742, 2024.

[21] Jingwei Liu, Ling Yang, Hongyan Li, and Shenda Hong. Retrieval-augmented diffusion models for time series forecasting. Advances in Neural Information Processing Systems, 37:2766-2786, 2024.

[22] Brandon Trabucco, Kyle Doherty, Max Gurinas, and Ruslan Salakhutdinov. Effective data augmentation with diffusion models. arXiv preprint arXiv:2302.07944, 2023.

[23] Tian Zhou, Peisong Niu, Liang Sun, Rong Jin, et al. One fits all: Power general time series analysis by pretrained LM. Advances in Neural Information Processing Systems, 36:43322-43355, 2023.

[24] Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, et al. Time-LLM: Time series forecasting by reprogramming large language models. arXiv preprint arXiv:2310.01728, 2023.

[25] Phillip Hoppe, Karim Kouz, and Bernd Saugel. Perioperative hypotension: clinical impact, diagnosis, and therapeutic approaches. Journal of emergency and critical
care medicine, 4, 2020.

[26] Netsanet Temesgen, Efrem Fenta, Chernet Eshetie, and Moges Gelaw. Early intraoperative hypotension and its associated factors among surgical patients undergoing surgery under general anesthesia: An observational study. Annals of Medicine and Surgery, 71:102835, 2021.

[27] Ményssa Cherifa, Alice Blet, Antoine Chambaz, Etienne Gayat, Matthieu Resche-Rigon, and Romain Pirracchio. Prediction of an acute hypotensive episode during an ICU hospitalization with a super learner machine-learning algorithm. Anesthesia & Analgesia, 130(5):1157-1166, 2020.

[28] Samir Kendale, Prathamesh Kulkarni, and Rosenberg et al. Supervised machine-learning predictive analytics for prediction of postinduction hypotension. Anesthesiology, 129(4):675-688, 2018.

[29] Jodie Ritter, Xiaoyu Chen, Lihui Bai, and Jiapeng Huang. Predicting hypotension by learning from multivariate mixed responses. In Proceedings of the International MultiConference of Engineers and Computer Scientists, 2023.

[30] Eugene Hwang, Yong-Seok Park, Jin-Young Kim, Sung-Hyuk Park, Junetae Kim, and Sung-Hoon Kim. Intraoperative hypotension prediction based on features automatically generated within an interpretable deep learning model. IEEE Transactions on Neural Networks and Learning Systems, 2023.

[31] Jeong-Hyeon Moon, Garam Lee, Seung Mi Lee, Jiho Ryu, Dokyoon Kim, and Kyung-Ah Sohn. Frequency domain deep learning with non-invasive features for intraoperative hypotension prediction. IEEE Journal of Biomedical and Health Informatics, 2024.

[32] Adebiyi A. Ariyo, Adewumi O. Adewumi, and Charles K. Ayo. Stock price prediction using the ARIMA model. In 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, pages 106-112, 2014.

[33] Rahul Dey and Fathi M Salem. Gate-variants of gated recurrent unit (GRU) neural networks. In 2017 IEEE 60th MWSCAS, pages 1597-1600. IEEE, 2017.
[34] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI conference on artificial intelligence , volume 35, pages 11106–11115, 2021. [35] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition trans- formers with auto-correlation for long-term series forecasting. Advances in neural information processing systems , 34:22419–22430, 2021. [36] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International conference on machine learning , pages 27268–27286. PMLR, 2022. [37] Zelin Ni, Hang Yu, Shizhan Liu, Jianguo Li, and Weiyao Lin. Basisformer: Attention-based time series forecasting with learnable and interpretable basis. Advances in Neural Information Processing Systems , 36:71222–71241, 2023. [38] Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. arXiv preprint arXiv:2211.14730 , 2022. [39] Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. Tsmixer: Lightweight mlp-mixer model for multivariate time series forecasting. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pages 459–469, 2023. [40] Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, and Zhendong Niu. Frequency-domain mlps are more effective learners in time series forecasting. Advances in Neural Information Processing Systems , 36:76656–76679, 2023. [41] Shengsheng Lin, Weiwei Lin, Xinyi Hu,
Wentai Wu, Ruichao Mo, and Haocheng Zhong. CycleNet: Enhancing time series forecasting through modeling periodic patterns. Advances in Neural Information Processing Systems, 37:106315-106345, 2024.

[42] Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems, 34:24804-24816, 2021.

[43] Lifeng Shen and James Kwok. Non-autoregressive conditional diffusion models for time series prediction. In International Conference on Machine Learning, pages 31016-31029. PMLR, 2023.

[44] Jingwei Liu, Ling Yang, Hongyan Li, and Shenda Hong. Retrieval-augmented diffusion models for time series forecasting. Advances in Neural Information Processing Systems, 37:2766-2786, 2024.

[45] Zijie Pan, Yushan Jiang, Sahil Garg, Anderson Schneider, Yuriy Nevmyvaka, and Dongjin Song. S2IP-LLM: Semantic space informed prompt learning with LLM for time series forecasting. In Forty-first International Conference on Machine Learning, 2024.

[46] Mingtian Tan, Mike Merrill, Vinayak Gupta, Tim Althoff, and Tom Hartvigsen. Are language models actually useful for time series forecasting? Advances in Neural Information Processing Systems, 37:60162-60191, 2024.

[47] Daniel I Sessler and Ashish K Khanna. Perioperative myocardial injury and the contribution of hypotension. Intensive Care Medicine, 44(6):811-822, 2018.

[48] Daniel I Sessler, Joshua A Bloomstone, Solomon Aronson, Colin Berry, Tong J Gan, John A Kellum, James Plumb, Monty G Mythen, Michael PW Grocott, Mark R Edwards, et al. Perioperative quality initiative consensus statement on intraoperative blood pressure, risk and outcomes for elective surgery. British journal of anaesthesia, 122(5):563-574, 2019.

[49] Eduardo Meaney, Felix Alva, Rafael Moguel, Alejandra Meaney, Juan Alva, and Richard Webel. Formula and nomogram for the sphygmomanometric calculation of the mean arterial pressure.
Heart , 84(1):64–64, 2000. [50] S Magder. The meaning of blood pressure. Critical Care , 22(1):257, 2018. [51] Marije Wijnberge, Bart F Geerts, Liselotte Hol, Nikki Lemmers, Marijn P Mulder, Patrick Berge, Jimmy Schenk, Lotte E Terwindt, Markus W Hollmann, Alexander P Vlaar, et al. Effect of a machine learning–derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: the hype randomized clinical trial. Jama , 323(11):1052–1060, 2020. [52] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019. [53] Mathias Maleczek, Daniel Laxar, Angelika Geroldinger, Andreas Gleiss, Paul Lichtenegger, and Oliver Kimberger. Definition of clinically relevant intraoperative hypotension: A data-driven approach. Plos one , 19(11):e0312966, 2024. [54] Wael Saasouh, Navid Manafi, Asifa Manzoor, and George McKelvey. Mitigating intraoperative hypotension: a review and update on recent advances. Advances in Anesthesia , 42(1):67–84, 2024. [55] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems , 33:6840–6851, 2020. [56] Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 10696–10706, 2022. [57] Yunhui Guo, Chaofeng Wang, Stella X Yu, Frank McKenna, and Kincho H Law. Adaln: a vision transformer
for multidomain learning and predisaster building information extraction from images. Journal of Computing in Civil Engineering, 36(5):04022024, 2022.

[58] Qianli Ma, Zhen Liu, Zhenjing Zheng, Ziyang Huang, Siying Zhu, Zhongzhong Yu, and James T Kwok. A survey on time-series pre-trained models. IEEE Transactions on Knowledge and Data Engineering, 2024.

[59] Qingxiang Liu, Xu Liu, Chenghao Liu, Qingsong Wen, and Yuxuan Liang. Time-FFM: Towards LM-empowered federated foundation model for time series forecasting. arXiv preprint arXiv:2405.14252, 2024.

[60] Hamdy Awad, Gabriel Alcodray, Arwa Raza, Racha Boulos, Michael Essandoh, Sujatha Bhandary, and Ryan Dalton. Intraoperative hypotension–physiologic basis and future directions. Journal of Cardiothoracic and Vascular Anesthesia, 36(7):2154-2163, 2022.

[61] Xihao Piao, Zheng Chen, Taichi Murayama, Yasuko Matsubara, and Yasushi Sakurai. Fredformer: Frequency debiased transformer for time series forecasting. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2400-2410, 2024.

[62] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[63] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[64] Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In International Conference on Machine Learning, pages 8857-8868. PMLR, 2021.

A Dataset Details

Table 4: Statistics of the VitalDB and Clinical IOH datasets under varying sampling settings.
| Dataset | Sampling (s) | Historical Window | Predicted Window | Train | Val | Test | Surgeries | N | IOH_train |
|---|---|---|---|---|---|---|---|---|---|
| Clinic-IOH | 6 | 150 | 50 | 2749 | 852 | 884 | 28 | 1452 | 135 |
| | | | 100 | 3348 | 937 | 1138 | | | |
| | | | 150 | 4507 | 1578 | 1804 | | | |
| | 10 | 90 | 30 | 10765 | 3236 | 3481 | | | |
| | | | 60 | 12334 | 3752 | 4227 | | | |
| | | | 90 | 13320 | 4166 | 4609 | | | |
| VitalDB | 3 | 300 | 100 | 18172 | 2308 | 2299 | 12 | 1522 | 2026 |
| | | | 200 | 27031 | 3635 | 3356 | | | |
| | | | 300 | 38479 | 5201 | 4647 | | | |

Table 4 summarizes the key statistics of the two intraoperative hypotension (IOH) datasets used in this study: Clinical IOH and VitalDB. For each dataset, we list the sampling frequency, historical length l, predicted horizon t, and the number of training, validation, and test samples. The table also includes the number of unique patients (N) and surgeries, along with the number of IOH events identified in the training set (IOH_train) according to our clinical threshold definition.

Clinical IOH Dataset. This dataset consists of intraoperative data collected from 6,822 patients undergoing anesthesia. It contains high-resolution arterial blood pressure (ABP) waveforms sampled at 100 Hz and structured patient information including age, gender, and surgery type. To accommodate different temporal resolutions, ABP waveforms were processed into MAP series and resampled at 6 s and 10 s intervals. Segments shorter than 1,000 seconds were discarded, yielding 1,452 valid recordings for analysis. Data acquisition was approved by the institutional ethics committee.

VitalDB Dataset. Derived from the public VitalDB repository, this dataset initially included 6,388 intraoperative recordings with ABP values sampled every 3 seconds. We excluded recordings with more than 20% missing data in the observation window. After filtering, 1,522 high-quality samples remained for downstream tasks.
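The waveform-to-MAP preprocessing described above can be sketched as follows. Averaging the ABP samples within each resampling interval is an assumption made here for illustration; the paper states only that waveforms were processed into MAP series and resampled.

```python
import numpy as np

def abp_to_map_series(abp, fs=100, interval_s=6, min_len_s=1000):
    """Convert a 100 Hz ABP waveform into a MAP series at a coarser interval.

    Assumption: the MAP value at each step is the mean pressure over that
    interval. Segments shorter than min_len_s seconds are discarded
    (returned as None), mirroring the Clinical IOH preprocessing."""
    if len(abp) < min_len_s * fs:
        return None
    step = fs * interval_s                  # waveform samples per MAP point
    n = len(abp) // step
    return abp[: n * step].reshape(n, step).mean(axis=1)

abp = np.full(100 * 1200, 80.0)             # a 1200-second constant waveform
map_series = abp_to_map_series(abp, interval_s=6)
too_short = abp_to_map_series(np.zeros(100 * 500))   # below 1000 s: discarded
```

A 1200-second waveform at a 6-second interval yields a 200-point MAP series, while the 500-second segment is rejected by the length filter.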
https://arxiv.org/abs/2505.22116v1
Data Splitting and Forecasting Settings. Both datasets are split into training, validation, and test subsets using a 3:1:1 ratio, preserving temporal consistency without shuffling. Each model ingests a fixed 15-minute historical MAP window and predicts MAP trajectories over future horizons of 5, 10, or 15 minutes. These prediction lengths are chosen based on prior clinical research demonstrating their practical relevance for IOH risk forecasting [60]. The Clinical IOH dataset was de-identified under the HIPAA Safe Harbor method by removing all 18 identifiers. The VitalDB dataset is publicly available under the CC0 1.0 license, allowing unrestricted use for research.

Table 5: Hyperparameter configurations across different datasets and settings.

Dataset    Sampling (s)  Pred. Window  Pretrain LR  Finetune LR  H  E  Δ_Normal  Δ_IOH
CH-OBPB    6             50            10^-4        10^-5        4  5  150       2
                         100           10^-5        5×10^-5      4  5
                         150           10^-4        10^-4        3  2
CH-OBPB    10            30            10^-4        10^-4        5  2  20        1
                         60            10^-5        5×10^-5      4  4
                         90            5×10^-5      10^-4        1  5
VitalDB    3             100           3×10^-5      10^-4        3  2  150       10
                         200           5×10^-5      10^-4        4  2
                         300           10^-5        10^-4        4  5

B Experiment Details

Table 5 summarizes the hyperparameter configurations used across different datasets and experimental settings. Specifically, it includes the predicted window length t, the learning rates for domain-adaptive pretraining and task fine-tuning, the number of augmented series H, the number of GPT layers E, and the sampled intervals Δ_Normal and Δ_IOH. The batch size is fixed at 4 for pretraining and 8 for fine-tuning. The pretraining masking ratio R is set to 0.2, and the hyperparameter ρ of the IOH loss is set to 10. The diffusion process uses K = 50 steps with a cosine variance schedule [64] from β_1 = 10^-4 to β_K = 0.5. Most baseline models, including our proposed method, are evaluated on a single NVIDIA RTX 4090 GPU to ensure a fair comparison under practical deployment settings.
However, due to the substantial memory requirements of TimeLLM, which is based on the LLaMA 7B language model, both training and inference for this model are conducted on a server equipped with NVIDIA A100 GPUs. We acknowledge the discrepancy in hardware and note that TimeLLM cannot be executed on the RTX 4090 due to out-of-memory limitations, making the A100 the minimal viable hardware configuration for its evaluation. Additionally, the generation of clinical descriptions in the PCDG module is performed using GPT-4o.

C Visualization

C.1 Model Prediction Visualization

[Figure 6: Visual comparison of MAP prediction results across different models (rows: IOHFuseLM, Fredformer, TimeLLM, HMF, GPT4TS, PatchTST, DLinear).]

Figure 6 presents a visual comparison of seven models under the 6-second sampling granularity, with a historical window length l = 150 and a predicted horizon t = 150. Each row corresponds to one model and each column represents a distinct IOH case. It can be observed that our proposed model consistently identifies hypotensive risks across all three representative IOH events, demonstrating both precise forecasting accuracy and effective event discrimination. In contrast, among the baseline models, only Fredformer correctly identifies the third IOH event, while the others fail to capture the hypotensive onset in this scenario.

C.2 Augmentation Visualization

Figure 7 presents representative examples of MAP time series augmented by the proposed MTRDA framework under two different sampling frequencies. The augmented series preserve the trends extracted through
multiscale smoothing and simultaneously introduce fine-grained variations that enrich the temporal structure of the original series. In particular, the augmented outputs retain the essential characteristics of hypotensive episodes while reducing noise, reflecting the ability of MTRDA to reconstruct physiologically meaningful patterns through trend-residual decomposition and diffusion-based enhancement. These results confirm the effectiveness of MTRDA in improving the representation quality of sparse IOH series under varying temporal resolutions.

[Figure 7: Examples of augmented MAP series of MTRDA under different sampling frequencies. Panels (a)–(c) show Samples 1–3 at 6 s sampling; panels (d)–(f) show Samples 1–3 at 10 s sampling. Each panel plots measured versus augmented MAP (mmHg) over time (s).]

D Hyperparameter Sensitivity

Table 6: Performance of the model under different historical and predicted lengths.
t = 30:  l = 30: MSE_IOH 120.85±11.18, MAE_IOH 8.98±0.67, Recall 0.482,  AUC 0.6852
         l = 60: MSE_IOH 101.09±9.34,  MAE_IOH 8.28±0.64, Recall 0.5743, AUC 0.7288
         l = 90: MSE_IOH 43.29±8.64,   MAE_IOH 4.65±0.20, Recall 0.7205, AUC 0.7729
t = 60:  l = 30: MSE_IOH 72.73±13.06,  MAE_IOH 6.75±0.90, Recall 0.8191, AUC 0.7596
         l = 60: MSE_IOH 65.70±12.91,  MAE_IOH 6.03±0.82, Recall 0.7822, AUC 0.7444
         l = 90: MSE_IOH 82.07±11.10,  MAE_IOH 6.88±0.77, Recall 0.8012, AUC 0.7693
t = 90:  l = 30: MSE_IOH 56.93±5.44,   MAE_IOH 5.88±0.48, Recall 0.8368, AUC 0.7184
         l = 60: MSE_IOH 85.45±2.79,   MAE_IOH 7.45±0.45, Recall 0.7619, AUC 0.7318
         l = 90: MSE_IOH 91.16±15.48,  MAE_IOH 7.25±0.69, Recall 0.7976, AUC 0.7642

To assess the sensitivity of the model to varying historical and predicted horizons, we evaluate performance across different combinations of historical length l and predicted length t. As shown in Table 6, increasing the historical length generally improves performance across all metrics. The setting with l = 90 and t = 30 achieves the best overall results, with the lowest prediction error, indicating that a longer temporal context enhances short-term IOH risk forecasting. In contrast, extending the predicted length leads to a moderate decline in accuracy, reflecting the increased difficulty of long-range forecasting in clinical settings.

[Figure 8: Parameter sensitivity analysis on the Clinic IOH dataset.]

We conduct a sensitivity analysis on the Clinic IOH dataset under 10-second sampling resolution to evaluate the impact of key hyperparameters. As shown in Fig. 8, the results indicate that model performance is moderately sensitive to the fine-tuning learning rate, while the pretraining learning rate exhibits greater stability. Varying the number of GPT layers shows that moderate depth achieves better generalization, whereas excessive depth may lead to overfitting. Additionally, appropriate levels of data augmented by MTRDA consistently improve performance, though excessive augmentation can introduce distributional noise and degrade accuracy. These findings highlight the importance of balanced model capacity and augmentation strategies for stable performance.

E Prompt Design for PCDG
[Figure 9: Illustration of the PCDG Prompt Design framework. Age, gender, and surgery type are fed to GPT-4o, which returns text of the form: "Patient is in the () age group. At this stage, () hormones influence vascular tone. Hemodynamic compliance and compensation are (). It is classified as a () surgery. The estimated blood loss during surgery is ()."]

To generate patient-specific clinical narratives, we design a structured prompt that guides the large language model GPT-4o in producing medically grounded descriptions. This prompt incorporates static patient attributes including age, gender, and surgery type, and aligns with predefined medical templates contextualized by domain knowledge. As shown in Fig. 9, the generated text serves as a personalized semantic representation used for multimodal fusion in the forecasting pipeline.

Prompt Template: The age of patient is {age of patient}, gender is {gender of patient}, and the type of surgery is {surgery type of patient}. Please provide the answer directly, separated by commas, without any spaces in between, removing the parentheses when responding. Without any explanations or additional content. The patient belongs to the () age group, whose vascular compliance and cardiovascular compensatory capacity are (). At this time, () hormones act on the blood vessels. This surgery is a () type of surgery, and the blood loss is usually ().

This prompt balances consistency with clinical variability by incorporating patient-specific attributes. Grounded in hospital guidelines, clinical heuristics, and literature [53,54], it is tokenized with an extended vocabulary covering physiological and surgical terms, enabling the model to embed static medical context into prediction.

F Broader Impacts

Although developed for intraoperative hypotension (IOH) prediction, IOHFuseLM is applicable to other clinical tasks involving rare but high-risk physiological events.
Similar patterns exist in intraoperative hypoxia detection, where brief desaturation episodes require early identification from noisy SpO2 series, and in intensive care units for sepsis onset prediction, which involves subtle temporal shifts across multiple vital signs. Cardiac arrhythmia monitoring and postoperative respiratory depression detection also share the challenge of aligning transient waveform abnormalities with individualized clinical context. By combining structured patient descriptions with physiological series, our approach facilitates personalized event recognition in settings where traditional signal-only models may fall short. As such, IOHFuseLM may serve as a general blueprint for multimodal modeling in personalized clinical monitoring systems. Moreover, as shown in Fig. 5, IOHFuseLM achieves 48 ms inference time on an NVIDIA RTX 4090, meeting the responsiveness requirements for real-time physiological monitoring as outlined in ISO 80601-2-77:2017.

G Limitations

While IOHFuseLM shows strong performance on two intraoperative datasets, some limitations remain. The model may be sensitive to differences in data collection protocols across hospitals. Event sparsity and reliance on generated clinical descriptions may also affect its robustness in unfamiliar domains. Future work may explore multi-center pretraining, incorporation of clinical ontologies, or confidence-aware prediction strategies to improve transferability.
arXiv:2505.22125v1 [cs.MA] 28 May 2025

SENTIMENT SIMULATION USING GENERATIVE AI AGENTS

Melrose Tia1, Jezreel Sophia Lanuzo1, Lei Rigi Baltazar1, Marie Joy Lopez-Relente2, Diwa Malaya Quiñones3, Jason Albia1*
1Netopia AI, Inc., Manila, Philippines
2Institute of Statistics, University of the Philippines Los Baños, Laguna
3Department of Psychology, University of the Philippines Diliman, Quezon City
{melrose, sophia, lei, jason}@netopia.ai, {daquinones, mflopez2}@up.edu.ph

ABSTRACT

Traditional sentiment analysis relies on surface-level linguistic patterns and retrospective data, limiting its ability to capture the psychological and contextual drivers of human sentiment. These limitations constrain its effectiveness in applications that require predictive insight, such as policy testing, narrative framing, and behavioral forecasting. We present a robust framework for sentiment simulation using generative AI agents embedded with psychologically rich profiles. Agents are instantiated from a nationally representative survey of 2,485 Filipino respondents, combining sociodemographic information with validated constructs of personality traits, values, beliefs, and socio-political attitudes. The framework includes three stages: (1) agent embodiment via categorical or contextualized encodings, (2) exposure to real-world political and economic scenarios, and (3) generation of sentiment ratings accompanied by explanatory rationales. Using Quadratic Weighted Accuracy (QWA), we evaluated alignment between agent-generated and human responses. Contextualized encoding achieved 92% alignment in replicating original survey responses. In sentiment simulation tasks, agents reached 81%–86% accuracy against ground-truth sentiment, with contextualized profile encodings significantly outperforming categorical (p < 0.0001, Cohen's d = 0.70). Simulation results remained consistent across repeated trials (±0.2–0.5% SD) and resilient to variation in scenario framing (p = 0.9676, Cohen's d = 0.02).
Our findings establish a scalable framework for sentiment modeling through psychographically grounded AI agents. This work signals a paradigm shift in sentiment analysis, from retrospective classification to prospective and dynamic simulation grounded in the psychology of sentiment formation.

Keywords: agentic simulation · sentiment analysis · sentiment simulation · generative AI agents · behavioral science

1 Introduction

Sentiment analysis involves assessing opinions and attitudes toward specific areas of interest, playing a pivotal role in influencing decisions across business, societal, and individual domains [1,2]. While the term sentiment analysis gained prominence in the early 2000s [3,4], the broader practice of gauging public opinion has long shaped policy-making, democratic discourse, and marketing strategies [5]. As digital platforms and user-generated content increasingly serve as channels for public expression, sentiment analysis enables organizations to harness opinion-rich and unstructured data to refine communication strategies and to respond effectively to societal trends. In the socio-political domain, sentiment analysis has supported applications ranging from policy evaluation to campaign strategy by enabling large-scale interpretation of public opinion. Examples include assessments of public engagement with government initiatives [6,7,8,9], political campaign analysis [10,11,12], and citizen feedback monitoring via social media [13]. For instance, Sandoval-Almazan et al. (2020) [10] examined Facebook reactions to political campaign posts in Mexico, uncovering patterns in public engagement. In Indonesia, Sukma et al. (2020) [7] analyzed Twitter responses to the Omnibus Law, revealing levels of public support and dissent toward the policy. In the Philippines,
*Corresponding author: jason@netopia.ai
Miranda et al. (2021) [12] tracked sentiment around presidential state addresses, while Umali et
al. (2020) [13] assessed citizen satisfaction with various government agencies based on social media commentary. Beyond politics, sentiment analysis is widely used in the private sector, where it serves as a critical tool in marketing, advertising, and customer experience strategies. Rathore et al. (2020) [14], for example, analyzed emotional patterns in online comments before and after product launches to assess market reception and product fit. Giannakis et al. (2022) [15] showed how consumer sentiment from social media can inform early-stage product development, while Yin et al. (2022) [16] studied brand loyalty and satisfaction through Twitter sentiment toward the e-commerce platforms Lazada and Shopee. In addition, sentiment analysis has also been applied to evaluate consumer reviews for predicting behavior and satisfaction [17,18] and to generate real-time customer insights [19], thereby contributing to product refinement, enhanced customer engagement, and data-driven business strategies.

Traditional Sentiment Analysis and Its Limitations

Traditional sentiment analysis often relies on structured methods such as surveys, opinion polls, and focus groups, alongside more recent digital sources like social media [2]. These approaches have paved the way for computational techniques leveraging machine learning (ML) and natural language processing (NLP) to classify sentiment (e.g., negative, neutral, positive) based on large-scale text analysis. These methods analyze linguistic patterns, including the use of emotionally charged words (e.g., "happy", "disappointed") and syntactic structures that convey opinions or emotions. Despite advances in ML and deep learning models that boost classification accuracy [20], these approaches are fundamentally limited. First, they primarily capture surface-level linguistic cues, often oversimplifying the complexity and nuance of human emotion and opinion.
Second, these models function as black-box systems that lack transparency, offering limited insight into the reasoning behind sentiment predictions [21]. This lack of interpretability impairs trust, accountability, and applicability in domains requiring nuanced understanding. Third, and perhaps most critically, current sentiment analysis techniques often fail to account for contextual and psychological factors, including individual biases, personality traits, values, or temporal circumstances [22,23,24]. For example, Mahmoudi (2021) [22] emphasizes how user-level biases can lead to divergent interpretations of the same event, which are often ignored in traditional models. Because these systems typically offer retrospective summaries rather than dynamic simulations, they struggle to support forward-looking applications such as policy testing, narrative impact studies, or synthetic focus groups [25]. To illustrate, a sentiment model trained on social media posts from a prior election may accurately classify political opinions from that period [26,27], but it cannot simulate how a specific group, such as rural first-time voters, might react to a new policy announcement or media event. These limitations reveal a broader issue: such models are inadequate for capturing sentiment as situated cognition, that is, an emergent, psychologically grounded response shaped by internal dispositions and external stimuli [28].

Sentiment Simulation using AI and Behavioral Science

Rooted in the above challenges, we propose a conceptual shift: from retrospective sentiment classification to AI and behavioral science-driven sentiment simulation. This approach integrates two core paradigms: (1) a behavioral science framework that explains how sentiments arise from psychological drivers, and (2) a
simulation-based modeling paradigm enabled by generative AI. Behavioral science provides the theoretical foundation for this shift. It conceptualizes sentiment as a dynamic construct shaped by cognition, emotion, and situational context. Social psychology suggests that sentiment reflects attitudes formed from beliefs, values, and environmental factors, which, in turn, shape behavior [29]. A complementary analysis by Li and Hovy (2017) [30] further argues that sentiment originates from emotionally driven preferences and the pursuit of personal goals. These perspectives suggest that sentiment is not just a textual artifact but a behavioral expression rooted in individual psychology. From a methodological perspective, unlike traditional models that classify past sentiment, generative models such as large language models (LLMs) enable prospective simulations that can generate behaviorally rich, context-sensitive sentiment. These generative models can simulate trust dynamics [31], personality expression [32], and opinion formation [33], capabilities that align well with psychological realism. In addition, generative models have also catalyzed new research on synthetic populations and simulated human studies [34,35], positioning generative AI as a powerful tool for behavioral science. Representative studies illustrating these advances are summarized in Table 1.

Table 1: Recent studies that inform and support this work, highlighting their domains and key findings.

"Using LLMs to Simulate Multiple Humans and Replicate Human Subject Studies" [32] (Behavioral Economics and Social Psychology): Simulated classic behavioral studies (e.g., Ultimatum Game, Milgram) and found that larger LLMs (GPT-3.5/4) could replicate established findings across economics, psycholinguistics, and social psychology.
"Generative Agents: Interactive Simulations of Human Behavior" [36] (Human-AI Interaction): Introduced "generative agents", LLM-driven agents with memory, planning, and reflection. Demonstrated emergent behavior in interactive environments (e.g., autonomously organizing a Valentine's Day party) from a single prompt.
"Generative Agent Simulations of 1000 People" [24] (Social Science): Developed an LLM-based agent architecture to simulate 1,052 real individuals based on interviews. Agents replicated survey responses with ≈85% accuracy, comparable to humans' own retest accuracy, and predicted personality traits well.
"User Behavior Simulation with LLM-based Agents" [37] (User Behavior Simulation): Developed an LLM-based framework for simulating user behaviors (e.g., web navigation). Captured social dynamics like conformity and information cocooning.
"Can Large Language Model Agents Simulate Human Trust Behavior?" [31] (Behavioral Economics): Used Trust Games to evaluate agent behavior. GPT-4 agents showed trust-like behavior and strong alignment with human responses in social dilemmas.
"Evaluating the Ability of LLMs to Emulate Personality" [38] (Personality Modeling): GPT-4 simulated individuals with Big Five profiles. Generated responses showed high internal consistency and strong correlation with self-reported personality scores.

While the above prior studies have illustrated the potential of LLMs to simulate behaviors, replicate human experiments, or model trust, none have yet grounded sentiment simulation in real psychographic survey data. Our work fills this gap by embedding psychologically validated profiles into generative AI agents to simulate how real people might respond to socio-political and economic scenarios.

Contribution of the Article

In this study, we present a simple and scalable generative AI agentic framework via structured LLM prompting to simulate the sentiment responses of survey respondents to several socio-political and economic scenarios. The AI
agents were instantiated to embody psychological profiles derived from a nationally representative survey, and their simulated responses are compared with the ground-truth data. More precisely, the contributions of this work are as follows:
•We demonstrate that AI agents can be effectively instantiated to embody psychological profiles constructed from empirically generated data. These profiles incorporate socio-demographic data and variables from validated psychological frameworks and attitudes on key socio-political and economic issues, providing agents with psychographically grounded priors.
•We show that these AI agents are capable of replicating survey results, as well as sentiment distributions observed in real-world responses, achieving high levels of individual-level alignment. Furthermore, we demonstrate that agent responses are robust across alternative framings of the same scenarios, indicating the consistency and stability of our simulation framework.

2 Methodology

2.1 Survey Design and Data Collection

The survey instrument was designed to provide an interdisciplinary understanding of Filipino citizens' profiles by integrating multiple well-established psychological frameworks to capture a deeper understanding of public sentiment towards various socio-political and economic issues in the Philippines. The instrument consists of 150 items, integrating both sociodemographic variables (age, sex, educational attainment, religion, and other key identifiers) and different psychological dimensions (personality traits, values, attitudinal frameworks, beliefs, and social and political behavior). These frameworks are theoretically grounded and considered temporally stable [39,40,41], allowing for the abstraction of consistent psychographic profiles. For greater sensitivity in capturing the intensity and direction of respondents' responses, most frameworks were measured using a 7-point Likert scale.
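As a sanity check on the sampling design, the precision quoted for the N = 2,485 sample can be reproduced under a worst-case simple-random-sampling assumption. The paper does not state which formula it used; the sketch below applies the standard normal-approximation margin of error:

```python
from math import sqrt

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a simple random
    sample of size n; z = 1.96 corresponds to a 95% confidence level."""
    return z * sqrt(p * (1 - p) / n)

moe = margin_of_error(2485)
# moe ≈ 0.0197, consistent with the reported 1.97% margin of error
```
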
Respondents were asked to express their level of agreement or disagreement with statements about selected major socio-political and economic issues [42]. Descriptive statistics of a nationally representative sample of 2,485 registered Filipino voters, with a 95% confidence level and 1.97% margin of error, are summarized in Table 2. The respondents' ages ranged from 18 to 89 years old, with the majority (33%) falling within the adult age group (28–43 years old). The sample was gender-balanced (50% female, 50% male), and the majority were married (57%). In terms of socioeconomic status, nearly half of the sample (49%) reported no monthly income, while 30% were categorized as low income. Most participants had completed at least high school (52%) or college (21%).

Table 2: Descriptive statistics of the study sample (N = 2,485).

Age Group: Young Adults (18–27 years old) 399 (16%); Adults (28–43 years old) 820 (33%); Middle-Aged Adults (44–59 years old) 736 (30%); Seniors (60+ years old) 530 (21%)
Marital Status: Single 380 (15%); Live-In 395 (16%); Married 1,413 (57%); Separated 77 (3%); Widowed 220 (9%)
Monthly Income-Based Socioeconomic Status: No Income 1,213 (49%); Low Income 740 (30%); Middle Income 530 (21%); High Income 2 (<1%)
Highest Educational Attainment: No Formal Education 8 (<1%); At least Elementary 502 (20%); At least High School 1,294 (52%); At least Vocational 152 (6%); At least College 525 (21%); At least Graduate Studies 4 (<1%)

To our knowledge, our data represents one of the largest and most demographically diverse samples in the Philippines used to examine
psychological frameworks, offering a robust basis for generalizing the findings to the broader adult population. Previous psychological studies in Filipino samples, such as those by Church et al. (1997) [43] (N = 629), Del Pilar (2017) [44] (N = 576), and Wapaño (2021) [45] (N = 828), were conducted with smaller, more localized samples.

2.2 Sentiment Simulation

The sentiment simulation framework leverages generative AI agents, embodied with psychographic and contextual variables, to model the sentiment of respondents in response to varying socio-political and economic scenarios. The framework enables generative agents to produce dynamic sentiment responses that are not only reactive to input stimuli but also aligned with their internal psychological attributes and contextual stimuli. As shown in Figure 1, the simulation framework consists of three core stages: Agent Embodiment, Agent Exposure to Scenarios, and Agent Response to Scenarios. All simulations were conducted using Llama 3.1 70B¹, a state-of-the-art open-weight LLM optimized for instruction following, long-context reasoning, and alignment with human intent. This model is well-suited for simulating agent behavior within psychological frameworks due to its architecture that supports multi-turn coherence and robust language understanding [47].

Figure 1: Sentiment Simulation Framework Using AI Agents.

2.2.1 Agent Embodiment

Each AI agent is embodied with a unique set of sociodemographic and psychographic variables derived from the empirical survey. These variables were embedded into prompt templates using one of two encoding strategies: categorical or contextualized.
•Categorical encoding involved assigning discrete labels (e.g., Low, Moderate, High) to each psychological variable, producing a structured but abstract representation of personality and attitudes.
•Contextualized encoding, by contrast, translated these categories into narrative descriptions that reflect how psychological variables might manifest in scenario-relevant contexts. For example, high openness in the policy domain might be expressed as "receptive to new policy ideas" or "prone to considering multiple perspectives".

¹ Llama 3.1 70B was selected following rigorous experimentation with various LLMs, evaluating their sensitivity to political and linguistic bias [46].

To evaluate the effectiveness of embodiment, we conducted a survey replication task wherein each agent, embodied with a specific respondent's profile, answered the same Likert-scale survey items as the human participant. This task assessed whether the agent could faithfully reflect the individual's psychological profile through simulated responses.

2.2.2 Agent Exposure to Scenarios

In this phase, agents were presented with real-world scenarios analogous to campaign messages, policy debates, economic developments, or media coverage of socio-political and economic issues: budget transparency, political dynasties, inflation, the justice system, and wage policies. These scenarios are crafted as narrative prompts designed to elicit affective, cognitive, and psychographically grounded responses, engaging the agent's internal dispositions. In addition, to examine the impact of scenario framing effects, each scenario was presented with either positive or negative polarity, simulating ideological differences in real-world discourse (e.g., progressive vs. conservative perspectives). Respondents were randomly assigned to one framing type, while ensuring equal distribution of framing across the entire sample population.

2.2.3 Agent Response to Scenarios

Following scenario exposure, each agent produced a structured sentiment response, rated on a 5-point Likert scale
(Negative, Slightly Negative, Neutral, Slightly Positive, and Positive), along with a brief explanatory rationale for its simulated sentiment. After generating its initial sentiment, the agent was prompted with a self-assessment task, asking whether its response was logically consistent with its psychographic profile and the characteristics of the scenario (see Supplementary Material D). This iterative validation step reinforced coherence and internal consistency within the simulated responses.

2.3 Performance Evaluation Metrics

2.3.1 Quadratic Weighted Accuracy (QWA)

QWA was employed as the primary metric to evaluate alignment between agent-generated and human responses on an ordinal scale. It penalizes distant misclassifications more heavily than near-miss errors, making it particularly suitable for Likert-scale classification tasks, where response categories are inherently ordered. The QWA score is computed using Eq. (1), with penalties that increase quadratically with the distance between simulated and actual responses. This scoring method allows for a more nuanced assessment of model performance, rewarding response predictions that are close to the expected value even when they are not exact matches.

    w_ij = 1 − (d_ij / d_max)^2    (1)

where w_ij is the score assigned to the pair of categories i (true response) and j (simulated response); d_ij is the absolute distance between the true and simulated response categories; and d_max is the maximum possible distance given the range of all possible response categories. Higher QWA scores indicate that the agents' responses are statistically accurate and internally coherent, i.e., interpretable within the context of their embodied psychological profiles. Score matrices are visualized in Supplementary Materials E.1 and E.2.
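Eq. (1) translates directly into code. The sketch below (illustrative names, assuming integer-coded response categories) computes the per-pair weight and the mean QWA over a batch of paired responses:

```python
def qwa_weight(true_cat, sim_cat, n_categories=5):
    """Per-pair score from Eq. (1): w_ij = 1 - (d_ij / d_max)^2,
    where d_max = n_categories - 1 on an ordinal scale."""
    d_max = n_categories - 1
    d_ij = abs(true_cat - sim_cat)
    return 1.0 - (d_ij / d_max) ** 2

def qwa(true_responses, simulated_responses, n_categories=5):
    """Mean quadratic weighted accuracy over paired responses."""
    scores = [qwa_weight(t, s, n_categories)
              for t, s in zip(true_responses, simulated_responses)]
    return sum(scores) / len(scores)

# Exact matches score 1.0; on a 7-point scale, a one-point miss
# scores 1 - (1/6)^2 ≈ 0.972, i.e. roughly 97%.
exact = qwa_weight(3, 3)
```
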
2.3.2 Statistical Tests

To evaluate the statistical significance of observed differences in agent–human alignment, we employed both parametric (paired t-test) and non-parametric (Wilcoxon signed-rank) analyses, depending on the distributional properties of the QWA scores. Specifically, the paired t-test was used when the assumption of normality was satisfied, whereas the Wilcoxon signed-rank test was applied when this assumption was violated, due to its robustness to non-normal distributions. A commonly used threshold of p < 0.05 was used to determine statistical significance. In addition to hypothesis testing, we computed Cohen's d to estimate effect sizes and assess the practical relevance of observed differences. Effect sizes were interpreted using standard benchmarks: d ≈ 0.2 (small), d ≈ 0.5 (medium), and d ≥ 0.8 (large). This dual approach enabled a robust interpretation, ensuring that the reported improvements in alignment were not only statistically significant but also practically meaningful.

3 Results and Discussion

3.1 Agent Embodiment Evaluation

Agent embodiment was implemented using two distinct encoding strategies: categorical encoding, which uses ranked labels (e.g., Low, Moderate, High), and contextualized encoding, which embeds psychological variables into narrative descriptions. These strategies offer differing levels of abstraction in representing individual profiles, allowing us to compare their effects on simulated sentiment alignment. These encoding strategies draw from recent works that attempt to embed psychological traits into LLM prompts. For example, Wang et al. (2025) [38] used personality assessment data, albeit limited to numeric Big Five scores, to prompt GPT-4 in simulating individual behaviors. Their method mirrors our categorical encoding approach, which also draws from empirical data but translates scores into ranked labels such as Low, Moderate, or High. In contrast, Xie et al. (2024)
https://arxiv.org/abs/2505.22125v1
[31] used structured prompts with demographic and background details, similar to our contextualized strategy, to elicit trust behaviors from LLMs. Our study advances these efforts by grounding both encoding strategies in real large-scale survey data, allowing systematic comparisons between encoding levels. Agent alignment with human survey responses is measured using QWA, where identical ratings yield a 100% accuracy score and one-point differences result in a proportionally lower score of 97%, capturing the degree of ordinal misalignment. See Supplementary Material E.1 for details. Figure 2 illustrates the distribution of QWA scores for the two encoding strategies. The contextualized group's curve (blue) is consistently right-shifted, indicating that a larger proportion of agents achieved higher alignment scores compared to their categorically encoded counterparts. This population-level trend suggests that narrative profile encoding enables more human-consistent responses.

Figure 2: Cumulative Distribution Function (CDF) Plot: Distributional Comparison of QWA Scores Across Profile Encoding Strategies.

Figure 3 offers an agent-level comparison. Each line connects the categorical and contextualized scores for a single agent, highlighting changes in alignment. Most lines extend rightward, reinforcing that contextualized encoding generally results in improved alignment for individual agents. To determine whether the observed performance difference was statistically significant, we employed a Wilcoxon signed-rank test. Preliminary diagnostics using the Shapiro–Wilk test indicated violations of normality (p = 0.0004), justifying the use of a non-parametric approach. The Wilcoxon signed-rank test yielded a significant result (p < 0.0001), suggesting that the alignment advantage of contextualized profile encoding is unlikely to be attributable to random variation.
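In practice the Wilcoxon signed-rank test would typically be run with a library routine such as scipy.stats.wilcoxon; purely as an illustration of what the statistic measures (this is not the paper's analysis code), the sketch below ranks the absolute paired differences, discards ties at zero, and sums the ranks by sign:

```python
def wilcoxon_w(xs, ys):
    """Wilcoxon signed-rank statistic W for paired samples.

    Zero differences are discarded; tied absolute differences receive
    average ranks. Returns min(W+, W-), the conventional test statistic.
    """
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend the group while the next absolute difference is tied
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

The smaller the returned W relative to what chance would produce for the sample size, the stronger the evidence of a systematic difference between the paired conditions.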
To assess the practical significance of this effect, we calculated Cohen's d = 0.70, indicating a moderate effect size. Interpreted probabilistically, this reflects a 76% chance that a randomly selected agent with contextualized encoding would outperform one using categorical encoding in response alignment [48]. These findings provide statistical and practical evidence that contextualized profile encoding yields better alignment with human responses compared to categorical encoding.

Figure 3: Paired Dot Plot: Per-Agent Comparison of QWA Scores Across Profile Encoding Strategies. The vertical axis represents agents that are indexed arbitrarily.

On average, agents using contextualized profiles achieved 92% alignment with original human responses, demonstrating the model's capacity to simulate individual-level psychographic data with high fidelity. These results compare favorably with prior efforts such as [49], which introduced the LLM-Mirror framework to assess the consistency between LLM-generated responses and human survey data. While their persona-based prompting achieved 69% to 73% consistency in domains like online advertising, corporate reputation, and customer loyalty, our approach reaches notably higher alignment levels across a broader array of psychological constructs. Similarly, Yeykelis et al. (2024) [50] found that AI personas could reproduce findings from experimental media studies with a 76% success rate. Our 92% alignment suggests a stronger capacity to simulate nuanced attitudinal data, particularly when narrative context is used to express psychological variables. Collectively, these results demonstrate that contextualized psychological profile encoding significantly enhances agent–human alignment and produces more consistent responses. Contextualized encodings guide agents more effectively by embedding psychological traits within descriptive, scenario-relevant narratives.
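For paired designs, Cohen's d is commonly computed as the mean of the paired differences divided by their standard deviation. The sketch below uses that convention (an assumption on our part; the paper does not specify which variant of d it used) and maps the value onto the standard benchmarks quoted earlier:

```python
import math


def cohens_d_paired(xs, ys):
    """Cohen's d for paired samples: mean(diff) / sample SD of diff."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var)


def interpret_d(d):
    """Standard benchmarks: ~0.2 small, ~0.5 medium, >= 0.8 large."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"
```

Under these benchmarks, the reported d = 0.70 falls in the medium band, consistent with the "moderate effect size" reading above.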
The performance gap between categorical
and contextualized encodings highlights the benefits of translating psychological variable labels into rich psychographic contexts, enabling agents to respond more accurately in alignment with their profiles, a critical foundation for generating psychologically coherent sentiment simulations.

3.2 Sentiment Simulation Performance

Following the high alignment observed in the agent embodiment task, we next evaluate the ability of psychographically grounded agents to simulate human sentiment across a set of socio-political and economic scenarios: wage policies, budget transparency, inflation, the justice system, and political dynasties. This analysis provides a broader test of the model's ability to generate human sentiment responses in real-world contexts.

Table 3: Sentiment Simulation Accuracy Across Socio-Political and Economic Scenarios.

Scenario              Categorical (Avg ± SD)   Contextualized (Avg ± SD)
Wage Policies         80.3% ± 0.19%            83.4% ± 0.20%
Budget Transparency   80.1% ± 0.21%            82.9% ± 0.33%
Inflation             74.9% ± 0.32%            81.8% ± 0.17%
Justice System        86.7% ± 0.39%            86.2% ± 0.26%
Political Dynasties   68.4% ± 0.20%            81.2% ± 0.51%

Table 3 summarizes sentiment alignment performance across the scenarios, comparing categorical and contextualized encoding strategies. As shown, contextualized encoding consistently outperformed categorical encoding in four out of five scenarios, with alignment gains ranging from 2.8 to 12.8 percentage points. While categorical encoding achieved accuracy levels ranging from 68% to 87%, contextualized profile encoding yielded more stable and higher performance of 81% to 86%. The largest accuracy gain occurred in the political dynasties scenario (+12.8%), followed by inflation (+6.9%). For wage policies and budget transparency, improvements were more modest (+3.1% and +2.8%, respectively).
Interestingly, performance was nearly identical in the justice system scenario (−0.5%), suggesting that some scenarios may be less influenced by internal psychological factors and more driven by ideological alignment or external cues. These findings reinforce that sentiment simulation is enhanced when agents are grounded in contextually expressed psychological traits, not merely categorical summaries. The more realistically an agent's internal disposition is modeled, the more accurately it mirrors human responses. This supports existing research [31] indicating that contextual richness improves behavioral realism in LLM simulations. Considering the inherent variability of LLMs, stemming from prompt sensitivity and randomness introduced by stochastic decoding, we evaluated the stability of simulation outputs over repeated trials. Each scenario was simulated five (5) times, and performance was averaged to assess internal consistency. As also shown in Table 3, sentiment alignment scores were highly stable, with standard deviations for contextualized encoding ranging from ±0.17% to ±0.51%, indicating minimal variability in performance across trials. More precisely, the justice system scenario exhibited the highest and most stable performance, with QWA scores ranging narrowly from 86.0% to 86.7%. Wage policies and budget transparency also showed strong stability, with QWA scores clustered tightly around the mid-83% range. Inflation followed a similar trend, with minor fluctuations around 82%. Although political dynasties had the lowest overall scores, ranging from 80.1% to 81.4%, the variation across trials was still minimal, indicating internal consistency even in comparatively more complex or ideologically loaded scenarios. Ultimately, our framework achieved high alignment performance across all tested scenarios (81% to 86%), reflecting not only the predictive accuracy of the model, but also its behavioral plausibility. The framework's consistency across
trials is illustrative of its suitability for use in replicable and scalable behavioral simulations. Our findings highlight three pillars of effective simulation in behavioral science, specifically in the social sciences: (1) psychological grounding through contextualized traits, (2) consistency of performance across diverse and complex scenarios, and (3) sentiment alignment with empirically plausible human behavior [51, 52]. Moreover, in light of the variability inherent in emotional reasoning and the influence of framing on an individual's judgment [53], our results speak not only to technical performance, but to the psychological credibility of the simulated agents themselves.

3.2.1 Simulation Robustness to Scenario Framing

To further evaluate the framework's generalizability, we investigated its sensitivity to framing effects, i.e., whether sentiment alignment varied substantially depending on whether a scenario was presented in a positive or negative light (e.g., performing well under positive framing but poorly under negative framing). This step is important given that prior studies in the behavioral sciences and communication have shown that framing can substantially alter public opinion [54, 55].

Figure 4: Quadratic Weighted Accuracy Between Survey and Simulated Sentiments Across Framing Types of the Different Scenarios.

Figure 4 shows a plot comparison between the average QWA for positive (blue) and negative (orange) framings for each scenario. Across the five socio-political and economic scenarios, QWA scores remained high (77% to 88%), with no consistent performance degradation or amplification due to framing. While differences between the positively- and negatively-framed scenarios ranged from 0.4% to 9.7%, the directionality and magnitude of these differences varied across scenarios.
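A framing comparison like this one rests on a paired-sample t-test; libraries such as scipy.stats.ttest_rel handle it directly, but the statistic itself is simple. A minimal sketch in pure Python (illustrative only, not the authors' analysis code):

```python
import math


def paired_t(xs, ys):
    """Paired t statistic: mean(diff) / (SD(diff) / sqrt(n)).

    Returns (t, degrees_of_freedom); the p-value would then be looked up
    from the t distribution with n - 1 degrees of freedom.
    """
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n)), n - 1
```

A t statistic near zero, as implied by the non-significant framing result reported below in the text, means the average QWA difference between framing conditions is small relative to its sampling variability.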
For example, negatively-framed scenarios yielded higher alignment in inflation (+9.7%) and political dynasty topics (+0.4%), whereas positively-framed scenarios outperformed in the justice system (+4.3%), wage policies (+4.4%), and budget transparency (+0.9%). In addition, to further evaluate whether scenario framing influences sentiment simulation accuracy, we conducted a paired-sample t-test comparing agent–human alignment scores across positively- and negatively-framed versions of each issue. The paired t-test was chosen to assess mean differences between framing conditions, with the Shapiro–Wilk test confirming that the normality assumption was sufficiently met (p = 0.1388). The analysis yielded a non-significant result (p = 0.9676), indicating no statistically meaningful difference in simulation accuracy across framing conditions. Furthermore, to quantify the magnitude of any potential effect, we computed Cohen's d = 0.02, reflecting a negligible effect size. This suggests that the difference in QWA scores between framing conditions is practically insignificant, with sentiment alignment performance remaining stable regardless of scenario prompt framing. Collectively, these results indicate that scenario framing does not exert a consistent or meaningful influence on simulation accuracy. The framework allows agents to anchor their evaluations to their psychological attributes, rather than being influenced by differences in scenario polarity framing. These findings suggest that the agents remained anchored to their psychographic grounding, even under affective variation in scenario prompts. From a behavioral science perspective, this mirrors the consistency of human behavior across varied contexts, as documented in research on trait-based models [56]. This coherence supports the notion that rich, context-sensitive embeddings enable psychologically grounded rather
than context-reactive responses.

4 Conclusion

This study presents a psychographically grounded framework for sentiment simulation, leveraging language model agents embodied with empirically derived psychological profiles. By integrating validated constructs into structured prompts, we enable AI agents to simulate sentiment responses that are context-sensitive, psychologically coherent, and behaviorally plausible. Our evaluation demonstrates that agents instantiated with contextualized profile encodings closely replicate individual-level sentiment patterns. In a survey replication task, these agents achieved alignment scores of up to 92%, significantly outperforming categorical encoding strategies. This result underscores the importance of narrative-rich representations in capturing the depth and nuance of human sentiment. Beyond static replication, the framework also performs reliably in dynamic simulation tasks. When exposed to real-world socio-political and economic scenarios, agents achieved high alignment accuracies, indicating their capacity to model realistic sentiment responses. Importantly, these results remained highly stable across five independent trials and different scenario framings, highlighting the internal consistency of the framework despite the stochastic nature of language models. Overall, these results establish a reliable, scalable, and psychologically informed method for modeling public sentiment. The framework offers practical applications in policy testing, narrative framing analysis, and the development of synthetic populations for large-scale social simulation. More broadly, this work marks a paradigm shift from retrospective sentiment classification toward prospective, psychologically grounded simulation, leveraging the intersection of generative AI and behavioral sciences.

Acknowledgments

We extend our sincere thanks to Mojhune Gabriel Manzanillo for his dedicated work in generating the experimental results for this study.
We also gratefully acknowledge Adrian Gabonada for his insightful contributions, which significantly enriched the behavioral science interpretation and the discussion of our findings. We further thank Dannah Zemirah Junio for her guidance on statistical analysis; her input was instrumental in ensuring the rigor and validity of our evaluation methods. Model inferences and sentiment simulation were performed using compute resources provided by the Google Cloud for Startups Program.

References

[1] Bo Pang, Lillian Lee, et al. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval, 2(1–2):1–135, 2008.
[2] Bing Liu. Sentiment Analysis and Opinion Mining. Springer Nature, 2012.
[3] Tetsuya Nasukawa and Jeonghee Yi. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture, pages 70–77, 2003.
[4] Kushal Dave, Steve Lawrence, and David M Pennock. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of the 12th International Conference on World Wide Web, pages 519–528, 2003.
[5] Vincent Price and Peter Neijens. Opinion quality in public opinion research. International Journal of Public Opinion Research, 9(4):336–360, 1997.
[6] Yannis Charalabidis, Manolis Maragoudakis, and Euripides Loukis. Opinion mining and sentiment analysis in policy formulation initiatives: The eu-community approach. In Electronic Participation: 7th IFIP 8.5 International Conference, ePart 2015, Thessaloniki, Greece, August 30–September 2, 2015, Proceedings 7, pages 147–160. Springer, 2015.
[7] Eki Aidio Sukma, Achmad Nizar Hidayanto, Adam Imansyah Pandesenda, Arif Nur Yahya, Punto Widharto, and Untung Rahardja. Sentiment analysis of the new Indonesian government policy (omnibus law) on social media Twitter. In 2020 International Conference on
Informatics, Multimedia, Cyber and Information System (ICIMCIS), pages 153–158. IEEE, 2020.
[8] Jiri Hradec, Nicole Ostlaender, Alba Bernini, et al. Fables: Framework for autonomous behaviour-rich language-driven emotion-enabled synthetic populations. Technical report, Joint Research Centre, 2023.
[9] Jana Flor V Vizmanos, Sheila V Siar, Jose Ramon G Albert, Janina Luz C Sarmiento, and Angelo C Hernandez. Like, comment, and share: Analyzing public sentiments of government policies in social media. Technical report, PIDS Discussion Paper Series, 2023.
[10] Rodrigo Sandoval-Almazan and David Valle-Cruz. Sentiment analysis of Facebook users reacting to political campaign posts. Digital Government: Research and Practice, 1(2):1–13, 2020.
[11] Charles Crabtree, Matt Golder, Thomas Gschwend, and Indriði H Indriðason. It is not only what you say, it is also how you say it: The strategic use of campaign sentiment. The Journal of Politics, 82(3):1044–1060, 2020.
[12] John Paul P Miranda and Rex P Bringula. Exploring Philippine presidents' speeches: A sentiment analysis and topic modeling approach. Cogent Social Sciences, 7(1):1932030, 2021.
[13] Julieta M Umali, John Paul P Miranda, and Anicia L Ferrer. Sentiment analysis: A case study among the selected government agencies in the Philippines. International Journal, 9(3), 2020.
[14] Ashish Kumar Rathore and P Vigneswara Ilavarasan. Pre- and post-launch emotions in new product development: Insights from Twitter analytics of three products. International Journal of Information Management, 50:111–127, 2020.
[15] Mihalis Giannakis, Rameshwar Dubey, Shishi Yan, Konstantina Spanaki, and Thanos Papadopoulos. Social media and sensemaking patterns in new product development: Demystifying the customer sentiment. Annals of Operations Research, 308:145–175, 2022.
[16] Jenny Yow Bee Yin, Nor Hasliza Md Saad, and Zulnaidi Yaacob. Exploring sentiment analysis on e-commerce business: Lazada and Shopee.
TEM Journal, 11(4):1508, 2022.
[17] Praphula Kumar Jain, Rajendra Pamula, and Gautam Srivastava. A systematic literature review on machine learning applications for consumer sentiment analysis using online reviews. Computer Science Review, 41:100413, 2021.
[18] Pawanjit Singh Ghatora, Seyed Ebrahim Hosseini, Shahbaz Pervez, Muhammad Javed Iqbal, and Nabil Shaukat. Sentiment analysis of product reviews using machine learning and pre-trained LLM. Big Data and Cognitive Computing, 8(12):199, 2024.
[19] Jan Ole Krugmann and Jochen Hartmann. Sentiment analysis in the age of generative AI. Customer Needs and Solutions, 11(1):3, 2024.
[20] Yanying Mao, Qun Liu, and Yu Zhang. Sentiment analysis methods, applications, and challenges: A systematic literature review. Journal of King Saud University-Computer and Information Sciences, page 102048, 2024.
[21] Jamin Rahman Jim, Md Apon Riaz Talukder, Partha Malakar, Md Mohsin Kabir, Kamruddin Nur, and Mohammed Firoz Mridha. Recent advancements and challenges of NLP-based sentiment analysis: A state-of-the-art review. Natural Language Processing Journal, page 100059, 2024.
[22] Amin Mahmoudi. Identifying biased users in online social networks to enhance the accuracy of sentiment analysis: A user behavior-based approach. arXiv preprint arXiv:2105.05950, 2021.
[23] Junjie Lin, Wenji Mao, and Daniel D Zeng. Personality-based refinement for sentiment classification in microblog. Knowledge-Based Systems, 132:204–214, 2017.
[24] Jiyoung Park and Sang Eun Woo. Personality associations with attitudes toward AI. In The Impact of Artificial Intelligence