arXiv:2505.21876v1 [cs.CV] 28 May 2025

EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance

Zun Wang, Jaemin Cho, Jialu Li, Han Lin, Jaehong Yoon, Yue Zhang, Mohit Bansal
UNC Chapel Hill
{zunwang, jmincho, jialuli, hanlincs}@cs.unc.edu
{jhyoon, yuezhan, mbansal}@cs.unc.edu
https://zunwang1.github.io/Epic

Abstract

Controllable 3D camera trajectories in video diffusion models are highly sought after for content creation, yet remain a significant challenge. Recent approaches often create anchor videos (i.e., rendered videos that approximate desired camera motions) to guide diffusion models as a structured prior, by rendering from estimated point clouds following annotated camera trajectories. However, errors inherent in point cloud estimation often lead to inaccurate anchor videos. Moreover, the requirement for extensive camera trajectory annotations further increases resource demands. To address these limitations, we introduce EPiC, an efficient and precise camera control learning framework that automatically constructs high-quality anchor videos without expensive camera trajectory annotations. Concretely, we create highly precise anchor videos for training by masking source videos based on first-frame visibility. This approach ensures high alignment, eliminates the need for camera trajectory annotations, and thus can be readily applied to any in-the-wild video to generate image-to-video (I2V) training pairs. Furthermore, we introduce Anchor-ControlNet, a lightweight conditioning module that integrates anchor-video guidance in visible regions into pretrained video diffusion models, with less than 1% of backbone model parameters. By combining the proposed anchor video data and ControlNet module, EPiC achieves efficient training with substantially fewer parameters, training steps, and less data, without requiring modifications to the diffusion model backbone typically needed to mitigate rendering misalignments.
Although trained on masking-based anchor videos, our method generalizes robustly to anchor videos made with point clouds during inference, enabling precise 3D-informed camera control. EPiC achieves state-of-the-art performance on RealEstate10K and MiraData for the I2V camera control task, demonstrating precise and robust camera control both quantitatively and qualitatively. Notably, EPiC also exhibits strong zero-shot generalization to video-to-video (V2V) scenarios. This is compelling, as it is trained exclusively on I2V data, where anchor videos are derived from source videos using only their first frame for visibility referencing.

1 Introduction

Recent advancements in video diffusion models (VDMs) [6, 19, 26, 33, 57, 78, 9, 34] have dramatically enhanced the ability to generate dynamic and realistic videos. As video generation becomes increasingly practical and widespread, controllability has emerged as a crucial requirement for creating personalized and creative content. Previous works have explored various control signals to guide video generation, such as optical flow [31, 35, 15], object trajectories [71, 65, 80, 52, 12, 60], human poses [43, 39], and depth maps [39, 14]. In particular, controlling camera trajectories during the video generation process has emerged as a key research focus, facilitating precise spatio-temporal manipulation essential for downstream applications such as film recapturing [4, 74], virtual cinematography [48], and augmented reality rendering [51]. To achieve precise camera control, recent works [48, 74, 11, 77, 75] have adopted explicit 3D-informed guidance for generation. The core idea is to construct an 'anchor video' (i.e., a video that approximates the desired camera motion to guide a diffusion model as a structured prior), by lifting a condition image into a
3D point cloud and rendering it along the camera trajectory. Training the camera control module typically requires the anchor video and the corresponding full source video as input-output pairs, ideally with perfect geometric alignment. This assumes access to ground-truth 3D point clouds and camera trajectories, which are hard to obtain. As a workaround, existing methods synthesize training anchor-source video pairs by using source videos with high-quality camera annotations and estimating a point cloud from the first frame via off-the-shelf estimators [58, 68], which is then rendered along the annotated trajectory as the anchor video. However, these estimators often introduce geometric inaccuracies, leading to misaligned regions in the rendered anchor videos (as illustrated in Fig. 1 (a)) and making training more challenging, as the model must additionally learn to correct random misalignments beyond filling invisible regions. Moreover, the requirement of annotated camera trajectories for the source video restricts training data to multi-view video datasets such as RealEstate10K [84] and DL3DV [40]. These datasets mainly feature static scenes, thereby limiting the generalization ability of the trained camera control module to more dynamic or diverse real-world settings. To address these issues, we propose EPiC, for learning Efficient and Precise Video Camera control by crafting precisely-aligned training anchor videos with a lightweight ControlNet model design (Sec. 4). Our key insight is that anchor videos should be well-aligned with the source videos to make learning both easier and more efficient, transforming the task from repairing misaligned content to the simpler task of copying visible regions. Thus, unlike previous approaches that render anchor videos from inaccurate 3D point clouds, which are often misaligned with the source video and rely on annotated camera trajectories (Fig.
1 (a) right), we directly synthesize anchor videos by masking the source video based on first-frame visibility (Sec. 4.1), as described in Fig. 1 (b). Specifically, for each subsequent frame, we estimate its pixel trajectories with respect to the first frame from dense optical flow [54], preserving only those pixels that can be reliably traced back to the first frame. Pixels with no valid correspondence in the first frame are masked out. This process effectively mimics the key property of anchor videos—all new regions relative to the first frame are invisible—while ensuring precise alignment in visible regions (Fig. 1 (b) right). Furthermore, our approach eliminates the need for camera trajectory annotations, allowing anchor videos to be created from any in-the-wild source. In addition, in contrast to prior methods that require extensive backbone modifications or heavy fine-tuning, we introduce a lightweight Anchor-ControlNet (Sec. 4.2), which has only 30M parameters (less than 1% of the parameters of the CogVideoX-5B backbone) and injects anchor-video-based control signals into the generation process with the base model frozen. Unlike previous methods, such as ViewCrafter [75], which condition on the entire anchor video without visibility awareness, we apply visibility-aware masking to the outputs of our Anchor-ControlNet. Specifically, the ControlNet's output is added to the latent representation only within the visible regions, leaving the unseen areas untouched. This design simplifies the ControlNet's task to copying visible content, while delegating the
synthesis of occluded or invisible regions entirely to the base diffusion model. This clear division of responsibility not only reduces learning difficulty but also improves overall generation quality. Combining these components, we demonstrate that anchor-video-based camera control can be learned in a highly efficient manner, achieving strong performance with just 5K in-the-wild training videos and 500 training steps, which is less than 10% of the data and iterations used in prior approaches. Extensive experiments demonstrate that EPiC achieves state-of-the-art performance in camera accuracy (e.g., RotErr, TransErr) and camera motion stability (measured by the standard deviation of generated trajectories across different seeds) on image-to-video (I2V) camera control tasks in both indoor and game environments. In addition to being significantly more efficient in data, computation, and model size, EPiC also generalizes effectively to video-to-video (V2V) camera control in a zero-shot manner, despite being trained solely on I2V data. Ablation studies show the effectiveness of our anchor video method and ControlNet design. Our contributions are as follows:

• A novel anchor video construction pipeline with visibility-based masking that produces well-aligned anchor-source video pairs without requiring camera trajectory annotations, enabling learning from in-the-wild videos.
Figure 1: Comparison of anchor video creation methods for training camera control models. (a) Previous methods ([48, 75]) estimate the 3D point cloud (through depth estimation) using the first frame and render anchor videos with annotated camera trajectories, but suffer from region misalignment due to point-cloud estimation errors while being limited to camera-pose-annotated data, resulting in inefficient training (>50k videos; >10k iterations). (b) Our method creates anchor videos via visibility masking based on first-frame pixel tracking. This not only guarantees accurate geometric alignment but also supports diverse data while largely reducing training costs (5k videos; 500 iterations). We highlight the video regions in red and green boxes to compare the alignment quality.

• A lightweight Anchor-ControlNet architecture with visibility-aware output masking, allowing efficient and precise conditioning on anchor videos.
• State-of-the-art performance on both I2V and V2V camera control tasks with high efficiency in training data, compute, and model size compared to state-of-the-art methods.

2 Related Work

Image/Text-Based Camera Control in VDMs. Controlling camera trajectories in text-to-video (T2V) and I2V generation has recently received increasing attention. A common approach is to inject explicit camera parameters (e.g., Plücker embeddings) into VDMs [62, 28, 2, 1, 53, 24, 81, 67, 63, 76, 38, 23, 83, 37] for conditioning.
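For intuition, the Plücker embedding mentioned above assigns each pixel a 6-channel ray representation (direction plus moment) derived from the camera pose. The following is a minimal NumPy sketch; the function name and conventions (world-to-camera extrinsics, pixel-center sampling) are our assumptions, not a specific paper's implementation.

```python
import numpy as np

def plucker_embedding(K, R, t, H, W):
    """Per-pixel Plücker ray embedding (6 channels) for one camera.

    K: (3, 3) intrinsics; R, t: world-to-camera rotation/translation.
    Returns an (H, W, 6) array of (direction, moment) per pixel.
    """
    # Camera center in world coordinates: o = -R^T t
    o = -R.T @ t
    # Pixel-center grid in homogeneous image coordinates
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)        # (H, W, 3)
    # Ray directions in the world frame: d = R^T K^{-1} pix
    d = pix @ np.linalg.inv(K).T @ R                        # (H, W, 3)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    # Plücker moment: m = o x d
    m = np.cross(np.broadcast_to(o, d.shape), d)
    return np.concatenate([d, m], axis=-1)                  # (H, W, 6)
```

The direction/moment pair is translation-aware (unlike a direction-only encoding), which is why it is a popular dense camera conditioning signal.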
However, such parameter-conditioned models often generate world-inconsistent content due to the lack of explicit 3D guidance, especially in out-of-distribution scenarios. To mitigate this, recent works have shifted toward guiding generation with point-cloud renderings (anchor videos) as conditions to leverage geometric cues for more accurate camera control [75, 46, 27, 48, 82, 50, 11, 44, 41, 77, 79, 86, 69, 7]. Alternatively, some methods rely on trajectory tracking and encoding as intermediate guidance [31, 17, 66, 21], but such guidance is generally less direct than anchor video conditions and often results
in lower accuracy. Despite these advances, rendered anchor videos are often misaligned due to point-cloud estimation errors and require accurate camera annotations, limiting training to datasets like RealEstate10K. In addition, these methods rely on large-scale data to correct misalignment and address limited diversity. To overcome these limitations, we propose a masking-based anchor video construction method that achieves precise alignment while eliminating the need for camera annotations during training. We further introduce a visibility-aware ControlNet that learns to condition on the anchor video both efficiently and effectively.

Video-Based Camera Control. V2V camera control (also known as video recapturing) refers to redirecting camera trajectories in existing videos, enabling new possibilities in filmmaking, augmented reality, and other applications. However, this task presents unique challenges compared to T2V and I2V tasks. Specifically, it is difficult to capture comprehensive 4D information from original videos, making accurate reconstruction challenging. Additionally, obtaining ground-truth paired 4D videos for effective end-to-end training remains challenging. To address these issues, one research direction explores test-time optimization or fine-tuning on specific scenes [72, 77], allowing models to capture individual videos and thus reducing the reliance on large-scale annotated datasets. However, these methods require adaptation or optimization for each new video, resulting in considerable inference-time overhead. Another direction involves collecting large-scale paired videos from simulators such as Unreal Engine 5 [4, 5], the Kubric simulator [20, 55], or Animated Objaverse [16, 64, 18, 73, 56], but simulated videos often lack realism and diversity, reducing generalization to diverse real-world scenarios.
The most closely related approaches to ours are [8, 74], which also use structured 3D priors like anchor videos to guide video-to-video camera-controllable generation. Unlike their methods, which require extensive backbone tuning on large-scale, carefully crafted 4D datasets for V2V camera control, our method achieves efficient training using only a small amount of I2V data, with minimal backbone modification, yet generalizes well to the V2V setting.

3 Background: Video Diffusion Models

We build on the framework of latent video diffusion models (VDMs), which generate videos by iteratively denoising latent representations in a compressed space. Given an RGB video x ∈ R^{L×3×H×W}, a pre-trained 3D-VAE is used to encode the video into a latent variable z = E(x) ∈ R^{L'×C×h×w}, where L is the number of input frames and H×W the frame resolution, and L', C, and h×w are the sequence length, channel count, and spatial resolution of z, respectively. Training diffusion models involves learning the reverse of a forward (noising) process. In the forward process, a clean latent sample z_0 ∼ p_data(z) is gradually corrupted with Gaussian noise: z_t = √(ᾱ_t) z_0 + √(1−ᾱ_t) ε, with ε ∼ N(0, I). At each timestep t, the model is trained to predict the noise ε from the noisy latent z_t conditioned on external signals c (e.g., image or text), by minimizing the denoising objective:

L_denoise = E_{z_0, t, ε, c} [ ‖ε_θ(z_t, t, c) − ε‖²₂ ]   (1)

At inference time, the model progressively denoises from Gaussian noise to the final latent representation ẑ, which is decoded by the 3D-VAE decoder D to generate the output video: x̂ = D(ẑ).

Base Model. We adopt CogVideoX [70] as our base model, which employs a DiT-style [45] transformer backbone with full 3D self-attention to jointly model spatial and
temporal dependencies across video frames. Specifically, we use the CogVideoX-5B-I2V variant, which supports both image and text conditions for flexible multimodal control during video generation.

Guiding VDMs with Anchor Video as a Structured Prior for Camera Control. Recent methods [75, 74, 11, 77] have leveraged anchor videos to enable controllable video generation with explicit camera motion control. Anchor videos are typically rendered along given camera trajectories from 3D point clouds constructed by lifting a single RGB image into 3D space, either using multi-view stereo approaches like DUSt3R [59], or by pixel unprojection from estimated monocular depth [68]. These anchor videos provide explicit geometry and camera motion signals, serving as a structured prior that guides the video generation to follow the intended camera trajectory. During training, the anchor video is created by lifting the first frame of the source video into 3D and rendering it along the source video's camera trajectory. The model then learns to reconstruct the source video conditioned on the anchor video. During inference, the anchor video is constructed similarly using the input image and a user-specified camera trajectory. However, existing methods face two major challenges: (1) Anchor videos derived from 3D point cloud estimations are often imprecise (as shown in Fig. 1 (a)), leading to difficulties during training (Fig. 5 (a)): the model must not only inpaint missing regions but also correct misaligned visible areas, resulting in inefficient learning. (2) Conditioning on anchor videos in the latent space typically requires fine-tuning the base model or injecting dense additional modules, which increases computational overhead and reduces model generalization (Table 1).
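To make the training objective in Eq. 1 concrete, here is a minimal NumPy sketch of one Monte-Carlo sample of the denoising loss; `predict_eps` stands in for the noise-prediction network ε_θ and `alpha_bar` for the cumulative schedule ᾱ (both assumed interfaces, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_loss(predict_eps, z0, cond, alpha_bar):
    """One Monte-Carlo sample of the denoising objective (Eq. 1)."""
    t = int(rng.integers(0, len(alpha_bar)))             # random timestep
    a = alpha_bar[t]                                     # cumulative alpha-bar at t
    eps = rng.standard_normal(z0.shape)                  # Gaussian noise
    z_t = np.sqrt(a) * z0 + np.sqrt(1.0 - a) * eps       # forward (noising) step
    eps_hat = predict_eps(z_t, t, cond)                  # predict the injected noise
    return float(np.mean((eps_hat - eps) ** 2))          # MSE on the noise
```

In practice the expectation is estimated with mini-batches of latents, timesteps, and noise draws; the sketch shows a single sample for clarity.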
To overcome these limitations, we introduce EPiC, a novel and efficient framework for learning precise camera control with masking-based anchor videos and a lightweight Anchor-ControlNet, which we describe in detail next.

4 EPiC: An Efficient Framework for Learning Precise Camera Control

Our key idea is to enable controllable video generation through precise anchor-video guidance. Fig. 2 illustrates the overall architecture of our framework. We first construct precisely aligned anchor and source videos as training input-output pairs with a visibility-based masking strategy (Sec. 4.1). Then, we introduce a lightweight Anchor-ControlNet that learns to reconstruct the source video from the anchor video efficiently (Sec. 4.2). Finally, we describe our training and inference details (Sec. 4.3).

Figure 2:
EPiC Model Architecture. (a) shows an overview of our EPiC framework. EPiC supports multiple inference scenarios. (b) and (c) illustrate our I2V inference scenarios using full and masked point clouds, respectively. (d) depicts the V2V inference scenario employing dynamic point clouds.

4.1 Constructing Precise Anchor Videos from Source Videos via Visibility-Based Masking

We aim to construct anchor videos that are well-aligned with the source videos, making the learning process easier and more efficient. To achieve this, we construct anchor videos through a masking strategy that preserves alignment while mimicking the geometric characteristics of point-cloud-rendered videos. Specifically, our process consists of the following two steps.

Figure 3: Anchor video construction.

Step 1: Pixel-Level Visibility Tracking and Masking. We estimate pixel trajectories in the source video using dense optical flow from the first frame (computed via RAFT [54]) to determine whether each pixel remains visible from the original viewpoint (see Appendix for details). This pixel tracking simulates how content moves or disappears due to viewpoint shifts or occlusion. We derive a binary visibility mask for each frame from this tracking information, retaining only regions consistently traced from the original view and masking out the rest. This process effectively mimics the core property of anchor videos, which exclude newly revealed content, while ensuring precise alignment in the visible regions. In cases where the visible region becomes too small due to large viewpoint shifts, we freeze the mask in subsequent frames to prevent further degradation. The masked source video is obtained by applying the visibility mask to the source video, as shown in Fig. 3.

Step 2: Artifact Injection.
A major limitation of estimated point clouds is the presence of flying-pixel artifacts, especially around object boundaries (see Fig. 2 (d), where splatted flying pixels appear near the dog's edges in both point cloud examples). These errors propagate to the anchor video, resulting in flying-pixel artifacts. To improve robustness, we simulate this flying-pixel effect during training by injecting synthetic dashed rays into the masked anchor video, narrowing the gap between training and inference (see Fig. 3, bottom red box). Specifically, we randomly sample a direction and draw multiple rays perpendicular to it, with colors sampled from the first frame to ensure temporal consistency. These rays are faded and dashed to resemble flying-pixel artifacts, and are applied only within the visible regions defined by the mask, which helps the model learn to ignore such artifacts during inference. The artifact-injected video is used as the final anchor video for training.

4.2 Guiding Video Diffusion with Anchor-ControlNet

We introduce Anchor-ControlNet, a variant of ControlNet that guides the base video diffusion model using the constructed anchor video as the condition (Fig. 2 (a)). Unlike previous methods such as ViewCrafter [75], which fine-tunes the entire model, or Gen3C [48], which fine-tunes all temporal layers of the backbone, we follow the principle of using minimal parameters for downstream adaptation to preserve the model's core generation capability [49]. To this end, we adopt a lightweight ControlNet design (<30M parameters) and keep the entire
backbone frozen during training.

Model Architecture. Anchor-ControlNet is a lightweight DiT-based module designed to inject anchor video guidance into the base diffusion model. Given an anchor video A, we encode it using the 3D-VAE from the backbone model to obtain latent features z_anchor. During the reverse diffusion process, the noisy latent z_t is concatenated with z_anchor along the channel dimension. The combined representation is then patchified and fed into the ControlNet DiT block. The DiT block in Anchor-ControlNet adopts a reduced hidden dimension (256, compared to 3072 in the base model) to maintain efficiency. Its output is projected back to match the backbone's dimension and added to the corresponding layer in the base DiT model. The projection layer is zero-initialized, following standard practice in ControlNet, to ensure stable integration at the beginning of training.

Visibility-Aware Output Masking. Previous works, such as ViewCrafter [75], condition directly on the entire anchor video without visibility awareness. This forces the model to simultaneously repair misaligned regions and inpaint invisible (black) areas, making the learning task unnecessarily difficult and increasing the risk of incorrect region repair during inference. In contrast, with our aligned anchor videos, we address these issues by clearly distinguishing visible and invisible content: the ControlNet focuses solely on copying visible content, while the synthesis of occluded or invisible regions is entirely delegated to the base diffusion model. Formally, we restrict the control signal from the anchor video to affect only visible regions by applying a binary visibility mask M ∈ {0,1}^{T'×h×w} to the output of the ControlNet. We downsample the visibility mask derived from the renderings to match the latent resolution, and use it to selectively update the base model's latent features (Fig. 2 (a), latent mask).
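This masked latent update amounts to a masked residual addition; a minimal NumPy sketch (shapes follow the text, the function name is ours):

```python
import numpy as np

def fuse_with_visibility(base_out, ctrl_out, mask):
    """Add the projected ControlNet output only where the anchor video is
    visible (mask == 1); invisible regions keep the base model's latent so
    the frozen backbone can inpaint them.

    base_out, ctrl_out: (T', C, h, w) latents; mask: (T', h, w) in {0, 1}.
    """
    m = mask[:, None, :, :].astype(base_out.dtype)   # broadcast over channels
    return base_out + m * ctrl_out
```
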
The ControlNet output is first computed as z̃ = Proj(DiT_ctrl([z_t, z_anchor])), and then added to the base model output at visible positions:

ẑ_{i,j} = DiT_base(z_t)_{i,j} + z̃_{i,j},   if M_{i,j} = 1,
ẑ_{i,j} = DiT_base(z_t)_{i,j},              otherwise,   (2)

where i, j are the indices for height and width. This visibility-aware latent fusion is applied during both training and inference, allowing the base model to inpaint disoccluded or invisible regions, while Anchor-ControlNet focuses on controlling the visible content aligned with the anchor video.

4.3 Training and Inference

In this section, we outline the training and inference paradigm of our framework. EPiC supports multiple inference scenarios, including I2V and V2V, enabling flexible adaptation to diverse applications.

Training. We create our masking-based anchor videos from in-the-wild source videos to construct training data. We train the Anchor-ControlNet on our collected anchor and source video pairs by conditioning on the anchor video to predict the source video with the training objective in Eq. 1. Details of our in-the-wild video data are provided in Sec. 5.1.

I2V Inference. We consider two distinct inference scenarios for I2V: inference (i) with full point clouds (illustrated in Fig. 2 (b)) and (ii) with masked point clouds (shown in Fig. 2 (c)). In the first scenario, given an input image and a target camera trajectory, we first estimate the metric depth using DAv2 [68]. We then unproject the image into a 3D point cloud
and render the anchor video along the specified camera trajectory. However, this approach produces anchor videos where objects remain static, as rendering is performed from a stationary point cloud. For example, the character in Fig. 2 (b) retains the same position and pose throughout the video, limiting its dynamic realism. To overcome this limitation and support dynamic object movement while preserving precise camera control, we propose inference with masked point clouds. Specifically, given a single input image, we employ GroundedSAM [47] to identify and segment potentially dynamic objects (e.g., "person", "animal") from a predefined category list. Users may also provide customized category lists or click-based prompts to generate tailored segmentation masks. During 3D point cloud projection, we exclude points within the segmented regions (note that we dilate each mask boundary to capture outlier points

Table 1: Quantitative evaluation results on RealEstate10K [85] and MiraData [32] for the I2V camera control task. The best numbers are highlighted in bold. The total score is computed by averaging all quality metrics. † indicates re-implementation results on the I2V task.
| Dataset | Method | Total | Subject Consist | Bg Consist | Motion Smooth | Temporal Flicker | Aesthetic Quality | Imaging Quality | Rotation Error (↓) | Translation Error (↓) | CamMC (↓) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RE10K | CameraCtrl [22] | 78.35 | 89.95 | 91.25 | 97.16 | 91.99 | 43.32 | 56.43 | 1.12±0.44 | 1.78±0.93 | 2.36±1.01 |
| RE10K | AC3D† [1] | 82.63 | 91.96 | 92.77 | 98.30 | 96.23 | 50.97 | 65.56 | 0.86±0.37 | 1.50±0.82 | 1.97±0.86 |
| RE10K | ViewCrafter [75] | 81.18 | 90.23 | 92.99 | 97.74 | 93.51 | 48.29 | 64.33 | 0.50±0.16 | 1.05±0.32 | 1.35±0.40 |
| RE10K | EPiC (Ours) | 82.63 | 91.62 | 93.43 | 98.48 | 96.47 | 51.19 | 64.57 | 0.40±0.11 | 0.86±0.18 | 1.17±0.23 |
| MIRA | CameraCtrl [22] | 78.06 | 89.28 | 91.15 | 97.30 | 90.22 | 49.35 | 51.11 | 1.62±0.84 | 4.67±1.47 | 5.66±2.06 |
| MIRA | AC3D† [1] | 82.78 | 91.75 | 92.81 | 98.20 | 94.77 | 57.64 | 61.51 | 1.13±0.74 | 3.98±1.50 | 4.79±1.53 |
| MIRA | ViewCrafter [75] | 79.87 | 86.56 | 91.55 | 96.26 | 91.71 | 54.21 | 58.92 | 1.16±0.34 | 2.95±0.98 | 3.42±1.04 |
| MIRA | EPiC (Ours) | 82.89 | 91.82 | 92.94 | 98.75 | 94.86 | 57.94 | 61.03 | 0.66±0.22 | 1.78±0.67 | 2.10±0.60 |

near the edges). These masked areas are omitted when rendering the anchor video. Our design allows the preserved background to drive camera motion while leaving the segmented foreground objects unconstrained, enabling natural movement within the generated video.

V2V Inference. EPiC also supports V2V camera control (Fig. 2 (d)). Given an input video, we apply DepthCrafter [29] to estimate continuous depths and construct a dynamic point cloud. The anchor video is then rendered by replaying the target trajectory over the 4D representation. Note that since the base I2V model is frozen, we provide the first frame of the conditional video as input to the model.

5 Experiments

5.1 Experimental Setup

Datasets and Baselines. We compare EPiC and recent baselines in the I2V setting on the RealCam-Vid test set [38] from two data sources, RealEstate10K (RE10K) [85] and MiraData (MIRA) [32], consisting mainly of indoor scenes and gaming environments. For each dataset, we sample 500 videos for evaluation. As baselines, we consider SoTA methods including CameraCtrl [22], AC3D [
1] and ViewCrafter [75]. For consistency, we use similar anchor videos per test sample for both ViewCrafter and EPiC. For the V2V setting, we follow Gen3C [48] to qualitatively evaluate on Sora videos [10] and provide quantitative results on Kubric4D [20] scenes in the Appendix.

Implementation Details. EPiC is trained on 5,000 videos from the Panda70M dataset [13] for 500 iterations, using a total batch size of 16 across 8 40G A100 GPUs. The text condition for the I2V backbone is obtained from the annotated captions in Panda70M. Training takes less than 3 hours with a learning rate of 2×10^-4, using the AdamW [42] optimizer. During inference, we apply classifier-free guidance (CFG) with a scale of 6.0 for text conditioning. More details are in the Appendix.

Table 2: Training efficiency comparison. EPiC achieves better results (see Table 1) with significantly fewer data and steps.

| Method | # Videos | # Iter. | Batch Size |
|---|---|---|---|
| CameraCtrl [22] | >70K | 50K | 32 |
| AC3D [1] | 70K | 10K | 8 |
| ViewCrafter [75] | 630K | 50K | 16 |
| EPiC (Ours) | 5K | 0.5K | 16 |

Evaluation Metrics. For camera-related metrics, we follow prior works [61, 22] and report Rotation Error (RotError), Translation Error (TransError), and CamMC, which respectively measure orientation differences, positional errors, and overall camera pose consistency between the predicted and ground-truth trajectories. To account for randomness, we sample five fixed random seeds per test instance and report the mean and standard deviation of each camera metric. For visual quality, we adopt the evaluation protocol from VBench [30], including metrics such as Subject Consistency, Background Consistency, Motion Smoothness, Temporal Flickering, Aesthetic Quality, and Imaging Quality. Detailed definitions of these metrics are provided in the Appendix.

5.2 Quantitative Evaluation

In Table 1, we compare EPiC and recent SOTA camera control methods (CameraCtrl, AC3D, ViewCrafter) on RealEstate10K (RE10K) and MiraData (MIRA).
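For intuition, rotation and translation errors between predicted and ground-truth pose trajectories can be sketched as below. This is a common formulation (geodesic angle between rotation matrices, L2 distance between camera positions), not necessarily the exact normalization used by the cited evaluation protocols.

```python
import numpy as np

def rot_error_deg(R_pred, R_gt):
    """Geodesic angle between two 3x3 rotation matrices, in degrees."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def trajectory_errors(pred_poses, gt_poses):
    """Mean rotation (deg) and translation (L2) error over lists of (R, t) poses."""
    rot = np.mean([rot_error_deg(Rp, Rg)
                   for (Rp, _), (Rg, _) in zip(pred_poses, gt_poses)])
    trans = np.mean([np.linalg.norm(tp - tg)
                     for (_, tp), (_, tg) in zip(pred_poses, gt_poses)])
    return rot, trans
```

Averaging these per-frame errors over a trajectory, and their standard deviation over seeds, mirrors how the camera metrics in Table 1 are reported.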
EPiC achieves comparable quality scores to those of prior approaches across both the RE10K and MIRA benchmarks. EPiC attains the highest total score on both datasets (82.63 on RE10K and 82.89 on MIRA), suggesting strong subject/background consistency, smooth motion, and reduced temporal flicker. Furthermore, our method significantly outperforms existing baselines in Camera Score, achieving the lowest rotation and translation errors as well as CamMC. This demonstrates superior fidelity in controlling camera trajectories, along with the best robustness across different seeds, as reflected by the lowest standard deviations. These results highlight EPiC's ability to ensure both high-quality video generation and precise camera control. Notably, as shown in Table 2, EPiC achieves better performance while using less than 10% of the training data and at most 5% of the training steps required by baseline methods.

Figure 4: Generated videos compared with other camera control methods for I2V and V2V tasks. (a) I2V camera control; (b) V2V camera control.

5.3 Qualitative Examples

Fig. 4 compares camera control results from EPiC and SOTA open-source baselines in both I2V and V2V settings. For I2V, we include ViewCrafter [75] and AC3D [1]; for V2V, we compare against GCD [55] and ViewCrafter. AC3D is excluded from the V2V comparison as it is
conditioned on a single image and cannot follow dense source video motions. AC3D and GCD are conditioned on camera embeddings, whereas ViewCrafter, like ours, is conditioned on anchor videos.

I2V Camera Control. As shown in Fig. 4 (a), both ViewCrafter (3rd row) and our method (4th row) are capable of following anchor videos. However, as shown in the ViewCrafter row, it often introduces content inconsistencies (red boxes): for example, it gradually changes a painting to a glass-like material (3rd column), and produces severe distortions around the sofa (4th column) and chairs (5th column). Such deviations from the anchor video are potentially due to ViewCrafter learning to over-repair misaligned regions, a side effect of being trained with misaligned point-cloud-based anchor videos. In contrast, our method faithfully preserves visible content thanks to learning from aligned anchor videos (shown in green boxes). As a baseline without anchor video guidance, AC3D fails to follow the desired camera trajectory. It is worth noting that this example is taken from the RealEstate10K test set, which is an in-domain evaluation setting for both ViewCrafter and AC3D, as they are trained extensively on RealEstate10K videos. Even so, our method demonstrates superior accuracy and quality.

V2V Camera Control. As shown in Fig. 4 (b), while ViewCrafter can roughly follow the anchor video in the background (e.g., beach and trees), it fails to reproduce the foreground motion accurately. In the 2nd column of the ViewCrafter row, the dog does not turn its head as in the reference video, and in the 3rd column, the dog's shape appears distorted (e.g., hind leg and nose). GCD produces blurry foregrounds and lacks fidelity. In contrast, our method successfully captures both background and foreground motion, faithfully recapturing the reference video through anchor-video guidance.

5.4 Ablation Studies

Effects of Different Types of Anchor Videos.
We evaluate the effects of different types of anchor videos in Table 3 and Fig. 5 (a). For a fair comparison, we select 5K videos with significant camera movement from RealEstate10K, and obtain the anchor video using either a classical point cloud-based method or our visibility-based masking method. We train on point cloud-based anchor videos for 1500 iterations, and on masking-based ones for 500 iterations. Table 3 shows that training with point cloud-based anchors leads to higher errors and less stable results with larger standard deviations.

Table 3: Results of training with different anchor video types on the RealEstate10K dataset.

Anchor Video Type                 RotErr (↓)   TransErr (↓)   CamMC (↓)
Point cloud-based (1500 iters)    0.60±0.20    1.07±0.39      1.45±0.62
Masking-based (500 iters; Ours)   0.40±0.11    0.86±0.18      1.17±0.23

Figure 5: Qualitative examples for the ablation study: (a) training results with different anchor videos, (b) artifact injection, (c) visibility-aware output masking, (d) masked point clouds.

In Fig. 5 (a), due to misalignment, point cloud-based anchor videos lead to slower convergence, producing significantly higher loss than masking-based ones, even with 3× more training. Qualitative results show that models trained with point cloud-based anchors fail to follow the anchor precisely, producing misaligned geometry (red dashed lines in the point cloud-based row), as the model learns an additional task of repairing visible regions, whereas ours faithfully follows (green dashed
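As a rough illustration of the masking-based anchor construction, the sketch below keeps only the pixels of each frame that are visible from the first frame and zeroes out everything else. The per-frame visibility masks are assumed to be precomputed (e.g., from tracking or warping); function names and shapes are illustrative, not the paper's implementation:

```python
import numpy as np

def build_anchor_video(frames, visibility_masks):
    # Keep only pixels visible from the first frame; zero out the rest,
    # leaving those regions for the diffusion model to generate.
    # frames: (T, H, W, C) float array; visibility_masks: (T, H, W) boolean.
    return frames * visibility_masks[..., None]

frames = np.ones((2, 4, 4, 3))                 # toy 2-frame "video"
masks = np.ones((2, 4, 4), dtype=bool)
masks[1, :, 2:] = False                        # right half not visible in frame 0
anchor = build_anchor_video(frames, masks)
```

Because the kept pixels come directly from the source video, the anchor is exactly aligned with the training target by construction, unlike a rendered point-cloud anchor.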
lines).

Effects of Artifact Injection for Constructing Training Anchor Videos. Fig. 5 (b) demonstrates the effectiveness of artifact injection, as described in Sec. 4.1. Due to point cloud estimation errors, flying pixels often appear when rendering from rapidly changing camera poses, resulting in incorrect guidance even within visible regions. Without artifact injection, the model follows these flawed inputs, leading to similar artifacts at inference (red box). In contrast, with artifact injection, the model learns to repair such artifacts during training, resulting in cleaner outputs (green box).

Effects of Visibility-Aware Output Masking. One crucial design in our Anchor-ControlNet is the visibility-aware output masking strategy, which enables the model to control only the visible regions, as described in Sec. 4.2. We conduct an ablation study by training modules without mask awareness, similar to ViewCrafter. As shown in Fig. 5 (c), without output masking, the model is influenced by tearing artifacts rendered from the point cloud, which guide it to generate ambiguous content in these corrupted regions (see red boxes). In contrast, our method excludes such regions from the control signal, allowing the model to generate reasonable and faithful content (green boxes).

Effects of Masked Point Clouds for Dynamic Objects. Fig. 5 (d) shows examples of results using the masked point cloud to enable dynamic objects, as described in Sec. 4.3. Without masking (with the full point cloud), the generated video is static: the character (in the red boxes) stands still due to strong 3D guidance in the anchor video. In contrast, masking the point cloud removes control signals from the character, allowing it to move freely and enabling a natural walking motion (as shown in the green box).
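The visibility-aware output masking above can be sketched as a masked residual: the ControlNet output is added to the backbone features only where the anchor video is trustworthy. Shapes and names here are illustrative, not the actual Anchor-ControlNet code:

```python
import numpy as np

def apply_masked_control(backbone_feat, control_residual, visibility_mask):
    # Add the ControlNet residual only inside visible regions; occluded or
    # corrupted regions receive no control and are generated freely.
    # backbone_feat, control_residual: (H, W, C); visibility_mask: (H, W).
    return backbone_feat + control_residual * visibility_mask[..., None]

feat = np.zeros((4, 4, 2))
residual = np.ones((4, 4, 2))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                       # only the left half is visible
out = apply_masked_control(feat, residual, mask)
```

Zeroing the residual, rather than the features, leaves the pretrained backbone's behavior untouched in invisible regions.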
6 Conclusion

We propose EPiC, an efficient framework that constructs high-quality training anchors by masking source videos based on first-frame visibility, removing the need for camera-trajectory annotations and enabling application to in-the-wild videos. We further introduce Anchor-ControlNet, a lightweight adapter that learns to copy visible regions from the anchor video, requiring neither large models, extensive data, nor backbone modifications to correct misalignment. EPiC outperforms previous methods on various visual quality and camera control metrics. Qualitative experiments in I2V and V2V scenarios, along with comprehensive ablation studies, also validate our design choices.

Acknowledgments

This work was supported by DARPA ECOLE Program No. HR00112390060, NSF-AI Engage Institute DRL-2112635, DARPA Machine Commonsense (MCS) Grant N66001-19-2-4031, ARO Award W911NF2110220, ONR Grant N00014-23-1-2356, Accelerate Foundation Models Research program, and a Bloomberg Data Science PhD Fellowship. The views contained in this article are those of the authors and not of the funding agency.

References

[1] S. Bahmani, I. Skorokhodov, G. Qian, A. Siarohin, W. Menapace, A. Tagliasacchi, D. B. Lindell, and S. Tulyakov. Ac3d: Analyzing and improving 3d camera control in video diffusion transformers. arXiv preprint arXiv:2411.18673, 2024.

[2] S. Bahmani, I. Skorokhodov, A. Siarohin, W. Menapace, G. Qian, M. Vasilkovsky, H.-Y. Lee, C. Wang, J. Zou, A. Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024.

[3] J. Bai, S. Bai, S. Yang, S. Wang, S. Tan, P. Wang, J. Lin, C. Zhou,
and J. Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023.

[4] J. Bai, M. Xia, X. Fu, X. Wang, L. Mu, J. Cao, Z. Liu, H. Hu, X. Bai, P. Wan, et al. Recammaster: Camera-controlled generative rendering from a single video. arXiv preprint arXiv:2503.11647, 2025.

[5] J. Bai, M. Xia, X. Wang, Z. Yuan, X. Fu, Z. Liu, H. Hu, P. Wan, and D. Zhang. Syncammaster: Synchronizing multi-camera video generation from diverse viewpoints. Proc. ICLR, 2025.

[6] O. Bar-Tal, H. Chefer, O. Tov, C. Herrmann, R. Paiss, S. Zada, A. Ephrat, J. Hur, G. Liu, A. Raj, et al. Lumiere: A space-time diffusion model for video generation. In SIGGRAPH Asia 2024 Conference Papers, pages 1–11, 2024.

[7] E. Bernal-Berdun, A. Serrano, B. Masia, M. Gadelha, Y. Hold-Geoffroy, X. Sun, and D. Gutierrez. Precisecam: Precise camera control for text-to-image generation. arXiv preprint arXiv:2501.12910, 2025.

[8] W. Bian, Z. Huang, X. Shi, Y. Li, F.-Y. Wang, and H. Li. Gs-dit: Advancing video generation with pseudo 4d gaussian fields through efficient dense 3d point tracking. arXiv preprint arXiv:2501.02690, 2025.

[9] A. Blattmann, T. Dockhorn, S. Kulal, D. Mendelevitch, M. Kilian, D. Lorenz, Y. Levi, Z. English, V. Voleti, A. Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.

[10] T. Brooks, B. Peebles, C. Holmes, W. DePue, Y. Guo, L. Jing, D. Schnurr, J. Taylor, T. Luhman, E. Luhman, C. Ng, R. Wang, and A. Ramesh. Video generation models as world simulators. OpenAI technical reports, 2024.

[11] C. Cao, J. Zhou, S. Li, J. Liang, C. Yu, F. Wang, X. Xue, and Y. Fu. Uni3c: Unifying precisely 3d-enhanced camera and human motion controls for video generation. arXiv preprint arXiv:2504.14899, 2025.

[12] T.-S. Chen, C. H. Lin, H.-Y. Tseng, T.-Y. Lin, and M.-H. Yang.
Motion-conditioned diffusion model for controllable video synthesis. arXiv preprint arXiv:2304.14404, 2023.

[13] T.-S. Chen, A. Siarohin, W. Menapace, E. Deyneka, H.-w. Chao, B. E. Jeon, Y. Fang, H.-Y. Lee, J. Ren, M.-H. Yang, and S. Tulyakov. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.

[14] W. Chen, Y. Ji, J. Wu, H. Wu, P. Xie, J. Li, X. Xia, X. Xiao, and L. Lin. Control-a-video: Controllable text-to-video diffusion models with motion prior and reward feedback learning. arXiv preprint arXiv:2305.13840, 2023.

[15] Y. Cong, M. Xu, C. Simon, S. Chen, J. Ren, Y. Xie, J.-M. Perez-Rua, B. Rosenhahn, T. Xiang, and S. He. Flatten: Optical flow-guided attention for consistent text-to-video editing. arXiv preprint arXiv:2310.05922, 2023.

[16] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13142–13153, 2023.

[17] W. Feng, J. Liu, P. Tu, T.
Qi, M. Sun, T. Ma, S. Zhao, S. Zhou, and Q. He. I2vcontrol-camera: Precise video camera control with adjustable motion strength. arXiv preprint arXiv:2411.06525, 2024.

[18] R. Gao, A. Holynski, P. Henzler, A. Brussee, R. Martin-Brualla, P. Srinivasan, J. T. Barron, and B. Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In Proc. NeurIPS, 2024.

[19] R. Girdhar, M. Singh, A. Brown, Q. Duval, S. Azadi, S. Rambhatla, A. Shah, X. Yin, D. Parikh, and I. Misra. Emu video: Factorizing text-to-video generation by explicit image conditioning. arXiv preprint arXiv:2311.10709, 2023.

[20] K. Greff, F. Belletti, L. Beyer, C. Doersch, Y. Du, D. Duckworth, D. J. Fleet, D. Gnanapragasam, F. Golemo, C. Herrmann, et al. Kubric: A scalable dataset generator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3749–3761, 2022.

[21] Z. Gu, R. Yan, J. Lu, P. Li, Z. Dou, C. Si, Z. Dong, Q. Liu, C. Lin, Z. Liu, et al. Diffusion as shader: 3d-aware video diffusion for versatile video generation control. arXiv preprint arXiv:2501.03847, 2025.

[22] H. He, Y. Xu, Y. Guo, G. Wetzstein, B. Dai, H. Li, and C. Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024.

[23] H. He, Y. Xu, Y. Guo, G. Wetzstein, B. Dai, H. Li, and C. Yang. Cameractrl: Enabling camera control for video diffusion models. In The Thirteenth International Conference on Learning Representations, 2025.

[24] H. He, C. Yang, S. Lin, Y. Xu, M. Wei, L. Gui, Q. Zhao, G. Wetzstein, L. Jiang, and H. Li. Cameractrl ii: Dynamic scene exploration via camera-controlled video diffusion models. arXiv preprint arXiv:2503.10592, 2025.

[25] J. Ho and T. Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.

[26] W. Hong, M. Ding, W. Zheng, X. Liu, and J. Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022.
[27] C. Hou, G. Wei, Y. Zeng, and Z. Chen. Training-free camera control for video generation. arXiv preprint arXiv:2406.10126, 2024.

[28] Y. Hou, L. Zheng, and P. Torr. Learning camera movement control from real-world drone videos. arXiv preprint arXiv:2412.09620, 2024.

[29] W. Hu, X. Gao, X. Li, S. Zhao, X. Cun, Y. Zhang, L. Quan, and Y. Shan. Depthcrafter: Generating consistent long depth sequences for open-world videos. arXiv preprint arXiv:2409.02095, 2024.

[30] Z. Huang, Y. He, J. Yu, F. Zhang, C. Si, Y. Jiang, Y. Zhang, T. Wu, Q. Jin, N. Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807–21818, 2024.

[31] W. Jin, Q. Dai, C. Luo, S.-H. Baek, and S. Cho. Flovd: Optical flow meets video diffusion model for enhanced camera-controlled video synthesis. arXiv preprint arXiv:2502.08244, 2025.

[32] X. Ju, Y. Gao, Z. Zhang, Z. Yuan, X. Wang, A. Zeng, Y. Xiong, Q. Xu, and Y. Shan. Miradata: A large-scale video dataset with
long durations and structured captions. Advances in Neural Information Processing Systems, 37:48955–48970, 2024.

[33] L. Khachatryan, A. Movsisyan, V. Tadevosyan, R. Henschel, Z. Wang, S. Navasardyan, and H. Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15954–15964, 2023.

[34] D. Kondratyuk, L. Yu, X. Gu, J. Lezama, J. Huang, G. Schindler, R. Hornung, V. Birodkar, J. Yan, M.-C. Chiu, et al. Videopoet: A large language model for zero-shot video generation. arXiv preprint arXiv:2312.14125, 2023.

[35] M. Koroglu, H. Caselles-Dupré, G. J. Sanmiguel, and M. Cord. Onlyflow: Optical flow based motion conditioning for video diffusion models. arXiv preprint arXiv:2411.10501, 2024.

[36] J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pages 19730–19742. PMLR, 2023.

[37] L. Li, Z. Zhang, Y. Li, J. Xu, W. Hu, X. Li, W. Cheng, J. Gu, T. Xue, and Y. Shan. Nvcomposer: Boosting generative novel view synthesis with multiple sparse and unposed images. arXiv preprint arXiv:2412.03517, 2024.

[38] T. Li, G. Zheng, R. Jiang, T. Wu, Y. Lu, Y. Lin, X. Li, et al. Realcam-i2v: Real-world image-to-video generation with interactive complex camera control. arXiv preprint arXiv:2502.10059, 2025.

[39] H. Lin, J. Cho, A. Zala, and M. Bansal. Ctrl-adapter: An efficient and versatile framework for adapting diverse controls to any diffusion model. arXiv preprint arXiv:2404.09967, 2024.

[40] L. Ling, Y. Sheng, Z. Tu, W. Zhao, C. Xin, K. Wan, L. Yu, Q. Guo, Z. Yu, Y. Lu, et al. Dl3dv-10k: A large-scale scene dataset for deep learning-based 3d vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22160–22169, 2024.

[41] F. Liu, W. Sun, H. Wang, Y. Wang, H. Sun, J. Ye, J.
Zhang, and Y. Duan. ReconX: Reconstruct any scene from sparse views with video diffusion model. arXiv preprint arXiv:2408.16767, 2024.

[42] I. Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

[43] Y. Ma, Y. He, X. Cun, X. Wang, S. Chen, X. Li, and Q. Chen. Follow your pose: Pose-guided text-to-video generation using pose-free videos. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.

[44] N. Müller, K. Schwarz, B. Rössle, L. Porzi, S. R. Bulò, M. Nießner, and P. Kontschieder. Multidiff: Consistent novel view synthesis from a single image. In Proc. CVPR, 2024.

[45] W. Peebles and S. Xie. Scalable diffusion models with transformers. In Proc. ICCV, 2023.

[46] S. Popov, A. Raj, M. Krainin, Y. Li, W. T. Freeman, and M. Rubinstein. Camctrl3d: Single-image scene exploration with precise 3d camera control. arXiv preprint arXiv:2501.06006, 2025.

[47] T. Ren, S. Liu, A. Zeng, J. Lin, K. Li, H. Cao, J. Chen, X. Huang, Y. Chen, F. Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024.

[48] X. Ren, T.
Shen, J. Huang, H. Ling, Y. Lu, M. Nimier-David, T. Müller, A. Keller, S. Fidler, and J. Gao. Gen3c: 3d-informed world-consistent video generation with precise camera control. arXiv preprint arXiv:2503.03751, 2025.

[49] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22500–22510, 2023.

[50] J. Seo, K. Fukuda, T. Shibuya, T. Narihira, N. Murata, S. Hu, C.-H. Lai, S. Kim, and Y. Mitsufuji. Genwarp: Single image to novel views with semantic-preserving generative warping. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

[51] J. Shi, Q. Wang, Z. Li, and P. Wonka. Stereocrafter-zero: Zero-shot stereo video generation with noisy restart. arXiv preprint arXiv:2411.14295, 2024.

[52] X. Shi, Z. Huang, F.-Y. Wang, W. Bian, D. Li, Y. Zhang, M. Zhang, K. C. Cheung, S. See, H. Qin, et al. Motion-i2v: Consistent and controllable image-to-video generation with explicit motion modeling. In ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024.

[53] W. Sun, S. Chen, F. Liu, Z. Chen, Y. Duan, J. Zhang, and Y. Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024.

[54] Z. Teed and J. Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 402–419. Springer, 2020.

[55] B. Van Hoorick, R. Wu, E. Ozguroglu, K. Sargent, R. Liu, P. Tokmakov, A. Dave, C. Zheng, and C. Vondrick. Generative camera dolly: Extreme monocular dynamic novel view synthesis. In European Conference on Computer Vision, pages 313–331. Springer, 2024.

[56] C. Wang, P. Zhuang, T. D. Ngo, W. Menapace, A. Siarohin, M. Vasilkovsky, I. Skorokhodov, S.
Tulyakov, P. Wonka, and H.-Y. Lee. 4real-video: Learning generalizable photo-realistic 4d video diffusion. arXiv preprint arXiv:2412.04462, 2024.

[57] J. Wang, H. Yuan, D. Chen, Y. Zhang, X. Wang, and S. Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023.

[58] S. Wang, V. Leroy, Y. Cabon, B. Chidlovskii, and J. Revaud. Dust3r: Geometric 3d vision made easy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20697–20709, 2024.

[59] S. Wang, V. Leroy, Y. Cabon, B. Chidlovskii, and J. Revaud. Dust3r: Geometric 3d vision made easy. In Proc. CVPR, 2024.

[60] Z. Wang, J. Li, H. Lin, J. Yoon, and M. Bansal. Dreamrunner: Fine-grained storytelling video generation with retrieval-augmented motion adaptation. arXiv preprint arXiv:2411.16657, 2024.

[61] Z. Wang, Z. Yuan, X. Wang, T. Chen, M. Xia, P. Luo, and Y. Shan. Motionctrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024.

[62] Z. Wang, Z. Yuan, X. Wang, Y. Li, T. Chen, M. Xia, P. Luo, and Y. Shan. Motionctrl: A unified and flexible motion controller for video generation. In
ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024.

[63] D. Watson, S. Saxena, L. Li, A. Tagliasacchi, and D. J. Fleet. Controlling space and time with diffusion models. In The Thirteenth International Conference on Learning Representations, 2024.

[64] R. Wu, R. Gao, B. Poole, A. Trevithick, C. Zheng, J. T. Barron, and A. Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. Proc. CVPR, 2025.

[65] W. Wu, Z. Li, Y. Gu, R. Zhao, Y. He, D. J. Zhang, M. Z. Shou, Y. Li, T. Gao, and D. Zhang. Draganything: Motion control for anything using entity representation. In Proc. ECCV, 2024.

[66] Z. Xiao, W. Ouyang, Y. Zhou, S. Yang, L. Yang, J. Si, and X. Pan. Trajectory attention for fine-grained video motion control. arXiv preprint arXiv:2411.19324, 2024.

[67] D. Xu, W. Nie, C. Liu, S. Liu, J. Kautz, Z. Wang, and A. Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024.

[68] L. Yang, B. Kang, Z. Huang, Z. Zhao, X. Xu, J. Feng, and H. Zhao. Depth anything v2. Advances in Neural Information Processing Systems, 37:21875–21911, 2024.

[69] X. Yang, J. Xu, K. Luan, X. Zhan, H. Qiu, S. Shi, H. Li, S. Yang, L. Zhang, C. Yu, et al. Omnicam: Unified multimodal video generation via camera control. arXiv preprint arXiv:2504.02312, 2025.

[70] Z. Yang, J. Teng, W. Zheng, M. Ding, S. Huang, J. Xu, Y. Yang, W. Hong, X. Zhang, G. Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024.

[71] S. Yin, C. Wu, J. Liang, J. Shi, H. Li, G. Ming, and N. Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023.

[72] M. You, Z. Zhu, H. Liu, and J. Hou. Nvs-solver: Video diffusion model as zero-shot novel view synthesizer. arXiv preprint arXiv:2405.15364, 2024.

[73] H. Yu, C. Wang, P. Zhuang, W. Menapace, A. Siarohin, J. Cao, L. Jeni, S.
Tulyakov, and H.-Y. Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. Advances in Neural Information Processing Systems, 37:45256–45280, 2024.

[74] M. Yu, W. Hu, J. Xing, and Y. Shan. Trajectorycrafter: Redirecting camera trajectory for monocular videos via diffusion models. arXiv preprint arXiv:2503.05638, 2025.

[75] W. Yu, J. Xing, L. Yuan, W. Hu, X. Li, Z. Huang, X. Gao, T.-T. Wong, Y. Shan, and Y. Tian. ViewCrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024.

[76] W. Yu, S. Yin, S. Easterbrook, and A. Garg. Egosim: Egocentric exploration in virtual worlds with multi-modal conditioning. In The Thirteenth International Conference on Learning Representations, 2025.

[77] D. J. Zhang, R. Paiss, S. Zada, N. Karnad, D. E. Jacobs, Y. Pritch, I. Mosseri, M. Z. Shou, N. Wadhwa, and N. Ruiz. Recapture: Generative video camera controls for user-provided videos using masked video fine-tuning. arXiv preprint arXiv:2411.05003, 2024.

[78] D. J. Zhang, J. Z. Wu, J.-W. Liu, R.
Zhao, L. Ran, Y. Gu, D. Gao, and M. Z. Shou. Show-1: Marrying pixel and latent diffusion models for text-to-video generation. International Journal of Computer Vision, pages 1–15, 2024.

[79] Z. Zhang, D. Chen, and J. Liao. I2v3d: Controllable image-to-video generation with 3d guidance. arXiv preprint arXiv:2503.09733, 2025.

[80] Z. Zhang, J. Liao, M. Li, Z. Dai, B. Qiu, S. Zhu, L. Qin, and W. Wang. Tora: Trajectory-oriented diffusion transformer for video generation. arXiv preprint arXiv:2407.21705, 2024.

[81] G. Zheng, T. Li, R. Jiang, Y. Lu, T. Wu, and X. Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024.

[82] S. Zheng, Z. Peng, Y. Zhou, Y. Zhu, H. Xu, X. Huang, and Y. Fu. Vidcraft3: Camera, object, and lighting control for image-to-video generation. arXiv preprint arXiv:2502.07531, 2025.

[83] J. J. Zhou, H. Gao, V. Voleti, A. Vasishta, C.-H. Yao, M. Boss, P. Torr, C. Rupprecht, and V. Jampani. Stable virtual camera: Generative view synthesis with diffusion models. arXiv e-prints, pages arXiv–2503, 2025.

[84] T. Zhou, R. Tucker, J. Flynn, G. Fyffe, and N. Snavely. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018.

[85] T. Zhou, R. Tucker, J. Flynn, G. Fyffe, and N. Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817, 2018.

[86] Z. Zhou, J. An, and J. Luo. Latent-reframe: Enabling camera control for video diffusion model without training. arXiv preprint arXiv:2412.06029, 2024.

A Implementation Details

A.1 Method Details

EPiC is trained on a subset of 5,000 videos from the Panda70M dataset [13] for 500 iterations, using a total batch size of 16 across 8 40GB A100 GPUs. The text condition for the I2V backbone is obtained from the annotated captions in Panda70M.
The subset is selected based on optical flow scores, where we rank videos by their average flow magnitude and retain those with sufficient motion to ensure meaningful camera control training. Training takes less than 3 hours with a learning rate of 2×10−4, using the AdamW [42] optimizer. For our visibility-aware output masking, we apply average pooling to downsample the raw visibility mask to the latent resolution. We train the Anchor-ControlNet at a resolution of 480×720 for 49 frames per video (the default setting of CogVideoX-5B-I2V [70]), with ControlNet weights set to 1.0. During inference, we apply classifier-free guidance (CFG) [25] with a scale of 6.0 for text conditioning. Following AC3D [1], we inject the ControlNet only during the first 40% of diffusion steps at inference. We apply max pooling to downsample the raw visibility mask to the latent resolution for visibility-aware output masking. For videos with caption annotations, we directly use the annotations as the textual condition. For those without annotations, we either generate the text condition using advanced vision-language models [36, 3] based on the visual input, or manually write prompts for specific usage scenarios.

A.2 Evaluation Metrics

We adopt three standard camera pose evaluation metrics to measure the alignment between predicted and ground-truth camera trajectories: Rotation Error (RotErr), Translation Error (TransErr)
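Downsampling the visibility mask to the latent resolution can be sketched with block pooling over non-overlapping windows; the block reshaping trick below is illustrative, not the paper's implementation:

```python
import numpy as np

def avg_pool_mask(mask, factor):
    # Average-pool a (H, W) visibility mask over non-overlapping
    # factor x factor blocks; H and W must be divisible by factor.
    # Max pooling is identical with .max() in place of .mean().
    h, w = mask.shape
    blocks = mask.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

mask = np.zeros((8, 8))
mask[:4, :4] = 1.0                      # top-left quadrant visible
latent_mask = avg_pool_mask(mask, 4)    # 8x8 -> 2x2
```

Average pooling yields a soft (fractional) latent mask, while max pooling marks a latent cell visible if any pixel in its block is visible; either way the mask ends up at the spatial resolution of the video latents.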
, and Camera Matrix Consistency (CamMC), following MotionCtrl [61] and CameraCtrl [22].

• Rotation Error (RotErr) measures the angular deviation (in radians) between the predicted and ground-truth camera rotations:

RotErr = \sum_{i=1}^{n} \arccos\!\left( \frac{\mathrm{tr}(\tilde{R}_i R_i^\top) - 1}{2} \right)

where \tilde{R}_i and R_i are the predicted and ground-truth rotation matrices at frame i, and n is the number of frames in the video.

• Translation Error (TransErr) computes the L2 distance between normalized translation vectors:

TransErr = \sum_{i=1}^{n} \left\| \frac{\tilde{T}_i}{\tilde{s}_i} - \frac{T_i}{s_i} \right\|_2

where \tilde{T}_i and T_i are the predicted and ground-truth camera translations, and \tilde{s}_i, s_i are their respective scene scales, defined as the L2 distance between the first and farthest frame in each video.

• Camera Matrix Consistency (CamMC) evaluates overall pose alignment by comparing full camera-to-world matrices with scale normalization:

CamMC = \sum_{i=1}^{n} \left\| \begin{bmatrix} \tilde{R}_i & \frac{\tilde{T}_i}{\tilde{s}_i} \end{bmatrix}_{3\times4} - \begin{bmatrix} R_i & \frac{T_i}{s_i} \end{bmatrix}_{3\times4} \right\|_2

where \tilde{R}_i, \tilde{T}_i, and \tilde{s}_i are the predicted rotation, translation, and scene scale; R_i, T_i, and s_i are their ground-truth counterparts.

For visual quality, we adopt the evaluation protocol from VBench [30], including metrics such as Subject Consistency, Background Consistency, Motion Smoothness, Temporal Flickering, Aesthetic Quality, and Imaging Quality. We refer to VBench [30] for more details.

B Additional V2V Camera Control Quantitative Evaluation

We evaluate our method in the zero-shot video-to-video (V2V) camera control setting on the Kubric-4D [55] test set. Specifically, we sample 20 held-out examples and compare our method with GCD, one of the state-of-the-art methods on Kubric-4D V2V camera control, using its publicly released checkpoint (gradual mode, max 180° rotation) trained on Kubric.

Table 4: V2V camera control results on Kubric-4D.

Method        PSNR ↑   SSIM ↑
GCD [55]      19.72    0.59
EPiC (Ours)   19.65    0.60

Figure 6: Qualitative V2V camera control results of models trained from different data sources.
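The three camera metrics defined in A.2 can be implemented roughly as below; the per-frame Frobenius norm for CamMC and the function signature are our assumptions, not the paper's released evaluation code:

```python
import numpy as np

def camera_metrics(R_pred, T_pred, s_pred, R_gt, T_gt, s_gt):
    # R_*: (n, 3, 3) rotations; T_*: (n, 3) translations; s_*: scene scales.
    # RotErr: per-frame angular deviation from the relative-rotation trace.
    cos = (np.trace(R_pred @ R_gt.transpose(0, 2, 1), axis1=1, axis2=2) - 1) / 2
    rot_err = np.arccos(np.clip(cos, -1.0, 1.0)).sum()
    # TransErr: L2 distance between scale-normalized translations.
    trans_err = np.linalg.norm(T_pred / s_pred - T_gt / s_gt, axis=1).sum()
    # CamMC: distance between full [R | T/s] 3x4 matrices
    # (a per-frame Frobenius norm is assumed here).
    P_pred = np.concatenate([R_pred, (T_pred / s_pred)[..., None]], axis=2)
    P_gt = np.concatenate([R_gt, (T_gt / s_gt)[..., None]], axis=2)
    cam_mc = np.linalg.norm((P_pred - P_gt).reshape(len(R_pred), -1), axis=1).sum()
    return rot_err, trans_err, cam_mc

# Identical trajectories give zero for all three metrics.
R = np.tile(np.eye(3), (5, 1, 1))
T = np.arange(15, dtype=float).reshape(5, 3)
errs = camera_metrics(R, T, 1.0, R, T, 1.0)
```

The `np.clip` on the cosine guards against values marginally outside [-1, 1] from floating-point error, which would otherwise make `arccos` return NaN.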
For a fair comparison, we downsample our generated videos to 256×384. Quantitative results are provided in Tab. 4. Despite performing V2V camera control in a zero-shot manner, our method achieves performance comparable to GCD. Moreover, as shown in Fig. 4 (b) of the main paper, our model generalizes better to wild domains with complex and dynamic motions.

C Ablation Studies

In this section, we provide additional ablations on the training data, the use of Anchor-ControlNet, and the lightweight ControlNet design.

C.1 Effects of Training Data Sources

A key advantage of our method is that it does not rely on camera pose annotations, which enables training on diverse, in-the-wild video datasets beyond multi-view datasets with limited domain coverage. To validate this, we conduct an ablation comparing training on the widely used RealEstate10K [85], a multi-view dataset limited to static indoor scenes, with training on Panda70M [13], which contains more diverse and dynamic videos. We report quantitative results in Tab. 5. We observe that both data sources yield comparable performance on RealEstate10K, while training with Panda70M achieves slightly better results on MiraData, likely due to its more diverse training content. However, in the V2V setting, especially when the reference video involves fine-grained motion (e.g., detailed limb articulation), models trained on RealEstate10K fail to generalize effectively. Specifically, as shown in Fig. 6, the crab's legs exhibit intricate, localized motion patterns. While the model trained on Panda70M is
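The PSNR reported in Tab. 4 is the standard peak signal-to-noise ratio; a minimal sketch for arrays normalized to [0, 1] (SSIM involves local windowed statistics and is omitted here):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio in dB for arrays in [0, max_val].
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.zeros((16, 16))
pred = gt + 0.1            # uniform error of 0.1 -> MSE = 0.01
value = psnr(pred, gt)     # 10 * log10(1 / 0.01) = 20 dB
```

In practice PSNR is averaged over frames and test examples before being reported.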
able to precisely follow these details by following the anchor video, the model trained on RealEstate10K can only capture a coarse moving direction, failing to reproduce the fine motion in the crab's legs. This limitation is likely due to the lack of diverse and dynamic videos in the RealEstate10K dataset, which mainly consists of indoor scenes that differ significantly from the domain of the crab video.

Table 5: Ablation of using different data sources for training EPiC.

Training Data Source   RealEstate10K: Rot. Err (↓) / Trans. Err (↓) / CamMC (↓)   MiraData: Rot. Err (↓) / Trans. Err (↓) / CamMC (↓)
RealEstate10K [85]     0.43±0.10 / 0.84±0.22 / 1.06±0.25                          0.73±0.32 / 1.88±0.75 / 2.21±0.65
Panda70M [13]          0.40±0.11 / 0.86±0.18 / 1.17±0.23                          0.66±0.22 / 1.78±0.67 / 2.10±0.60

Table 6: Ablation on lightweight ControlNet design on RealEstate10K. Our selected setting is the third row (no pretraining, 256 hidden dimension, 8 layers).

Pretrained   Hidden Dimension   #Layers   Rot. Err ↓   Trans. Err ↓   CamMC ↓
✓            3072               21        0.42         0.83           1.19
✗            256                21        0.38         0.90           1.21
✗            256                8         0.40         0.86           1.17
✗            256                2         0.70         1.32           1.89

C.2 Effects of Lightweight Anchor-ControlNet Design

We ablate the design of our lightweight ControlNet in Tab. 6. Specifically, we compare injecting into half of the backbone layers (21 layers here, as in the default ControlNet setting; CogVideoX-5B-I2V has 42 layers in total) with and without using pretrained weights, and further study the effect of reducing the number of injection layers. Our results show that using a high-dimensional feature space (3072) with pretrained CogVideoX weights performs comparably to using no pretraining and a much smaller dimension (256), suggesting that the region-copying control is relatively easy to learn. In addition, reducing the number of injection layers to 8 does not hurt performance, while further reducing it to only 2 layers results in noticeably decreased control accuracy.
Based on these findings, we adopt the most cost-effective configuration: injecting into 8 layers with a control dimension of 256.

C.3 Training Anchor-ControlNet Only vs. Full Fine-Tuning

As ViewCrafter [75] directly fine-tunes the entire backbone, we compare our ControlNet-based training strategy with this standard full fine-tuning approach to highlight the efficiency of our design. Specifically, we encode the anchor video directly as the conditioning input, replacing the original image-conditioned latent, and fully fine-tune the base model for 1000 iterations. As shown in Fig. 7, despite training for twice as many steps, the output remains blurry and noisy. We attribute this to a mismatch in the conditioning distribution: replacing image-based conditioning with anchor-video conditioning disrupts the pre-learned first-frame embedding priors, making end-to-end fine-tuning less effective and harder to optimize. In contrast, our ControlNet design enables effective anchor-video conditioning without modifying the backbone, by treating the anchor video as an external control signal.
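The layer-subset injection pattern above can be illustrated with a deliberately tiny toy: a frozen stack of layer functions stands in for the backbone, and external residuals are added only at a chosen subset of layers. This is purely schematic and not the actual CogVideoX/Anchor-ControlNet code:

```python
import numpy as np

def run_backbone(x, layers, control_residuals=None, inject_at=()):
    # A stack of frozen layer functions standing in for the backbone;
    # ControlNet residuals are added only at the chosen layer subset
    # (e.g., 8 of 42 layers in the adopted configuration).
    control_residuals = control_residuals or {}
    for i, layer in enumerate(layers):
        x = layer(x)
        if i in inject_at:
            x = x + control_residuals[i]
    return x

layers = [lambda x: x * 1.0 for _ in range(6)]     # identity "backbone"
residuals = {1: np.ones(4), 4: np.ones(4)}         # residuals for 2 of 6 layers
out = run_backbone(np.zeros(4), layers, residuals, inject_at=(1, 4))
```

Because the control signal enters only as additive residuals at a few layers, the backbone weights never change, which is what keeps the trainable parameter count small.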
Figure 7: Results of training with Anchor-ControlNet compared to full fine-tuning.

D Robustness to Different Random Seeds

Figure 8: Robustness to different random seeds.

We demonstrate the robustness of our method in Fig. 8. Given a conditioned image, we use a specific object (highlighted with a white box) as the reference for spatial consistency. For AC3D, varying the random seed leads to noticeable changes in the spatial positions of other objects (highlighted in red
boxes). This is especially evident in Seed 3, where the generated object's position drifts significantly from the reference, failing to maintain spatial alignment. In contrast, our method consistently preserves the spatial relationship across different seeds. The objects in our generated videos (highlighted in green boxes) remain stable and aligned with the reference object, demonstrating strong robustness to seed variation.

E Additional Applications: Fine-Grained Control

We present several additional applications demonstrating different types of fine-grained control based on a single image with our anchor-video conditioning.

Figure 9: Examples of text-guided scene control.

Text-Guided Scene Control. Our model effectively demonstrates dynamic text-guided video generation capabilities, enabling flexible scene synthesis across different styles while maintaining temporal and spatial consistency. Fig. 9 illustrates examples of our text-guided scene control. Starting from an initial frame with a fixed forward camera trajectory, our method generates subsequent video frames conditioned on different textual prompts. The newly prompted objects are introduced into the generated scene (highlighted in red text and boxes), while the objects present in the initial frame remain consistently visible throughout
the video (highlighted in green text and boxes).

Object 3D Trajectory Control via Anchor Video Manipulation. We also demonstrate the flexibility of our method in enabling 3D trajectory control for objects. The input is typically a 3D trajectory (e.g., moving backward by 2 meters) applied to a specific object (e.g., a corgi). We encode the desired motion into the anchor video by manipulating it based on the 3D trajectory. Specifically, following a similar approach to our inference setup with masked point clouds, we use Grounded-SAM [47] to obtain the segmentation mask of the corgi, extract the point cloud corresponding to the corgi, and isolate the background point cloud without the corgi. We then simulate motion by translating the corgi's point cloud backward by 2 meters relative to the background over time (the background point cloud remains fixed), producing a dynamic point cloud sequence for rendering. In this setup, we focus solely on trajectory control and therefore keep the camera trajectory static during rendering. The resulting anchor video depicts the corgi moving backward and serves as strong guidance. Our results are illustrated in Fig. 10, where our approach successfully generates scenarios in which the corgi steps backward. In contrast, AC3D, which conditions only on camera embeddings that lack explicit trajectory information, fails to generate this backward motion even with "stepping backward" included in the textual condition. This comparison highlights the strength of our method in interpreting and executing precise object-level movements in 3D space, showcasing its superior capability for controllable video generation.

Figure 10: Examples of object 3D trajectory control via anchor video manipulation.

Regional Animation.
Our method is also applicable to regional image animation, where motion is localized to a specific area based on a short text prompt and a user-provided click or prior mask. To achieve this, we directly create the anchor video by repeating the source image and applying the regional mask to each frame. As shown in Fig. 11(a), given the prompt "the corgi shakes its head" with the corresponding corgi-head mask, our method generates a video in which only the corgi's head moves while the rest of its body remains still, accurately following both the textual instruction and the specified region. In contrast, Fig. 11(b) highlights a failure case of AC3D: when the intended motion is for the palm tree to move, AC3D incorrectly animates the corgi instead. Our method, however, successfully isolates and animates the palm tree, demonstrating its ability to localize motion precisely based on regional guidance and text. This showcases the fine-grained spatial control enabled by our approach.

F Additional Visual Examples

Examples of Constructed Anchor Videos. We present examples of high-quality anchor videos constructed from Panda70M source videos in Fig. 12. Our method consistently maintains spatial coherence and masks regions that were not visible in the first frame, even when objects exhibit significant movement across frames, while Panda70M provides diverse and dynamic video data.
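At its core, the anchor-video construction described here reduces to masking each frame of the source video by a first-frame-visibility map. A minimal sketch of that masking step (how the per-frame visibility masks are estimated, e.g., via point tracking, is the paper's contribution and is not shown; the zero-fill convention for masked pixels is an assumption):

```python
import numpy as np

def build_anchor_video(video, visibility):
    """video: (T, H, W, C) source frames; visibility: (T, H, W) masks
    marking, per frame, the pixels that were visible in the first frame.
    Pixels outside the visibility mask are zeroed out."""
    anchor = video.copy()
    anchor[~visibility.astype(bool)] = 0.0
    return anchor

T, H, W = 5, 4, 4
video = np.ones((T, H, W, 3))
vis = np.ones((T, H, W))
vis[1:, 0, 0] = 0  # pixel (0, 0) becomes occluded after frame 0
anchor = build_anchor_video(video, vis)
```

Because the construction only needs the source video and the visibility masks, it requires no camera trajectory annotations, which is what makes it applicable to arbitrary in-the-wild videos.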
Such high-quality and diverse anchor videos further facilitate efficient learning by our model.

Examples of I2V Camera Control. Fig. 13 shows additional qualitative examples of I2V camera control. Given diverse image inputs and a variety of camera trajectories, our method consistently generates high-quality videos that accurately follow the specified motions. The results demonstrate effective camera control across multiple scene types, including gaming (first- and third-person), outdoor, and close-up views. Moreover, it effectively maintains dynamic objects and preserves scene coherence across different scenarios, highlighting the flexibility and robustness of our approach in handling diverse I2V scenarios.

Examples of V2V Camera Control. We provide additional visualizations demonstrating our V2V camera control capabilities. As illustrated in Fig. 14, our method successfully generates high-quality videos given challenging source videos such as movie clips, which typically contain complex objects and dynamic movements. This underscores the robustness and versatility of our approach in handling realistic and demanding V2V scenarios.

Figure 11: Examples of regional animation: (a) on the corgi's head; (b) on trees and waves.

G Limitations and Broader Impacts

EPiC trains a lightweight adapter on a backbone video diffusion model.
As such, its performance, output quality, and potential visual artifacts are inherently influenced by the capabilities and limitations of the underlying backbone model it relies on. For instance, if the backbone model struggles with generating complex, rare, or previously unseen scenes and objects, EPiC may also exhibit suboptimal generation results. This dependency highlights the importance of selecting strong and reliable backbone models when applying EPiC. While EPiC can benefit numerous applications in video generation, similar to other visual generation frameworks, it can also be used for potentially harmful purposes (e.g., creating false information or misleading videos). Therefore, it should be used with caution in real-world applications.

Figure 12: Examples of constructed anchor videos. The source videos and corresponding captions ("A lobby with red and white lamps hanging from the ceiling"; "People are visiting a temple with scaffolding around it"; "A black Chevrolet truck is driving on a rural road"; "A group of men in uniform standing in a lobby") are obtained from Panda70M.

Figure 13: Qualitative examples of I2V camera control with diverse image inputs and camera trajectories.

Figure 14: Qualitative examples of V2V camera control on movie clips with multiple kinds of camera trajectories (translation up/down, arc left/right, zoom in/out).
arXiv:2505.21879v1 [cs.SC] 28 May 2025

Symbolic Foundation Regressor on Complex Networks

Weiting Liu1,2, Jiaxu Cui1,2*, Jiao Hu1,2, En Wang1,2*, Bo Yang1,2*
1College of Computer Science and Technology, Jilin University, Changchun, 130012, China.
2Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China.
*Corresponding author(s). E-mail(s): cjx@jlu.edu.cn; wangen@jlu.edu.cn; ybo@jlu.edu.cn;
Contributing authors: liuwt23@mails.jlu.edu.cn; hujiao22@mails.jlu.edu.cn;

Abstract

In science, we are interested not only in forecasting but also in understanding how predictions are made, specifically what the interpretable underlying model looks like. Data-driven machine learning technology can significantly streamline the complex and time-consuming traditional manual process of discovering scientific laws, helping us gain insights into fundamental issues in modern science. In this work, we introduce a pre-trained symbolic foundation regressor that can effectively compress complex data with numerous interacting variables while producing interpretable physical representations. Our model has been rigorously tested on non-network symbolic regression, symbolic regression on complex networks, and the inference of network dynamics across various domains, including physics, biochemistry, ecology, and epidemiology. The results indicate a remarkable improvement in equation inference efficiency, being three times more effective than baseline approaches while maintaining accurate predictions. Furthermore, we apply our model to uncover more intuitive laws of interaction transmission from global epidemic outbreak data, achieving optimal data fitting.
This model extends the application boundary of pre-trained symbolic regression models to complex networks, and we believe it provides a foundational solution for revealing the hidden mechanisms behind changes in complex phenomena, enhancing interpretability, and inspiring further scientific discoveries.

Keywords: Scientific discovery, Symbolic regression, Complex networks, Foundation model

Introduction

Mankind's pursuit of scientific laws and the eagerness to explore the unknown have never ceased. From Galileo's principle of relativity and Hermann von Helmholtz's law of conservation of energy to Schrödinger's wave equation in modern quantum mechanics, scientific revolutions have long been linked to the milestone discoveries of these laws. Traditionally, this discovery process has depended primarily on human intelligence, requiring strong empirical assumptions [1] and often taking a considerable amount of time, accompanied by elements of chance. For instance, it took thousands of years, from ancient stargazing activities and hypotheses about celestial movement, to the discovery of the law of universal gravitation, which helps us understand the solar system. Fortunately, with the flourishing development of artificial intelligence technology, data-driven machine learning methods show significant potential to accelerate the process of scientific discovery [2–4]. For example, by using 30 years of trajectory data from our solar system's sun, planets, and large moons, Newton's law of gravitation has been successfully and efficiently identified through a combination of graph networks and symbolic regression [5]. The origins of data-driven methods for mining symbolic scientific laws trace back to the 1950s, when genetic algorithms [6] were utilized to search for target mathematical expressions fitting input-output pairs across a broad expression space.
The regression tools developed based on this principle, such as Eureqa [7], GPLearn [8], and PySR [9], have been instrumental in deriving analytical formulas for the design of one-dimensional sonic crystals [10], discovering a mass scaling relationship for planar black
holes in spiral galaxies [11], and aiding in jet background subtraction during heavy-ion collisions [12]. However, when exploring a vast expression space, search-based approaches often require extensive time and yield complex, hard-to-understand outcome expressions, limiting their practical applications. Learning-based symbolic regression approaches introduce assumptions such as sparsity [13–19] and physical symmetry [20], along with external inference systems [21], to identify simple and scientifically meaningful mathematical expressions, and employ cutting-edge techniques like reinforcement learning [22–26] or Monte Carlo tree search [27–29] to efficiently perform trial-and-error processes. Nonetheless, they continue to struggle with defining scientifically meaningful expressions, experience long inference times, and do not fully leverage existing prior knowledge and data. With the emergence of large language models such as ChatGPT [30] and DeepSeek [31], artificial intelligence technology has reached a new stage of development, achieving impressive results in protein design [32] and natural evolution simulation [33]. Because they are built on extensive data and knowledge, such models typically exhibit remarkable generality and can adapt to various tasks. Inspired by this principle, a new generation of scientific discovery techniques has emerged, namely pre-trained symbolic regression models [34–38], which create extensive mappings from data to equations, such as 100 million data-equation pairs [35], to pre-train transformer-based autoregressive models. Through the integration of massive knowledge, these pre-trained models have the potential to exhibit emergent predictive abilities for new equations. Meanwhile, in contrast to the hour-long search overhead associated with search-based approaches, these models do not require starting from scratch during testing.
Instead, the input data is only propagated forward, allowing the corresponding equation to be generated in just minutes. However, the performance of existing pre-trained models for symbolic regression significantly declines when dealing with more than three variables [35], and they can typically handle at most ten variables [36]. This can hinder the discovery of scientific laws, as complex phenomena often arise from the coevolution of numerous interrelated variables. For example, global epidemic transmission involves free variables from over 200 countries and regions [39]. If the modeling is fine-grained enough to consider each individual as a variable, the transmission system could encompass over a billion free variables. Therefore, how to represent the numerous complex phenomena generated by so many variables and the various interactive relationships formed between them, while effectively and efficiently uncovering the underlying simple governing laws, remains an open challenge. In this work, we propose a symbolic foundation regressor on complex networks that can effectively handle spatio-temporal data of complex phenomena with massive numbers of correlated variables and efficiently reconstruct the corresponding governing equations. We create approximately 2 billion mathematical expressions on complex networks as the training set to pre-train the foundation model, allowing it to learn a unified mapping from the data space to the expression space. Our analysis of symbolic regression tasks involving independent (non-network) low-dimensional variables, as well as those applied to complex networks, alongside various network dynamics scenarios in fields such as physics, biochemistry, ecology, and epidemiology, shows that our model is highly applicable, effective, and efficient. In particular, within the
context of network dynamics scenarios, our model can reconstruct the network dynamics equation in only 1.2 minutes, which is more than three times faster than the search-based and learning-based approaches, and shorten the time for discovering more accurate new laws of real-world global infectious disease outbreaks to within half a minute. To our knowledge, this is the first foundation symbolic regression model designed specifically for complex networks, extending pre-trained models to effectively address complex phenomena involving a large number of variables.

Fig. 1: a. Human beings can condense observations into scientific laws, and then use these compressed laws to analyze and regulate various systems.
We attempt to imitate this process of compressing interpretable physical representations in human learning through machine learning. b. The overall process of our Symbolic Foundation Regressor (SFR), including the generation of massive high-quality synthetic data-equation pairs, the dual-branch model architecture, and the pre-training process. c. After pre-training the SFR, we can effectively derive the target equation for an unseen downstream task through a single forward propagation. Additionally, data pre-processing and equation post-processing are included to enhance the accuracy of the recovered equation.

We believe that our foundation regressor can serve as a basis for accelerating the discovery of scientific laws while enhancing the practicality of symbolic regression tools, marking an important step towards the long-term goal of developing artificial intelligence scientists.

Results

The discovery process of scientific laws can be broadly modeled as $O \to F$, where $O$ represents the observed data and $F$ represents the scientific laws. Human scientists can induce spatio-temporal observations of complex phenomena into an interpretable physical representation [40], i.e., compressing ($O \to F$). In fact, recent neuroscience research reveals that the human brain can compress and encode complex behaviors and phenomena into basis functions to facilitate social decisions [41]. When querying questions ($Q$) about physical settings later, such as predicting, regulating, and
analyzing, scientists should be able to provide correct responses ($R$) using only the representations rather than the raw data, i.e., decompressing ($Q \times F \to R$). In this work, we mainly focus on using machine learning to mimic the human learning process, as shown in Fig. 1(a), especially the compressing part, which is more challenging than the decompressing part, which can often be solved directly with numerical methods. Specifically, we introduce a pre-trained symbolic foundation regressor to accomplish this challenging task by integrating massive amounts of data and knowledge of equations, revealing scientific laws from observations on complex networks.

Universality in symbolic regression on complex networks

Uncovering the underlying scientific laws from complex phenomena can be modeled as a symbolic regression task on complex networks, formulated as $y_i = F(x_i, \{x_j\}_{j \in N_i})$, where $x_i$ represents the observation state at node $i$, $N_i$ is the set of neighbors of $i$, and $y_i$ denotes the output at that node. $F$ represents the mathematical expression to be regressed using input-output data pairs. Note that the input $(x_i, \{x_j\}_{j \in N_i})$ consists not only of the state of the node itself but also of the states of its neighboring nodes. This implies that classical symbolic regression can be considered a special case of symbolic regression on complex networks in which there is only one independent node and no neighboring nodes involved. Typically, the number of neighbors, denoted $k$, varies across nodes within a network, and its distribution is closely tied to the network's topology. Furthermore, the states of the nodes can be multidimensional, with dimension $d$. To address this complexity, symbolic regression on complex networks must reconstruct mathematical equations that are both varying and high-dimensional from the data, i.e., of dimension $k \times d$.
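As a concrete toy instance of the formulation $y_i = F(x_i, \{x_j\}_{j \in N_i})$, the sketch below evaluates a simple hand-picked $F$ (diffusive coupling, chosen purely for illustration and not taken from the paper) on a small graph, and collects the node-plus-neighbors observation triplets that a regressor would consume:

```python
# Adjacency list of a small undirected graph (illustrative)
adj = {0: [1, 2], 1: [0], 2: [0]}
x = {0: 1.0, 1: 2.0, 2: 4.0}  # scalar state per node (d = 1)

def F(x_i, neighbor_states):
    # Toy choice of F: self decay plus diffusive coupling to neighbors
    return -x_i + sum(x_j - x_i for x_j in neighbor_states)

# Outputs y_i = F(x_i, {x_j}) at every node
y = {i: F(x[i], [x[j] for j in adj[i]]) for i in adj}

# Observation triplets o_i = (x_i, {x_j}, y_i), the regressor's input
obs = [(x[i], [x[j] for j in adj[i]], y[i]) for i in adj]
```

Note how the node degree differs across nodes (node 0 has two neighbors, nodes 1 and 2 have one), which is exactly the varying-arity issue the paper's decomposition is designed to handle.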
Consequently, the variety of network topologies and the flexible combinations of equations arising from high-dimensional variables create significant challenges for pre-training a model that can accurately represent the vast spaces of network topologies and equations.

Decoupling network topology through local sampling. Representing a large topological structure space, produced by various scales, types, degree distributions, and graph generation parameters, is seemingly impossible. To bypass this, we introduce a sampling strategy that collects local observation states independent of global topological features. Because nodes and their neighbors are the fundamental building blocks of topological structures, we model an observation sample as a triplet concatenating the central node ($x_i$), its directly connected neighbors ($\{x_j\}_{j \in N_i}$), and its corresponding output signal ($y_i$), i.e., $o_i = (x_i, \{x_j\}_{j \in N_i}, y_i)$. By sampling various central nodes, we can use this strategy to produce observation data, i.e., $O = \{o_1, o_2, \dots, o_i, \dots\}$. This enables us to decouple the observations from factors like topological scale and type, allowing us to concentrate on local regions, which is particularly important in real-world situations where incomplete observations are common. Furthermore, it enhances modeling flexibility, simplifying the representation of diverse scenarios, such as heterogeneous propagation equations on complex networks.

Simplifying mathematical equations through physical priors. When confronted with complex, high-dimensional mathematical equations, simply testing each equation by increasing its length could take
longer than the age of the universe to reach the desired outcome [20]. To alleviate the curse of dimensionality, we adopt a physical prior: network states can be influenced both by their own states and by the states of their neighbors [17, 39, 42, 43]. Specifically, we can decompose the mathematical equation $F$ into two coupled components, a self part $f^{(\mathrm{self})}$ and an interaction part $f^{(\mathrm{inter})}$:

$y_i = f^{(\mathrm{self})}(x_i) + \sum_{j=1}^{N} A_{ij} f^{(\mathrm{inter})}(x_i, x_j),$

where the subscripts $i$ and $j$ denote the corresponding nodes, $N$ is the network size, and $A_{ij}$ is the adjacency matrix. Here, $f^{(\mathrm{self})}$ captures the evolution of an individual node's state, while $f^{(\mathrm{inter})}$ describes the dynamics governing the interactions between neighboring nodes. Thus, $F$ consists of two functions, i.e., $F := \{f^{(\mathrm{self})}, f^{(\mathrm{inter})}\}$. The feasibility and universality of this formulation have been confirmed [42, 44]: it can accommodate a broad spectrum of real-world information propagation mechanisms on complex networks with appropriate choices of $f^{(\mathrm{self})}$ and $f^{(\mathrm{inter})}$. Significantly, this formulation achieves the desired dimensionality reduction for high-dimensional mathematical equations by learning the fixed $d$-variate $f^{(\mathrm{self})}$ and $2d$-variate $f^{(\mathrm{inter})}$, rather than directly inferring the varying $(k \times d)$-variate $F$. The universality provided by local sampling and physical priors lays the basis of our model construction.

The overall flow of the symbolic foundation regressor (SFR)

Our implementation of the symbolic foundation regressor (SFR), which maps observed data to interpretable physical representations, consists of three main steps: generating training samples, building and training the model, and applying it to various downstream scenarios.

Creating a corpus of equations and synthesizing observational data. The quality of data plays a crucial role in determining model performance.
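The corpus-generation step can be illustrated with a toy sampler of prefix expression trees. The symbol library, branching probabilities, and validity rules below (no nested sin/exp; the interaction function must contain an $x_j$ term) are simplified stand-ins for the paper's actual generator:

```python
import random

UNARY = ["sin", "exp"]
BINARY = ["+", "-", "*"]
LEAVES = ["x_i", "x_j", "C"]  # C stands for a constant placeholder

def random_prefix(depth, rng, inside_unary=False):
    """Sample a prefix-notation expression as a token list; unary
    operators are never nested inside another unary operator."""
    if depth == 0 or rng.random() < 0.3:
        return [rng.choice(LEAVES)]
    if not inside_unary and rng.random() < 0.3:
        return [rng.choice(UNARY)] + random_prefix(depth - 1, rng, True)
    return ([rng.choice(BINARY)]
            + random_prefix(depth - 1, rng, inside_unary)
            + random_prefix(depth - 1, rng, inside_unary))

def sample_f_inter(rng, depth=4):
    """Rejection-sample until the expression references a neighbor x_j."""
    while True:
        expr = random_prefix(depth, rng)
        if "x_j" in expr:
            return expr

rng = random.Random(0)
f_inter = sample_f_inter(rng)
```

Sampled skeletons like these would then be paired with random topologies and simulated inputs to produce the data-equation training pairs.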
Unlike natural language processing models, which can utilize extensive real-world data, obtaining large volumes of complete data from actual network observation scenarios poses a challenge. This difficulty stems not only from the complexities of data collection but also from the scarcity of labeled data, specifically the true equations required for supervised training. Therefore, we create a sufficiently rich corpus of equations and sample data from these equations to meet training needs. To achieve this, we randomly generate an expression tree using a symbol library that includes constants, variables, unary operators, binary operators, and more. In this tree, the leaf nodes represent constants or variables, while the non-leaf nodes represent operators. To ensure that the generated equations, characterized by the expression tree, are meaningful, we implement specific rules, including avoiding nested trigonometric and exponential functions and ensuring that the interaction function $f^{(\mathrm{inter})}$ contains at least one $x_j$ term. As a result, we can generate a large number of $f^{(\mathrm{self})}$ and $f^{(\mathrm{inter})}$ functions, ultimately synthesizing approximately 2 billion effective equations on complex networks. This process involves randomly sampling topological structures ($A_{ij}$) from the network topology space formed by networks of various sizes and types, including grid, random, power-law, small-world, and community networks. Building on these equations, we sample their inputs ($x_i$) from a domain space, e.g., a standard normal distribution, to simulate the outputs ($y_i$), thereby forming the training samples, i.e., $(O, F)$. Additional details regarding the equation generation process
and the associated validity rules can be found in the Method section.

Constructing a set-to-sequence model with dual branches. Since the input $O$ of the model is a set, we face a translation problem from a set to a sequence. We thus propose a set-to-sequence model with dual branches for symbolic regression on complex networks, consisting of a data representation model ($\mathrm{enc}$), a self branch model ($\mathrm{dec}_{\mathrm{self}}$), and an interaction branch model ($\mathrm{dec}_{\mathrm{inter}}$). The data representation model is primarily implemented based on the Set Transformer [45], which maps the sampled set of observed data to the representation space, i.e., $h = \mathrm{enc}(O)$, where $h$ is the data representation. As the data is simulated using equations, similarities in the data representation space also reflect similarities among the underlying equations, providing a solid initial representation for the subsequent branch decoders. To generate the functions $f^{(\mathrm{self})}$ and $f^{(\mathrm{inter})}$, we design two transformer-based branch decoders that synchronously produce prefix expression sequences for $f^{(\mathrm{self})}$ and $f^{(\mathrm{inter})}$ in an autoregressive manner, i.e., $p(e^{(\mathrm{self})}_{k+1} \mid h, e^{(\mathrm{self})}_1, \dots, e^{(\mathrm{self})}_k) = \mathrm{dec}_{\mathrm{self}}(h, e^{(\mathrm{self})}_1, \dots, e^{(\mathrm{self})}_k)$ and $p(e^{(\mathrm{inter})}_{k+1} \mid h, e^{(\mathrm{inter})}_1, \dots, e^{(\mathrm{inter})}_k) = \mathrm{dec}_{\mathrm{inter}}(h, e^{(\mathrm{inter})}_1, \dots, e^{(\mathrm{inter})}_k)$, where $e_k$ represents the expression symbol generated at step $k$. Using the extensive data-equation pairs generated earlier, we employ the cross-entropy loss to pre-train the model thoroughly, as illustrated in Fig. 1(b). The detailed model architecture can be found in the Method section.

Applying to various downstream scenarios. After pre-training the SFR, we can efficiently derive the target equation for unseen downstream tasks using a single forward propagation, as illustrated in Fig. 1(c). It is important to highlight that the training data is constructed by sampling $x_i$ from a known distribution.
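The encoder $\mathrm{enc}$ must be invariant to the ordering of the sampled observation set $O$. The Set Transformer achieves this with attention blocks; a minimal numpy stand-in (a linear map followed by mean pooling, shown purely to demonstrate the invariance property, not the actual architecture) is:

```python
import numpy as np

def encode_set(observations, W):
    """Map a set of per-triplet feature vectors to one representation h,
    invariant to the order of the observations (pool after a pointwise map)."""
    feats = np.tanh(observations @ W)  # (n, d_hidden), applied per element
    return feats.mean(axis=0)          # permutation-invariant pooling

rng = np.random.default_rng(0)
O = rng.normal(size=(8, 3))  # 8 observation triplets, 3 features each
W = rng.normal(size=(3, 5))
h1 = encode_set(O, W)
h2 = encode_set(O[::-1], W)  # permuted set -> identical representation
```

Any permutation of the rows of `O` yields the same `h`, which is the property the real attention-based encoder preserves while learning far richer interactions between set elements.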
However, in real-world scenarios, this distribution is often unknown and may differ from the one used during pre-training. To tackle this issue, we apply a scaling transformation before feeding the data into the model, converting real data to match the model's expected distribution. Once the model generates the output equation, we optimize the constants within the equation using optimization algorithms such as BFGS [46], which enhances the precision of the equation. The final equation is then obtained through an unscaling operation. Additionally, during the constrained beam search, we have the option to incorporate domain knowledge from experts or large language models, facilitating equation generation. This not only creates an interface for integrating our method with existing large language models but also opens avenues for increased flexibility in our model.

Fig. 2: Results on classical non-network symbolic regression tasks. a. Comparison of the execution accuracy ($R^2$, Close$_{0.001}$) of various methods (PySR, SINDy, NeSymRes, E2E, and ours) on two datasets (AI-Feynman and USE-F). b. Comparison of the execution accuracy of various methods for equations of different lengths in USE-F. c. The influence of the
https://arxiv.org/abs/2505.21879v1
number of test data points on the results. d. A physical equation from the AI-Feynman dataset that describes the relationship among the modulus of rigidity $G$, the modulus of elasticity $E$, and Poisson's ratio $\mu$ in material science, used for regression analysis.

Validation on classical non-network symbolic regression

To evaluate the effectiveness of the SFR in handling classical non-network symbolic regression, we test it on two datasets, i.e., AI-Feynman [20] and USE-F (Unseen Synthetic Equations with only Self parts). The former is a commonly used standard dataset for symbolic regression tasks, collecting 100 physical equations that describe natural phenomena from the Feynman Lectures [47]. The latter is a randomly created testing set that has not appeared in the training set, comprising approximately 30,000 equations with varying dimensions and complexities, built to enhance testing completeness. When applying our SFR to classical symbolic regression, we mask the neighboring states of the input data and retain only the output equation of the self-branch decoder. Evaluation tasks include reconstruction of the input data (IN-Domain) and prediction of unknown data (OUT-Domain) using the regressed equations. By comparing our model, SFR, to cutting-edge search-based approaches such as PySR [48, 49], learning-based methods like SINDy [50], and pre-trained models including NeSymRes [35] and E2E [36], we can see from Fig. 2(a) that SFR significantly outperforms these baselines in terms of $R^2$ and $Close_p$.
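For reference, the two reported metrics can be computed as follows (a minimal sketch; the function names are ours, not from the paper's released code):

```python
import numpy as np

def r2_score(x_true, x_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    x_true, x_pred = np.asarray(x_true, float), np.asarray(x_pred, float)
    ss_res = np.sum((x_true - x_pred) ** 2)
    ss_tot = np.sum((x_true - x_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def close_p(x_true, x_pred, p=0.001):
    """Fraction of points whose relative error is at most p."""
    x_true, x_pred = np.asarray(x_true, float), np.asarray(x_pred, float)
    return np.mean(np.abs((x_true - x_pred) / x_true) <= p)

x_true = np.array([1.0, 2.0, 4.0, 8.0])
print(r2_score(x_true, x_true))               # perfect fit -> 1.0
print(close_p(x_true, 1.01 * x_true, p=0.1))  # 1% relative error passes p=0.1
```

Smaller values of $p$ make $Close_p$ strictly harder to satisfy than $R^2$, which is why it serves as the more stringent measure below.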
The $R^2$ value measures the fit between the predictions produced by the reconstructed equation and the actual data, while $Close_p$ provides a more rigorous assessment by quantifying the percentage of data points for which the relative error between the predictions and the real data is less than $p$. It should be noted that we also compared our results with Kolmogorov-Arnold Networks (KAN) [18, 19], but adjusting the network structure and parameters for each equation proved challenging. Even with a generalized structure, we could not achieve satisfactory results, as indicated by an average $R^2$ of less than -10; we therefore do not include its results in our analysis. Through multiple experiments on USE-F, we found that both the length of the equation and the number of test data points affect the results of symbolic regression methods. As the length of the equation increases, the performance of these methods tends to decline (see Fig. 2(b)). Conversely, an increase in the number of test data points enhances performance (see Fig. 2(c)). Notably, our SFR consistently outperforms the baselines regardless of changes in these factors, especially under more stringent measurements. We select a physical equation from the AI-Feynman dataset that describes the relationship among the modulus of rigidity $G$, the modulus of elasticity $E$, and Poisson's ratio $\mu$ in material science for regression analysis. As shown in Fig. 2(d), our SFR reconstructs the equation closest to the ground truth with the same amount of data, demonstrating the applicability and potential of our model in classical symbolic regression tasks. For further experimental results concerning
other influencing factors, such as the number of operators and the dimensions, as well as additional regression analyses and visualizations, please refer to Appendix B.

Validation on symbolic regression on complex networks

To assess the effectiveness of our SFR in performing symbolic regression on complex networks, we expand the USE-F dataset to USE (Unseen Synthetic Equations), which now includes not only self and interaction components with various lengths, dimensions, and operators, but also five types of topological structures: grid, random, power law, small world, and community. This results in approximately 5,000 symbolic regression testing tasks on these networks, each containing around 10 to 200 coupled variables (nodes). Through extensive evaluations, we observe that our SFR performs well across multiple metrics, including $R^2$ and $Close_p$, when regressing governing equations on complex networks (see Fig. 3(a)). More importantly, its performance appears to be topology independent, achieved by integrating the local sampling strategy and decoupling interaction terms based on prior knowledge. Although the length of the equation and the dimension of the node states affect performance, it still shows consistent results across different topological types (see Fig. 3(b); more evaluations can be found in Fig. C9 in Appendix C). We also examine how the number of test data points impacts this task. The testing equations are categorized into two types based on their length: normal ($\le 30$) and complex ($> 30$). From Fig. 3(c), we can see that for normal equations, satisfactory regression results can be achieved when the product of observed nodes ($N$) and time steps ($T$) exceeds 40. In contrast, complex equations require 60 or more test data points. This demonstrates that our SFR is advantageous in terms of the number of test data points needed, making it suitable for real-world scenarios where data is often sparse and difficult to obtain.
In addition, we feed the data generated by equations with various characteristics into the data representation model $\mathrm{enc}$, obtain the data representation $h$, and project $h$ to low dimensions using t-SNE. We observe that equations with the same structure but different constants tend to cluster together, and those sharing similar characteristics are also more likely to be grouped in the same region (see Fig. 3(d)), illustrating the effective and distinctive representation power of our model and offering a clear semantic representation as a foundation for the subsequent decoders. We also visualize a scenario of symbolic regression on complex networks in Fig. 3(e), showing the ability of our model to regress high-precision equations. For large-scale networks (up to 5000 nodes), our model still

Fig. 3: Results of symbolic regression on complex networks. a. Comparison of the performance ($R^2$ and $Close_p$) of our model on USE with different topologies (Grid, Random, Power Law, Small World, and Community). b. Comparison of the performance of our model on equations with different lengths and dimensions. c. The impact of the number of test data points on equations with different
complexity. d. Data representations (denoted as $h$) generated by equations with various characteristics, visualized through t-SNE projection. e. A specific example of symbolic regression from USE, demonstrating the ability of our model to regress high-precision equations on complex networks through local observations.

has acceptable performance (see Fig. C10). Additional visualizations of data representations and scenario illustrations can be found in Appendix C.

Application on inferring interpretable network dynamics

As a typical application of symbolic regression on complex networks, inferring interpretable network dynamics mainly aims to learn the equations that govern network dynamics from observed data [5, 17, 51]. We apply our SFR to identify six representative network dynamics from physics, biochemistry, and ecology, i.e., gene regulatory (Gene) [52], heat diffusion (Heat) [53], epidemics (Epi) [54], biochemical dynamics (Bio) [55], the Lotka-Volterra model (LV) [56], and mutualistic interaction (Mutu) [57]. It is important to

Fig. 4: Results on inferring interpretable network dynamics. a. Comparison of the performance ($Close_p$, $R^2$) for reconstructing dynamics in six scenarios, including Epidemic (Epi), Biochemical (Bio), Lotka-Volterra (LV), Mutualistic interaction (Mutu), Heat diffusion (Heat), and Gene regulatory (Gene) dynamics. b. Comparison of the average execution time across all dynamics for various methods.
c. The MAPE (Mean Absolute Percentage Error) between the predictions produced by the discovered governing equations and the ground truth in the LV scenario with four communities, and a comparison of state prediction curves on selected nodes, where $T_R$ and $T_P$ are the termination times of IN-Domain and OUT-Domain, respectively. d. Comparison of governing equations inferred by various methods.

perform difference calculations on the observed data ($x_i$), e.g., a five-point finite difference method, to obtain $y_i := dx_i/dt$ before conducting the inference task. We also use a mixture of Gaussian clustering sampling and distribution scaling to alleviate the offset between the training data distribution and that of the application scenario. Comparing against state-of-the-art baselines for this task, i.e., the search-based GNN+GP [51] and the learning-based TPSINDy [17], our SFR exhibits the best performance on all metrics, regardless of topology type and whether the dynamics are linear or nonlinear (see Fig. 4(a); more evaluations can be found in Fig. D13 in Appendix D). More importantly, our SFR significantly enhances efficiency, recovering the most accurate governing equations in just 1.2 minutes (see Fig. 4(b)). This represents a 375% efficiency improvement over GNN+GP, which takes approximately 5.7 minutes, and a 333% improvement over TPSINDy, which takes about 5.2 minutes. This showcases our SFR's ability to accurately predict new equations, leveraging the benefits of pre-training on a large

Fig. 5: Results on heterogeneous epidemic transmission in communities. a. Four heterogeneous transmission equations
by assigning different recovery rates ($\delta$) in the epidemic equation, where $x_{i,0} := I_i$ denotes the probability of an individual $i$ being infected. b. Comparison of the state prediction curves generated by the governing equations inferred from observations at sampled nodes within each community. Our SFR successfully recovers the phenomena exhibited by the heterogeneous transmission equations.

number of data-equation pairs. We also present the comparative results of different methods for the LV dynamics on a community structure, as illustrated in Fig. 4(c). In terms of data fitting, both GNN+GP and our method significantly outperform TPSINDy. However, our SFR model yields the smallest prediction error and produces an outcome equation that is more accurate and closer to the actual equation (Fig. 4(d)). Other network dynamics scenarios lead to consistent conclusions (see Appendix D).

Application on inferring the transmission laws of epidemics

Infectious diseases pose a substantial burden on global economies and public health [58]. Understanding how transmission occurs allows us to effectively prevent and control large-scale epidemic outbreaks. Herein, we apply our SFR to infer heterogeneous epidemic transmission patterns and discover new laws that govern global epidemic outbreaks in the real world.

Heterogeneous epidemic transmission in communities. We construct four heterogeneous transmission equations by assigning different recovery rates ($\delta$) in the epidemic equation, i.e., $\frac{dI_i(t)}{dt} = -\delta I_i(t) + \beta \sum_{j=1}^{N} A_{ij} S_i(t) I_j(t)$, where $S_i(t) = 1 - I_i(t)$ represents the probability of an individual $i$ being susceptible, while $I_i(t)$ describes the probability of an individual being infected with an epidemic [54]. Specifically, we set the infection rate ($\beta$) to 1 and choose $\delta$ from $\{0.5, 2, 5, 10\}$, and then simulate based on these equations in each community, as shown in Fig. 5(a).
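As a concrete illustration, these epidemic dynamics can be simulated with simple Euler integration (a minimal sketch; the network, time step, initial condition, and function names are our illustrative choices, not the paper's simulation code):

```python
import numpy as np

def simulate_epidemic(A, delta, beta=1.0, I0=None, dt=0.01, steps=1000):
    """Euler-integrate dI_i/dt = -delta*I_i + beta * sum_j A_ij * (1 - I_i) * I_j."""
    N = A.shape[0]
    I = np.full(N, 0.01) if I0 is None else np.asarray(I0, float).copy()
    traj = [I.copy()]
    for _ in range(steps):
        dI = -delta * I + beta * (1.0 - I) * (A @ I)
        I = np.clip(I + dt * dI, 0.0, 1.0)  # probabilities stay in [0, 1]
        traj.append(I.copy())
    return np.array(traj)

# A tiny fully connected community of 5 nodes (illustrative topology).
A = np.ones((5, 5)) - np.eye(5)
high_R0 = simulate_epidemic(A, delta=0.5)  # R0 = beta/delta = 2 -> outbreak
low_R0 = simulate_epidemic(A, delta=10.0)  # R0 = 0.1 -> dies out
print(high_R0[-1].mean(), low_R0[-1].mean())
```

Running the two settings side by side reproduces the qualitative behavior described next: a low recovery rate sustains a large infected fraction, while a high recovery rate drives infections toward zero.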
The basic reproduction numbers ($R_0 = \beta/\delta$) for the four transmission equations are 2 (similar to Mpox [59]), 0.5 (comparable to MERS [60]), 0.2, and 0.1. An outbreak dies out if $R_0 < 1$, whereas an outbreak occurs if $R_0 > 1$. Fig. 5(b) clearly illustrates a significant increase in the number of infections in community 1. In contrast, communities 2 and 3 establish regional transmission, while the disease has minimal impact on community 4. Thanks to the introduction of local sampling strategies, our SFR successfully recovers the phenomena exhibited by the heterogeneous transmission equations, whereas TPSINDy cannot. The specific discovered equations can be found in Appendix E.

Real-world global epidemic transmission. We collect daily global spreading data on H1N1 [61], SARS [61], and COVID-19 [62], and use the worldwide airline network retrieved from OpenFlights [63] as a directed, weighted empirical topology to build an empirical system of real-world global epidemic transmission, as shown in Fig. 6(a). Only the early data, prior to government interventions (e.g., the first 45 days), are taken into account to preserve

Fig. 6: Results of inferring the transmission laws of real-world global epidemic outbreaks. a. The worldwide airline network, only
displaying countries or regions with populations over 50 million. Node size indicates population, while edge width reflects route flow. b. Comparison of the time spent on discovering transmission laws. c. Comparison of the transmission laws discovered by TPSINDy and ours. d-f. Comparison of the number of cases over time in various countries or regions generated by TPSINDy and ours on H1N1 (d), COVID-19 (e), and SARS (f). The embedded subplots show the comparison of various standardized evaluation indicators, with higher values being preferable.

the diseases' intrinsic spread dynamics. We use the original infection data for H1N1 across all countries or regions to discover the transmission law, which is illustrated in Fig. 6(c). The homogeneous equation version indicates that the transmission equations are the same across all countries or regions, including their constants. In contrast, heterogeneous equations mean that while the skeleton of the transmission equations remains consistent, the constants differ based on the specific characteristics of each region. In terms of execution efficiency, our model achieves an improvement of 237 to 270 percent, discovering the transmission law in less than half a minute (see Fig. 6(b)). In terms of equation form, the coupling term in the equation identified by TPSINDy is bounded, i.e., $\frac{b}{1+\exp(-(x_{j,0}-x_{i,0}))}$. When the difference in the number of infections between neighbors and oneself exceeds 5, which often occurs, this term implies that the neighbors either have no impact at all or exert a fixed impact given by the constant $b$, which restricts the influence of neighbors. In contrast, the coupling term in our discovered equation is unbounded, i.e., $\frac{c\,x_{i,0} + d\,x_{j,0} + e + 1}{f\,x_{j,0} + g}$, which more intuitively reflects the influence of neighbors and leads to more accurate fitting results, regardless of whether the cases are homogeneous or heterogeneous (see Fig. 6(d)).
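The contrast between the two coupling forms can be checked numerically (a sketch with illustrative constants of our own choosing, not the fitted values from the paper):

```python
import numpy as np

def coupling_tpsindy(xi, xj, b=1.0):
    """Sigmoid coupling: saturates at b as the gap xj - xi grows."""
    return b / (1.0 + np.exp(-(xj - xi)))

def coupling_ours(xi, xj, c=0.1, d=0.2, e=0.0, f=0.01, g=1.0):
    """Rational coupling: keeps responding to larger gaps over the relevant range."""
    return (c * xi + d * xj + e + 1.0) / (f * xj + g)

xi = 0.0
for gap in (5.0, 50.0, 500.0):
    # The sigmoid term is already pinned near b at gap = 5, while the
    # rational term continues to grow with the neighbor's case count.
    print(gap, coupling_tpsindy(xi, xi + gap), coupling_ours(xi, xi + gap))
```

With these constants, the sigmoid output barely moves beyond a gap of 5, whereas the rational term keeps increasing, matching the argument that the bounded form caps the influence of heavily infected neighbors.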
We apply the transmission law identified for H1N1 to the data from SARS and COVID-19, learning only the constants in the equations. This assumes that the transmission patterns of these various epidemics are similar, while acknowledging that the disease transmission characteristics (constants) differ. As shown in Fig. 6(e-f), the equation we discovered demonstrates the most effective results, indicating that we have uncovered a general law of transmission for global epidemic outbreaks, particularly one that describes the three epidemics with greater precision. More comparison results and the discovered equations for all countries or regions can be found in Appendix E.3.

Discussion

We presented a computational tool for accurately and efficiently discovering governing equations from observations on complex networks. Beginning with generating a comprehensive dataset that encompasses a wide range of simulated network observation environments, we introduced a local sampling strategy to decouple global topological features and physical priors to simplify the mathematical form, and designed a set-to-sequence model with dual branches to establish a connection between observed data and the corresponding equations. By pre-training the symbolic foundation regressor, we have extended the power of pre-trained models to symbolic regression on complex networks. The advantage of pre-training allows our model to generalize across various new scenarios without extensive searching or learning, significantly reducing computational costs while maintaining
the accuracy of the recovered equations. By comparing our SFR with state-of-the-art techniques in different scenarios, including non-network symbolic regression, symbolic regression on networks, and varied types of network dynamics, the results demonstrated that our model reconstructs the most accurate equations with the highest efficiency. Furthermore, we uncovered new transmission laws that align more closely with data from three real global epidemic outbreaks. In summary, our work addresses the challenges posed by complex phenomena that are difficult to explain and involve numerous correlated variables, thereby facilitating the exploration and discovery of natural science laws.

Although the effectiveness and applicability of our model have been thoroughly validated, we still face more challenging scenarios in the real world, such as systems with high-order interactions, non-deterministic systems, partial differential systems, and non-autonomous systems. Fortunately, the fundamental characteristics of our model provide significant potential for various applications, such as advanced training techniques on more data involving higher-order interactions. An exciting development for this model is to combine it with Large Language Models (LLMs) that have access to a vast amount of common knowledge. This integration could enhance the model's output by producing scientifically meaningful equations. To explore this further, we have conducted a preliminary attempt at LLM fusion: injecting specific terms into the symbolic regression process to produce equations that better meet the desired requirements. These terms can be automatically extracted from historical equations using LLMs or generated through manual prompts, improving the interaction between the pre-trained model and users, thus increasing its controllability.
Preliminary results suggest that our proposed model can effectively leverage the knowledge contained in predefined terms during interactions with LLMs (more results can be found in Appendix F). This finding opens up opportunities for future exploration of integrating our model with LLMs. Additionally, knowledge representation is multimodal, encompassing sound, images, videos, and more. Exploring how to leverage multimodal data to improve the accuracy of our model will be a fascinating direction.

Method

In this section, we present a detailed introduction to the key components of the proposed symbolic foundation regressor (SFR), including the creation of the corpus, the construction of the model, and the setup of applications.

Creation of the corpus

Creating a collection of equations: The generation of equations in the corpus primarily relies on random expression trees. We can control the depth of the tree, the permitted operators and variables, and the probability of each operator's occurrence to produce the desired equations in a targeted manner. The specific generation steps are as follows:

1. Uniformly sample the number of binary operators $b$ within the range $[0, b_{max}]$ and the number of unary operators $u$ within the range $[0, u_{max}]$, where $b_{max}$ and $u_{max}$ represent the maximum allowed numbers of binary and unary operators, respectively.
2. Generate and sample an expression tree with $b$ non-leaf nodes, following the method outlined in [64].
3. For each non-leaf node, sample a binary operator from the occurrence probability distribution of the binary operators, i.e., $P_b$.
4. For each leaf node, sample a variable from $x_i$ or a constant $c$ within the range $[c_{min}, c_{max}]$.
5. Randomly select a node
whose subtree has a depth smaller than $d_{depth}$, insert a new parent node with a unary operator sampled from the occurrence probability distribution of the unary operators, i.e., $P_u$, and repeat this process $u$ times.
6. Convert the produced expression tree into a prefix expression to generate $f^{(self)}$.
7. Repeat steps 1 to 6 one additional time, replacing $x_i$ with $x_i$ or $x_j$ in step 4, to produce $f^{(inter)}$.
8. Combine $f^{(self)}$ and $f^{(inter)}$ to obtain the equation, i.e., $f^{(self)} + \sum A_{ij} f^{(inter)}$, where the topological structure $A_{ij}$ is assigned values when generating data.
9. Verify the rationality, effectiveness, and repeatability of the generated equation.

By repeating the above process, we can generate a large number of valid governing equations on complex networks. The specific settings for the generation parameters and distributions can be found in Appendix A.

Synthesizing observational data: We first select a governing equation from the generated corpus and then sample a topological structure from the topological space. By combining the sampled equation with the topological structure, we can derive the complete governing equation on a complex network, i.e., $y_i = f^{(self)}(x_i) + \sum A_{ij} f^{(inter)}(x_i, x_j)$. Next, by sampling values for $x_i \in \mathbb{R}^d$ and $x_j \in \mathbb{R}^d$ from the domain space, i.e., a standard normal distribution, we can calculate the corresponding $y_i$, which gives the observed data $O = \{o_1, o_2, \dots, o_i, \dots\}$, where $o_i = \{x_i, \{x_j\}_{j \in \mathcal{N}}, y_i\}$. Note that when generating the training data, the number of sampled central nodes is limited to less than one-tenth of the total number of nodes, which helps prevent the model from learning features of the global topological structure. Additionally, the number of sampled data points is approximately 200, simulating the sparse scenarios commonly encountered in real-world situations where collecting large amounts of observations can be challenging.
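The data-synthesis step above can be sketched as follows (a simplified illustration with scalar node states, i.e., $d = 1$; the particular $f^{(self)}$, $f^{(inter)}$, and function names are our own assumptions, not equations from the released corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative equation components (our choice, for demonstration only).
def f_self(x):
    return x**2 - 0.5 * x

def f_inter(x, xj):
    return np.sin(xj - x)

def synthesize_observations(A, n_samples=200):
    """Build O = {(x_i, {x_j}, y_i)} with y_i = f_self(x_i) + sum_j A_ij * f_inter(x_i, x_j)."""
    N = A.shape[0]
    observations = []
    for _ in range(n_samples):
        i = rng.integers(N)               # pick a central node
        x = rng.standard_normal(N)        # node states sampled from N(0, 1)
        neighbors = np.nonzero(A[i])[0]
        y_i = f_self(x[i]) + sum(A[i, j] * f_inter(x[i], x[j]) for j in neighbors)
        observations.append((x[i], x[neighbors], y_i))
    return observations

# Random directed topology on 10 nodes.
A = (rng.random((10, 10)) < 0.3).astype(float)
np.fill_diagonal(A, 0.0)
O = synthesize_observations(A)
print(len(O))  # 200 (x_i, neighbor states, y_i) triples
```

Each triple pairs a node's state and its neighbors' states with the resulting $y_i$, which is exactly the set-valued input the encoder consumes; the paper's additional restriction on the number of central nodes is omitted here for brevity.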
Construction of the model

Our SFR primarily relies on transformers with a bifurcated structure, consisting of a data representation model ($\mathrm{enc}$), a self-branch model ($\mathrm{dec}_{self}$), and an interaction-branch model ($\mathrm{dec}_{inter}$).

Data representation model ($\mathrm{enc}$): The data representation model maps the sampled set of observed data to the representation space, i.e., $h = \mathrm{enc}(O)$, where $O$ is the set of observations and $h$ is the data representation. It is composed of five components: a float embedding layer ($\mathrm{emb}_{754}$), a self-state embedding layer ($\mathrm{emb}_{x_i}$), an interaction-state embedding layer ($\mathrm{emb}_{x_j}$), an output embedding layer ($\mathrm{emb}_{y_i}$), and a joint embedding layer ($\mathrm{emb}_{all}$). $\mathrm{emb}_{754}$ is grounded in the IEEE 754 Standard for Floating-Point Arithmetic [65], embedding the floating-point numbers in $O$ as their binary representations to mitigate gradient issues in calculations, particularly those arising from outliers in $y_i$, i.e., $O_{754} = \mathrm{emb}_{754}(O)$, where $O_{754} = \{(x^{754}_i, \{x^{754}_j\}_{j \in \mathcal{N}}, y^{754}_i), \dots\}$ is the binary representation of the observations. Next, we feed $O_{754}$ into the respective embedding layers $\mathrm{emb}_{x_i}$, $\mathrm{emb}_{x_j}$, and $\mathrm{emb}_{y_i}$ for encoding, and combine the results into $O_{emb} = \{(\mathrm{emb}_{x_i}(x^{754}_i), \mathrm{emb}_{x_j}(\{x^{754}_j\}_{j \in \mathcal{N}}), \mathrm{emb}_{y_i}(y^{754}_i)), \dots\}$. Note that, unlike the direct vector-to-vector mappings of $\mathrm{emb}_{x_i}$ and $\mathrm{emb}_{y_i}$, $\mathrm{emb}_{x_j}$ embeds a set as a vector, so we implement it using Deep Sets [66], a neural network architecture for effectively handling variable-sized set inputs. Then, the data representation is obtained through the joint embedding layer $\mathrm{emb}_{all}$, i.e., $h = \mathrm{emb}_{all}(O_{emb})$. Since $O_{emb}$ is
still a set, we apply Set Transformer [45] here to implement $\mathrm{emb}_{all}$, better capturing the contribution of each data point in the observed set while maintaining both permutation invariance and linear complexity in the attention computation.

Self and interaction branch models ($\mathrm{dec}_{self}$ and $\mathrm{dec}_{inter}$): After obtaining the data representation $h = \mathrm{enc}(O)$, $h$ enters $\mathrm{dec}_{self}$ and $\mathrm{dec}_{inter}$ simultaneously to obtain the functions $f^{(self)}$ and $f^{(inter)}$ as prefix expressions of the equation, generated in an autoregressive manner, i.e., $p(e^{(self)}_{k+1} \mid h, e^{(self)}_1, \dots, e^{(self)}_k) = \mathrm{dec}_{self}(h, e^{(self)}_1, \dots, e^{(self)}_k)$ and $p(e^{(inter)}_{k+1} \mid h, e^{(inter)}_1, \dots, e^{(inter)}_k) = \mathrm{dec}_{inter}(h, e^{(inter)}_1, \dots, e^{(inter)}_k)$, where $e_k$ represents the expression symbol generated at step $k$. For example, the target sequence corresponding to the formula $\sin(cx + x^2)$ is the token sequence [sin add mul c x pow x 2]. Using the extensive data-equation pairs generated earlier, we employ a cross-entropy loss to pre-train the model thoroughly. The detailed model architecture and specific hyper-parameters can be found in Appendix A.4.

Setup of applications

Applying the pre-trained model to specific tasks involves three main steps: data pre-processing, generating equations via forward propagation, and equation post-processing.

Pre-processing: For large-scale observations, according to the central limit theorem [67], the sampled data follow a normal distribution, so we perform a distribution scaling transformation on $x_i$ to match the training distribution, i.e., $\hat{x}_i = (x_i - \mu)/\sigma$, where $\mu$ and $\sigma$ are the statistical mean and standard deviation of the data. For the identification of differential equations, since the observations contain only $\{x_i, \{x_j\}_{j \in \mathcal{N}}\}$, we need to perform difference calculations on the observations $O$ to obtain $\frac{dx_i}{dt}$ as $y_i$.
Specifically, we approximate the derivatives through a five-point finite difference method [68]: $\frac{dx_i}{dt} = \frac{x_i(t-2t_\delta) - 8x_i(t-t_\delta) + 8x_i(t+t_\delta) - x_i(t+2t_\delta)}{12 t_\delta}$, where $t_\delta$ represents the time interval. Then, we cluster the sampled data $x_i$ and sample from each cluster to remove correlations, perform normal-distribution sampling to obtain $x_i$, and finally apply the distribution scaling transformation.

Generating equations via forward propagation: The pre-processed data is fed into the pre-trained SFR for equation regression. We use beam search during decoding, which explores a wider range of candidate prefix expression sequences $e$ than greedy decoding. We generally set the beam size ($N_{beam}$) to 10.

Post-processing: After generating the $N_{beam}$ output equations, we employ optimization algorithms such as BFGS [46] to refine the constants within the equations and identify the most accurate one. This process enhances the accuracy of the equation regression. To finalize the equations, we apply an inverse distribution scaling transformation to ensure they are correctly expressed. Assuming the sampled data follow $x_i \sim N(\mu, \sigma)$ and the scaled data follow $x'_i \sim N(0, 1)$, the transformation is: $f^{(self)}(x'_i) = f^{(self)}\left(\frac{x_i-\mu}{\sigma}\right)$, $f^{(inter)}(x'_i, x'_j) = f^{(inter)}\left(\frac{x_i-\mu}{\sigma}, \frac{x_j-\mu}{\sigma}\right)$. In addition, we can impose specific limitations during the beam search process to arrive at the correct form of the equation. These limitations can be based on domain knowledge from experts or large language models. For example, if it is known that "the conduction equation of a certain force should
include a cosine function term, such as $\cos(x)$", this term can be incorporated into the decoding search process as a token. Methods such as constant and formal simplification are used to create equations that offer more scientific significance while ensuring accuracy. More detailed pre-processing and post-processing methods can be found in Appendix A.5.

Performance measures

The performance measures used to evaluate the methods in this work are as follows:

$R^2$, the coefficient of determination, evaluates the goodness of fit of the regression model, with range $(-\infty, 1]$; the closer it is to 1, the better the performance. It is calculated as $R^2 = 1 - \frac{\sum_{i=1}^{N}(x_i(t) - \hat{x}_i(t))^2}{\sum_{i=1}^{N}(x_i(t) - \bar{x}_i(t))^2}$, where $x_i(t)$ and $\hat{x}_i(t)$ represent the ground truth and the prediction for node $i$ at time $t$, and $\bar{x}_i(t)$ is the average of $x_i(t)$ over the $N$ nodes.

$Close_p$ measures the accuracy of the regressed equations by the percentage of sampling points that satisfy a relative error precision: $Close_p = \frac{1}{N}\sum_{i=1}^{N} \mathbb{I}\left(\left|\frac{x_i(t) - \hat{x}_i(t)}{x_i(t)}\right| \le p\right)$, where $p$ is the relative error precision (0.001, 0.01, or 0.1) and $\mathbb{I} = 1$ when the relative error between the ground truth and the prediction is at most $p$, and 0 otherwise.

MAPE (Mean Absolute Percentage Error) and MAE (Mean Absolute Error) evaluate the error between the ground truth and the prediction, with range $[0, \infty)$; the smaller the value, the more accurate the result. They are calculated as $\mathrm{MAPE} = \frac{1}{N}\left(\sum_{i=1}^{N}\left|\frac{x_i(t) - \hat{x}_i(t)}{x_i(t)}\right|\right) \times 100\%$ and $\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|x_i(t) - \hat{x}_i(t)\right|$.

References

[1] Shan, X., Hu, B., Chen, X. & Cai, R.-G. An interference-based method for the detection of strongly lensed gravitational waves. Nature Astronomy 1–9 (2025).
[2] Wang, H. et al.
Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023).
[3] Gao, T.-T. & Yan, G. Data-driven inference of complex system dynamics: A mini-review. Europhysics Letters 142, 11001 (2023).
[4] Makke, N. & Chawla, S. Interpretable scientific discovery with symbolic regression: a review. Artificial Intelligence Review 57, 2 (2024).
[5] Lemos, P., Jeffrey, N., Cranmer, M., Ho, S. & Battaglia, P. Rediscovering orbital mechanics with machine learning. Machine Learning: Science and Technology 4, 045002 (2023).
[6] Holland, J. H. Genetic algorithms. Scientific American 267, 66–73 (1992).
[7] Dubčáková, R. Eureqa: software review (2011).
[8] Richter, S. gplearn: Genetic programming in python. https://github.com/trevorstephens/gplearn (2022). Version 0.4.2.
[9] Cranmer, M. Interpretable machine learning for science with pysr and symbolicregression.jl. arXiv preprint arXiv:2305.01582 (2023).
[10] Hruška, V., Furmanová, A. & Bednařík, M. Analytical formulae for design of one-dimensional sonic crystals with smooth geometry based on symbolic regression. Journal of Sound and Vibration 597, 118821 (2025).
[11] Davis, B. L. & Jin, Z. Discovery of a planar black hole mass scaling relation for spiral galaxies. The Astrophysical Journal Letters 956, L22 (2023).
[12] Mengel, T., Steffanic, P., Hughes, C., da Silva, A. C. O. & Nattrass,
C. Interpretable machine learning methods applied to jet background subtraction in heavy-ion collisions. Physical Review C 108, L021901 (2023).
[13] Brunton, S. Discovering governing equations from data by sparse identification of nonlinear dynamics, Vol. 2017, X49–004 (2017).
[14] Rudy, S. H., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Data-driven discovery of partial differential equations. Science Advances 3, e1602614 (2017).
[15] Kaptanoglu, A. A. et al. Pysindy: A comprehensive python package for robust sparse system identification. arXiv preprint arXiv:2111.08481 (2021).
[16] Chen, Z., Liu, Y. & Sun, H. Physics-informed learning of governing equations from scarce data. Nature Communications 12, 6136 (2021).
[17] Gao, T.-T. & Yan, G. Autonomous inference of complex network dynamics from incomplete and noisy data. Nature Computational Science 2, 160–168 (2022).
[18] Liu, Z. et al. Kan: Kolmogorov-arnold networks. arXiv preprint arXiv:2404.19756 (2024).
[19] Liu, Z., Ma, P., Wang, Y., Matusik, W. & Tegmark, M. Kan 2.0: Kolmogorov-arnold networks meet science. arXiv preprint arXiv:2408.10205 (2024).
[20] Udrescu, S.-M. & Tegmark, M. Ai feynman: A physics-inspired method for symbolic regression. Science Advances 6, eaay2631 (2020).
[21] Cornelio, C. et al. Combining data and theory for derivable scientific discovery with ai-descartes. Nature Communications 14, 1777 (2023).
[22] Petersen, B. K. et al. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. arXiv preprint arXiv:1912.04871 (2019).
[23] Mundhenk, T. et al. Symbolic regression via deep reinforcement learning enhanced genetic programming seeding. Advances in Neural Information Processing Systems 34, 24912–24923 (2021).
[24] Glatt, R. et al. Deep symbolic optimization for electric component sizing in fixed topology power converters. Tech. Rep., Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) (2021).
[25] Tenachi, W., Ibata, R.
& Diakogiannis, F. I. Deep symbolic regression for physics guided by units constraints: toward the automated discovery of physical laws. The Astrophysical Journal 959, 99 (2023). [26] Li, Z. et al. Bi-level identification of governing equations for nonlinear physical systems. Nature Computational Science (2025). [27] Sun, F., Liu, Y., Wang, J.-X. & Sun, H. Symbolic physics learner: Discovering governing equations via monte carlo tree search. arXiv preprint arXiv:2205.13134 (2022). [28] Xu, Y., Liu, Y. & Sun, H. Reinforcement symbolic regression machine (2024). 16 [29] Yu, Z., Ding, J., Li, Y. & Depeng, J. Symbolic regression via mdlformer-guided search: from minimizing prediction error to minimizing description length (2025). [30] Ouyang, L. et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems 35, 27730–27744 (2022). [31] Liu, A. et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437 (2024). [32] Madani, A. et al. Large language models generate functional protein sequences across diverse families. Nature biotechnology 41, 1099–1106 (2023). [33] Hayes, T. et al. Simulating 500 million years of evolution with a language model. Science eads0018 (2025). [34] Valipour, M., You, B., Panju, M. & Ghodsi, A. Symbolicgpt: A generative transformer model for symbolic regression. arXiv preprint arXiv:2106.14131 (2021). [35] Biggio, L., Bendinelli, T., Neitz, A., Lucchi, A. & Parascandolo, G. Neural symbolic regression that scales , 936–945 (Pmlr, 2021). [36] Vastl, M., Kulh´ anek, J., Kubal´ ık, J., Derner, E.
& Babuška, R. SymFormer: End-to-end symbolic regression using transformer-based architecture. IEEE Access (2024).
[37] Li, W. et al. Transformer-based model for symbolic regression via joint supervised learning (2022).
[38] d'Ascoli, S., Becker, S., Mathis, A., Schwaller, P. & Kilbertus, N. ODEFormer: Symbolic regression of dynamical systems with transformers. arXiv preprint arXiv:2310.05573 (2023).
[39] Cui, J., Wang, Q., Sun, B., Liu, J. & Yang, B. Learning continuous network emerging dynamics from scarce observations via data-adaptive stochastic processes. Science China Information Sciences 67, 1–16 (2024).
[40] Iten, R., Metger, T., Wilming, H., Del Rio, L. & Renner, R. Discovering physical concepts with neural networks. Physical Review Letters 124, 010508 (2020).
[41] Wittmann, M. K. et al. Basis functions for complex social decisions in dorsomedial frontal cortex. Nature 1–11 (2025).
[42] Barzel, B. & Barabási, A.-L. Universality in network dynamics. Nature Physics 9, 673–681 (2013).
[43] Liu, B., Luo, W., Li, G., Huang, J. & Yang, B. Do we need an encoder-decoder to model dynamical systems on networks? arXiv preprint arXiv:2305.12185 (2023).
[44] Barzel, B., Liu, Y.-Y. & Barabási, A.-L. Constructing minimal models for complex system dynamics. Nature Communications 6, 7186 (2015).
[45] Lee, J. et al. Set Transformer: A framework for attention-based permutation-invariant neural networks, 3744–3753 (PMLR, 2019).
[46] Nocedal, J. & Wright, S. J. Numerical Optimization 2nd edn (Springer, New York, 2006).
[47] Feynman, R. P. Feynman Lectures on Computation (CRC Press, 2018).
[48] Cranmer, M. PySR: Fast & parallelized symbolic regression in Python/Julia. https://github.com/MilesCranmer/PySR (2025).
[49] Cranmer, M. Discovering symbolic models from deep learning with inductive biases. arXiv preprint arXiv:2006.11287 (2020).
[50] de Silva, B. M. et al. PySINDy: A Python package for the sparse identification of nonlinear dynamical systems from data. Journal of Open Source Software 5, 2104 (2020).
[51] Cranmer, M. et al. Discovering symbolic models from deep learning with inductive biases. Advances in Neural Information Processing Systems 33, 17429–17442 (2020).
[52] Mazur, J., Ritter, D., Reinelt, G. & Kaderali, L. Reconstructing nonlinear dynamic models of gene regulation using stochastic sampling. BMC Bioinformatics 10, 1–12 (2009).
[53] Zang, C. & Wang, F. Neural dynamics on complex networks, 892–902 (2020).
[54] Pastor-Satorras, R., Castellano, C., Van Mieghem, P. & Vespignani, A. Epidemic processes in complex networks. Reviews of Modern Physics 87, 925–979 (2015).
[55] Voit, E. O. Computational Analysis of Biochemical Systems: A Practical Guide for Biochemists and Molecular Biologists (Cambridge University Press, 2000).
[56] MacArthur, R. Species packing and competitive equilibrium for many species. Theoretical Population Biology 1, 1–11 (1970).
[57] Gao, J., Barzel, B. & Barabási, A.-L. Universal resilience patterns in complex networks. Nature 530, 307–312 (2016).
[58] Msemburi, W. et al. The WHO estimates of excess mortality associated with the COVID-19 pandemic. Nature 613, 130–137 (2023).
[59] Grant, R., Nguyen, L.-B. L. & Breban, R. Modelling human-to-human transmission of monkeypox. Bulletin of the World Health Organization 98, 638 (2020).
[60] Kucharski, A. & Althaus, C. The role of superspreading in Middle East respiratory syndrome coronavirus (MERS-CoV) transmission. Eurosurveillance 20, pii–21167 (2015).
[61] Nunes, L. A brief comparative study of
epidemics. [EB/OL] (2020). https://www.kaggle.com/code/lnunes/a-brief-comparative-study-of-epidemics Accessed April 1, 2023.
[62] Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases 20, 533–534 (2020).
[63] OpenFlights. OpenFlights: Airport, airline, and route data. https://openflights.org/data.html (2024).
[64] Lample, G. & Charton, F. Deep learning for symbolic mathematics. arXiv preprint arXiv:1912.01412 (2019).
[65] IEEE. IEEE Standard for Floating-Point Arithmetic (2019). URL https://standards.ieee.org/standard/754-2019.html. IEEE Std 754-2019, revision of IEEE Std 754-2008.
[66] Zaheer, M. et al. Deep Sets (2017).
[67] Ross, S. M. Introduction to Probability Models 11th edn (Academic Press, 2014). Central Limit Theorem discussed in Chapter 7.
[68] Gautschi, W. Numerical Analysis (Springer Science & Business Media, 2011).
[69] Hagberg, A. A., Schult, D. A. & Swart, P. J. Exploring network structure, dynamics, and function using NetworkX, 11–15 (SciPy, 2008).

APPENDICES for Symbolic Foundation Regressor on Complex Networks

Contents
A More details on the method 21
  A.1 The specific distribution of equations in the corpus 21
  A.2 The generation parameters 21
  A.3 The check rules 22
  A.4 The specific model setting 22
  A.5 The specific application process 24
B More details on classical non-network symbolic regression 26
  B.1 AI-Feynman 26
  B.2 USE-F 26
  B.3 Details on experimental setting 26
  B.4 More experimental results and regression analyses 26
C More details on symbolic regression on complex networks 33
  C.1 USE 33
  C.2 Topological structures of complex networks 33
  C.3 Details on experimental setting 33
  C.4 More experimental results and regression analyses 34
D More details on inferring interpretable network dynamics 41
  D.1 Network dynamics 41
  D.2 Details on experimental setting 41
  D.3 More experimental results and regression analyses 41
E More details on inferring the transmission laws of epidemics 50
  E.1 Real-world global epidemic transmission 50
  E.2 Details on experimental setting 50
  E.3 More experimental results and regression analyses 50
F Preliminary attempts with LLM 59

Appendix A More details on the method

A.1 The specific distribution of equations in the corpus

We present the distribution of equation length and operators in the corpus (see Fig. A1). The length of the equations primarily ranges from 4 to 50 characters, which aligns with scientific intuition: equations that are too short or too long may lack practical significance in complex networks. Some operators can be replaced with combinations of other operators to reduce the number of regression categories and improve regression accuracy; for example, / can be replaced
with ×, pow, and the constant −1. Compared to other unary operators, the number of pow is relatively large because √x, x², and 1/x all need to be represented through pow. In terms of dimensions, the number of equations in each dimension is roughly the same.

Fig. A1: The distribution of equation length and operators in the corpus. a. The equation lengths primarily range from 4 to 50 characters, with examples provided. The highest proportion of equations is in the length range of 16–17, totaling approximately 962,000. b. The proportion of binary operators is higher, modeling the relationships between variables. Among unary operators, aside from the pow operator, which needs to represent multiple operations, the counts of the other unary operators are roughly the same.

A.2 The generation parameters

We also list the specific parameter settings for generating equations in Table A1, including the maximum dimension of each equation, denoted D, the maximum number of binary operators, b_max, and the maximum number of unary operators, u_max. Additionally, the range of constant values is given as [c_min, c_max]. The occurrence probabilities of binary and unary operators, P_b and P_u, are detailed in Tables A2 and A3. Although the occurrence probability of trigonometric function terms is relatively low, the model encounters a sufficient number of these terms during training due to the large number of operators in each equation and the total number of equations.
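Sampling operators under these generation parameters amounts to weighted categorical draws over the two operator sets. A minimal sketch in Python (the operator names, probabilities, and b_max/u_max bounds follow Tables A1–A3; the helper functions themselves are our illustration, not the authors' released code):

```python
import random

# Occurrence probabilities from Tables A2 and A3; the grouped trig entries
# are kept as groups here for brevity.
BINARY_OPS = {"add": 0.375, "mul": 0.375, "sub": 0.125, "div": 0.125}
UNARY_OPS = {"inv": 0.5, "pow2": 0.3, "exp": 0.1,
             "sin/cos/tan": 0.04, "arcsin/arccos/arctan": 0.04, "log": 0.02}

def sample_operator(op_probs, rng):
    """Draw one operator name in proportion to its occurrence probability."""
    names = list(op_probs)
    return rng.choices(names, weights=[op_probs[n] for n in names], k=1)[0]

def sample_skeleton_ops(rng, b_max=5, u_max=5):
    """Sample the operator multiset of one skeleton, bounded per Table A1."""
    binary = [sample_operator(BINARY_OPS, rng)
              for _ in range(rng.randint(1, b_max))]
    unary = [sample_operator(UNARY_OPS, rng)
             for _ in range(rng.randint(0, u_max))]
    return binary, unary

rng = random.Random(0)
binary, unary = sample_skeleton_ops(rng)
```

The low trig probabilities still yield many trig terms overall because each corpus equation draws several operators and the corpus contains millions of equations.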
Table A1: Parameters in the corpus construction setting.
  Parameter:  D   b_max  u_max  c_min  c_max
  Value:      3   5      5      −20    20

Table A2: The occurrence probabilities of binary operators, P_b.
  Binary operator:  add    mul    sub    div
  Probability:      0.375  0.375  0.125  0.125

Table A3: The occurrence probabilities of unary operators, P_u.
  Unary operator:  inv  pow2  exp  sin, cos, tan  arcsin, arccos, arctan  log
  Probability:     0.5  0.3   0.1  0.04           0.04                    0.02

A.3 The check rules

After generating equations, we apply a set of rules to ensure their applicability. First, we assess the rationality of equation skeletons. This includes checking whether the length, dimensions, and number of operators exceed acceptable limits, identifying any duplicates, and ensuring there are no nested trigonometric or exponential terms. Additionally, we verify that the interaction equation f^(inter) includes x_j; importantly, f^(self) is allowed to be empty. Next, we evaluate the validity of the equations. When specific coefficients are introduced into an equation skeleton, we perform domain and value checks, along with checks for invalid conditions (such as division by zero), ensuring that the sampled data can be computed correctly within the model. It is worth noting that since random constant sampling occurs during training, some rules are enforced throughout this process, and these two checks are designed to maximize each equation's validity. The checks do not require extensive generation time: with code optimization and multi-threaded processing, we can generate millions of equations in just hours.

A.4 The specific model setting

We provide a more specific model architecture and data flow (see Fig. A2), and we list the specific roles of each module:
• emb754: emb754 is an encoding layer based on the IEEE
Standard for Floating-Point Arithmetic [65], converting the data {x_i, {x_j}_{j∈N}, y_i} into binary floating-point encodings {x⁷⁵⁴_i, {x⁷⁵⁴_j}_{j∈N}, y⁷⁵⁴_i} to avoid gradient problems during computation.

Fig. A2: The specific model architecture.

• emb_xi, emb_xj, emb_yi: {emb_xi, emb_xj, emb_yi} are data encoding layers consisting of Linear and LeakyReLU layers, which map the binary encodings to D_e-dimensional encodings. Their respective outputs are integrated together to form O_emb.
• ISAB: ISAB (Induced Set Attention Block) consists of MAB layers, the multi-head attention blocks of the Transformer. The elements of the set data O_emb attend to each other in the MAB, and higher-order interactions can be encoded by stacking multiple blocks. In addition, by introducing the inducing vectors I, O_emb can be projected onto a low-dimensional space through I, reducing the computational complexity (O(N_s²) → O(N_s N_i)).
• PMA: PMA (Pooling by Multihead Attention) is used to aggregate the ISAB encodings into N_o features.
• SAB: SAB (Set Attention Block) further models the relationships between the PMA outputs through attention.
• emb_token, emb_pos: emb_token and emb_pos perform token encoding and positional encoding on the equation F.
• MA, CMA, FF, FC: MA (Multihead Attention), CMA (Cross Multihead Attention), FF (Feed-Forward layers), and FC (Fully-Connected layers) are components of the Transformer decoder.

The parameter settings for each layer are outlined in Table A4. The model was trained on a hardware setup comprising 64 Intel(R) Xeon(R) Silver 4314 CPUs @ 2.40 GHz and an NVIDIA GeForce RTX 4090 GPU with 24 GB of memory. Pre-training lasted 20 epochs, used a batch size of B, and took approximately 16 days to complete.
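The bit-level view of a scalar that emb754 consumes can be reproduced with the standard library. A minimal sketch assuming 32-bit single precision, consistent with D_754 = 32 in Table A4 (the function name is ours, not the paper's code):

```python
import struct

def float_to_bits(x: float):
    """Return the 32 IEEE-754 single-precision bits of x
    (sign bit, then 8 exponent bits, then 23 mantissa bits)."""
    (packed,) = struct.unpack(">I", struct.pack(">f", x))
    return [(packed >> i) & 1 for i in reversed(range(32))]

bits = float_to_bits(-2.5)  # sign bit is set for a negative value
```

A learned embedding (the Linear + LeakyReLU stages of emb754/emb_xi) would then map this fixed-length 0/1 vector to a D_e-dimensional vector, sidestepping the huge dynamic range of raw floating-point inputs.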
Table A4: The settings of the parameters.
  Parameter:  B   D_max  D_754  D_e, D_s, D_f  D_l, D_out  N_d  N_s  N_i  N_o  N_SAB  N_ISAB
  Value:      16  3      32     512            50          6    200  50   20   2      3

A.5 The specific application process

We provide the specific process in Algorithm 1. Data from an unknown scene must be processed before being fed into the model. First, we sample N nodes based on their in-degree, since nodes with higher in-degree may contain more information (line 2). Afterwards, Gaussian-mixture clustering (clusters = N_cluster) is applied to the data on each node, and T_cluster data points are sampled from each class to eliminate strong correlations between data points (lines 3–9). Then, we calculate the mean μ and variance σ of the data, perform normal-distribution sampling, and for each node sample T data points to obtain N × T data points (lines 10–14). Finally, we standardize the data X and feed X′ into the model to fit f′(X′). We then apply an inverse transformation to the fitted equation f′(X′) to obtain the final result f(X) (lines 15–18). Some post-processing methods can be applied to the output equation (line 19).

Algorithm 1 Application process
Require: X_sample, N, T, N_cluster, T_cluster
1: X_sample sampled from network dynamics
2: Select N nodes based on the in-degree of nodes
3: for i = 1 to N do
4:   clusters ← GaussianMixture(X_sample_i ∈ R^(T_sample × D))
5:   for j = 1 to N_cluster do
6:     Randomly select T_cluster sampling points X_sample_i ∈ R^(T_cluster × D)
7:   end for
8: end for
9: Integrate the clustered data to obtain X_cluster ∈ R^(N × (T_cluster × N_cluster) × D)
10: Calculate the mean and variance: μ, σ ← X_cluster
11: Perform normal-distribution N(μ, σ) sampling on X_cluster to obtain X
12: for i = 1 to N do
13:   Select T sampling points X_i ∈ R^(T × D)
14: end for
15: Distribution transformation: X′ ← (X − μ)/σ, so that X′ ∼ N(0, 1) when X ∼ N(μ, σ)
16: Fit equations by beam search with beam size N_beam: f′(X′) ← Model(X′)
17: BFGS constant optimization
18: Inverse transformation: f(X) = f′((X − μ)/σ)
19: Optional post-processing operations
20: Obtain the final result f(X)

We list some post-processing methods for equation optimization.
• Simplifying coefficients: Coefficients that are smaller than a threshold C_s and have almost no impact on the accuracy of the equation are simplified, together with their terms, to 0; for example, y = 0.0000134 x_{i,0} + 3.84 x_{i,1} → y = 3.84 x_{i,1}. We generally set C_s to 1e−4.
• Simplifying the form: Equations with complex forms are simplified overall to make them more concise.
• Combining with a genetic algorithm: For precise but complex equations, we use the equation as the starting point of a genetic algorithm, or directly optimize the equation form through the genetic algorithm, to make it more researchable.
• Combining with an LLM: Equations are optimized using domain knowledge, as detailed in Appendix F.

Appendix B More details on classical non-network symbolic regression

B.1 AI-Feynman

AI-Feynman [20] contains physical models drawn from common laws of classical physics [47], such as Newton's laws, thermodynamic equations, and electromagnetic equations, and is commonly used to evaluate the performance of symbolic regression methods. Each physical model in AI-Feynman corresponds to an equation, and we select 100 equations as the dataset.
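Two steps of the application process above — the standardize/inverse-transform pair (Algorithm 1, lines 15 and 18) and the small-coefficient simplification — can be sketched in a few lines of pure Python (the lambda standing in for the fitted model f′ is hypothetical):

```python
import statistics

def standardize(xs):
    """Algorithm 1, line 15: X' = (X - mu) / sigma."""
    mu, sigma = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs], mu, sigma

xs = [1.0, 2.0, 3.0, 4.0]
xs_std, mu, sigma = standardize(xs)

f_prime = lambda z: 2.0 * z + 1.0        # stand-in for Model(X') + BFGS
f = lambda x: f_prime((x - mu) / sigma)  # line 18: f(X) = f'((X - mu)/sigma)

def simplify_coeffs(coeffs, cs=1e-4):
    """Post-processing: drop terms whose |coefficient| is below C_s,
    e.g. 0.0000134*x_{i,0} + 3.84*x_{i,1} -> 3.84*x_{i,1}."""
    return {term: c for term, c in coeffs.items() if abs(c) >= cs}
```

For example, `simplify_coeffs({"x_i0": 0.0000134, "x_i1": 3.84})` keeps only the x_{i,1} term, matching the coefficient-simplification example above.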
These equations have 1–3 independent variables; for equations with more than 3 variables, we replace some of the independent variables with random constants. For example, F = G m₁ m₂ / ((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²) is replaced with y = x_{i,0} x_{i,1} x_{i,2} / ((c₅ − c₄)² + (c₃ − c₂)² + (c₁ − c₀)²), where the c_i are random constants. Note that all methods are evaluated on the replaced equations to ensure fairness.

B.2 USE-F

USE-F is a set of unseen synthetic equations containing only self parts f^(self), describing more diverse and complex scenarios with independent variables. Note that USE-F is constructed in the same way as the corpus but has no intersection with it. The dataset contains a total of 30,000 f^(self) equations with lengths of 0–60, dimensions of 1–3, and 0–40 operators. Compared to AI-Feynman, these equations are on average significantly longer and more complex.

B.3 Details on experimental setting

For the experiments on AI-Feynman (see Fig. 2(a)), the test set consists of all 100 equations in AI-Feynman. The numbers of input data points (IN-Domain) and unknown prediction points (OUT-Domain) are 200 and 1000, sampled from a normal distribution with random parameters, N(a, b). For the performance experiments on USE-F (see Fig. 2(a)), 1500 equations are sampled randomly for each evaluation, and for the experiments on equations with various lengths, dimensions, operators, and test points in USE-F (see Fig. 2(b,c)), the number of sampled equations is 200 (e.g., 200 equations with dimension 1, 200 equations with length between 5 and 10, etc.) each time. For the experiment on test points in USE-F, the number of
test points varies; for the other experiments, the numbers of input data points (IN-Domain) and unknown prediction points (OUT-Domain) are 200 and 1000, sampled from the distributions N(0, 1) and N(0, 10). The parameters for PySR, the library for SINDy, and the models for E2E and NeSymReS follow the configurations specified in their respective published papers or the latest versions on GitHub. All methods are evaluated on the same test data, ensuring fairness.

B.4 More experimental results and regression analyses

The number of operators in an equation and the dimension of an equation also affect model performance. As either increases, the performance of all methods tends to decline (see Fig. B3(a)). Our SFR outperforms most baselines, especially under the more stringent measurements. Although E2E achieves optimal results at some low-precision thresholds, its lack of a constant-optimization module results in a significant performance decrease at high precision. The diversity of the datasets limits the performance of SINDy and PySR, and fine-tuning parameters for each equation is time-consuming. The insufficient construction of the corpus and simple constant optimization might be the main reasons for the poor performance of NeSymReS. We also show the R² performance of all methods on equations with different lengths, dimensions, and numbers of operators (see Fig. B3(b)). The R² index places more emphasis on the degree of fit and does not require high accuracy; accordingly, our SFR achieves the best results on most indicators, while E2E is better than ours on some.

Fig. B3: a. The effect of the dimension and the number of operators on classical non-network symbolic regression; the precision gradually increases up to Close_0.001. b. The effect of the length, dimension, and number of operators on classical non-network symbolic regression (R²).

We provide more visualized equation regression results. In Figs. B4–B5 and Tables B5–B7, we present the regression results for the probability density function of a normal distribution (one dimension, 1D), friction force (2D), elastic potential energy (2D), angular distribution in proton decay (3D), and magnetic moment (3D). For the equations with 3 dimensions, counting y it is difficult to represent 4 variables in figures, so we only present the forms of the equations. Most methods can regress to the correct equation, and we use "−" to represent failed regression equations (equations for which the regression fails or is meaningless).

Fig. B4: Results of symbolic regression on equations in AI-Feynman with 1 dimension (equation curves); the black line represents the true equation, while a horizontal line indicates a failed regression equation.

Table B5: Results of symbolic regression on equations in AI-Feynman with 1 dimension (equation form), for the probability density function.
  True:      f = (1/√(2π)) e^(−θ²/2)
  Ours:      f = 0.399 e^(−θ²/2)
  E2E:       −
  NeSymRes:  f = 0.398 e^(−θ²/2)
  SINDy:     −
  PySR:      f = 0.054 e^(cos(0.4θ))

Fig. B5: Results of symbolic regression on
equations in AI-Feynman with 2 dimensions (equation surfaces); the gray surface on the left is the true equation.

Table B6: Results of symbolic regression on equations in AI-Feynman with 2 dimensions (equation form).
  Method    | Friction force   | Elastic potential energy
  True      | F = μN           | U = (1/2) k x²
  Ours      | F = μN           | U = (1/2) k x²
  E2E       | F = μN − 0.008N  | U = (1/2) k x²
  NeSymRes  | F = μN           | U = (1/2) k x²
  SINDy     | F = μN           | U = 0.22k² + 0.25kx + 12.17x + 0.175x² − 0.54k − 6.36
  PySR      | F = μN           | U = (1/2) k x²

Table B7: Results of symbolic regression on equations in AI-Feynman with 3 dimensions (equation form).
  Method    | Angular distribution in proton decay               | Magnetic moment
  True      | f = β(1 + α cos θ)                                 | μ = qh/(4πm)
  Ours      | f = β(1 + α cos θ)                                 | μ = 0.079 qh/m
  E2E       | f = 0.24β((4.4α + 0.19) cos(1.1θ + 0.65) + 4.02)   | μ = 0.39 qh/m
  NeSymRes  | f = β(1.57 − 0.24α) cos(0.3θ)                      | μ = 0.079 qh/m
  SINDy     | −                                                  | −
  PySR      | f = α/θ                                            | μ = 1/sin(0.27q)

Fig. B6: Results of symbolic regression on equations in USE-F with 1 dimension (equation curves); the gray area represents the IN-Domain task, and the white area the OUT-Domain task.

Table B8: Results of symbolic regression on equations in USE-F with 1 dimension (equation form).
  Ours      | y = x_{i,0}(0.601 x_{i,0}³ + 0.331 x_{i,0} + 7.87) | y = 3 x_{i,0} − 1.744 sin(x_{i,0} − 0.013)
  E2E       | y = x_{i,0}(0.637 x_{i,0}³ + 7.996)                | y = (1.303 x_{i,0} + 0.026)(1 − arctan(0.02 x_{i,0} − 0.215 x_{i,0}²))
  NeSymRes  | y = x_{i,0}³ + 6.896 x_{i,0}                       | y = 0.643 x_{i,0}|x_{i,0}| + 0.908 x_{i,0} + 0.015
  SINDy     | y = 3.657 x_{i,0}² + 7.613 x_{i,0} − 1.638         | y = 2.048 x_{i,0}
  PySR      | y = e^(x_{i,0} + 1.441) − 5.016                    | y = x_{i,0} e^(cos(1/x_{i,0}))

In Figs. B6–B7 and Tables B8–B10, we present the results for some representative equations in USE-F, including complex polynomials, fractions, trigonometric functions, and exponential functions. Each subfigure shows a comparison between the regressed equation and the true equation (black); the regressed equation from our method not only approaches the ground truth as a curve but also has a form more consistent with the true equation. The gray area represents IN-Domain tasks, while the other areas represent OUT-Domain tasks. As the figures show, all methods achieve good results on the IN-Domain tasks, but when faced with unknown OUT-Domain data, the equations regressed by our method are more in line with the real equations, indicating that our results have a stronger ability to predict unknown data and the potential to discover unknown physical laws.

Fig. B7: Results of symbolic regression on equations in USE-F with 2 dimensions (equation surfaces).

Table B9: Results of symbolic regression on equations in USE-F with 2 dimensions (equation form).
  Ours      | y = (0.727 x_{i,0} + 0.236 e^x)(x_{i,0} + 8.088 x_{i,1})
  E2E       | y = ((5.897 − 0.214 x_{i,0})/(33.378 − 1.589 x_{i,0})) (0.654 − 38√(1 − 0.746/(0.052 x_{i,0} − 0.186))) (−0.07 x_{i,0} − 0.018)(0.944 x_{i,0} + 7.365 x_{i,1} + 0.09)  (wrong domain)
  NeSymRes  | y = (0.686 x_{i,0} + 0.132)(x_{i,0} + 2 x_{i,0} x_{i,1} + 12.13 x_{i,1})
  SINDy     | y = 2.137 x_{i,0}² + 10.329 x_{i,0} x_{i,1} + 1.285 x_{i,0} − 1.071 x_{i,1}² + 3.8 x_{i,1} − 0.112
  PySR      | y = x_{i,0} x_{i,1}(x_{i,0} + 8.921)

Table B10: Results of symbolic regression on equations in USE-F with 2 dimensions (equation form).
  Ours      | y = x_{i,1} − 0.111 x_{i,0} − 8.233/x_{i,0}
  E2E       | y = 0.98 x_{i,1} − 0.144 x_{i,0} − 8.268/x_{i,0}
  NeSymRes  | y = −0.269 x_{i,0} − 8.243/x_{i,0}
  SINDy     | y = −31.203 x_{i,0}² + 10.751 x_{i,0} x_{i,1} − 4.128 x_{i,0} − 21.932 x_{i,1}² − 21.67 x_{i,1} + 89.118
  PySR      | y = x_{i,1} − 8.229/x_{i,0}

Table B11: Results of symbolic regression on equations in USE-F with 3 dimensions (equation form).
  Method    | Equation | R²
  True      | y = 0.53 x_{i,0} + x_{i,1} + 2.32 x_{i,2} + 10.34 tan(x_{i,2} + 1.28) | /
  Ours      | y = 0.558 x_{i,0} + x_{i,1} + x_{i,2} + 10.34 tan(x_{i,2} + 1.28) | 0.996
  E2E       | y = 0.558 x_{i,0} + 0.87 x_{i,1} + 11 tan(1.105 x_{i,2} + 1.288) + 0.117 | 0.994
  NeSymRes  | y = 0.003 tan(x_{i,0} + x_{i,1} − 0.262)/x_{i,0} − 0.116 | > −1
  SINDy     | − | > −1
  PySR      | y = 1/(x_{i,2} − 0.297) | > −1

Table B12: Results of symbolic regression on equations in USE-F with 3 dimensions (equation form).
  True      | y = x_{i,0}(2.097 x_{i,0}(x_{i,2} − 8 x_{i,1}) − 8.539) − (1.744 x_{i,0} + 7.867 x_{i,2})(8 x_{i,1} − x_{i,2})/(8 x_{i,1} − x_{i,2}) | /
  Ours      | y = x_{i,0}(1.867 x_{i,0}(0.1227 x_{i,2} − x_{i,1}) − 1) − 7.979 x_{i,2}(x_{i,1} − 0.123 x_{i,2})/(x_{i,1} − 0.123 x_{i,2}) | 0.971
  E2E       | y = −7.808 x_{i,2} + (0.16 − 1.082 x_{i,0})(1.047 x_{i,0} + (0.005 x_{i,2} + 0.985)(1.104 x_{i,0} + 0.744 − 6.09/(10.318 x_{i,1} − 1.649 x_{i,2} + 0.34)) + 0.947) − 0.302 | 0.364
  NeSymRes  | y = −0.449 x_{i,0} tan(1.179 x_{i,0} − x_{i,1} + 0.547) − x_{i,2}³ | 0.379
  SINDy     | y = −2.085 x_{i,0}² + 1.332 x_{i,0} x_{i,1} − 1.547 x_{i,0} − 0.116 x_{i,2}² − 7.993 x_{i,2} + 0.207 | 0.604
  PySR      | y = 1/sin(1/x_{i,1}) | > −1

We also provide some ultra-long equations with 3 dimensions (see Tables B11 and B12), on which the performance of every method decreases to varying degrees. Our method cannot guarantee an equation form completely consistent with the true equation, but its quantitative R² result is still satisfactory compared to the other methods.

Appendix C More details on symbolic regression on complex networks

C.1 USE

USE is an expanded dataset of USE-F, adding 30,000 interaction equations f^(inter), the same quantity as the self equations f^(self) in USE-F. These self and interaction equations can be freely combined into {f^(self), f^(inter)}, with five different types of topologies, to generate equations of the form y_i = f^(self)(x_i) + Σ_{j=1}^{N} A_ij f^(inter)(x_i, x_j), modeling various complex networks.

C.2 Topological structures of complex networks

• Grid:
  – 1. Construct a grid structure with n × n = N.
  – 2. Each cell represents a network node and connects with its surrounding cells.
• Power Law is the Barabási–Albert network, characterized by a power-law degree distribution. The construction steps are as follows:
  – 1. Initialize a connected graph with n + 1 nodes (each node has at least one edge).
  – 2. Each time a new node is added, it connects with n edges to existing nodes, until the number of nodes reaches N. The connection probability of a new node depends on the degrees of the existing nodes.
• Small World is composed of a large number of nearby connections and randomly distributed weak long-range connections. The construction steps are as follows:
  – 1. Initialize a connected graph with N nodes and k neighbors for each node.
  – 2. Randomly reconnect edges with probability p.
• Community is a network structure with distinct connectivity both within and between communities. The construction steps are as follows:
  – 1. Initialize N nodes and divide them into [n_1, n_2, ..., n_k] communities.
  – 2. Nodes in the same community are connected with probability p_in, while nodes in different communities are connected with probability p_out.
• Random is the Erdős–Rényi network, characterized by randomness. The construction steps are as follows:
  – 1. Initialize N nodes.
  – 2. Connect each pair of nodes with probability p.

We use the NetworkX package [69] to generate the above topological structures, and we also present the generation settings and a connection example for each topology in Table C13.

C.3 Details on experimental setting

For the performance experiments on USE (see Fig. 3(a)), 5000 equations are sampled randomly for each evaluation, and for the experiments on equations with various lengths, dimensions, operators, and test points in USE (see Fig. 3(b)), the number of sampled equations
is 100 (such as 200 equations with dimension 1, 200 equations with length between 5-10, etc.) each time. Each equation will be paired with 5 randomly generated topologies (grid, power law, small world, community and random), and the topology generation rules are as described in Section C.2. For the experiment on test points on USE, the number of test data points varies, and for other experiments, the raw data is generated based on the distribution of N(0,1). The number of input data sampled from raw data (IN-Domain) are 200, which is the product of the number of sampled time slices 33 Table C13 : Topology setting Topology grid power law small world community random Aij Parameters N∈[10,200]n= 5 N∈[10,200]k= 5 p= 0.5 N∈[10,200]k= 4 pin= 0.25 pout= 0.01 N∈[10,200]p= 0.1 N∈[10,200] and the number of sampled nodes, and the number of sampled nodes does not exceed 20. Unknown prediction data (OUT-Domain) comes from raw data with the distribution of N(0,10), ranging from 2000 to 40000 in quantity (Each node on the topology randomly samples 200 time slices of data as unknown data). C.4 More experimental results and regression analyses We fully demonstrate the impact of length, dimension and the number of operator on IN and OUT-Domain tasks in Fig. C8. From the figures, it can be observed that the change in dimension has almost no impact on the performance of our model, while as the length of the equation and the number of operator increase, the performance gradually decreases. The satisfactory results can be obtained on equations with 0-30 length or 0-20 operator, The model achieved satisfactory results, benefits from the distribution of equations in the corpus, where the number of equations with lengths below 30 far exceeds those with lengths above 30. 
Although high-precision symbolic regression on ultra-long equations over complex networks is still a challenge, our method can still ensure the Close_0.1 and R^2 performance; considering that excessively long equations may not have practical significance on complex networks, such results are acceptable. Moreover, these results again demonstrate that the performance of the model is topology-independent.

Fig. C8: The effect of length, dimension and the number of operators on symbolic regression on complex networks (R^2, Close_0.1, Close_0.01, Close_0.001). [IN-Domain and OUT-Domain panels; plots omitted.]

Fig. C9: The distribution of representations of equations with different topologies and constants. a. Equations with the same skeleton are clustered together, and skeletons with similar structures are located in the same region. b. The same equation is clustered together regardless of topology. [Panels with exponential, logarithmic, periodic and polynomial regions; plots omitted.]

We also provide a more detailed representation projection, as shown in Fig. C9. First, we randomly select network equations with different structures and the same topology, and randomly sample the constants of each equation 10 times. Then, we feed the data from the equations into the model's representation layer and project the output into 2 dimensions through t-SNE. For clarity, we select 320 equation-data representations from 32 different structures for display, including periodic, exponential,
logarithmic and polynomial types. As shown in Fig. C9(a), even with different constants, equations with the same structure tend to cluster together, such as y = log(x_{i,0}^2) + log(3.76x_{i,0}^2) + Σ log(x_{i,0}^2) + log(x_{j,0}^2) and y = log(x_{i,0}^2) + log(0.97x_{i,0}^2) + Σ log(x_{i,0}^2) + log(x_{j,0}^2), and those with similar structural features are more likely to occur in the same region, such as y = 2x_{i,1} + c + x_{i,2} + Σ x_{j,1} + c + x_{i,2} + x_{i,1} and y = cx_{i,0} + x_{i,0}^2 + Σ (c + c/(x_{j,0} + x_{i,0})), proving that our model has interpretable and effective representations. In addition, we also select network equations and combine the 5 types of topology for each equation to perform representation projection. It is interesting that, for an equation, no matter how its topology changes, its representations gather together (see Fig. C9(b)), indicating that our local sampling strategy and decoupled interaction term are effective. We also conduct experiments on large-scale scenarios to demonstrate the superiority of our local topology sampling, as shown in Fig. C10. Even in a scenario with 5000 nodes (LV dynamics in Appendix D), our model can still ensure performance. Some specific visual examples of symbolic regression on complex networks are shown in Figs. C11-C12 and Tables C14-C16. We select some representative equations with 1 dimension, including polynomials, fractions, exponentials, trigonometric functions, etc., combined with a specific topology (100 nodes). Our method can accurately regress the equations, both on the IN-Domain and OUT-Domain data and in the form of the equations. For equations with 2 dimensions (see Fig. C12 and Table C15), we sample some nodes from the global topology of each equation to display the results, and we also provide the specific global topologies. Our method can also perform accurate symbolic regression.

Fig. C10: The effect of the scale of topology on symbolic regression performance. [Plot omitted.]

[Fig. C11 panels: a. Topo: Grid, b. Topo: Power Law, c. Topo: Small World; curves omitted.] Fig.
C11: Results of symbolic regression on equations in USE with 1 dimension (equation curve).

Table C14: Results of symbolic regression on equations in USE with 1 dimension (equation form).

Fig. C11(a) | True: y = (−11.133x_{i,0} − 1.285)/x_{i,0} + Σ A_ij 2x_{i,0}^2 x_{j,0}^2 | Ours: y = (−11.133x_{i,0} − 1.285)/x_{i,0} + Σ A_ij 2x_{i,0}^2 x_{j,0}^2
Fig. C11(b) | True: y = −2.521x_{i,0} − 4.707 log(x_{i,0}^2) + Σ A_ij (x_{j,0} + 2.771x_{i,0} + cos x_{j,0}) | Ours: y = −2.521x_{i,0} − 4.707 log(x_{i,0}^2) + Σ A_ij (x_{j,0} + 2.771x_{i,0} + cos x_{j,0})
Fig. C11(c) | True: y = 0.015x_{i,0}^2 + 0.972x_{i,0} − Σ A_ij (0.016x_{i,0} − 5.712x_{j,0} − 13.645e^{−2.22x_{i,0}}) | Ours: y = 0.015x_{i,0}^2 + 0.972x_{i,0} − Σ A_ij (0.016x_{i,0} − 5.712x_{j,0} − 13.645e^{−2.22x_{i,0}})

We also present some challenging network symbolic regression tasks, which involve equations with long structures and 3 independent variables, combined with a complex network topology. From Table C16, although it is not possible to directly and accurately regress highly complex equations, our method can still obtain an approximate structure and a high R^2 metric, which means that it is possible to obtain a more accurate equation form by further constant optimization or by incorporating the result as the initial population into a genetic algorithm.

Fig. C12: Results of symbolic regression on equations in USE with 2 dimensions. The comparison between the true distribution and the predicted distribution of the state of the selected nodes (orange) is shown on the right side. [Panels: a. Topo: Random, b. Topo: Community; plots omitted.]

Table C15: Results of symbolic regression on equations in USE with 2 dimensions (equation form).

Fig. C12(a) | True: y = (x_{i,1} + 1)/x_{i,1} + 4.48x_{i,0} + 5.68x_{i,0}^2 + Σ A_ij (4.28x_{i,1} + x_{i,1}/(0.243 + 0.08x_{i,1}) +
7.10x_{i,0}x_{j,0}) | Ours: y = (x_{i,1} + 1)/x_{i,1} + 4.48x_{i,0} + 5.68x_{i,0}^2 + Σ A_ij (4.28x_{i,1} + x_{i,1}/(0.243 + 0.08x_{i,1}) + 7.10x_{i,0}x_{j,0})
Fig. C12(b) | True: y = x_{i,0} + 0.075x_{i,1} + 0.982e^{x_{i,1}} + Σ A_ij (x_{j,0} + 0.575x_{i,0} + 1.458/x_{j,1}) | Ours: y = x_{i,0} + 0.075x_{i,1} + 0.982e^{x_{i,1}} + Σ A_ij (x_{j,0} + 0.575x_{i,0} + 1.458/x_{j,1})

Table C16: Results of symbolic regression on equations in USE with 3 dimensions (equation form).

Row 1 | True: y = x_{i,0}x_{i,1} + x_{i,1} + 2x_{i,2} + 4.44x_{i,0} + Σ A_ij (0.091 + x_{j,1} + 0.083x_{j,2} + 0.93x_{j,0} + 0.826/x_{j,0}) | Ours: y = x_{i,0}x_{i,1} + x_{i,1} + 1.987x_{i,2} + 4.43x_{i,0} + Σ A_ij (0.091 + x_{j,1} + 0.09x_{j,2} + x_{j,0} + 0.826/x_{j,0}) | R^2 > 0.9
Row 2 | True: y = x_{i,0} + 0.329/(5.069 − x_{i,1}) · (x_{i,2} + 0.598x_{i,0}x_{i,2} + 0.04x_{i,0}) + Σ A_ij (0.231x_{i,0} + x_{i,1} + 2.78x_{i,2} + x_{j,2}) | Ours: y = x_{i,0} + 0.038(x_{i,2} + 0.54x_{i,0}x_{i,2}) + Σ A_ij (0.231x_{i,0} + x_{i,1} + 2.724x_{i,2} + x_{j,2}) | R^2 > 0.9
Row 3 | True: y = (x_{i,1} + 1)/x_{i,2} + (1.793 + x_{i,1})^2 + 2x_{i,2} + 3.65e^{x_{i,2}} + Σ A_ij (x_{j,1} − 4.926/(x_{i,1}(x_{j,2} − 14.061)^2))(x_{i,1} + (x_{j,0} + 1)/x_{i,1}) | Ours: y = x_{i,1}^2 + 1/x_{i,2} + 11.8x_{i,1} + x_{i,2} − 0.132e^{3.711x_{i,2}} + Σ A_ij x_{j,1}(x_{i,1} + x_{i,2} + x_{j,0} + 26.634/x_{i,1}) | R^2 > 0.9

Appendix D More details on inferring interpretable network dynamics

D.1 Network dynamics

• Biochemical Dynamics (Bio) [55]: It describes the dynamic biochemical processes of PPI (Protein-Protein Interactions), and its equation form is: dx_i/dt = F_i − B_i x_i(t) + Σ_{j=1}^N A_ij x_i(t)x_j(t), where x_i(t) represents the concentration of protein i at time t, F_i represents the average influx rate of proteins and B_i represents the average degradation rate of proteins.
• Epidemic Dynamics (Epi) [54]: It describes the dynamic spread of infectious diseases, and its equation form is: dx_i/dt = −δ_i x_i(t) + Σ_{j=1}^N A_ij (1 − x_i(t))x_j(t), where x_i(t) represents the infection probability of node i and δ_i represents the rate of recovery.
• Gene Regulatory Dynamics (Gene) [52]: It describes the dynamic regulation of genes, and its equation form is: dx_i/dt = −B_i x_i(t)^f + Σ_{j=1}^N A_ij x_j(t)^h/(x_j(t)^h + 1), where x_i(t) represents the expression of gene i, B_i represents the decay rate and h represents the Hill coefficient.
• Heat Diffusion Dynamics (Heat) [53]: It describes dynamic heat diffusion, and its equation form is: dx_i/dt = Σ_{j=1}^N A_ij k_i (x_j(t) − x_i(t)), where x_i(t) represents the heat of node i and k_i represents the rate of heat change.
• Mutualistic Interaction Dynamics (Mutu) [57]: It describes the dynamic ecology of interactions between populations, and its equation form is: dx_i/dt = b_i + x_i(t)(1 − x_i(t)/k_i)(x_i(t)/c_i − 1) + Σ_{j=1}^N A_ij x_i(t)x_j(t)/(d_i + e_i x_i(t) + h_i x_j(t)), where x_i(t) represents the abundance of population i, b_i represents the migration probability, k_i represents the population growth rate and c_i represents the population decline threshold.
• Lotka-Volterra Model (LV) [56]: It describes the dynamic populations of species in competition, and its equation form is: dx_i/dt = x_i(t)(α_i − θ_i x_i(t)) − Σ_{j=1}^N A_ij x_i(t)x_j(t), where x_i(t) represents the population size of species i, and α_i and θ_i represent the growth parameters of species i.

D.2 Details on experimental setting

To ensure the reproducibility of our findings, we set the parameters of each network dynamics and display them in Table D17. The topology settings are as described in Section C.2. The number of nodes in each dynamic scenario under the various topologies is 100-500. Each node is set with an appropriate initial value x_i(0) to simulate its dynamic changes during the time period from t = 0 to T_R (IN-Domain) or T_P (OUT-Domain) with a step size t_δ. The data needs to be preprocessed before being fed into the model; we perform distribution sampling and distribution scaling on the data (see Method section). We sample 200 data points in [0, T_R] as the observations from the dynamics.

D.3 More experimental results and regression analyses

We present the performance of our model in dynamic scenarios under the other topologies (Small World, Community and Random), and our method has the best performance
(see Fig. D13). A billion-level corpus is key: compared to TPSINDy and GNN+GP, the larger learning space can effectively regress and find suitable dynamic equations. It is worth noting that, in order to compare against the optimal performance of the SOTA methods, their input data far exceeds 200 points, while we still only need 200. To compare the performance of the various methods in more detail, we also randomly sample some nodes for each dynamics, set initial values for each node, and calculate their trajectories based on the equations regressed by each model. Our method has the highest trajectory-fitting degree in all dynamics (see Fig. D14). However, due to its fixed base-library setting, the equations regressed by TPSINDy cannot even be integrated in some scenarios to obtain the correct trajectory. We also compare the trajectories of all nodes through the RMSE metric, which is presented at the bottom of each trajectory figure. As shown, the color band of our method is the shallowest and purest, indicating the high accuracy of our method. We normalize the MAPE in order to make the color bands easier to distinguish. The specific regression equations are listed in Table E24. Note that some equations may not be consistent with the ground truth, but from the results, they appear to be equivalent forms of the ground truth.

Table D17: Network dynamics setting

Dynamic scene | Parameter setting | x_i(0) | t_δ | T_R | T_P
Bio | F_i = 1, B_i = −1 | U(0,2) | 0.0001 | 0.1 | 0.5
Epi | δ_i = 1.0 | U(0,1) | 0.001 | 1 | 5
Gene | B_i = 1, f = 1, h = 2 | U(0,2) | 0.01 | 5 | 10
Heat | k_i = 1 | U(0,1) | 0.01 | 1 | 5
Mutu | b_i = 1, k_i = 5, c_i = 1, d_i = 5, e_i = 0.9, h_i = 0.1 | U(0,2) | 0.001 | 1 | 5
LV | α_i = 0.5, θ_i = 1.0 | U(0,5) | 0.0001 | 0.1 | 0.5

Fig. D13: Result of symbolic regression on network dynamics under small world, community and random topologies (Ours, TPSINDy and GNN+GP on Bio, Epi, Gene, Heat, Mutu and LV). [Plots omitted.]
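As a sanity check on the settings in Table D17, the observation data for a dynamics such as Heat can be regenerated with plain Euler integration. This is an illustrative sketch under our own naming (the paper does not specify its integrator), using the Table D17 defaults k_i = 1, x_i(0) ~ U(0, 1) and step size 0.01:

```python
import random

def simulate_heat(A, x0=None, k=1.0, t_delta=0.01, T=1.0, seed=0):
    """Euler-integrate the heat dynamics dx_i/dt = sum_j A_ij * k_i * (x_j(t) - x_i(t))."""
    rng = random.Random(seed)
    N = len(A)
    # Default initial condition per Table D17: x_i(0) ~ U(0, 1).
    x = list(x0) if x0 is not None else [rng.uniform(0.0, 1.0) for _ in range(N)]
    for _ in range(int(round(T / t_delta))):
        dx = [k * sum(A[i][j] * (x[j] - x[i]) for j in range(N)) for i in range(N)]
        x = [xi + t_delta * d for xi, d in zip(x, dx)]
    return x
```

On a connected undirected topology these dynamics drive every x_i toward the mean of the initial values, and for a symmetric A_ij the total Σ_i x_i is conserved, which gives two quick checks on any generated trajectory.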
Fig. D14: Result of symbolic regression on network dynamics under small world, community and random topologies. [Trajectory panels for Bio, Epi, Gene, Heat, Mutu and LV under the grid, power law, small world, random and community topologies; plots omitted.]

Table D18: Results of symbolic regression on interpretable network dynamics (Bio).
#Bio: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0}

Community | Ours: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.840x_{j,0}x_{i,0} + 0.651) | GNN+GP: dx_{i,0}/dt = 1.913x_{i,0} − 14.995 sin(0.225x_{i,0}) + Σ A_ij x_{j,0}x_{i,0}
Grid | Ours: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.874x_{j,0}x_{i,0} + 0.651) | GNN+GP: dx_{i,0}/dt = −1.130 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0}
Power Law | Ours: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.956x_{j,0}x_{i,0} + 0.513) | GNN+GP: dx_{i,0}/dt = 1.599x_{i,0} + e^{−6.909x_{i,0}} + Σ A_ij x_{i,0}(−0.076x_{i,0}x_{j,0}/(x_{i,0} + x_{j,0}) + x_{j,0})
Random | Ours: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.914x_{j,0}x_{i,0} + 0.630) | GNN+GP: dx_{i,0}/dt = log((x_{i,0}e^{3.848x_{i,0} − 2.848|x_{i,0} − 18.175|} + 2.716)e^{−x_{i,0}}) + Σ A_ij x_{j,0}x_{i,0}
Small World | Ours: dx_{i,0}/dt = 1 − x_{i,0} + Σ A_ij x_{j,0}x_{i,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.940x_{j,0}x_{i,0} + 0.651) | GNN+GP: dx_{i,0}/dt = x_{i,0}(7.531 sin(x_{i,0}^{0.25}) − sin(1.692 sin(√x_{i,0})) − 1.0) + Σ A_ij x_{i,0}|0.132x_{i,0} − x_{j,0}|

Table D19: Results of symbolic regression on interpretable network dynamics (Epi).
#Epi: dx_{i,0}/dt = −2x_{i,0} + Σ A_ij (x_{j,0} − x_{i,0}x_{j,0})

Community | Ours: dx_{i,0}/dt = (−19.702x_{i,0} − 1.234)/(6.515x_{i,0} + 7.72) + Σ A_ij (−0.647x_{i,0} + 0.311x_{j,0} − 0.157 arctan(16.43x_{j,0} − 15.826) + 0.21) | TPSINDy: dx_{i,0}/dt = −4.29x_{i,0}^2 + 2.45 + Σ A_ij 0 | GNN+GP: dx_{i,0}/dt = −x_{i,0} − Σ A_ij (x_{j,0} − x_{j,0}^2)
Grid | Ours: dx_{i,0}/dt = −1.267x_{i,0}^2 + 0.279x_{i,0} − 0.946 − Σ A_ij (0.442x_{i,0} − 0.213x_{j,0}^2 − 0.334x_{j,0} + 0.437x_{i,0}x_{j,0} − 0.39) | TPSINDy: dx_{i,0}/dt = −4.93x_{i,0}^2 + Σ A_ij 0.337x_{j,0} | GNN+GP: dx_{i,0}/dt = −x_{i,0} + Σ A_ij (x_{j,0} − x_{j,0}^2)
Power Law | Ours: dx_{i,0}/dt = −1.914x_{i,0} − 0.045 + Σ A_ij (0.797x_{i,0}^2x_{j,0} − 0.559x_{i,0}^2 − 2.213x_{i,0}x_{j,0} + 0.843x_{i,0} + 1.446x_{j,0} − 0.311) | TPSINDy: dx_{i,0}/dt = 0.68x_{i,0} + Σ A_ij 0.1x_{j,0} | GNN+GP: dx_{i,0}/dt = −x_{i,0} + Σ A_ij (x_{j,0} − x_{j,0}^2)
Random | Ours: dx_{i,0}/dt = 0.047 − 2.048x_{i,0} + Σ A_ij (0.22x_{i,0} + 1.226x_{j,0} − 1.32x_{i,0}x_{j,0} − 0.162) | TPSINDy: dx_{i,0}/dt = −5.05x_{i,0}^2 +
3.29 + Σ A_ij 0.04x_{j,0} | GNN+GP: dx_{i,0}/dt = −x_{i,0} + Σ A_ij (x_{j,0} − x_{j,0}^2)
Small World | Ours: dx_{i,0}/dt = (−43.027x_{i,0}^4 + 54.464x_{i,0}^3 − 8.113x_{i,0}^2 + 15.701x_{i,0} + 3.53)/(24.59x_{i,0}^2 − 29.75x_{i,0} − 4.659) − Σ A_ij (0.027x_{i,0} − (0.704x_{i,0} − 1)^2(1.164x_{j,0} + 0.722)) | TPSINDy: dx_{i,0}/dt = −3.65x_{i,0}^2 + 1.56 + Σ A_ij 0 | GNN+GP: dx_{i,0}/dt = −x_{i,0} + Σ A_ij (x_{j,0} − x_{j,0}^2)

Table D20: Results of symbolic regression on interpretable network dynamics (Gene).
#Gene: dx_{i,0}/dt = −2x_{i,0} + Σ A_ij x_{j,0}^2/(1 + x_{j,0}^2)

Community | Ours: dx_{i,0}/dt = −0.027x_{i,0}^2 − 1.858x_{i,0} − 0.23 + Σ A_ij (0.019x_{j,0}^3 − 0.188x_{j,0}^2 + 1.062x_{j,0} + 2.80)/(0.225x_{j,0}^2 − 1.467x_{j,0} + 7.676) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.78) | GNN+GP: dx_{i,0}/dt = −1.798x_{i,0} + Σ A_ij (0.455x_{j,0} cos(0.169x_{i,0}x_{j,0} − 2.403))
Grid | Ours: dx_{i,0}/dt = 0.362x_{i,0} − 2.158 + Σ A_ij (0.579 − 0.204x_{i,0} + 1/(9.349(0.353x_{j,0} − 1)^2 + 2.027)) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (2.02x_{j,0}^2/(1 + x_{j,0}^2)) | GNN+GP: −
Power Law | Ours: dx_{i,0}/dt = −2.253x_{i,0} − 0.404 + Σ A_ij (0.084x_{j,0} − 0.396 + 1/(−0.336x_{j,0} + 1.405)) | TPSINDy: dx_{i,0}/dt = −0.17x_{i,0} + Σ A_ij (0.04x_{j,0}^2/(1 + x_{j,0}^2)) | GNN+GP: dx_{i,0}/dt = x_{i,0}(0.149x_{i,0}^2(x_{i,0} − 1.307)(x_{i,0} − 1) − 2) + Σ A_ij x_{j,0}^2/(x_{j,0}(0.031x_{i,0}^3 + x_{j,0}) + 0.979)
Random | Ours: dx_{i,0}/dt = −1.987x_{i,0} + 0.004 + Σ A_ij (0.522x_{j,0} − 0.101) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−8.29x_{j,0}^2/(1 + x_{j,0}^2) + 64.39) | GNN+GP: dx_{i,0}/dt = x_{i,0}(0.119x_{i,0}^2 − 2.278) + Σ A_ij [−x_{i,0}(0.017x_{i,0}^2 − 0.057)(x_{j,0}^2 + 1.151) + x_{j,0}^2/(x_{j,0}^2 + 1.151)]
Small World | Ours: dx_{i,0}/dt = −2.001x_{i,0} + 0.002 − Σ A_ij (0.22x_{j,0}^2 − 0.958x_{j,0} + 0.245) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−16.51 tanh(x_{i,0}x_{j,0}) + 106.54) | GNN+GP: dx_{i,0}/dt = x_{i,0}(−sin(cos(0.506x_{i,0}) + 0.746) − 1) + Σ A_ij sin(√(x_{i,0}x_{j,0}^3/(x_{i,0}x_{j,0}^2 √(e^{x_{i,0}}) + 2.086)))

Table D21: Results of symbolic regression on interpretable network dynamics (Heat).
#Heat: dx_{i,0}/dt = 0 + Σ A_ij (x_{j,0} − x_{i,0})

For all five topologies (Community, Grid, Power Law, Random, Small World) | Ours: dx_{i,0}/dt = 0 + Σ A_ij (x_{j,0} − x_{i,0}) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (0.1x_{j,0} − x_{i,0}) | GNN+GP: dx_{i,0}/dt = 0 + Σ A_ij (x_{j,0} − x_{i,0})

Table D22: Results of symbolic regression on interpretable network dynamics (LV).
#LV: dx_{i,0}/dt = 0.5x_{i,0} − x_{i,0}^2 − Σ A_ij x_{j,0}x_{i,0}

Community | Ours: dx_{i,0}/dt = −0.999x_{i,0}^2 + 0.50x_{i,0} − Σ A_ij (−0.999x_{i,0}x_{j,0}) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−1.06x_{j,0}x_{i,0} + 0.551) | GNN+GP: dx_{i,0}/dt = (x_{i,0} − 0.79)(−x_{i,0} + e^{0.012x_{i,0}} − 1.451) − Σ A_ij (0.999x_{i,0}x_{j,0})
Grid | Ours: dx_{i,0}/dt = −1.00x_{i,0}^2 + 0.499x_{i,0} − Σ A_ij (0.999x_{i,0}x_{j,0}) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−1.15x_{j,0}x_{i,0} + 0.266) | GNN+GP: dx_{i,0}/dt = x_{i,0}(−1.026x_{i,0} + 0.017e^{0.349x_{i,0}} + 0.509) − Σ A_ij (x_{j,0}x_{i,0} − 0.0006|√(e^{x_{i,0}})|)
Power Law | Ours: dx_{i,0}/dt = −1.00x_{i,0}^2 + 0.50x_{i,0} − Σ A_ij (1.00x_{i,0}x_{j,0}) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−1.039x_{j,0}x_{i,0} + 0.328) | GNN+GP: dx_{i,0}/dt = −x_{i,0}(x_{i,0} − 0.431) + Σ A_ij (−x_{i,0}(x_{j,0} − 0.011))
Random | Ours: dx_{i,0}/dt = −0.999x_{i,0}^2 + 0.50x_{i,0} − Σ A_ij (0.999x_{i,0}x_{j,0}) | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−1.08x_{j,0}x_{i,0} − 0.5) | GNN+GP: dx_{i,0}/dt = x_{i,0}(0.486 − 0.998x_{i,0}) + Σ A_ij (−x_{j,0}x_{i,0})
Small World | Ours: dx_{i,0}/dt = −0.99x_{i,0}^2 + 0.49x_{i,0} + 2.4e−5 − Σ A_ij 1.00x_{i,0}x_{j,0} | TPSINDy: dx_{i,0}/dt = 0 + Σ A_ij (−1.16x_{j,0}x_{i,0} + 0.275) | GNN+GP: dx_{i,0}/dt = −x_{i,0}(x_{i,0} − 0.51) + Σ A_ij (−x_{j,0}x_{i,0})

Table D23: Results of symbolic regression on interpretable network dynamics (Mutu).
#Mutu: dx_{i,0}/dt = −0.2x_{i,0}^3 + 1.2x_{i,0}^2 − x_{i,0} + 1 + Σ A_ij x_{i,0}x_{j,0}/(5 + 0.9x_{i,0} + 0.1x_{j,0})

Community | Ours: dx_{i,0}/dt = (−1.458x_{i,0}^3 + 19.397x_{i,0}^2 − 81.254x_{i,0} + 121.834)/(1.120x_{i,0}^2 − 8.278x_{i,0} + 16.618) + Σ A_ij 1/((10.691(0.169x_{i,0} − 1)^2 + 1.023)(1.338(0.169x_{j,0} − 1)^2 + 0.303)) | TPSINDy: dx_{i,0}/dt = 0.406x_{i,0} + Σ A_ij 0.28(x_{j,0} − x_{i,0}) | GNN+GP: dx_{i,0}/dt = 0.722x_{i,0}^2 − 0.170x_{i,0}^3 + 1 + Σ A_ij (0.192x_{i,0}x_{j,0} + 0.004x_{j,0})
Grid | Ours: dx_{i,0}/dt = −0.266x_{i,0} − 0.195e^{0.653x_{i,0}} + 1.55 + Σ A_ij (0.265x_{i,0} + 0.415x_{j,0} − 0.297) | TPSINDy: − | GNN+GP: −
Power Law | Ours: dx_{i,0}/dt = 11.044 − 3.977x_{i,0} + Σ A_ij (0.002x_{i,0}^2 + 0.178x_{i,0} + 0.567x_{j,0} − 0.722)/(0.092x_{i,0}^2 − 0.955x_{i,0} + 3.333) | TPSINDy: − | GNN+GP: −
Random | Ours: dx_{i,0}/dt = (30.671x_{i,0}^2 − 323.132x_{i,0} + 729.502)/(0.366x_{i,0} − 15.8) + Σ A_ij (0.281x_{j,0} − 2.041 − 3.596/((0.138x_{i,0} − 1)^2(0.037x_{j,0} + 1)^2 − 1.63)) | TPSINDy: − | GNN+GP: −
Small World | Ours: dx_{i,0}/dt = −2.353x_{i,0} − 0.117x_{i,0}^2 + 12.396 − Σ A_ij 3.392(0.19x_{j,0} + 1)^2/(8.775(0.207x_{i,0} − 1)^2 + 4.927) | TPSINDy: − | GNN+GP: −

Appendix E More details on inferring the transmission laws of epidemics

E.1 Real-world global epidemic transmission

We collect the daily cumulative infected numbers of three classic epidemics, SARS [61], H1N1 [61], and COVID-19 [62], and map a directed weighted empirical topology using the global aviation network in OpenFlights [63] to construct an empirical real-world global epidemic transmission system. Specifically, SARS covers 117 days of data from 37 countries and regions (for convenience of subsequent expression, referred to as nodes), H1N1 covers 74 days of data from 130 nodes, and COVID-19 covers 158 days of data from 174 nodes. To preserve the transmission characteristics of the epidemics themselves, we only consider early data before government intervention, which we define as the first 45 days. For example, if a node records its first infection on May 1st, the data from May 1st to
June 14th will be used.

E.2 Details on experimental setting

For experiments on heterogeneous epidemic transmission, we set the number of topology nodes to 360, and the numbers of nodes in the four communities are 120, 120, 90 and 30, respectively. Each community independently samples 200 data points (IN-Domain), which are fed into the model for equation regression. For experiments on real-world global epidemic transmission, considering the number of air passengers and the population of each node, the adjacency matrix of the topology is modified to Ã_ij = N_p A_ij / (P_i Σ_{i=1}^n A_ij), where N_p represents the total daily passengers and P_i is the population of node i. We only randomly sample 30 days of data within the 45 days to simulate scenarios where it is difficult to collect information during the early stages of an epidemic outbreak. Note that when calculating the interaction dynamics f^(inter)(x_i, x_j) of a node i at time t, the data of all interaction nodes x_j comes from the calendar date corresponding to that node at time t, to avoid the influence of the collection offset, caused by the different propagation start times of each node, on the interaction information. All test data still needs to be preprocessed (see Method section) before being fed into the model; the distribution of the data is scaled to a distribution as close as possible to the training data.

E.3 More experimental results and regression analyses

We first provide the specific regression equations for the experiments on heterogeneous epidemic transmission (see Table E24). The equations regressed by our method are equivalent to the original equations after experimental verification. Longer-horizon predictions and predictions from new initial values also demonstrate the effectiveness of the equivalent equations (see Fig. E15).

Fig. E15: a. Heterogeneous epidemic transmission prediction on longer time.
b. Heterogeneous epidemic transmission prediction on a new initial value, where the distribution of the initial value is U(0, 0.5). [Plots omitted.]

Table E24: Results of symbolic regression on heterogeneous epidemic transmission.

Community 1 | True: dx_{i,0}/dt = −0.5x_{i,0} + Σ x_{j,0}(1 − x_{i,0}) | Ours: dx_{i,0}/dt = −11.66x_{i,0}^2 + 17.81x_{i,0} − 7.135 − Σ A_ij (0.796x_{i,0} + 0.762x_{j,0}^3 + 0.97x_{j,0}^2 + 0.554) | TPSINDy: dx_{i,0}/dt = −0.683x_{i,0}^2 − Σ A_ij 1.199(x_{j,0} − x_{i,0})
Community 2 | True: dx_{i,0}/dt = −2x_{i,0} + Σ x_{j,0}(1 − x_{i,0}) | Ours: dx_{i,0}/dt = −11.789x_{i,0}^2 + 12.725x_{i,0} − 3.312 + Σ A_ij (−0.583x_{i,0} − 0.114x_{j,0}^2 + 0.292x_{j,0} + 0.44) | TPSINDy: dx_{i,0}/dt = −0.466x_{i,0}^2 − Σ A_ij 0.391(x_{j,0} − x_{i,0})
Community 3 | True: dx_{i,0}/dt = −5x_{i,0} + Σ x_{j,0}(1 − x_{i,0}) | Ours: dx_{i,0}/dt = −7.047x_{i,0}^2 + 2.142x_{i,0} + Σ A_ij (−0.647x_{i,0} − 0.192x_{j,0}^2 − 0.406x_{j,0} + 0.403) | TPSINDy: dx_{i,0}/dt = −4.192x_{i,0}^2 − Σ A_ij 0.007(x_{j,0} − x_{i,0})
Community 4 | True: dx_{i,0}/dt = −10x_{i,0} + Σ x_{j,0}(1 − x_{i,0}) | Ours: dx_{i,0}/dt = −3.294x_{i,0}^2 + 0.851x_{i,0} − 6.932 + Σ A_ij (0.685x_{i,0} − 1)^2(1.13x_{j,0} + 0.03) | TPSINDy: dx_{i,0}/dt = −0.683x_{i,0}^2 − Σ A_ij 1.199(x_{j,0} − x_{i,0})

Fig. E16: Comparison of epidemic transmission equations in all nodes on H1N1. Each subgraph shows the cumulative infected curve calculated by each regressed equation, as well as the error with respect to the ground truth (R^2, Close_0.1, MAE). [Plots omitted.]

We provide the results of the regression equations at the other nodes on H1N1, SARS and COVID-19. For H1N1, the homogeneous equation is regressed from all of the test data, and the heterogeneous equation of each node is obtained by optimizing the constants of the homogeneous equation based on the data of that node. As shown in Figs. E16-E17, our homogeneous equation better reflects the initial transmission characteristics of the epidemic at each node, and the optimized heterogeneous equation is more in line with the true transmission curve of each node, compared with TPSINDy. To test the generality of the obtained equations,
We directly apply the equation regressed only from H1N1 data to SARS and COVID-19, and obtain heterogeneous equations through the data from each node. The cumulative infected curves generated based on our equation are closer to the ground truth (see Figs. E18-E19), indicating that our method has stronger generalization ability. Note that, because the number of infected ranges from 10 to 100000, we use Close_0.1 and MAE.

Fig. E17: Comparison of epidemic transmission equations in all nodes on H1N1. [Plots omitted.]

Table E25: Specific parameters in equations on H1N1.

All | Ours: a = 1.103, b = 2.855, c = −0.0004, d = −0.0005, e = 0.538, f = −0.049, g = −159.111 | TPSINDy: a = 0.040, b = 105.160
Canada | Ours: a = 16.398, b = 4.906, c = 18.688, d = 0.002, e = 6.611, f = 9.788, g = 11.216 | TPSINDy: a = 0.058, b = 22.732
United Kingdom | Ours: a = 3.573, b = 0.196, c = −0.749, d = −0.008, e = 0.610, f = 29.378, g = −71.376 | TPSINDy: a = 0.108, b = −0.525
Spain | Ours: a = 633.117, b = 344.593, c = −234.318, d = 0.013, e = −126.620, f = 1680.625, g = −992.798 | TPSINDy: a = 0.012, b = 17.050
Greece | Ours: a = −1.140, b = −0.215, c = 2.755, d = 0.009, e = 1.006, f = −360.972, g = 1443.838 | TPSINDy: a = 0.093, b = 0.521
Guatemala | Ours: a = −1.840, b = −2.757, c = 57.232, d = 0.018, e = 20.727, f = 1477.795, g = 533.003 | TPSINDy: a = 0.139, b = 2.740
Mexico | Ours: a = 0.822, b = 80.409, c = 0.289, d = −0.015, e = 53.556, f = 19.122, g = 1102.118 | TPSINDy: a = −0.022, b = 488.844
Panama | Ours: a = 0.512, b = −2.516, c = 2.165, d = 0.07, e = 0.153, f = 156.925, g = 96.461 | TPSINDy: a = 0.023, b = 74.139
Costa Rica | Ours: a = −6.657, b = −1.111, c = 80.620, d = 0.018, e = 0.119, f = −156.157, g = −91887.221 | TPSINDy: a = 0.065, b = 7.881
El Salvador | Ours: a = 0.619, b = −1.020, c = 5.803, d = 0.008, e = 0.056, f = −4.589, g = −375.397 | TPSINDy: a = 0.091, b = 4.335
Japan | Ours: a = −0.863, b = 4.172, c = 2.286, d = 0.032, e = 3.613, f = −672.750, g = 417.308 | TPSINDy: a = 0.016, b = 75.001
Philippines | Ours: a = −8.320, b = −120.541, c = 47.371, d = 1.095, e = 354.446, f = 26.562, g = −26.563 | TPSINDy: a = 0.142, b = 10.367
Argentina | Ours: a = −17.142, b = −37.776, c = 146.885, d = 0.408, e = 26.043, f = 22.729, g = −227.262 | TPSINDy: a = 0.145, b = 16.403
Brazil | Ours: a = −2.927, b = −1.308, c = 14.665, d = 0.001, e = 1.053, f = 7.184, g = −93.375 | TPSINDy: a = 0.119, b = −3.423
Chile | Ours: a = 0.712, b = −299.531, c = 2.032, d =
2.747, e = −24.770, f = 2.230, g = −2.228 | TPSINDy: a = 0.071, b = 1342.554
Ecuador | Ours: a = 0.924, b = −0.420, c = −0.271, d = 0.052, e = 1.110, f = 317.152, g = 265.366 | TPSINDy: a = 0.025, b = 46.512
Peru | Ours: a = −0.272, b = 3.335, c = 12.058, d = −0.017, e = 0.326, f = −20.606, g = −1958.933 | TPSINDy: a = 0.088, b = 5.479
Thailand | Ours: a = −4.659, b = −6.702, c = 10.451, d = 0.595, e = −3.398, f = 211.541, g = −349.450 | TPSINDy: a = 0.159, b = 9.869
Malaysia | Ours: a = −67.280, b = −3.111, c = 167.315, d = 0.089, e = 4.312, f = −743.10, g = −4662.720 | TPSINDy: a = 0.178, b = −0.928
Australia | Ours: a = −3.199, b = −39.275, c = 9.161, d = 0.380, e = −17.971, f = −14.079, g = 56.303 | TPSINDy: a = 0.085, b = 89.574
China | Ours: a = −0.068, b = −11.693, c = 0.886, d = 0.024, e = −12.426, f = 8.312, g = 0.028 | TPSINDy: a = 0.122, b = 3.296
United States | Ours: a = −2.333, b = 3.957, c = 0.943, d = 0.452, e = 11.136, f = −264.038, g = 1177.158 | TPSINDy: a = 0.056, b = 438.925

Ours: dx_{i,0}/dt = a x_{i,0} + b + Σ A_ij (c x_{i,0} + d x_{j,0} + e + 1/(f x_{j,0} + g))
TPSINDy: dx_{i,0}/dt = a x_{i,0} + b Σ A_ij 1/(1 + e^{−(x_{j,0} − x_{i,0})})

Fig. E18: Comparison of epidemic transmission equations in all nodes on COVID-19. [Plots omitted.]

Table E26: Specific parameters in equations on COVID-19.

All | Ours: a = 1.103, b = 2.855, c = −0.0004, d = −0.0005, e = 0.538, f = −0.049, g = −159.111 | TPSINDy: a = 0.074, b = 7.13
Iceland | Ours: a = 1.108, b = 8.012, c = 0.608, d = −0.044, e = 2.062, f = −305.231, g = −293.528 | TPSINDy: a = 0.042, b = 706.343
Canada | Ours: a = 46.924, b = 158.631, c = −57.657, d = 0.0001, e = −182.065, f = 9369.583, g = 2323.033 | TPSINDy: a = 0.060, b = 26.534
Algeria | Ours: a = −23.233, b = −2.851, c = 154.532,
d = 0.075, e = −2.640, f = −38.008, g = −259.349 | TPSINDy: a = 0.012, b = 123.971
Burkina Faso | Ours: a = 1.039, b = 6.626, c = 1.831, d = −0.195, e = 1.524, f = −32.223, g = 64.446 | TPSINDy: a = 0.011, b = 922.518
Ghana | Ours: a = 0.995, b = −10.572, c = −0.805, d = 0.072, e = −0.684, f = 32.287, g = 33.454 | TPSINDy: a = 0.076, b = 262.872
Cote d'Ivoire | Ours: a = 0.647, b = −1.709, c = 1.563, d = 0.816, e = −2.957, f = 166.895, g = −132.336 | TPSINDy: a = 0.067, b = 189.751
Niger | Ours: a = −0.993, b = 20.614, c = 0.934, d = −0.845, e = −0.911, f = 18.258, g = 9.215 | TPSINDy: a = 0.028, b = 970.443
Tunisia | Ours: a = 0.136, b = −4.475, c = 8.718, d = 0.194, e = −2.805, f = 8175.448, g = −18914.28 | TPSINDy: a = 0.049, b = 77.134
Belgium | Ours: a = −36.852, b = −22.534, c = 76.289, d = 0.399, e = −55.262, f = −153.424, g = 0.006 | TPSINDy: a = 0.237, b = 8.970
Germany | Ours: a = −2.739, b = −1.537, c = 1.387, d = 0.195, e = −1.086, f = −13.210, g = 277.411 | TPSINDy: a = 0.291, b = −6.556
Estonia | Ours: a = 1.747, b = 47.225, c = −19.799, d = 0.256, e = −52.340, f = 20.919, g = −0.0002 | TPSINDy: a = 0.056, b = 358.731
Ireland | Ours: a = 16.564, b = −46.331, c = 138.148, d = 0.003, e = 38.312, f = −18.845, g = 101.045 | TPSINDy: a = 0.099, b = 163.027
Luxembourg | Ours: a = −44.031, b = 29.240, c = 546.251, d = −0.113, e = 418.154, f = 83.217, g = 77.527 | TPSINDy: a = 0.047, b = 623.419
Norway | Ours: a = 0.855, b = 26.478, c = 0.775, d = −0.050, e = 9.815, f = 665.812, g = 632.194 | TPSINDy: a = 0.042, b = 515.854
Poland | Ours: a = −0.038, b = −12.039, c = 2.417, d = 0.041, e = −5.419, f = −29.190, g = 15.205 | TPSINDy: a = 0.072, b = 201.713
Sweden | Ours: a = −4.991, b = 18.458, c = 16.816, d = 0.089, e = 0.237, f = −44.419, g = 44.417 | TPSINDy: a = 0.276, b = 7.410
South Africa | Ours: a = 25.737, b = 25.417, c = −141.837, d = −0.009, e = 16.654, f = −34.385, g = 18.0317 | TPSINDy: a = 0.045, b = 652.840
Cameroon | Ours: a = 0.688, b = 10.144, c = 1.992, d = 0.464, e = −60.827, f = 151.537, g = 135.525 | TPSINDy: a = 0.064, b = 417.863
Mali | Ours: a = 0.932, b = −7.206, c = 0.499, d = 0.894, e = −0.439, f = 28.823, g = 6.681 | TPSINDy: a = 0.052, b = 201.635
Spain | Ours: a = 587.169, b = −34.708, c = −252.665, d = 0.554, e = −12.002, f = 4.395, g = −35.623 | TPSINDy: a = 0.361, b = 5.163

Ours: dx_{i,0}/dt = a x_{i,0} + b + Σ A_ij (c x_{i,0} + d x_{j,0} + e + 1/(f x_{j,0} + g))
TPSINDy: dx_{i,0}/dt = a x_{i,0} + b Σ A_ij 1/(1 + e^{−(x_{j,0} − x_{i,0})})

Fig. E19: Comparison of epidemic transmission equations in all nodes on SARS. [Plots omitted.]
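Tables E25-E27 report per-node constants a-g of the homogeneous equation dx_{i,0}/dt = a x_{i,0} + b + Σ A_ij (c x_{i,0} + d x_{j,0} + e + 1/(f x_{j,0} + g)). The "optimize the constants of the homogeneous equation on each node's data" step from Section E.3 can be illustrated with a deliberately reduced sketch: this is not the paper's actual optimizer, and it fits only the self-term a·x + b, by least squares on finite-difference derivative estimates (the full model also re-fits the interaction constants c-g).

```python
def fit_linear_constants(ts, xs):
    """Fit a, b in dx/dt ≈ a*x + b to a single node's observed trajectory (ts, xs)."""
    # Forward-difference estimate of dx/dt between consecutive observations.
    dxdt = [(xs[i + 1] - xs[i]) / (ts[i + 1] - ts[i]) for i in range(len(xs) - 1)]
    x = xs[:-1]
    n = len(x)
    sx, sy = sum(x), sum(dxdt)
    sxx = sum(v * v for v in x)
    sxy = sum(v * w for v, w in zip(x, dxdt))
    # Closed-form simple linear regression of dx/dt on x.
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Run on a trajectory generated from a known dx/dt = a·x + b, this recovers the constants up to the O(step-size) bias of the finite differences, which is the same basic mechanism behind specializing one shared equation form to many heterogeneous nodes.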
Table E27: Specific parameters in equations on SARS.

All | Ours: a = 1.103, b = 2.855, c = −0.0004, d = −0.0005, e = 0.538, f = −0.049, g = −159.111 | TPSINDy: a = 0.074, b = 7.13
Canada | Ours: a = 1151.383, b = 2.058, c = −123.860, d = 0.003, e = 0.078, f = 300.515, g = −1451.585 | TPSINDy: a = 0.009, b = 4.941
Hong Kong | Ours: a = 2.184, b = 1.144, c = −0.205, d = 0.0002, e = 6.909, f = −53.296, g = −86.233 | TPSINDy: a = −0.055, b = 495.126
Singapore | Ours: a = −928.491, b = 1589.761, c = 147.121, d = −0.003, e = −250.814, f = −41.359, g = −126.697 | TPSINDy: a = −0.044, b = 116.412
China Mainland | Ours: a = 4382.935, b = 3.857, c = −231.848, d = 0.042, e = −2.813, f = −1849.291, g = 22.359 | TPSINDy: a = 0.045, b = 5.003e−6

Ours: dx_{i,0}/dt = a x_{i,0} + b + Σ A_ij (c x_{i,0} + d x_{j,0} + e + 1/(f x_{j,0} + g))
TPSINDy: dx_{i,0}/dt = a x_{i,0} + b Σ A_ij 1/(1 + e^{−(x_{j,0} − x_{i,0})})

Appendix F Preliminary attempts with LLM

We use a decoding strategy called constrained beam search to embed domain knowledge into the regression process of the model. First, the representation h generated from the data enters the decoder. When h enters the decoders dec_self and dec_inter, the model calls the domain knowledge module to obtain the symbolic representation e_know of the domain knowledge from experts or LLMs, such as [+, x_i, x_j]. The representation e_know is forcibly added to each round of beam search e_k to ensure that the final symbolic regression result contains e_know. By embedding domain knowledge into the model, we can generate specific forms of equations (see Fig. F20(a)) and further improve the accuracy of the model for high complexity
Incorporating LLMs for Large-Scale Urban Complex Mobility Simulation

Yu-Lun Song 1, Chung-En Tsern 3, Che-Cheng Wu 2, Yu-Ming Chang 2, Syuan-Bo Huang 2, Wei-Chu Chen 2, Michael Chia-Liang Lin 1, Yu-Ta Lin 2
1 Media Lab @ Massachusetts Institute of Technology
2 City Science Lab @ National Taipei University of Technology
3 University College London

Summary

This study presents an innovative approach to urban mobility simulation by integrating a Large Language Model (LLM) with Agent-Based Modeling (ABM). Unlike traditional rule-based ABM, the proposed framework leverages LLM to enhance agent diversity and realism by generating synthetic population profiles, allocating routine and occasional locations, and simulating personalized routes. Using real-world data, the simulation models individual behaviors and large-scale mobility patterns in Taipei City. Key insights, such as route heat maps and mode-specific indicators, provide urban planners with actionable information for policy-making. Future work focuses on establishing robust validation frameworks to ensure accuracy and reliability in urban planning applications.

KEYWORDS: Mobility simulation, Agent-Based Modeling (ABM), Large Language Model (LLM), Synthetic profiles, Urban planning

1. Introduction

Mobility reflects the unique geographic, economic, and cultural contexts of cities while being shaped by and confined to the urban infrastructure that supports it. This intricate interplay impacts accessibility, convenience, and quality of life, offering critical insights into how cities function and adapt (Sheller and Urry 2006; Kang et al. 2012). Understanding these dynamics is crucial for urban planning, emphasizing the need for transportation systems that respond to current urban contexts while shaping future mobility patterns. Mobility simulation is a key tool in this process, enabling scenario evaluation, resource optimization, and the design of sustainable urban environments.
Agent-based modeling (ABM) is a prominent approach for urban simulation, as it captures the behaviors of individual agents to study the impact of heterogeneous human behaviors on urban systems. However, traditional ABM often relies on rule-based behaviors, which can oversimplify the complexity of individual actions within urban environments. This simplification may lead to simulation outcomes that are less representative of real-world dynamics (Heppenstall, Malleson, and Crooks 2016). Recent advancements propose integrating ABM with Large Language Models (LLMs) to overcome these limitations. Platforms like OpenCity and the Smart Agent-Based Modeling (SABM) framework demonstrate LLMs' potential to enhance scalability, realism, and complexity in urban simulation (Yan et al. 2024; Wu et al. 2023). This research aims to explore the potential of integrating LLMs with ABM to diversify the simulation and make the outcomes more interpretable.

——————————————
1 allen017@media.mit.edu  2 chung-en.tsern.23@ucl.ac.uk  3 reeve0319@gmail.com  4 roger@mail.ntut.edu.tw  5 syuanbo@mail.ntut.edu.tw  6 csl_drew@mail.ntut.edu.tw  7 mcllin@mit.edu  8 roylin@mail.ntut.edu.tw

2. Methods

Figure 1 is a demonstration of the research workflow.

Figure 1: The workflow of the simulation.

2.1. Profile generation

In this study, a large language model (LLM) is utilized as a tool for establishing relationships among diverse statistical data points. De-identified open statistical data, including variables such as age, education level, occupation, salary distribution, and mobility preferences, serve as inputs for the LLM, which models the underlying correlations among these variables to produce a vertically integrated distribution framework. As illustrated in
https://arxiv.org/abs/2505.21880v1
|
Figure 2, the LLM processes statistical data inputs, such as age and education level, to generate proportional distributions for each age group. An Iterative Proportional Fitting algorithm ensures that the aggregated educational distribution aligns with real-world population-level statistics. With the LLM's inherent recognition of society and human behaviors, the generated synthetic profiles exhibit consistent and logically coherent attributes. Consequently, the synthetic profiles closely mirror real-world population characteristics while preserving individual privacy by operating on de-identified statistical data.

Figure 2 Illustration of how an LLM establishes relationships between statistical data while the aggregated distribution aligns with the original data distribution.

2.2. Allocation

The city is divided into uniformly sized grids of 250 meters by 250 meters to model the spatial distribution of individuals within the urban environment. Each grid's population capacity is determined from a combination of census data and average income level data. This information guides the allocation of agents' initial and routine locations. Agents' initial locations are determined by matching their income to the average income level of the corresponding grid, as illustrated in Figure 3. For routine locations, the LLM is employed to generate descriptions of an agent's occupation and relevant industrial categories, enriching the information available for matching. By performing text similarity matching, each occupation is mapped to its corresponding industry, as shown in Figure 4. Routine locations are then assigned by randomly selecting a point within the matched industry category from the available Points of Interest (POI) dataset.

Figure 3 The allocation of initial locations based on average income level data.

Figure 4 The workflow of the allocation of routine locations.

2.3.
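The Iterative Proportional Fitting step above can be sketched as follows. The seed table, marginal values, and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100, tol=1e-9):
    """Iterative Proportional Fitting: rescale a seed contingency table
    (e.g., LLM-suggested age x education proportions) until its row and
    column sums match the target marginals from official statistics."""
    table = seed.astype(float).copy()
    for _ in range(iters):
        table *= (row_targets / table.sum(axis=1))[:, None]  # match row sums
        table *= (col_targets / table.sum(axis=0))[None, :]  # match column sums
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Hypothetical example: 3 age groups x 2 education levels
seed = np.array([[4.0, 6.0], [5.0, 5.0], [7.0, 3.0]])  # LLM-suggested proportions
age_totals = np.array([30.0, 50.0, 20.0])              # census marginal by age
edu_totals = np.array([55.0, 45.0])                    # census marginal by education
fitted = ipf(seed, age_totals, edu_totals)
```

After fitting, the cell values preserve the correlation pattern suggested by the LLM while exactly reproducing the published marginal distributions.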
Occasional location matching

To reflect the dynamics of a city as much as possible, occasional locations are modeled by referring to agent schedules generated by the LLM. Each activity in the schedule, generated by the LLM with reference to the agent's profile, is matched to a corresponding Point of Interest (POI) category through semantic similarity-based mapping. Following the Huff model (Garcia-Gabilondo, Shibuya, and Sekimoto 2024), the final location is selected using a modified version (Equation 1) that considers both distance and attractiveness, where attractiveness is a composite score of popularity and credibility. A weight is assigned to each candidate location based on these factors, and the occasional location is chosen probabilistically, maintaining flexibility while reflecting realistic spatial patterns.

weight = attractiveness / distance^decay = (popularity × credibility) / distance^decay    (1)

2.4. Routing

To simulate realistic mobility behavior, the Multi-Criteria Range RAPTOR (McRAPTOR) algorithm (Delling, Pajor, and Werneck 2015), which accommodates diverse individual preferences, is utilized to generate personalized routing solutions. McRAPTOR iteratively evaluates all possible routes and transfer options across multiple rounds to identify optimal paths that are not outperformed in any single criterion. By incorporating multiple optimization objectives and balancing trade-offs, the algorithm enables the simulation to closely mirror real-world decision-making and transportation behaviors. Additionally, it serves as a crucial tool for calculating key indicators, allowing for a comprehensive assessment of the mobility impact on the simulated environment.

3. Results & Discussion

Figure 5 demonstrates a one-day simulation in which approximately 100,000 agents
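A minimal sketch of the modified Huff selection in Equation 1. The POI attributes, decay exponent, and helper names are hypothetical illustrations rather than the paper's values:

```python
import random

def huff_weight(popularity, credibility, distance, decay=1.5):
    # Equation 1: weight = (popularity * credibility) / distance^decay
    return (popularity * credibility) / (distance ** decay)

def pick_occasional_location(candidates, decay=1.5, rng=random):
    """Choose one POI probabilistically, with probability proportional
    to its Huff weight (attractiveness over distance decay)."""
    weights = [huff_weight(c["popularity"], c["credibility"], c["distance"], decay)
               for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

# Hypothetical POI candidates for one scheduled activity
pois = [
    {"name": "cafe_A", "popularity": 0.9, "credibility": 0.8, "distance": 1.2},
    {"name": "cafe_B", "popularity": 0.6, "credibility": 0.9, "distance": 0.4},
    {"name": "cafe_C", "popularity": 0.4, "credibility": 0.5, "distance": 3.0},
]
chosen = pick_occasional_location(pois)
```

Note that the nearby but less popular candidate can still dominate once distance decay is applied, which is exactly the flexibility the probabilistic choice preserves.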
are deployed across Taipei City, Taiwan. Each point on the map represents an agent, with the brightness of the color indicating the density of agents in that area. This comprehensive visualization enables observers and planners to understand agent behavior patterns and environmental impacts during the simulation period.

Figure 5 The platform of the simulation outcome.

By integrating the capabilities of ABM and LLM, the platform allows observers to delve into individual agent profiles and daily schedules, as demonstrated in Figure 6. Each agent's trajectory, activity locations, and transportation modes can be tracked and analyzed. This micro-scale observation aids in understanding individual mobility behaviors and the types of people gathered in various locations at different times.

Figure 6 Persona of an agent and its one-day schedule in the simulation.

In addition to individual-level analysis, the platform also offers macro-scale insights through route heat maps. Figure 7 displays the route heat map for private vehicles during the morning period, while Figure 8 shows the corresponding heat map for pedestrians. By examining these visualizations, urban planners can assess which roads experience higher traffic and determine the dominant transportation modes during specific time intervals. This information is crucial for identifying traffic hotspots and evaluating the effectiveness of existing policies.

Figure 7 Route heat map of private vehicles during the morning in Taipei City.

Figure 8 Route heat map of pedestrians during the morning in Taipei City.

The left-hand panel also presents key indicators, such as the proportion of various mobility modes, average travel distance, and carbon emissions. These metrics provide valuable quantitative insights that can guide policy-making.
With access to both micro-scale agent profiles and macro-scale route patterns, urban planners and researchers can leverage this simulation to derive actionable insights for future urban development.

4. Conclusion

This study explores the potential of combining LLMs with ABM to conduct more diverse and interpretable urban simulations. The simulation results indicate significant promise in providing urban planners with valuable insights for future city development. Nevertheless, the accuracy of these results requires rigorous verification to ensure reliability. Future research should focus on establishing robust validation frameworks to assess the accuracy of each model component. Achieving this objective will necessitate comprehensive data collection from local urban environments to ensure sufficient coverage and representativeness. Ultimately, this approach will enhance the credibility and applicability of simulation-driven urban planning.

References

Delling, Daniel, Thomas Pajor, and Renato F. Werneck (2015). “Round-based public transit routing”. In: Transportation Science 49.3, pp. 591–604.
Heppenstall, Alison, Nick Malleson, and Andrew Crooks (2016). ““Space, the final frontier”: How good are agent-based models at simulating individuals and space in cities?” In: Systems 4.1, p. 9.
Kang, Chaogui et al. (2012). “Intra-urban human mobility patterns: An urban morphology perspective”. In: Physica A: Statistical Mechanics and its Applications 391.4, pp. 1702–1717.
Sheller, Mimi and John Urry (2006). “The new mobilities paradigm”. In: Environment and Planning A 38.2, pp. 207–226.
Wu, Zengqing et al. (2023). “Smart agent-based modeling: On the use of large language models in computer simulations”. In: arXiv preprint arXiv:2311.06330.
Yan, Yuwei et al. (2024). “OpenCity:
A Scalable Platform to Simulate Urban Activities with Massive LLM Agents”. In: arXiv preprint arXiv:2410.21286.

Biographies

Yu-Lun Song is a graduate student at the MIT Media Lab and a research assistant in the City Science Lab, specializing in AI and urban mobility simulation.
Chung-En Tsern is a graduate student at UCL CASA and a research assistant at the City Science Lab @ Taipei Tech, specializing in spatial data, policy research, and urban simulation.
Che-Cheng Wu is a Research Assistant at the City Science Lab @ Taipei Tech, focusing on utilizing large language models (LLMs) for urban data analysis and simulation.
Yu-Ming Chang is a Researcher at the City Science Lab @ Taipei Tech, specializing in data analysis and transforming urban data research into applications.
Syuan-Bo Huang is a Researcher at the City Science Lab @ Taipei Tech, dedicated to leveraging AI and urban data to solve intricate urban issues.
Wei-Chu Chen is a Researcher focusing on spatial analysis and routing tools to address complex urban challenges.
Dr. Michael Lin is a Research Scientist at MIT and Director of Taipei City Science, specializing in complex urban systems, with over 15 years of experience leading collaborations with Fortune 500 companies and public sectors around the world to craft future urban mobility from concept to reality.
Yu-Ta Lin is Head of Urban Informatics at the City Science Lab @ Taipei Tech, specializing in urban data analysis for urban design and planning.
arXiv:2505.21887v1 [cs.AI] 28 May 2025

SVRPBench: A Realistic Benchmark for Stochastic Vehicle Routing Problem

Ahmed Heakl1, Yahia Salaheldin Shaaban1, Martin Takáč1, Salem Lahlou1, Zangir Iklassov1
1MBZUAI, Abu Dhabi, UAE
GitHub: https://github.com/yehias21/vrp-benchmarks
Hugging Face: https://huggingface.co/datasets/MBZUAI/svrp-bench

Abstract

Robust routing under uncertainty is central to real-world logistics, yet most benchmarks assume static, idealized settings. We present SVRPBench, the first open benchmark to capture high-fidelity stochastic dynamics in vehicle routing at urban scale. Spanning more than 500 instances with up to 1000 customers, it simulates realistic delivery conditions: time-dependent congestion, log-normal delays, probabilistic accidents, and empirically grounded time windows for residential and commercial clients. Our pipeline generates diverse, constraint-rich scenarios, including multi-depot and multi-vehicle setups. Benchmarking reveals that state-of-the-art RL solvers like POMO and AM degrade by over 20% under distributional shift, while classical and metaheuristic methods remain robust. To enable reproducible research, we release the dataset (Hugging Face) and evaluation suite (GitHub). SVRPBench challenges the community to design solvers that generalize beyond synthetic assumptions and adapt to real-world uncertainty.

1 Introduction

Efficient vehicle routing is fundamental to modern logistics and last-mile delivery. The classical Vehicle Routing Problem (VRP) [8, 11] seeks cost-effective routes for servicing customers under constraints such as vehicle capacities and time windows. Although well studied, real-world deployments face uncertain and dynamic conditions that most existing benchmarks do not adequately capture. One key extension addressing real-world complexity is the Stochastic Vehicle Routing Problem (SVRP).
Unlike deterministic VRP, SVRP explicitly incorporates uncertainty into routing decisions, with problem elements such as travel times, customer demands, service times, and even customer presence considered random variables [11, 22]. Consequently, routes are planned a priori, and corrective actions, known as recourse strategies, are applied when realized conditions deviate from planned values [9, 2]. Prominent examples include random travel times modeled by probabilistic distributions or random customer presence, known as probabilistic VRP (PVRP) [18, 5]. Despite this extensive body of research, many existing public benchmarks for SVRP still rely on static assumptions, such as deterministic travel times, fixed customer availability, and unchanged route constraints, thus limiting their practical applicability and robustness evaluations, as shown in Table 1.

Preprint.

Table 1: Comparison of SVRPBench with existing VRP benchmarks. ✓ indicates full support, △ indicates partial or limited support, and ✗ indicates no support.

Feature | SVRPBench | CVRPLIB | SINTEF | VRP-REP | TSPLIB | RL4CO
Stochastic Elements
Time-dependent travel delays | ✓ | ✗ | △ | △ | ✗ | ✗
Peak-hour traffic patterns | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Random travel time noise | ✓ | ✗ | △ | △ | ✗ | △
Probabilistic accidents | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Heterogeneous time windows | ✓ | ✗ | △ | △ | ✗ | ✗
Problem Configurations
Multi-depot support | ✓ | △ | ✓ | ✓ | ✗ | ✗
Multi-vehicle fleets | ✓ | ✓ | ✓ | ✓ | ✗ | ✓
Capacity constraints | ✓ | ✓ | ✓ | ✓ | ✗ | ✓
Time window constraints | ✓ | △ | ✓ | ✓ | ✗ | △
Clustered customer distributions | ✓ | △ | △ | ✓ | △ | ✗
Scale & Diversity
Small instances (≤ 100 customers) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓

The Case for a Realistic SVRP Benchmark. Urban logistics operates under dynamic and uncertain conditions, yet most existing benchmarks fail to reflect this complexity. Practical routing systems must
Medium instances (100–300) | ✓ | ✓ | ✓ | ✓ | △ | ✓
Large instances (>300) | ✓ | △ | △ | △ | ✗ | △
Varying stochasticity levels | ✓ | ✗ | △ | △ | ✗ | ✗

account for peak-hour congestion, random incidents like accidents, and diverse delivery preferences across customer types [14, 3, 24]. Ignoring these factors leads to overly optimistic performance assessments and misdirects algorithmic development toward unrealistic assumptions [1].

Our Contributions. To address these gaps, we introduce SVRPBench, a novel, open-source benchmark suite for the Stochastic Vehicle Routing Problem (SVRP), designed to simulate realistic logistics scenarios with embedded uncertainty. Our key contributions include:

• Stochastic Realism. We model time-dependent congestion using Gaussian mixtures, inject log-normal delays and probabilistic accidents [18], and generate customer time windows from empirical residential and commercial distributions.
• Constraint-Rich Instance Generation. Our framework supports multi-depot and multi-vehicle setups, strict capacity constraints, and diverse time window widths, all grounded in spatially realistic demand distributions.
• Diverse Baseline Evaluation. We benchmark classical heuristics (e.g., Nearest Neighbor, 2-opt), metaheuristics (e.g., ACO, Tabu Search [12, 7]), industrial solvers (OR-Tools [26], LKH3 [31]), and learning-based methods (AM [15], POMO [17]), highlighting how stochastic conditions affect solution quality, feasibility, and robustness.
• Open Community Platform. We release datasets, solvers, and evaluation scripts through a public repository to support reproducibility and foster future contributions.

By advancing realism and accessibility in SVRP benchmarking, SVRPBench aims to accelerate the development of robust, deployable routing algorithms suited for real-world logistics.

2 Realistic Stochastic Modeling

A core contribution of SVRPBench is its simulation of real-world uncertainty in urban-scale logistics.
Classical VRP benchmarks often assume static travel times and rigid customer schedules [13], overlooking time-varying conditions and operational stochasticity. Informed by empirical and theoretical literature [3, 14, 1, 23, 25, 27, 19, 6, 10, 20], our benchmark introduces: (1) time-dependent congestion, (2) stochastic travel time delays, (3) accident-induced disruptions, and (4) customer-specific time window distributions.

2.1 Time-Dependent Travel Time Modeling

We model the travel time from node a to node b at time t as:

T(a, b, t) = D(a, b)/V + B(a, b, t) · R(t) + I_accidents(t) · D_accident,   (1)

where D(a, b) is the Euclidean distance and V is the average road speed. The congestion factor B(a, b, t) is defined as:

B(a, b, t) = α · F_time(t) · F_distance(D(a, b)),   (2)

with:

F_time(t) = β + γ · [f(t; μ_morning, σ_peak) + f(t; μ_evening, σ_peak)],   (3)

f(t; μ, σ) = (1 / (σ√(2π))) · exp(−(1/2) · ((t − μ)/σ)²),   (4)

F_distance(D) = 1 − e^(−D/λ_dist),   (5)

where the Gaussian peaks around μ_morning = 8 and μ_evening = 17 (σ_peak = 1.5) align with observed urban traffic congestion patterns [27]. The distance decay λ_dist = 50 modulates slowdown severity, reflecting empirical findings that longer trips are more likely to encounter congestion [6]. The multiplicative stochastic delay R(t) is drawn from a log-normal distribution:

μ(t) = μ_base + δ · [f(t; μ_morning, σ_peak) + f(t; μ_evening, σ_peak)],   (6)

σ(t) = σ_base + ε · [f(t; μ_morning, σ_peak) + f(t; μ_evening, σ_peak)],   (7)

R(t) ~ LogNormal(μ(t), σ(t)),   (8)

reflecting both the skewed and bursty nature of traffic delays [19, 6]. Baseline values μ_base = 0 and σ_base = 0.3 reflect free-flow conditions, while δ = 0.1 and ε = 0.2 capture peak-hour amplification. Accident delays are modeled using a time-inhomogeneous Poisson process:

λ(t) = λ_scale · f(t; μ_night, σ_acc),   (9)

I_accidents(t) ~ Poisson(λ(t)),   (10)

D_accident ~ U(d_min, d_max),   (11)

where accidents peak around
μ_night = 21 (σ_acc = 2) due to elevated nighttime risks from fatigue and impaired driving [28]. The delay duration is drawn from U(0.5, 2.0) hours, consistent with industry reports on incident clearance times [28].

2.2 Customer Time Window Sampling

Residential and commercial customers exhibit different temporal availability patterns [23, 20]. For residential profiles, delivery windows are sampled from a bimodal Gaussian mixture:

T_start ~ N(μ_res,morning, σ²_res,morning) with probability 0.5, or N(μ_res,evening, σ²_res,evening) with probability 0.5,   (12)

where μ_res,morning = 480 (8:00 AM) and μ_res,evening = 1140 (7:00 PM), with standard deviations σ = 90 and 120 minutes, respectively, aligning with common parcel service offerings such as FedEx and Bring [10, 20]. The window duration is drawn from:

W_length ~ U(w_min, w_max),   T_start = max(0, min(T_start, 1440 − W_length)).   (13)

Commercial customers follow a single-mode Gaussian:

T_start ~ N(μ_com, σ²_com),   W_length ~ U(w_min, w_com_max),   (14)

with μ_com = 780 (1:00 PM), σ_com = 60, and w_com_max = 120 minutes, reflecting standard daytime business hours and delivery norms [29]. This probabilistic windowing model encourages algorithms to balance varied service constraints, simulating realistic scheduling trade-offs in last-mile delivery systems.

3 Dataset Construction Pipeline

To enable scalable and reproducible experimentation, we develop a unified pipeline that generates diverse, constraint-rich SVRP instances grounded in stochastic realism. It integrates models of customer behavior, traffic patterns, spatial layouts, and routing constraints to produce problem scenarios suited for evaluating both classical and learning-based solvers under realistic uncertainty [22, 11]. The complete pipeline is illustrated in Figure 1.
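The travel-time model above can be sampled roughly as follows. Constants such as v, alpha, gamma, and lam_scale are illustrative guesses where the text does not list values; this is a sketch, not the benchmark's code:

```python
import math
import random

def gaussian_pdf(t, mu, sigma):
    # f(t; mu, sigma), Equation (4)
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sample_poisson(lam, rng):
    # Knuth's inversion method; adequate for the small rates used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def travel_time(dist, t, rng, v=1.0, alpha=5.0, beta=0.5, gamma=10.0,
                lam_dist=50.0, sigma_base=0.3, delta=0.1, eps=0.2, lam_scale=0.05):
    """Sample T(a, b, t) following Equations (1)-(11): base time plus
    congestion-scaled log-normal noise plus Poisson-arriving accident delays.
    v, alpha, gamma, lam_scale are illustrative constants, not from the paper."""
    peaks = gaussian_pdf(t, 8, 1.5) + gaussian_pdf(t, 17, 1.5)                      # rush hours
    congestion = alpha * (beta + gamma * peaks) * (1 - math.exp(-dist / lam_dist))  # Eq. (2)-(3), (5)
    noise = rng.lognormvariate(delta * peaks, sigma_base + eps * peaks)             # Eq. (6)-(8), mu_base = 0
    n_acc = sample_poisson(lam_scale * gaussian_pdf(t, 21, 2), rng)                 # Eq. (9)-(10)
    acc_delay = sum(rng.uniform(0.5, 2.0) for _ in range(n_acc))                    # Eq. (11)
    return dist / v + congestion * noise + acc_delay                                # Eq. (1)

rng = random.Random(0)
t_rush = travel_time(20.0, 8.0, rng)   # 8:00, morning peak
t_night = travel_time(20.0, 3.0, rng)  # 3:00, near free-flow
```

Sampling the same edge at 8:00 and at 3:00 shows the intended behavior: the rush-hour draw carries a much larger congestion term on average, while the off-peak draw stays close to the base time dist/v.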
Figure 1: SVRPBench pipeline. The framework generates realistic SVRP instances through four stages: input generation (city layout sampler, customer profiles, demand sampler, time windows), stochastic modeling (time-varying travel delays, log-normal randomness, accident-based disruptions), instance assembly (multi-depot, multi-vehicle, capacity constraints, time-distance matrix), and evaluation with standardized metrics and solvers (exact, RL, DL, metaheuristic; runtime, feasibility, robustness, cost).

Location Sampling. We begin by selecting the total number of customers from {10, 20, 100, 500, 1000}, then compute the number of cities as max(1, #customers // 50). To simulate spatial separation between urban clusters, we apply K-Means clustering to generate city centers that are as distant from each other as possible. Customer locations are then sampled around each city center using 2D Gaussian distributions [14].

Demand Assignment. Each customer is assigned a discrete demand selected uniformly at random from the set {1, 2, ..., max_demand}. The number of vehicles and their capacity are computed based on the total customer demand, with vehicle capacity set as total demand ÷ number of vehicles. This ensures balanced feasibility across instance scales [9].

Time Window Assignment. Customer time windows are generated stochastically, following the models described in Section 2. Residential and commercial customer patterns are differentiated using realistic temporal distributions [3].

Travel Time Matrix Construction. A full travel time matrix T(a, b, t) is computed for all location pairs, incorporating deterministic base time, time-dependent congestion patterns, log-normal stochastic variation, and random accident delays, as detailed in Section 2. This captures the nonlinear, time-varying nature of urban transportation systems [18].

Constraint Integration.
We support both single-depot and multi-depot configurations. In multi-depot settings, depots can
be placed either randomly or aligned with city centers (one per city). A homogeneous fleet of vehicles is used, and vehicle count is configured to balance demand and capacity. All customer time windows are sampled to ensure feasibility under the assigned travel time model [1].

Validation. Each generated instance undergoes automated validation to ensure feasibility under both capacity and temporal constraints. For CVRP, we verify that the total vehicle capacity (number of vehicles × per-vehicle capacity) exceeds the sum of all customer demands, ensuring that a feasible route covering all customers exists. For TWVRP, we construct a time-windowed demand histogram by binning the time axis and accumulating customer demands per bin. We then identify the peak-demand bin and ensure that the fleet capacity is sufficient to serve this worst-case demand, i.e., capacity × num_vehicles ≥ max_t demand(t). This provides a conservative guarantee that even under concentrated temporal demand, a feasible schedule remains possible. Infeasible instances (e.g., unreachable nodes or incompatible time windows) are filtered or regenerated.

Parameters are selected to reflect urban-scale routing challenges but can be modified for rural or industrial scenarios. Accident frequency and delay magnitudes are parameterized using a Poisson-based arrival model and a uniform delay range, respectively. Customer types are split roughly 60% residential to 40% commercial, matching empirical logistics patterns [3].

Figure 2: Comparison of real (top) and synthetic (bottom) routing instances across three cities: Michigan, Abu Dhabi, and Milan.

Various Scales. Our benchmark includes three instance tiers. Small instances (50–100 customers, 1–2 depots) with low noise allow quick testing. Medium instances (100–300 customers, 2–3 depots) feature moderate stochasticity.
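The two feasibility checks described above can be sketched as follows; the instance data and function names are hypothetical:

```python
def validate_cvrp(demands, num_vehicles, capacity):
    """CVRP check: total fleet capacity must cover total customer demand."""
    return num_vehicles * capacity >= sum(demands)

def validate_twvrp(customers, num_vehicles, capacity, bin_minutes=60):
    """TWVRP check: bin the day, accumulate demand per bin by window start,
    and require fleet capacity to cover the peak bin, i.e.
    capacity * num_vehicles >= max_t demand(t)."""
    bins = [0] * (1440 // bin_minutes)
    for demand, t_start in customers:  # (demand, window start in minutes)
        bins[int(t_start) // bin_minutes % len(bins)] += demand
    return num_vehicles * capacity >= max(bins)

# Hypothetical instance: three vehicles of capacity 10
custs = [(4, 480), (5, 500), (6, 1140), (3, 780)]
ok_cvrp = validate_cvrp([d for d, _ in custs], num_vehicles=3, capacity=10)
ok_tw = validate_twvrp(custs, num_vehicles=3, capacity=10)
```

The peak-bin test is deliberately conservative: it rejects some instances that a clever schedule could still serve, but it guarantees that accepted instances are feasible even under concentrated temporal demand.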
Large instances (300+ customers) integrate high travel-time variability and tighter delivery windows to stress-test scalability. All levels are generated with multiple random seeds to support statistical averaging and ensure robustness of comparisons.

To validate the realism of our spatial sampling strategy, we visually compare synthetic routing instances against satellite imagery of real-world cities. As shown in Figure 2, our generated layouts closely mimic key structural patterns: grid-like in Michigan, radial in Milan, and dispersed in Abu Dhabi, demonstrating the pipeline's ability to emulate diverse urban morphologies critical for evaluating routing algorithms in geographically grounded scenarios.

4 Evaluation Protocol

To ensure fair, rigorous, and reproducible comparisons across routing algorithms, we propose a standardized evaluation protocol tailored for our stochastic vehicle routing benchmark. This protocol assesses not only solution quality but also robustness, feasibility, and scalability under conditions of realistic uncertainty, addressing limitations of earlier benchmark designs that overlooked stochastic effects [22, 11].

4.1 Performance Metrics

We report a comprehensive suite of metrics to evaluate different facets of algorithmic behavior. The Total Cost (TC) measures the cumulative travel time across all vehicles, including congestion-induced delays and accident-based disruptions. Formally, it is computed as:

TC = Σ_{k ∈ V} Σ_{(i,j) ∈ route_k} T(i, j, t_i),   (15)

where T(i, j, t_i) is the sampled travel time from node i to j at time t_i.
Figure 3: Solver Comparison: Overall Performance Metrics.

Constraint Violation Rate (CVR) quantifies the proportion of customers whose service violates time windows or exceeds vehicle capacity, capturing solution feasibility:

CVR = (#violations / #customers) × 100%.   (16)

Feasibility Rate (FR) reflects the robustness of solutions across instances and solvers. It is defined as the fraction of problem instances for which a solution satisfies all routing constraints:

FR = #feasible instances / #total instances.   (17)

Runtime (RT) captures wall-clock computation time, serving as a proxy for scalability and practical deployability. Robustness (ROB) measures the variability in cost due to stochastic elements by computing the variance across N independent samples of the same instance:

ROB = (1/N) Σ_{i=1}^{N} (TC_i − TC̄)²,   (18)

where TC̄ denotes the mean total cost. This metric is especially important in stochastic VRP settings [2, 18].

5 Experimental Results

We conduct a comprehensive evaluation of baseline methods on our stochastic VRP benchmark, which systematically varies four key dimensions: instance size, problem type, depot configuration, and vehicle configuration. We generate 10 instances for each combination across instance sizes {10, 20, 50, 100, 200, 500, 1000}, problem types {CVRP, TWVRP}, depot configurations {single, multi}, and vehicle settings {single, multi}, yielding a large-scale, structured test suite. Additionally, we provide a scalable data generator for training. Reinforcement learning models were trained on 100k synthetic instances under the single-depot, single-vehicle CVRP and TWVRP regimes.

5.1 Evaluation Scope

All methods were evaluated under the stochastic setting defined in Section 2. Metrics reported include total cost (incorporating all stochastic factors), constraint violation rate (CVR), feasibility rate, runtime, and robustness (measured as variance across stochastic samples).
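Equations (16)-(18) translate directly into code; the sample values below are hypothetical solver outputs, not results from the paper:

```python
from statistics import mean

def cvr(violations, customers):
    # Equation (16): percentage of customers with a constraint violation
    return 100.0 * violations / customers

def feasibility_rate(feasible_flags):
    # Equation (17): fraction of instances with a fully feasible solution
    return sum(feasible_flags) / len(feasible_flags)

def robustness(costs):
    # Equation (18): population variance of total cost across N stochastic runs
    m = mean(costs)
    return sum((c - m) ** 2 for c in costs) / len(costs)

# Hypothetical outputs for one instance over 5 stochastic realizations
costs = [40100.0, 40250.0, 40400.0, 40150.0, 40300.0]
rob = robustness(costs)
fr = feasibility_rate([True, True, False, True])
```

Because ROB is a variance over repeated realizations of the same instance, a solver can only score well on it by producing routes whose realized cost is stable under the sampled delays, not merely cheap on average.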
Figure 4: Solver Performance by Problem Size (total cost, runtime, feasibility rate, and waiting time per solver across problem sizes 10–1000).

Table 2: Performance of baseline methods (mean over all instances, 5 stochastic runs).

Method | Total Cost ↓ | CVR (%) ↓ | Feasibility ↑ | Runtime (s) ↓ | Robustness ↓
NN+2opt | 40707.5 | 1.6 | 0.984 | 0.697 | 0.1
Tabu Search | 40787.8 | 1.6 | 0.690 | 5.157 | 0.1
ACO | 40566.5 | 1.6 | 0.690 | 11.382 | 0.1
OR-Tools | 40259.3 | 1.6 | 0.984 | 1.940 | 0.1
Attention Model (AM) | 41358.3 | 1.9 | 0.910 | 1.852 | 0.2
POMO | 40650.4 | 1.7 | 0.933 | 1.421 | 0.1

Classical algorithms (Nearest Neighbor + 2-opt, Tabu Search, and ACO; refer to Appendix B for more details) were evaluated across all settings without modification. Their flexibility allows them to handle diverse configurations out of the box.

5.2 Experimental Setup

All baselines were evaluated on a consumer-grade CPU (Intel i7, 16GB RAM), except learning-based models, which used a single NVIDIA RTX 4080. Classical and metaheuristic solvers were implemented in Python; learning models used the RL4CO framework [4]. Training for RL models was done on 100k synthetic instances (refer to Appendix D for more details). Evaluation followed the stochastic protocol detailed in Section 2, averaging results over five realizations per test case.

5.3 Results & Analysis

Overall Performance. Table 2 and Figure 3 summarize the aggregate performance across all test cases. OR-Tools achieved the best
overall cost (40,259), followed closely by ACO (40,566; +0.8%) and POMO (40,650; +1.0%), with OR-Tools and NN+2opt maintaining the highest feasibility rates (98.4%) while NN+2opt delivered the fastest runtime (0.697s). Learning-based approaches demonstrated a feasibility-speed tradeoff, with POMO offering better solution quality than NN+2opt at competitive runtimes (1.421s) while the Attention Model showed higher constraint violations (CVR: 1.9%) but reasonable performance across other metrics.

Table 3: Performance Comparison: CVRP vs TWCVRP.

Method | CVRP: Cost ↓ / CVR ↓ / Feas ↑ / RT ↓ | TWCVRP: Cost ↓ / CVR ↓ / Feas ↑ / RT ↓ | %∆
NN+2opt | 10399.2 / 0.0 / 1.000 / 646.3 | 71015.8 / 3.2 / 0.968 / 747.8 | +582.9
Tabu Search | 10494.1 / 0.0 / 1.000 / 945.1 | 71081.5 / 3.2 / 0.381 / 9368.6 | +577.3
ACO | 10384.9 / 0.0 / 1.000 / 11159.8 | 70748.1 / 3.2 / 0.381 / 11603.6 | +581.3
OR-Tools | 9499.7 / 0.0 / 1.000 / 2328.0 | 71018.8 / 3.2 / 0.968 / 1552.1 | +647.6
Attention Model (AM) | 11235.6 / 0.2 / 0.965 / 1775.4 | 71481.0 / 3.6 / 0.854 / 1929.2 | +536.2
POMO | 10358.7 / 0.1 / 0.987 / 1316.9 | 70942.1 / 3.3 / 0.879 / 1525.3 | +584.8

Table 4: Detailed Performance Analysis by Instance Size.

Method | Small (≤ 50): Cost / CVR / Feas / RT | Medium (100–200): Cost / CVR / Feas / RT | Large (≥ 500): Cost / CVR / Feas / RT
NN+2opt | 6295.0 / 0.6 / 0.994 / 5.9 | 31486.1 / 2.3 / 0.977 / 90.9 | 101547.5 / 2.4 / 0.976 / 2340.0
Tabu Search | 6232.5 / 0.6 / 0.917 / 251.6 | 31692.2 / 2.3 / 0.542 / 1339.5 | 101716.5 / 2.4 / 0.500 / 16332.1
ACO | 6080.7 / 0.6 / 0.917 / 69.6 | 31371.9 / 2.3 / 0.542 / 1530.6 | 101490.0 / 2.4 / 0.500 / 38201.0
OR-Tools | 6008.1 / 0.6 / 0.994 / 513.7 | 30640.2 / 2.3 / 0.977 / 665.8 | 101255.0 / 2.4 / 0.976 / 5353.7
Attention Model (AM) | 6523.2 / 0.8 / 0.975 / 42.3 | 32165.5 / 2.6 / 0.910 / 857.4 | 102756.2 / 2.9 / 0.835 / 4758.9
POMO | 6176.4 / 0.7 / 0.985 / 29.7 | 31024.8 / 2.4 / 0.945 / 642.3 | 101408.7 / 2.5 / 0.860 / 3586.2

Impact of Time Windows. Table 3 reveals that introducing time windows (TWCVRP) increases total cost by 536–648% across all solvers, with OR-Tools incurring the highest relative penalty (+647.6%) while the Attention Model showed the lowest relative increase (+536.2%).
Learning-based methods demonstrated moderate resilience to time constraints, with POMO maintaining 87.9% feasibility and the Attention Model 85.4%, positioning them between the top performers (NN+2opt and OR-Tools at >96%) and the struggling metaheuristics (ACO and Tabu Search at 38.1%).

Scalability by Instance Size. As shown in Table 4 and Figure 4, cost scaled approximately 16× from small (≤ 50 nodes) to large (≥ 500 nodes) instances across all methods, with NN+2opt and OR-Tools maintaining feasibility >97% at all scales, while learning-based methods showed moderate degradation (POMO: 86%, AM: 83.5%). Learning-based approaches demonstrated competitive performance-runtime tradeoffs, with POMO offering the fastest runtime on small instances (29.7s) and maintaining feasibility significantly better than ACO and Tabu Search (50%) on large instances, though traditional heuristics still held the advantage for the largest problems.

Effect of Depot Configuration. Table 5 shows that multi-depot setups consistently reduced costs and improved feasibility across all methods, with OR-Tools achieving a 72% cost reduction (from 34,611 to 9,561) and POMO showing similarly impressive gains (71% reduction to 10,178). Learning-based methods particularly benefited from multi-depot configurations, with both POMO and the Attention Model reaching perfect feasibility (100%) despite their variable performance in single-depot scenarios (92–96.5%), supporting the counterintuitive finding that more flexible depot placements improve both computational and solution efficiency regardless of algorithm class.

Key Takeaways. Our evaluation underscores several
important insights:

Table 5: Performance Analysis by Depot Configuration.

Method | Single Depot: Cost ↓ / CVR ↓ / Feas ↑ / RT ↓ | Multi Depot: Cost ↓ / CVR ↓ / Feas ↑ / RT ↓
NN+2opt | 34978.5 / 0.8 / 0.992 / 686.3 | 10625.2 / 0.0 / 1.000 / 643.7
Tabu Search | 35072.0 / 0.8 / 0.690 / 4818.2 | 10713.8 / 0.0 / 1.000 / 946.1
ACO | 34852.1 / 0.8 / 0.690 / 10712.0 | 10614.9 / 0.0 / 1.000 / 11298.7
OR-Tools | 34611.0 / 0.8 / 0.992 / 1911.2 | 9561.4 / 0.0 / 1.000 / 2396.5
Attention Model (AM) | 35825.6 / 1.1 / 0.920 / 1785.3 | 10974.7 / 0.0 / 1.000 / 1852.6
POMO | 34786.3 / 0.9 / 0.965 / 1438.2 | 10178.5 / 0.0 / 1.000 / 1324.8

• OR-Tools is the most reliable choice for large-scale offline optimization, balancing quality and feasibility despite higher runtimes.
• NN+2opt offers a robust, low-latency alternative for real-time deployment with minimal compromise on cost or feasibility.
• Metaheuristics underperform at scale, while learning-based methods like POMO offer feasible solutions with better scalability, though they still lag behind top heuristics.
• The Attention Model demonstrates potential but requires further refinement to match the performance of top-performing methods, particularly for large instances.
• Time windows impose the most significant complexity, sharply degrading performance for non-adaptive solvers, though learning-based methods show moderate resilience.
• Multi-depot settings improve both feasibility and runtime across all solver types, offering a practical design consideration for logistics planning.

Together, Figures 3 and 4 illustrate these trends across key metrics. SVRPBench successfully reveals scalability bottlenecks, constraint sensitivity, and performance trade-offs, establishing a realistic and informative testbed for stochastic routing research. Please refer to Appendix C for additional results.

6 Limitations and Future Directions

While SVRPBench advances realism in stochastic vehicle routing, several limitations remain.
Our delay models rely on Gaussian and log-normal distributions to simulate traffic peaks and randomness. These are efficient and interpretable, yet unable to capture network-level dynamics such as bottlenecks, cascading congestion, or real-time rerouting [14]. The assumptions, however, are user-modifiable, allowing injection of domain-specific uncertainty. Reinforcement learning methods like AM and POMO show limited scalability to larger instances, reflecting overfitting and weak generalization. Additionally, our current evaluation protocol lacks standardized procedures to assess robustness across instance scales and distribution shifts, motivating future work on curriculum learning and hierarchical solver design.

To further bridge the gap to real-world logistics, future extensions will incorporate road-constrained instances derived from OpenStreetMap or GIS data, enabling geographically grounded routing behavior. Dynamic and multi-day settings, with online updates and rolling horizons, will support evaluation of adaptive strategies [2]. We also plan to introduce diagnostic tasks for probing model robustness, generalization under distributional shift, and few-shot performance [21, 17], enabling more fine-grained analysis of algorithmic reliability in complex environments.

7 Conclusion

We presented SVRPBench, a modular and open-source benchmark for evaluating vehicle routing under realistic stochastic dynamics. By incorporating time-dependent congestion, probabilistic delays, and heterogeneous customer time windows, our benchmark departs from static assumptions and reflects the operational uncertainty of real logistics.

Empirical results across over 500 instances revealed that classical and metaheuristic methods remain competitive on feasibility and runtime, while reinforcement learning models like POMO and AM, despite strong performance in training regimes, struggled with multi-depot generalization and exhibited >20% cost degradation under distributional shift.
Surprisingly, multi-depot configurations consistently improved both cost and robustness, even for learning-based solvers, highlighting
the importance of flexible depot placement in practical settings.

By supporting large-scale, reproducible evaluations via Hugging Face and GitHub, SVRPBench offers a community platform to benchmark solvers across realism axes. We urge the research community to develop adaptive, noise-aware routing algorithms that bridge the gap between synthetic optimization and deployable, resilient logistics solutions.

References

[1] Yossiri Adulyasak and Patrick Jaillet. Models and algorithms for stochastic and robust vehicle routing with deadlines. Transportation Science, 50(2):608–626, 2016.
[2] Cock Bastian and Alexander H. G. Rinnooy Kan. The stochastic vehicle routing problem revisited. European Journal of Operational Research, 56(3):407–412, 1992.
[3] Russell Bent and Pascal Van Hentenryck. Scenario-based planning for partially dynamic vehicle routing with stochastic customers. Operations Research, 52(6):977–987, 2004.
[4] F. Berto, C. Hua, J. Park, M. Kim, H. Kim, J. Son, H. Kim, J. Kim, and J. Park. RL4CO: A unified reinforcement learning for combinatorial optimization library. In Proceedings of Advances in Neural Information Processing Systems (workshop), 2023.
[5] Dimitris J. Bertsimas, Patrick Jaillet, and Amedeo R. Odoni. A priori optimization. Operations Research, 38(6):1019–1033, 1990.
[6] Werner Brilon, Jürgen Geistefeldt, and Markus Regler. Reliability of travel times: A stochastic modeling approach. Transportation Research Record, 2061(1):1–8, 2008.
[7] K. Chepuri and T. Homem-de-Mello. Solving the vehicle routing problem with stochastic demands using the cross-entropy method. Annals of Operations Research, 134(1):153–181, 2005.
[8] George B. Dantzig and John H. Ramser. The truck dispatching problem. Management Science, 6(1):80–91, 1959.
[9] Moshe Dror, Gilbert Laporte, and Pierre Trudeau. Vehicle routing with stochastic demands: Properties and solution frameworks. Transportation Science, 23(3):166–176, 1989.
[10] FedEx Corporation.
FedEx residential delivery options whitepaper. Whitepaper, 2020. Flexible delivery time window practices.
[11] Michel Gendreau, Gilbert Laporte, and Renaud Séguin. Stochastic vehicle routing. European Journal of Operational Research, 88(1):3–12, 1996.
[12] Michel Gendreau, Gilbert Laporte, and Renaud Séguin. A tabu search heuristic for the vehicle routing problem with stochastic demands and customers. Operations Research, 44(3):469–477, 1996.
[13] Michel Gendreau, Gilbert Laporte, and Renaud Séguin. Stochastic vehicle routing. European Journal of Operational Research, 88(1):3–12, 1996.
[14] Lars Magnus Hvattum, Arne Løkketangen, and Gilbert Laporte. Solving a dynamic and stochastic vehicle routing problem with a sample scenario hedging heuristic. Transportation Science, 40(4):421–438, 2006.
[15] Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475, 2018.
[16] Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! International Conference on Learning Representations (ICLR), 2019.
[17] Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, and Seungjai Min. POMO: Policy optimization with multiple optima for reinforcement learning. Advances in Neural Information Processing Systems, 33:21188–21198, 2020.
[18] Gilbert Laporte, François V. Louveaux, and Hélène Mercure. The vehicle routing problem with stochastic travel times. Transportation Science, 26(3):161–170, 1992.
[19] Qing Li, Ming Xu, and Yinhai Wang. Modeling travel time variability with lognormal distribution. Transportation Research Record, 2490(1):47–54, 2015.
[20] Bring Logistics. Customer preferences in last-mile deliveries: Flexible windows and urban density effects. Industry Report, 2021. Available via company white papers.
[21] Mohammadreza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takáč. Reinforcement learning for solving the vehicle routing problem. In Proceedings of Advances in Neural Information Processing Systems, pages 9861–9871, 2018.
[22] Jorge Oyola, Halvard Arntzen, and David L. Woodruff. The stochastic vehicle routing problem, a literature review, part I: Models. EURO Journal on Transportation and Logistics, 7(3):193–221, 2018.
[23] Jorge Luis Oyola, Halvard Arntzen, and David L. Woodruff. The stochastic vehicle routing problem: A literature review, part I: Models. EURO Journal on Transportation and Logistics, 7(3):193–221, 2018.
[24] Nikica Perić, Slaven Begović, and Vinko Lesić. Adaptive memory procedure for solving real-world vehicle routing problem. arXiv preprint arXiv:2403.04420, 2024.
[25] Nikica Perić, Slaven Begović, and Vinko Lesić. Adaptive memory procedure for solving real-world vehicle routing problem. arXiv preprint arXiv:2403.04420, 2024.
[26] Laurent Perron and Frédéric Didier. CP-SAT.
[27] David Schrank, Bill Eisele, Tim Lomax, et al. 2021 urban mobility report. Texas A&M Transportation Institute, 2021.
[28] Federal Highway Administration, U.S. Department of Transportation. Manual on uniform traffic control devices (MUTCD), 2009 edition, 2009. Accident and incident classification and duration guidelines.
[29] Ron van Duin, Tolga Bektaş, Murat Bektaş, and Tavares Tan. Attended home deliveries: Preferences and behavioral patterns. Transportation Research Procedia, 16:30–39, 2016.
[30] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
[31] Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, and Chu-Min Li. Reinforced Lin–Kernighan–Helsgaun algorithms for the traveling salesman problems. Knowledge-Based Systems, 260:110144, 2023.
A Open Infrastructure

To ensure reproducibility, extensibility, and accessibility, we release all components of the benchmark openly on GitHub and Hugging Face. This includes the dataset, instance generator, evaluation engine, and baseline implementations. Evaluation instances can be used out of the box, while the modular codebase allows users to integrate new solvers and adapt evaluation scripts.

A public leaderboard on Hugging Face (https://huggingface.co/spaces/ahmedheakl/SVRP-leaderboard) serves as the central hub for documentation, instance downloads, and leaderboard submissions. Submissions are validated automatically and ranked by total cost, feasibility, and runtime. All data and code are versioned, containerized (Docker-supported), and designed to support future extensions such as new routing scenarios or solver classes.

We welcome community contributions, including new solvers, datasets, and improvements to documentation or evaluation tools. By sharing the infrastructure broadly, we aim to foster collaboration and accelerate progress in realistic stochastic routing research.

A.1 Reproducibility Requirements

To maintain transparency and enable fair comparison, submissions intended for leaderboard inclusion or academic publication must satisfy several criteria. Solvers must be evaluated on the official benchmark test set, with all hyperparameters, configuration details, and seed values fully documented. Additionally, we encourage open-source releases or detailed methodological descriptions to ensure algorithm reproducibility. Runtime should be measured using the official script or a clearly defined procedure, consistent across all experiments.

These guidelines help uphold reproducibility standards advocated in the combinatorial optimization literature [7, 1] and promote meaningful scientific comparisons under controlled, yet realistic, conditions.

B Baseline Models

Ant Colony Optimization (ACO). Routes are constructed by sampling next
locations based on pheromone intensity and heuristic proximity. The pheromone matrix is updated as:

\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{(k)}, \quad \Delta\tau_{ij}^{(k)} = \begin{cases} Q/L^{(k)}, & \text{if } (i,j) \in \text{tour}^{(k)} \\ 0, & \text{otherwise,} \end{cases} \tag{19}

where \rho = 0.5, m = 50 ants, \alpha = 1, and \beta = 2.

Tabu Search. Candidate solutions are evaluated using a penalized cost function:

f(S) = \text{Cost}(S) + \lambda \cdot \text{Penalty}(S), \tag{20}

where \lambda is adaptively tuned based on violation severity.

Learning-Based Methods. The Attention Model is trained to minimize the expected cost:

\mathcal{L}(\theta) = \mathbb{E}_{X \sim \mathcal{D}}\left[ \mathbb{E}_{\pi_\theta(a|X)}[L(a|X)] \right]. \tag{21}

POMO uses multiple rollout agents initialized with distinct permutations. Its gradient signal is computed as:

\nabla_\theta J(\theta) = \frac{1}{M} \sum_{m=1}^{M} \sum_{t} \nabla_\theta \log \pi_\theta(a_t^m \mid s_t^m) \cdot (R^m - b), \tag{22}

where M is the number of rollouts and b is a learned baseline for variance reduction.

C Detailed Solver Performance Breakdowns

Tables 6, 7, 8, 9, 10, and 11 present a comprehensive performance breakdown of various solvers across multiple configurations for Capacitated VRP (CVRP) and Time Window VRP (TWVRP). Each solver, NN+2opt, Tabu Search, ACO, OR-Tools, and the RL-based methods (Attention, POMO), is evaluated under different settings including depot configurations (single depot, multi depot, depots equal to cities), problem sizes (ranging from 10 to 1000 customers), and feasibility constraints. Metrics include total cost, CVR (constraint violation rate), feasibility, runtime, and time window violations.

Traditional heuristic solvers (NN+2opt, Tabu, ACO) generally yield competitive costs with increasing runtimes as problem size grows, while OR-Tools offers consistent feasibility but with significantly higher runtimes. Reinforcement learning solvers (Attention, POMO) demonstrate exceptionally fast runtimes (in milliseconds), achieving full feasibility across all tested instances, although their cost can vary notably, especially for large-scale problems where some cost inflation is observed (e.g., POMO on 1000-node CVRP).
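To make the ACO pheromone update of Eq. (19) concrete, the sketch below applies one evaporation-plus-deposit step with NumPy. The deposit constant Q = 1.0 is an illustrative assumption (the paper fixes only ρ = 0.5 and m = 50), and the four-node instance is a toy example, not benchmark data.

```python
import numpy as np

def update_pheromones(tau, tours, lengths, rho=0.5, Q=1.0):
    """One update following Eq. (19): evaporate every trail by (1 - rho),
    then deposit Q / L^(k) on each edge used by ant k's closed tour."""
    tau = (1.0 - rho) * tau  # evaporation on every edge
    for tour, length in zip(tours, lengths):
        # edges of the closed tour, including the return to the start node
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i, j] += Q / length
    return tau

# Toy instance: 4 nodes, m = 2 ants, uniform initial pheromone.
tau = np.ones((4, 4))
tours = [[0, 1, 2, 3], [0, 2, 1, 3]]  # each ant's visiting order
lengths = [10.0, 12.0]                # corresponding tour lengths L^(k)
tau = update_pheromones(tau, tours, lengths)
```

Edges used by an ant gain Q/L^(k) after evaporation; unused edges only decay, which is what steers subsequent sampling toward short tours.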
These results highlight trade-offs between solution quality, computational efficiency, and scalability across solver paradigms.

Table 6: NN+2opt - Detailed Performance Breakdown.

Configuration Size Cost CVR Feas Runtime TW Violations
single depot single vehicule sumDemands 10 2290.7 0.0 1.000 0.0 0.00
multi depot 10 2371.8 0.0 1.000 2.0 0.00
single depot single vehicule sumDemands 20 3736.5 0.0 1.000 0.3 0.00
multi depot 20 3662.9 0.0 1.000 3.2 0.00
single depot single vehicule sumDemands 50 4840.4 0.0 1.000 10.5 0.00
multi depot 50 5626.1 0.0 1.000 14.1 0.00
single depot single vehicule sumDemands 100 6841.4 0.0 1.000 31.8 0.00
multi depot 100 7868.2 0.0 1.000 31.3 0.00
single depot single vehicule sumDemands 200 11268.2 0.0 1.000 125.2 0.00
multi depot 200 11479.2 0.0 1.000 135.5 0.00
single depot single vehicule sumDemands 500 16390.0 0.0 1.000 829.5 0.00
multi depot 500 17551.0 0.0 1.000 826.3 0.00
single depot single vehicule sumDemands 1000 25844.3 0.0 1.000 3545.9 0.00
multi depot 1000 25817.4 0.0 1.000 3493.3 0.00
depots equal city 10 4564.6 3.3 0.967 2.0 0.00
single depot 10 4359.0 3.3 0.967 2.0 0.00
depots equal city 20 8192.2 0.0 1.000 5.7 0.00
single depot 20 8347.0 0.0 1.000 5.0 0.00
depots equal city 50 13666.8 0.0 1.000 14.9 0.00
single depot 50 13882.4 0.7 0.993 11.7 0.00
depots equal city 100 38704.2 6.0 0.940 52.2 0.00
single depot 100 30389.4 1.0 0.990 37.8 0.00
depots equal city 200 89937.2 10.1 0.899 145.2 0.00
single depot 200 55400.9 1.0 0.990 167.8 0.00
depots equal city 500 175711.7 7.7 0.923 1318.5 0.00
single depot 500 118279.0 2.2 0.978 929.7 0.00
depots equal city 1000 244956.8 6.1 0.939 3865.2 0.00
single depot 1000 187829.7 2.7 0.973 3911.5 0.00

Table 7: Tabu Search - Detailed Performance Breakdown.

Configuration Size Cost CVR Feas Runtime TW Violations
single depot single vehicle sumDemands 10 2297.2 0.0 1.000 19.4 0.00
multi depot 10 2373.8 0.0 1.000 13.3 0.00
single depot single vehicule sumDemands 20 3776.7 0.0 1.000 47.6 0.00
multi depot 20 3656.4 0.0 1.000 33.8 0.00
single depot single vehicule sumDemands 50 4897.0 0.0 1.000 79.8 0.00
multi depot 50 5749.3 0.0 1.000 102.9 0.00
single depot single vehicule sumDemands 100 6981.9 0.0 1.000 170.0 0.00
multi depot 100 8058.6 0.0 1.000 169.2 0.00
single depot single vehicule sumDemands 200 11417.8 0.0 1.000 373.9 0.00
multi depot 200 11602.8 0.0 1.000 314.2 0.00
single depot single vehicule sumDemands 500 16554.8 0.0 1.000 1270.4 0.00
multi depot 500 17676.2 0.0 1.000 1445.1 0.00
single depot single vehicule sumDemands 1000 25995.4 0.0 1.000 4647.9 0.00
multi depot 1000 25879.7 0.0 1.000 4544.5 0.00
depots equal city 10 3966.1 3.3 0.667 185.9 0.00
single depot 10 4067.6 3.3 0.667 193.6 0.00
depots equal city 20 8156.1 0.0 1.000 479.8 0.00
single depot 20 7661.3 0.0 1.000 489.9 0.00
depots equal city 50 13918.7 0.0 1.000 719.3 0.00
single depot 50 14269.3 0.7 0.667 654.4 0.00
depots equal city 100 39031.2 6.0 0.000 2013.6 0.00
single depot 100 30820.4 1.0 0.333 1998.3 0.00
depots equal city 200 90028.5 10.1 0.000 2662.6 0.00
single depot 200 55596.2 1.0 0.000 3014.1 0.00
depots equal city 500 176001.3 8.1 0.000 13851.1 0.00
single depot 500 118726.0 2.2 0.000 11822.7 0.00
depots equal city 1000 244953.3 6.2 0.000 50402.1 0.00
single depot 1000 187945.6 2.7 0.000 42673.2 0.00

Table 8: ACO - Detailed Performance Breakdown.
Configuration Size Cost CVR Feas Runtime TW Violations
single depot single vehicule sumDemands 10 2183.6 0.0 1.000 14.3 0.00
multi depot 10 2325.4 0.0 1.000 11.9 0.00
single depot single vehicule sumDemands 20 3725.9 0.0 1.000 34.6 0.00
multi depot 20 3644.2 0.0 1.000 31.4 0.00
single depot single vehicule sumDemands 50 4840.5 0.0 1.000 165.2 0.00
multi depot 50 5626.2 0.0 1.000 179.5 0.00
single depot single vehicule sumDemands 100 6840.4 0.0 1.000 698.1 0.00
multi depot 100 7868.4 0.0 1.000 678.2 0.00
single depot single vehicule sumDemands 200 11264.3 0.0 1.000 2295.7 0.00
multi depot 200 11473.0 0.0 1.000 2380.3 0.00
single depot single vehicule sumDemands 500 16389.2 0.0 1.000 15573.5 0.00
multi depot 500 17551.6 0.0 1.000 16468.6 0.00
single depot single vehicule sumDemands 1000 25840.7 0.0 1.000 58364.4 0.00
multi depot 1000 25815.8 0.0 1.000 59341.2 0.00
depots equal city 10 3931.6 3.3 0.667 9.4 0.00
single depot 10 3819.2 3.3 0.667 9.6 0.00
depots equal city 20 7714.2 0.0 1.000 34.2 0.00
single depot 20 7749.4 0.0 1.000 34.1 0.00
depots equal city 50 13535.4 0.0 1.000 166.9 0.00
single depot 50 13872.4 0.7 0.667 143.6 0.00
depots equal city 100 37800.2 6.0 0.000 629.4 0.00
single depot 100 30389.5 1.0 0.333 679.0 0.00
depots equal city 200 89937.2 10.1 0.000 2556.8 0.00
single depot 200 55401.8 1.0 0.000 2327.0 0.00
depots equal city 500 175711.1 7.7 0.000 15299.3 0.00
single depot 500 118280.2 2.2 0.000 14781.5 0.00
depots equal city 1000 244999.0 6.1 0.000 70932.6 0.00
single depot 1000 187332.2 2.8 0.000 54846.8 0.00

Table 9: OR-Tools - Detailed Performance Breakdown.

Configuration Size Cost CVR Feas Runtime TW Violations
single depot single vehicule sumDemands 10 2049.2 0.0 1.000 1037.9 0.00
multi depot 10 2167.6 0.0 1.000 1003.3 0.00
single depot single vehicule sumDemands 20 3238.9 0.0 1.000 999.5 0.00
multi depot 20 3142.2 0.0 1.000 1002.6 0.00
single depot single vehicule sumDemands 50 3773.4 0.0 1.000 1015.9 0.00
multi depot 50 4714.2 0.0 1.000 1015.9 0.00
single depot single vehicule sumDemands 100 6283.5 0.0 1.000 1046.5 0.00
multi depot 100 6250.4 0.0 1.000 1048.8 0.00
single depot single vehicule sumDemands 200 9198.8 0.0 1.000 1174.7 0.00
multi depot 200 8956.2 0.0 1.000 1185.4 0.00
single depot single vehicule sumDemands 500 15677.5 0.0 1.000 2129.5 0.00
multi depot 500 15883.2 0.0 1.000 2085.2 0.00
single depot single vehicule sumDemands 1000 25844.3 0.0 1.000 8412.4 0.00
multi depot 1000 25816.3 0.0 1.000 9434.5 0.00
depots equal city 10 4564.7 3.3 0.967 12.6 0.00
single depot 10 4359.0 3.3 0.967 3.6 0.00
depots equal city 20 8192.3 0.0 1.000 8.2 0.00
single depot 20 8346.9 0.0 1.000 7.2 0.00
depots equal city 50 13666.7 0.0 1.000 30.3 0.00
single depot 50 13882.3 0.7 0.993 27.7 0.00
depots equal city 100 38704.1 6.0 0.940 108.6 0.00
single depot 100 30389.3 1.0 0.990 87.8 0.00
depots equal city 200 89937.5 10.1 0.899 345.3 0.00
single depot 200 55401.8 1.0 0.990 329.8 0.00
depots equal city 500 175711.4 7.7 0.923 2010.0 0.00
single depot 500 118279.4 2.2 0.978 2020.5 0.00
depots equal city 1000 244998.0 6.1 0.939 8273.1 0.00
single depot 1000 187830.1 2.7 0.973 8464.4 0.00

Table 10: RL Algorithms – Detailed Performance on CVRP (runtimes in ms).
Solver Configuration Size Cost CVR Feas Runtime (ms) TW Violations
Attention single depot single vehicule sumDemands 10 2364.12 0.00 1.000 0.365 0.00
POMO single depot single vehicule sumDemands 10 2312.68 0.00 1.000 0.282 0.00
Attention single depot single vehicule sumDemands 20 3222.68 0.00 1.000 0.269 0.00
POMO single depot single vehicule sumDemands 20 3341.56 0.00 1.000 0.279 0.00
Attention single depot single vehicule sumDemands 50 5803.63 0.00 1.000 0.304 0.00
POMO single depot single vehicule sumDemands 50 5920.19 0.00 1.000 0.287 0.00
Attention single depot single vehicule sumDemands 100 8553.26 0.00 1.000 0.319 0.00
POMO single depot single vehicule sumDemands 100 16983.50 0.00 1.000 0.319 0.00
Attention single depot single vehicule sumDemands 200 13228.84 0.00 1.000 0.353 0.00
POMO single depot single vehicule sumDemands 200 12726.96 0.00 1.000 0.360 0.00
Attention single depot single vehicule sumDemands 500 22496.94 0.00 1.000 0.463 0.00
POMO single depot single vehicule sumDemands 500 88789.44 0.00 1.000 0.506 0.00
Attention single depot single vehicule sumDemands 1000 37430.47 0.00 1.000 0.649 0.00
POMO single depot single vehicule sumDemands 1000 184656.10 0.00 1.000 0.689 0.00

Table 11: RL Algorithms – Detailed Performance on TWVRP (runtimes in ms).

Solver Configuration Size Cost CVR Feas Runtime (ms) TW Violations
Attention single depot 10 3940.38 0.00 1.000 0.916 0.00
POMO single depot 10 3854.6 0.00 1.000 0.707 0.00
Attention single depot 20 6504.73 0.00 1.000 1.780 0.00
POMO single depot 20 6744.7 0.00 1.000 1.841 0.00
Attention single depot 50 29132.94 0.00 1.000 0.731 0.00
POMO single depot 50 29718.0 0.00 1.000 0.689 0.00
Attention single depot 100 57778.84 0.00 1.000 0.864 0.00
POMO single depot 100 114726.7 0.00 1.000 0.864 0.00
Attention single depot 200 113742.27 0.00 1.000 0.868 0.00
POMO single depot 200 109427.1 0.00 1.000 0.886 0.00
Attention single depot 500 271201.60 0.00 1.000 1.412 0.00
POMO single depot 500 438502.6 0.00 1.000 1.412 0.00
Attention single depot 1000 531470.88 0.00 1.000 1.638 0.00
POMO single depot 1000 611307.8 0.00 1.000 1.672 0.00

C.1 Qualitative Results

As shown in Figures 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14, we qualitatively observe that for CVRP instances with a small number of customers, both the Attention and POMO models, as well as classical methods (ACO, NN2OPT, and OR-Tools), generate highly structured and near-optimal routes. As the number of customers increases, route complexity grows, making it harder for models to preserve efficiency and structure. For TWVRP, the models' priority shifts toward satisfying delivery time windows, often at the expense of distance optimization. This results in routes that appear less spatially coherent but better aligned with temporal constraints.
Figure 5: CVRP 20 customers – Attention Model
Figure 6: CVRP 10 customers – POMO
Figure 7: CVRP 200 customers – Attention Model
Figure 8: TWVRP 20 customers – Attention Model
Figure 9: CVRP 10 customers – ACO
Figure 10: TWVRP 10 customers – ACO
Figure 11: CVRP 10 customers – NN2OPT
Figure 12: TWVRP 10 customers – NN2OPT
Figure 13: CVRP 10 customers – OR-Tools
Figure 14: TWVRP 10 customers – OR-Tools

D Reinforcement Learning

D.1 Problem Formulation

We model both the Capacitated Vehicle Routing Problem (CVRP) and the Vehicle Routing Problem with Time Windows (VRPTW) as a Markov Decision Process (MDP) \mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma), where each state s_t \in \mathcal{S} encodes the vehicle's current position, remaining capacity, and visited set (and, for VRPTW only, the current time and per-customer time windows [e_i, \ell_i]). Actions a_t \in \mathcal{A}(s_t) select the next customer, and transitions P(s_{t+1} \mid s_t, a_t) deterministically update the tour while, in VRPTW, adding stochastic delays. The reward when visiting customer j is

r(s_t, a_t) = -d_{i,j} - \tau \cdot \mathbb{1}[t_{\text{arrive}} > \ell_j],

with d_{i,j} the Euclidean distance and \tau a large penalty for time-window violations; the reward is zero upon return to the depot. We follow a constructive, autoregressive decoding: at each step we append one customer until all are visited.

D.2 Policy

We adopt the encoder–decoder with multi-head attention of Kool et al. [16]. Given embedded node features x_i \in \mathbb{R}^d, each of the L encoder layers applies multi-head self-attention. At step t, with context embedding h_t, we score each remaining node j by

u_{t,j} = v^\top \tanh\left( W_1 h_t + W_2 x_j \right)

and define \pi_\theta(a_t = j \mid s_t) = \exp(u_{t,j}) / \sum_{k \notin V_t} \exp(u_{t,k}). We optimize
the policy by maximizing the expected return J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)] using two constructive, autoregressive policy-gradient methods. A constructive policy builds a complete solution by sequentially selecting one customer at a time until the tour is finished, while an autoregressive policy conditions each action on the history of previous choices, enabling the network to capture dependencies across steps.

We first apply REINFORCE [30], which updates parameters via

\nabla_\theta J(\theta) = \mathbb{E}\left[ \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, (R(\tau) - b(s_t)) \right],

where b(s_t) is a rollout baseline obtained by greedy decoding. POMO [17] then samples K different start nodes per instance, computes returns R_k and a shared baseline \bar{R} = \frac{1}{K} \sum_k R_k, and applies

\nabla_\theta J(\theta) = \frac{1}{K} \sum_{k=1}^{K} \nabla_\theta \log \pi_\theta(\tau_k)\, (R_k - \bar{R}).

REINFORCE offers simplicity and unbiased gradients, while POMO's shared baseline exploits VRP permutation symmetry for variance reduction; together they provide a strong comparison between a classical Monte Carlo approach and a state-of-the-art, variance-reduced, VRP-specific algorithm.

D.3 Training Details

All models were implemented in the RL4CO framework and trained end-to-end with Adam at a learning rate of 10^{-4}. For CVRP with REINFORCE we used a batch size of 512 and generated 100,000 synthetic instances on the fly; for VRPTW with POMO we used a batch size of 64 and 1,000,000 instances. Validation employed greedy decoding under nominal travel-time conditions. VRPTW environments included log-normal delays calibrated to traffic data, Gaussian time-of-day kernels, and Poisson accident events, with infeasible actions heavily penalized to enforce time windows.

D.4 Evaluation on SVRPBench

After training, we converted each of the 500+ SVRPBench instances into the RL4CO environment format and ran the trained policies in greedy mode, selecting at each step a_t = \arg\max_j \pi_\theta(a_t = j \mid s_t). To assess robustness, we then simulated each resulting tour under multiple sampled delay realizations and reported average tour length and feasibility rates.
Despite domain shift, attention-based RL policies maintained high feasibility and near-optimal costs across all problem sizes.
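As a minimal sketch of POMO's shared-baseline trick described in Appendix D.2, the snippet below computes the per-rollout advantages R_k − R̄ with NumPy; the surrogate loss and backpropagation are left to an autodiff framework such as the RL4CO/PyTorch stack used in training. The example returns are illustrative, not benchmark values.

```python
import numpy as np

def pomo_advantages(returns):
    """Shared-baseline advantage for POMO: each of the K rollouts of one
    instance is compared against the mean return over all K rollouts."""
    returns = np.asarray(returns, dtype=float)
    baseline = returns.mean()       # R_bar = (1/K) * sum_k R_k
    return returns - baseline       # (R_k - R_bar), one value per rollout

# K = 4 rollouts of the same instance, each from a distinct start node;
# returns are negative tour lengths, so larger is better.
adv = pomo_advantages([-10.0, -12.0, -9.0, -13.0])
# In an autodiff framework the surrogate loss would then be
# -(adv * log_prob_per_rollout).mean(): rollouts better than the group
# mean are reinforced and worse ones are penalized.
```

Because the baseline is the group mean, the advantages always sum to zero, which is the variance-reduction property that distinguishes POMO from plain REINFORCE with a single greedy-rollout baseline.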
SDPO: Importance-Sampled Direct Preference Optimization for Stable Diffusion Training

Xiaomeng Yang¹, Zhiyu Tan¹٬², Junyan Wang³, Zhijian Zhou², Hao Li¹٬²*
¹Shanghai Academy of Artificial Intelligence for Science
²Fudan University
³Australian Institute for Machine Learning, The University of Adelaide
yangxlarge@gmail.com
Code: https://github.com/yangxlarge/SDPO

Abstract

Preference learning has become a central technique for aligning generative models with human expectations. Recently, it has been extended to diffusion models through methods like Direct Preference Optimization (DPO). However, existing approaches such as Diffusion-DPO suffer from two key challenges: timestep-dependent instability, caused by a mismatch between the reverse and forward diffusion processes and by high gradient variance in early noisy timesteps, and off-policy bias arising from the mismatch between optimization and data collection policies. We begin by analyzing the reverse diffusion trajectory and observe that instability primarily occurs at early timesteps with low importance weights. To address these issues, we first propose DPO-C&M, a practical strategy that improves stability by clipping and masking uninformative timesteps while partially mitigating off-policy bias. Building on this, we introduce SDPO (Importance-Sampled Direct Preference Optimization), a principled framework that incorporates importance sampling into the objective to fully correct for off-policy bias and emphasize informative updates during the diffusion process. Experiments on CogVideoX-2B, CogVideoX-5B, and Wan2.1-1.3B demonstrate that both methods outperform standard Diffusion-DPO, with SDPO achieving superior VBench scores, human preference alignment, and training robustness. These results highlight the importance of timestep-aware, distribution-corrected optimization in diffusion-based preference learning.
*Corresponding author. Preprint. Under review.
arXiv:2505.21893v1 [cs.LG] 28 May 2025

1 Introduction

Recent advances in diffusion models [41, 21, 56, 35, 23] have significantly improved video realism and fidelity, as seen in models like Wan2.1 [59], CogVideoX [67], and HunyuanVideo [34]. However, aligning outputs with human preferences remains largely unexplored. While language models benefit from alignment methods such as RLHF [43] and DPO [49], recent work on adapting these to diffusion models has shown promise in the image domain. Examples include VADER [47], SPIN-Diffusion [68], DDPO [3], D3PO [65], and Diffusion-DPO [57]. Yet these methods still face challenges like poor data quality, complex denoising, and unstable training. This points to the need for alignment techniques that account for the temporal structure of video.

Figure 1: Generated samples from CogVideoX-5B after alignment using our SDPO method. The results demonstrate improved visual quality and stronger adherence to text prompts.

Despite progress, applying Direct Preference Optimization (DPO) [49] to diffusion models, especially in the context of video, remains challenging due to two key issues. The first is timestep-dependent instability in diffusion-based preference optimization. This problem is particularly evident in Diffusion-DPO [57], which approximates the reverse process by relying on forward conditionals. Several recent works attempt to mitigate timestep imbalance from a general training standpoint. For example, Kim et al. [33] propose adaptive sampling to reduce gradient variance across timesteps. SpeeD [61] categorizes timesteps based on learning dynamics and downweights those contributing less to training. B-TTDM [74] introduces a Beta-distributed timestep schedule. However, these methods are designed for general diffusion training, not preference learning, so their effectiveness in that context is unclear. The
https://arxiv.org/abs/2505.21893v1
second challenge is the off-policy bias inherent in preference optimization. This occurs when gradients are estimated from a fixed dataset that no longer aligns with the model's current distribution, leading to a mismatch between the optimization objective and the data collection policy. As a result, the model can suffer from biased updates, or even reward hacking. While methods such as IPO [66] and RSO [40] attempt to mitigate this bias, they either face scalability issues or prove ineffective in video domains due to the high perceptual sensitivity and cost of sampling. Taken together, these limitations highlight that current methods struggle to address both timestep instability and off-policy bias simultaneously.

To address the above challenges, we leverage importance sampling, commonly used in reinforcement learning to correct for distribution mismatch. In our setting, importance weights reweight training signals based on how informative each diffusion timestep is. By analyzing the reverse diffusion trajectories in DPO [49] and Diffusion-DPO [57], we find that weights are highest at intermediate timesteps, where preference gradients are strong and model predictions are well-separated, while early timesteps or off-policy samples often have low weights and produce noisy or ineffective updates. When a sample has low cumulative weight across all timesteps, its overall impact on learning is minimal. These observations suggest that optimization should focus on high-weight regions while suppressing unreliable signals, providing a unified view for addressing distributional mismatch during training.

We introduce two methods for diffusion-based preference learning. DPO-C&M targets training instability caused by mismatched or noisy samples by applying masking and gradient clipping, effectively addressing the challenge of timestep-dependent instability.
Building on this, SDPO provides a more comprehensive solution by incorporating step-wise clipped importance weights to further mitigate off-policy bias. These weights reweight training signals such that gradients are reduced when sample likelihood decreases and amplified when it increases, placing greater emphasis on improving preferred samples rather than simply penalizing less preferred ones.

Our methods consistently outperform Diffusion-DPO, with SDPO and DPO-C&M achieving final scores of 81.53 and 81.37, compared to 81.16 for Diffusion-DPO. In human evaluation over 200 prompts, SDPO and DPO-C&M achieve first-place rates of 67% and 29%, respectively. On larger models, SDPO reaches 82.29 on CogVideoX-5B and 84.80 on the rectified-flow-based WanX-1.3B, demonstrating strong generalization.

Overall, our contributions are threefold:
1. We present a systematic analysis of the reverse diffusion trajectory, revealing that preference gradients are most informative in mid-range timesteps and unstable in early or late stages.
2. We introduce a principled importance-weighted optimization framework that improves stability and corrects for distribution shift and off-policy bias.
3. We develop two practical algorithms, DPO-C&M and SDPO, that consistently outperform prior methods across automatic metrics, human evaluations, and model scales.

2 Related Work

Diffusion models. Diffusion-based generative models have become a leading approach for high-quality image and video synthesis. Early methods built on score-based Langevin dynamics [54], later formalized as DDPMs [26] and unified via SDEs for improved likelihood estimation and generation [18]. Sampling speed was improved through DDIM [30] and distillation [52], while realism benefited from classifier guidance, architectural advances
https://arxiv.org/abs/2505.21893v1
[10], and classifier-free methods [13]. Latent Diffusion Models [15] enhanced efficiency, and further refinements improved FID [16]. In video, diffusion models have been extended to capture temporal dynamics [14], achieving strong results in generation, prediction, and interpolation [17, 24, 53]. Recent text-to-video methods leverage transformer-based architectures [5, 67, 7], often adapting text-to-image backbones with temporal attention [4, 60, 22]. Lightweight modules [22] enable plug-and-play personalization. Recent works report state-of-the-art quality and scalability [70, 7], with DiT-based models pushing further [5, 67]. Despite advances in fidelity, consistency, and diversity [8, 12, 27, 38, 51], aligning outputs with nuanced human preferences remains a key challenge [25, 62].

Post-Training Alignment of Language Models. Large language models (LLMs) are typically aligned with human preferences through Reinforcement Learning from Human Feedback (RLHF) [43], which trains a reward model from human comparisons followed by fine-tuning with reinforcement learning. Direct Preference Optimization (DPO) [49] has emerged as a simpler alternative by reframing alignment as supervised preference learning, removing the need for sampling or reward modeling during training. Building on DPO, recent variants improve reasoning, scalability, and feedback efficiency: StepDPO [36] introduces step-level supervision (unsuitable for diffusion models); VPO and SimPO [6] enable online updates via implicit rewards; and KTO [19], inspired by prospect theory [32], optimizes utility with binary feedback. Other approaches include iterative optimization over reasoning steps [63, 45], dual-LoRA-based stabilization in Online DPO [48], self-play in SPO [55], likelihood calibration in SLiC [72], and ranking-based alignment in RRHF [69].
Preference Optimization for Diffusion Models. Recent work extends human preference alignment to diffusion models, drawing inspiration from RLHF in language models. Early models like Stable Diffusion used curated data without human feedback. Diffusion-DPO [57] adapts DPO using the ELBO as a proxy, improving SDXL with large-scale preference data. RL-based methods such as DPOK [20] and DDPO [3] apply KL-regularized policy gradients or actor-critic training, often requiring constrained generation for stability. IPO [66] reduces policy-data mismatch via iterative reward updates, though the process is complex. D3PO [65] avoids reward modeling by directly optimizing on pairwise feedback. Gradient-based methods like VADER [47] and DRaFT [9] align outputs via backpropagation through reward models. SPIN-Diffusion [68] removes human labels entirely, using self-play with automated preferences to achieve strong results. Together, RL-based [20, 3], direct optimization [57, 65], iterative [66], gradient-based [47, 9], and self-play [68] methods form a growing toolkit for aligning diffusion models with human preferences.

3 Importance-Sampled Direct Preference Optimization

3.1 Preliminary

Diffusion models generate data by reversing a noise process, with recent post-training methods aiming to align outputs with human preferences. However, key challenges remain. This section reviews diffusion modeling, importance sampling, and Diffusion-DPO to build the foundation for addressing them.

Importance Sampling. Importance sampling [44] is a general technique for estimating expectations under a target distribution p(x) using samples drawn from a different proposal distribution q(x). It is particularly useful when direct sampling from p(x) is infeasible. The expectation E_p[f(x)] can be rewritten as:

E_p[f(x)] = ∫ f(x) p(x) dx = E_q[ f(x) · p(x)/q(x) ],   (1)
where p(x)/q(x) is the importance weight. In diffusion models, importance sampling can be applied by comparing the learned reverse process with either the forward posterior or a previous model iteration:

w(t) = p_θ(x_{t−1} | x_t) / q(x_{t−1} | x_t, x_0)   or   w(t) = p_θ(x_{t−1} | x_t) / p_old(x_{t−1} | x_t).   (2)

These weights enable reweighting transitions based on how well the model approximates the reverse process, analogous to off-policy correction in reinforcement learning.

Diffusion-DPO. Diffusion-DPO extends Direct Preference Optimization (DPO) to diffusion models by comparing full denoising trajectories instead of only final outputs. Both DPO and Diffusion-DPO can be viewed as being grounded in two core objectives: the first for reward model learning, and the second for policy optimization via reward maximization. The reward model is trained using pairwise preference data via the binary logistic loss:

L_reward = −E_{c, x_0^w, x_0^l} [ log σ( r(c, x_0^w) − r(c, x_0^l) ) ].   (3)

The policy is then optimized to maximize expected reward while staying close to a reference distribution:

max_{p_θ} E_{c∼D_c, x_0∼p_θ(x_0|c)} [ r(c, x_0) ] − β D_KL[ p_θ(x_0|c) ∥ p_ref(x_0|c) ].   (4)

To make training tractable in the diffusion setting, the objective is approximated by comparing single-step reverse transitions at a randomly sampled time step t ∈ {1, . . . , T}:

L(θ) = −E_{t, x_t^w, x_t^l} [ log σ( −βT ω(λ_t) · ∆ℓ(θ) ) ],   (5)

where σ(·) denotes the sigmoid function, ω(λ_t) is a timestep-dependent weighting function, and ∆ℓ(θ) measures the model's preference alignment error based on noise prediction:

∆ℓ(θ) = ( ∥ϵ^w − ϵ_θ(x_t^w, t)∥² − ∥ϵ^w − ϵ_ref(x_t^w, t)∥² ) − ( ∥ϵ^l − ϵ_θ(x_t^l, t)∥² − ∥ϵ^l − ϵ_ref(x_t^l, t)∥² ).

Here, ϵ^w, ϵ^l ∼ N(0, I) are noise samples used to construct x_t via the forward process. This enables trajectory-level preference learning through single-step noise prediction, providing a scalable approach to align diffusion models with human preferences. While Diffusion-DPO effectively aligns diffusion models with human preferences, several challenges limit its broader applicability.
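The single-step Diffusion-DPO objective in Eq. 5 can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the factor βT·ω(λ_t) is collapsed into one constant `beta_t`, and the noise predictions are plain arrays rather than network outputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diffusion_dpo_loss(eps_w, eps_l, pred_w, pred_l, ref_w, ref_l, beta_t=1.0):
    """Per-timestep Diffusion-DPO loss (Eq. 5). delta compares how much better
    the policy denoises the preferred sample than the reference does,
    relative to the dispreferred one."""
    err = lambda eps, pred: float(np.sum((eps - pred) ** 2))
    delta = ((err(eps_w, pred_w) - err(eps_w, ref_w))
             - (err(eps_l, pred_l) - err(eps_l, ref_l)))
    return -np.log(sigmoid(-beta_t * delta))

z = np.zeros(4)
# The policy denoises the preferred sample better than the reference and the
# dispreferred one worse, so delta < 0 and the loss falls below log(2),
# the value at delta = 0 (policy identical to reference).
loss = diffusion_dpo_loss(z, z,
                          pred_w=0.1 * np.ones(4), pred_l=0.2 * np.ones(4),
                          ref_w=0.2 * np.ones(4), ref_l=0.1 * np.ones(4))
```

When policy and reference predictions coincide, delta is zero and the loss equals log 2, the natural neutral point of the logistic objective.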
In particular, it faces two key limitations. (1) Timestep-dependent instability: diffusion models exhibit significant gradient variance across timesteps, with high-variance steps in early diffusion stages, where latents x_t resemble pure noise, posing optimization bottlenecks. Moreover, Diffusion-DPO simplifies training by approximating the reverse distribution π_θ(x_{t−1}|x_t) with the forward conditional q(x_{t−1}|x_t, x_0), introducing mismatch and instability at these critical steps. (2) Off-policy bias: as an offline alignment method, Diffusion-DPO suffers from a mismatch between optimization and data collection policies, leading to biased gradients, overfitting to suboptimal preferences, and limited generalization. Without mechanisms to balance imitation and reward optimization, it remains vulnerable to reward hacking and data sensitivity.

3.2 Analysis

Building on these observations, we analyze how preference signals and training stability evolve along the diffusion trajectory. This analysis reveals actionable insights into where and how optimization should be focused, motivating two practical improvements to the Diffusion-DPO framework.

Figure 2: Predicted probabilities for positive and negative samples across training steps under different diffusion timestep ranges: (a) t ∈ [0, 100], (b) t ∈ [500, 600], (c) t ∈ [900, 1000].

Figure 3: Effect of training with unlike samples. (a) Sample probabilities drop together with little early gap. (b) Importance weights are lower than online, weakening training.

Effective Timesteps. To evaluate the distributional mismatch introduced by Diffusion-DPO, where the reverse transition p_θ(x_{t−1}|x_t) is approximated by the forward conditional q(x_{t−1}|x_t, x_0), we analyze the conditional density p(x_{t−1}|x_t) across timesteps (Fig. 2). The top subplot shows how this density evolves during training; the bottom shows
differences between positive and negative samples. We examine three stages: early (t ∈ [0, 100]), middle (t ∈ [500, 600]), and late (t ∈ [900, 1000]). In early steps, the density fluctuates and lacks clear separation, often decreasing for both sample types, indicating instability. In late steps, both densities increase but remain close, offering little learning signal. In contrast, the middle range shows stable trends and clear separation, with positive samples consistently having higher density. This suggests intermediate timesteps are most informative for preference learning, while early ones may be detrimental.

Figure 4: Importance weight w(t) increases with timestep, saturating around t ∈ [500, 600].

Importance Weights Reveal Stability. While timestep analysis identifies where preference learning is most effective, it does not directly guide training. To address this, we use the importance weight function (Eq. (2)) to quantify each timestep's contribution. As shown in Fig. 4, importance weights increase with timestep and stabilize around one between t = 500 and 600, aligning with regions of stable and well-separated model behavior. This indicates that importance weights are a reliable signal for selecting effective training regions and support focusing updates on mid-range timesteps to improve stability and performance.

Off-Policy Instability and Importance Weights. Diffusion-DPO, like other offline preference learning methods, suffers from mismatch between training data and the current policy. Even when data is generated from the same model, distributional drift during training can degrade performance. To illustrate this, we simulate an off-policy setting where a model is trained with positive samples from HunyuanVideo and negative ones from CogVideoX-2B.
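The effect underlying this analysis, a transition that matches the model yields a weight near one while a mismatched one drives it down, can be shown with Eq. 2 specialized to Gaussian transitions. The means, variance, and dimensionality below are toy choices, not the paper's estimator:

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log-density of an isotropic Gaussian, evaluated per dimension.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def importance_weight(x_prev, mu_model, mu_forward, var):
    """Toy version of Eq. 2: w = p_theta(x_{t-1}|x_t) / q(x_{t-1}|x_t, x_0)
    for Gaussian transitions, with per-dimension log-ratios averaged before
    exponentiating for numerical stability."""
    log_w = (gaussian_logpdf(x_prev, mu_model, var)
             - gaussian_logpdf(x_prev, mu_forward, var))
    return float(np.exp(np.mean(log_w)))

mu = np.zeros(8)
x = np.zeros(8)  # a typical sample under the forward posterior mean
w_matched = importance_weight(x, mu, mu, var=1.0)           # model matches q
w_mismatched = importance_weight(x, mu + 0.5, mu, var=1.0)  # model mean is off
```

For typical samples under q, a shifted model mean pushes the weight below one, which is the signature of off-policy or poorly fit transitions exploited in the analysis above.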
These unlike samples lead to unstable optimization and consistently receive lower importance weights than online samples, resulting in weak training signals. This highlights that effective learning depends on trajectories that remain well-aligned with the model distribution. Importance weights thus provide a valuable tool for detecting and mitigating distributional mismatch in offline settings.

The importance weight w(t) reveals where training is stable and effective in Diffusion-DPO. Mid-range timesteps with high w(t) correspond to strong preference signals and stable gradients, while low-weight steps, typically from early stages, yield noisy or unreliable updates. Building on this, we propose two improvements: DPO-C&M, which clips and masks low-weight timesteps to stabilize training, and SDPO, which corrects off-policy bias via importance sampling. These methods jointly enhance the stability and robustness of diffusion-based preference optimization.

3.3 Importance-Sampled Direct Preference Optimization

To mitigate the timestep-dependent instability in Diffusion-DPO training, we introduce DPO-C&M (Diffusion Preference Optimization with Clipping and Masking). This method enhances training stability by combining importance weighting with a clipped masking mechanism, allowing the model to focus updates on regions where the forward and reverse diffusion processes are better aligned.

Clipping and Masking. The core idea is to adjust each training sample's contribution based on how well it matches the reverse process. We introduce an importance weight w(t), defined in Eq. 2, which compares the reverse model transition p_θ(x_{t−1}|x_t) with the forward conditional q(x_{t−1}|x_t, x_0). To prevent high variance and instability, this weight is clipped within a fixed range:

˜w(t) = clip( w(t), 1−ϵ, 1+ϵ ).   (6)

The clipped importance weight ˜w(t) serves two purposes. First, it rescales gradient updates to reflect the reliability of each sample. Second, it acts as a soft mask to suppress gradient flow from noisy regions where the forward and reverse paths diverge significantly. Combining this masked weighting mechanism with the original preference-based objective yields the DPO-C&M loss:

L_DPO-C&M(θ) = −E_{(x_0^w, x_0^l)∼D, t∼U(0,T), x_t^w∼q(·|x_0^w), x_t^l∼q(·|x_0^l)} [ ˜w(t) · log σ( −βT ω(λ_t) · ∆ℓ(θ) ) ],   (7)

where ∆ℓ(θ) is defined in Eq. 5. By selectively updating only well-aligned regions, DPO-C&M directly addresses the challenge of distribution mismatch and improves the robustness of diffusion preference optimization.

SDPO. While DPO-C&M mitigates distributional mismatch through masking and clipping, it leaves the off-policy bias, arising from divergence between training and model distributions, unaddressed. To tackle both simultaneously, we introduce SDPO (Importance-Sampled Direct Preference Optimization), which incorporates learnable importance weights to correct for distribution shift and stabilize training. Building on the standard preference optimization objective, we reinterpret it through importance sampling. By combining Eq. 4 and Eq. 6, we recast the original expectation over model samples into an importance-weighted expectation over off-policy data. This yields the following form:

max_{p_θ} E_{c∼D_c, x_0∼q(x_0|c)} [ w_θ · r(c, x_0) ] − β D_KL[ p_θ(x_0|c) ∥ p_ref(x_0|c) ].   (8)

Here, the importance weight w_θ = p_θ(x_0|c) / q(x_0|c) corrects for the mismatch between the model distribution and the off-policy sampling distribution q, allowing optimization to be performed over pre-collected data. The objective above establishes a tractable off-policy preference optimization framework by reweighting rewards with importance ratios. To better understand how this objective guides the model distribution during training, we further reformulate it into a KL divergence form.
This transformation makes the optimization direction explicit: it reveals that the model is implicitly learning to match a shaped target distribution p* that balances reward feedback with prior preferences. The reformulation is achieved by expressing the weighted reward term as a log-density ratio, leading to an equivalent form detailed in Appendix A:

min_{p_θ} E_{c∼D_c, x_0∼q(x_0|c)} [ log( p_θ(x_0|c) / p*(x_0|c) ) − log Z(c) ],   (9)

where p*(x_0|c) is the target distribution:

p*(x_0|c) = (1 / Z(c)) · p_ref(x_0|c) · exp( (w_θ / β) · r(c, x_0) ).   (10)

The normalization constant is Z(c) = Σ_{x_0} p_ref(x_0|c) · exp( ((1+ϵ) / β) · r(c, x_0) ). This change of form makes explicit that SDPO minimizes the KL divergence between the current model and a shaped target distribution, effectively guiding p_θ toward reward-aligned behavior under off-policy sampling. This KL-based perspective shows that optimization effectively aligns p_θ with p*, revealing SDPO as a direct learning procedure for the target distribution. Rearranging the expression of p*, we can solve for the reward:

r(c, x_0) = (β / w_θ) · [ log( p*(x_0|c) / p_ref(x_0|c) ) + log Z(c) ].

We substitute this reward into the pairwise logistic loss (Eq. 3) following the DPO framework [50], and apply it at each denoising step t, consistent with Diffusion-DPO [58]. This results in the final SDPO objective:

L_SDPO(θ) = −E_{(x_0^w, x_0^l)∼D, t∼U(1,T)} [ log σ( ˜w_θ(t) · ψ(x_{t−1}^w | x_t^w) − ˜w_θ(t) · ψ(x_{t−1}^l | x_t^l) ) ],   (11)

where ψ(x_{t−1} | x_t) = β · log( p*_θ(x_{t−1}|x_t) / p_ref(x_{t−1}|x_t) ), and ˜w_θ(t) is the clipped step-wise importance weight. The step-wise importance weight is defined via inverse weighting, and for preference pairs, the maximum of the two is used:

˜w_θ(x_t, t) = clip( 1 / w_θ(x_t, t), 1−ϵ, 1+ϵ ),   ˜w_θ(t) = max( ˜w_θ(x_t^w, t), ˜w_θ(x_t^l, t) ).   (12)

This clipped importance weighting helps prevent reward hacking by limiting
the influence of unreliable samples and suppressing overly aggressive updates toward noisy rewards. When the model's performance degrades and the probability of the preferred sample decreases, the corresponding w_θ becomes small, leading to a large 1/w_θ, which slows further degradation. In contrast, when the model assigns higher probability to the preferred sample, the weight decreases and the loss increases, encouraging continued improvement. This asymmetry biases learning toward increasing the probability of preferred samples rather than simply suppressing the dispreferred ones. The clipping operation further prevents extreme weights from destabilizing training.

4 Empirical Results

4.1 Experimental Setup

Models and Dataset. We conduct all experiments on the CogVideoX-2B video generation model, comparing standard Diffusion-DPO with our improved method, SDPO. To construct the training data, we first generate multiple videos per prompt using CogVideoX-2B, and collect human rankings to form 10,000 high-quality preference pairs after filtering. In addition, we build an "unlike" dataset using HunyuanVideo, a significantly stronger video generation model. For each prompt, HunyuanVideo outputs are treated as preferred, while CogVideoX-2B outputs are considered less preferred. Although this naturally forms valid preference pairs due to the quality gap, the HunyuanVideo outputs are low-probability under the CogVideoX-2B model. We refer to these as "unlike" pairs, and use them primarily for analyzing training dynamics.

Training Details. All models are trained using 16 NVIDIA A100 GPUs with a batch size of 4 and gradient accumulation of 4. The learning rate is set to 2×10−5 for all methods. The temperature parameter β in the DPO objective is set to 2 for Diffusion-DPO, and to 0.02 for both SDPO and DPO-C&M. We observe that our methods are relatively insensitive to the choice of β, and a smaller value not only stabilizes training but also accelerates convergence.
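The training setup above can be summarized as a flat configuration. The field names, and the reading of the batch size as per-GPU, are our own assumptions for illustration, not the authors' actual configuration schema:

```python
# Sketch of the training setup described in the text; field names are
# illustrative and the batch size is assumed to be per GPU.
train_config = {
    "gpus": 16,                  # NVIDIA A100
    "per_gpu_batch_size": 4,
    "grad_accum_steps": 4,
    "learning_rate": 2e-5,
    # Temperature beta in the DPO objective (Eq. 4):
    "beta": {"diffusion_dpo": 2.0, "dpo_cm": 0.02, "sdpo": 0.02},
}

# Effective batch size per optimizer step, under the per-GPU assumption.
effective_batch = (train_config["gpus"]
                   * train_config["per_gpu_batch_size"]
                   * train_config["grad_accum_steps"])
```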
4.2 Main Results

Quantitative Comparison. We evaluate post-training methods on the CogVideoX-2B model using VBench [29], a comprehensive benchmark for video generation that assesses 16 disentangled dimensions covering visual quality, motion coherence, and semantic fidelity. We report Total Score, Quality Score, and Semantic Score. As shown in Tab. 1, our proposed DPO-C&M and SDPO both outperform Diffusion-DPO, with SDPO achieving the best overall performance. After 500 training steps, Diffusion-DPO reaches 81.16 (versus 80.91 for the pretrained baseline), while SDPO and DPO-C&M achieve 81.53 and 81.37. At 1000 steps, Diffusion-DPO suffers a significant performance drop, with the Total Score falling to 67.28, indicating a collapse in generation quality. In contrast, DPO-C&M and SDPO remain stable and continue to improve, demonstrating superior robustness. To further validate the generality of our approach, we extend the comparison to two larger base models, CogVideoX-5B and WanX-1.3B. As shown in Tab. 1, SDPO achieves Total Scores of 82.29 on CogVideoX-5B and 84.80 on WanX-1.3B, significantly outperforming their Diffusion-DPO counterparts. Notably, WanX is based on a rectified flow architecture, demonstrating that our method remains effective even for flow-based diffusion models.

Table 1: VBench evaluation of different methods and training steps. Higher is better.

Setting | Method | Total ↑ | Quality ↑ | Semantic ↑
CogVideoX-2B @ 500 Steps | Pretrained | 80.91 | 82.18 | 75.83
 | + Diffusion-DPO | 81.16 | 82.32 | 76.49
 | + C&M | 81.37 | 82.55 | 76.69
 | + SDPO | 81.53 | 82.74 | 76.71
CogVideoX-2B @ 1000 Steps | + Diffusion-DPO | 67.28 | 68.37 | 62.93
 | + C&M | 81.46 | 82.65 | 76.72
 | + SDPO | 81.68 | 82.92 | 76.76
CogVideoX-5B | Pretrained | 81.91 | 83.05 | 77.33
 | + Diffusion-DPO | 82.02 | 83.15 | 77.50
 | + C&M | 82.17 | 83.26 | 77.81
 | + SDPO | 82.28 | 83.37 | 77.91
WanX-1.3B | Pretrained | 84.26 | 85.30 | 80.09
 | + Diffusion-DPO | 84.41 | 85.32 | 80.44
 | + C&M | 84.54 | 85.51 | 80.67
 | + SDPO | 84.78 | 85.73 | 81.02

Figure 5: Ablation results. Left: Impact of the DPO temperature parameter β on Diffusion-DPO and SDPO. Right: Training dynamics comparing mid-timestep window fine-tuning (steps 400–700) against full-model training (steps 0–1000) for Diffusion-DPO and SDPO.

Figure 6: Human evaluation rank.

Human Evaluation. We conduct a human evaluation on 200 prompts spanning diverse categories. For each prompt, four videos are generated using CogVideoX-2B, Diffusion-DPO, DPO-C&M, and our SDPO method, and are ranked by annotators along three dimensions. The final results are obtained by aggregating rankings across multiple annotators (see supplementary for details). Figure 6 shows the distribution of each method across ranks 1 to 4. SDPO achieves the highest proportion of first-place rankings at 67%, significantly outperforming both Diffusion-DPO and CogVideoX-2B, which have the highest proportions of fourth-place rankings. This demonstrates the clear advantage of our approach in aligning video outputs with human preferences. While DPO-C&M performs slightly worse than SDPO, it still achieves a noticeably higher share of rank 1 and rank 2 placements compared to Diffusion-DPO. This suggests that clipping and masking strategies effectively mitigate the instability introduced by noisy timesteps.

4.3 Ablation Study and Training Dynamics

Hyperparameter β Sensitivity Analysis. The temperature hyperparameter β, defined in Eq.
(4), controls the strength of the KL-divergence penalty that prevents the optimized policy π_θ from deviating too far from the reference model π_ref. We evaluate both Diffusion-DPO and SDPO under a range of β values, measuring the VBench Total Score after a fixed training duration. As shown in Figure 5, Diffusion-DPO is highly sensitive to β: at very small values its performance falls below the pretrained baseline, it peaks near β = 1, and then declines slightly at β = 10. In contrast, SDPO exhibits only minor variation across the same range of β, achieving a significant gain even at β = 0.02. This demonstrates SDPO's robustness and confirms that the importance-weighting mechanism effectively regularizes the model.

Mid-Timestep Window Fine-Tuning (Steps 400–700). As shown in Figure 5, during Diffusion-DPO training the importance weights w(t) are very low near t = 0, causing highly unstable updates, while near t = 1000 the weights are small and have little effect on learning. To validate this behavior and demonstrate the utility of w(t) as an analysis metric, we conduct a mid-timestep fine-tuning experiment in which only timesteps t ∈ [400, 700], where w(t) > 0.9, are updated. We choose this range not only to cover the previously identified stable interval (t ∈ [500, 600]), but also to include earlier steps starting from t = 400, where w(t) remains high. This design allows us to explore the boundary of effective training regions within the high-weight zone. Compared to full-range training (t ∈ [0, 1000]), Diffusion-DPO with mid-timestep tuning yields higher and more
stable VBench scores (right panel of Figure 5), although it requires longer training time and eventually suffers from model collapse. In contrast, SDPO shows negligible difference between mid-timestep and full-range schedules, maintaining stable performance in both settings, with the full-range variant providing a marginal additional gain.

4.4 Online and Iterative DPO Learning with SDPO

Figure 7: Comparison of Diffusion-DPO and SDPO on VBench over training iterations.

We evaluate suitability for online and iterative training by adopting the IPO protocol [66], in which each round samples 3,000 prompts, ranks paired outputs using a fixed reward model [39], and trains for 20 epochs before proceeding to the next batch. This procedure is repeated for 10 iterations without human intervention. As shown in Fig. 7, Diffusion-DPO exhibits clear performance degradation over rounds, attributed to reward hacking and distributional drift. In contrast, SDPO maintains stable or slightly improving performance, supported by implicit update control that mitigates collapse when reward signals deteriorate. On average, SDPO outperforms Diffusion-DPO by over 15% in win rate after the final round. Moreover, SDPO shows significantly lower variance across runs, indicating better training stability under non-stationary conditions. These results underscore SDPO's robustness and effectiveness in iterative preference optimization settings.

5 Conclusion

In this paper, we introduced SDPO, a new framework for preference optimization in diffusion models that addresses training instability and off-policy bias via importance sampling. Our analysis revealed that preference signals are most informative in mid-range timesteps, motivating the design of SDPO and DPO-C&M.
These methods improve stability by filtering noisy timesteps and reweighting training objectives. Empirical results on CogVideoX and Wan2.1 benchmarks demonstrate consistent improvements over Diffusion-DPO in both automatic metrics and human evaluations. Moreover, SDPO remains robust under online and iterative training, offering a scalable and principled approach to aligning diffusion-based generation with human preferences.

References

[1] M. S. Albergo, N. M. Boffi, and E. Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023. 15
[2] M. G. Azar, Z. D. Guo, B. Piot, R. Munos, M. Rowland, M. Valko, and D. Calandriello. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447–4455. PMLR, 2024. 16
[3] K. Black, M. Janner, Y. Du, I. Kostrikov, and S. Levine. Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301, 2023. 1, 3
[4] A. Blattmann, R. Rombach, H. Ling, T. Dockhorn, S. W. Kim, S. Fidler, and K. Kreis. Align your latents: High-resolution video synthesis with latent diffusion models, 2023. 3
[5] T. Brooks, B. Peebles, C. Holmes, W. DePue, Y. Guo, L. Jing, D. Schnurr, J. Taylor, T. Luhman, E. Luhman, et al. Video generation models as world simulators. OpenAI Blog, 1:8, 2024. 3
[6] S. Cen, J. Mei, K. Goshvadi, H. Dai, T. Yang, S. Yang, D. Schuurmans, Y. Chi, and B. Dai. Value-incentivized preference optimization: A unified approach to online and offline rlhf. arXiv preprint arXiv:2405.19320, 2024. 3
[7] H. Chen, Y. Zhang, X. Cun, M. Xia, X. Wang, C. Weng, and Y. Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models, Jan 2024. 3
[8] X. Chen, Y. Wang, L. Zhang, S. Zhuang, X. Ma, J. Yu, Y. Wang, D. Lin, Y. Qiao, and Z. Liu. Seine: Short-to-long video diffusion model for generative transition and prediction, 2023. 3
[9] K. Clark, P. Vicol, K. Swersky, and D. J. Fleet. Direct reward fine-tuning: Training diffusion models on differentiable rewards. In International Conference on Learning Representations (ICLR), 2024. 3
[10] P. Dhariwal and A. Nichol. Diffusion models beat gans on image synthesis. In NeurIPS, 2021. 3
[11] Y. Dubois, B. Galambosi, P. Liang, and T. B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024. 16
[12] P. Esser, J. Chiu, P. Atighehchian, J. Granskog, and A. Germanidis. Structure and content-guided video synthesis with diffusion models, 2023. 3
[13] A. N. et al. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021. 3
[14] J. H. et al. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022. 3
[15] R. R. et al. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 3
[16] T. K. et al. Elucidating the design space of diffusion-based generative models. In NeurIPS, 2022. 3
[17] V. V. et al. Mcvd: Masked conditional video diffusion for prediction, generation, and interpolation. In NeurIPS, 2022. 3
[18] Y. S. et al. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. 3
[19] K. Ethayarajh, W. Xu, N. Muennighoff, D. Jurafsky, and D. Kiela. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024. 3, 16
[20] Y. Fan, O. Watkins, Y. Du, H. Liu, M. Ryu, C. Boutilier, P. Abbeel, M. Ghavamzadeh, K. Lee, and K. Lee. Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36:79858–79885, 2023. 3
[21] Gen-3. Gen-3. https://runwayml.com/blog/introducing-gen-3-alpha/, 2024. 1
[22] Y. Guo, C. Yang, A. Rao, Z. Liang, Y. Wang, Y. Qiao, M. Agrawala, D. Lin, and B. Dai. AnimateDiff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 3
[23] hailuo. hailuo. https://hailuoai.video/, 2024. 1
[24] J. Ho, W. Chan, C. Saharia, J. Whang, R. Gao, A. Gritsenko, D. Kingma, B. Poole, M. Norouzi, D. Fleet, and T. Salimans. Imagen video: High definition video generation with diffusion models. 3
[25] J. Ho, W. Chan, C. Saharia, J. Whang, R. Gao, A. Gritsenko, D. P. Kingma, B. Poole, M. Norouzi, D. J. Fleet, et al. Imagen video: high definition video generation with diffusion models (2022). arXiv preprint arXiv:2210.02303, 2022. 3
[26] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. 3
[27] J. Ho, T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet. Video diffusion models. Advances in Neural Information Processing Systems, 35:8633–8646,
2022. 3
[28] J. Hong, N. Lee, and J. Thorne. Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2024. 16
[29] Z. Huang, Y. He, J. Yu, F. Zhang, C. Si, Y. Jiang, Y. Zhang, T. Wu, Q. Jin, N. Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807–21818, 2024. 7
[30] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. In ICLR, 2021. 3
[31] D. Jiang, X. Ren, and B. Y. Lin. Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561, 2023. 16
[32] D. Kahneman and A. Tversky. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4):297–323, 1992. 3
[33] M. Kim, D. Ki, S.-W. Shim, and B.-J. Lee. Adaptive non-uniform timestep sampling for diffusion model training. arXiv preprint arXiv:2411.09998, 2024. 2
[34] W. Kong, Q. Tian, Z. Zhang, R. Min, Z. Dai, J. Zhou, J. Xiong, X. Li, B. Wu, J. Zhang, et al. Hunyuanvideo: A systematic framework for large video generative models. arXiv preprint arXiv:2412.03603, 2024. 1
[35] Kuaishou. Kling. https://kling.kuaishou.com/en, 2024. 1
[36] X. Lai, Z. Tian, Y. Chen, S. Yang, X. Peng, and J. Jia. Step-dpo: Step-wise preference optimization for long-chain reasoning of llms. arXiv preprint arXiv:2406.18629, 2024. 3
[37] X. Li, T. Zhang, Y. Dubois, R. Taori, I. Gulrajani, C. Guestrin, P. Liang, and T. B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023. 16
[38] B. Liu, X. Liu, A. Dai, Z. Zeng, D. Wang, Z. Cui, and J. Yang. Dual-stream diffusion net for text-to-video generation. arXiv preprint arXiv:2308.08316, 2023. 3
[39] J. Liu, G. Liu, J. Liang, Z. Yuan, X. Liu, M. Zheng, X. Wu, Q. Wang, W. Qin, M. Xia, et al. Improving video generation with human feedback.
arXiv preprint arXiv:2501.13918, 2025. 9
[40] T. Liu, Y. Zhao, R. Joshi, M. Khalman, M. Saleh, P. J. Liu, and J. Liu. Statistical rejection sampling improves preference optimization. arXiv preprint arXiv:2309.06657, 2023. 2
[41] Luma. Luma ai. https://lumalabs.ai/dream-machine, 2024. 1
[42] Y. Meng, M. Xia, and D. Chen. Simpo: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems, 37:124198–124235, 2024. 15, 16
[43] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. 1, 3
[44] A. B. Owen. Monte Carlo Theory, Methods and Examples. Stanford University, 2013. https://statweb.stanford.edu/~owen/mc/. 3
[45] R. Y. Pang, W. Yuan, H. He, K. Cho, S. Sukhbaatar, and J. Weston. Iterative reasoning preference optimization. Advances in Neural Information Processing Systems, 37:116617–116637, 2024. 3
[46] R. Park, R. Rafailov, S. Ermon, and C. Finn. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159, 2024. 16
https://arxiv.org/abs/2505.21893v1
[47] M. Prabhudesai, R. Mendonca, Z. Qin, K. Fragkiadaki, and D. Pathak. Video diffusion alignment via reward gradients. arXiv preprint arXiv:2407.08737, 2024.
[48] B. Qi, P. Li, F. Li, J. Gao, K. Zhang, and B. Zhou. Online DPO: Online direct preference optimization with fast-slow chasing. arXiv preprint arXiv:2406.05534, 2024.
[49] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[50] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[51] L. Ruan, Y. Ma, H. Yang, H. He, B. Liu, J. Fu, N. J. Yuan, Q. Jin, and B. Guo. MM-Diffusion: Learning multi-modal diffusion models for joint audio and video generation, 2023.
[52] T. Salimans and J. Ho. Progressive distillation for fast sampling of diffusion models. In ICLR, 2022.
[53] U. Singer, A. Polyak, T. Hayes, X. Yin, J. An, S. Zhang, Q. Hu, H. Yang, O. Ashual, O. Gafni, D. Parikh, S. Gupta, and Y. Taigman. Make-A-Video: Text-to-video generation without text-video data. Sep 2022.
[54] Y. Song and S. Ermon. Generative modeling by estimating gradients of the data distribution. In NeurIPS, 2019.
[55] G. Swamy, C. Dann, R. Kidambi, Z. S. Wu, and A. Agarwal. A minimaximalist approach to reinforcement learning from human feedback. arXiv preprint arXiv:2401.04056, 2024.
[56] Z. Tan, J. Wang, H. Yang, L. Qin, H. Chen, Q. Zhou, and H. Li. Raccoon: Multi-stage diffusion training with coarse-to-fine curating videos. arXiv preprint arXiv:2502.21314, 2025.
[57] B. Wallace, M. Dang, R. Rafailov, L. Zhou, A. Lou, S. Purushwalkam, S. Ermon, C. Xiong, S. Joty, and N. Naik. Diffusion model alignment using direct preference optimization.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8228–8238, 2024.
[58] B. Wallace, M. Dang, R. Rafailov, L. Zhou, A. Lou, S. Purushwalkam, S. Ermon, C. Xiong, S. Joty, and N. Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8228–8238, 2024.
[59] A. Wang, B. Ai, B. Wen, C. Mao, C.-W. Xie, et al. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.
[60] J. Wang, H. Yuan, D. Chen, Y. Zhang, X. Wang, and S. Zhang. ModelScope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023.
[61] K. Wang, M. Shi, Y. Zhou, Z. Li, Z. Yuan, Y. Shang, X. Peng, H. Zhang, and Y. You. A closer look at time steps is worthy of triple speed-up for diffusion model training. arXiv preprint arXiv:2405.17403, 2024.
[62] C. Wu, L. Huang, Q. Zhang, B. Li, L. Ji, F. Yang, G. Sapiro, and N. Duan. GODIVA: Generating open-domain videos
from natural descriptions. arXiv preprint arXiv:2104.14806, 2021.
[63] Y. Xie, A. Goyal, W. Zheng, M.-Y. Kan, T. P. Lillicrap, K. Kawaguchi, and M. Shieh. Monte Carlo tree search boosts reasoning via iterative preference learning. arXiv preprint arXiv:2405.00451, 2024.
[64] H. Xu, A. Sharaf, Y. Chen, W. Tan, L. Shen, B. Van Durme, K. Murray, and Y. J. Kim. Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation. arXiv preprint arXiv:2401.08417, 2024.
[65] K. Yang, J. Tao, J. Lyu, C. Ge, J. Chen, W. Shen, X. Zhu, and X. Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8941–8951, 2024.
[66] X. Yang, Z. Tan, and H. Li. IPO: Iterative preference optimization for text-to-video generation. arXiv preprint arXiv:2502.02088, 2025.
[67] Z. Yang, J. Teng, W. Zheng, M. Ding, S. Huang, J. Xu, Y. Yang, W. Hong, X. Zhang, G. Feng, et al. CogVideoX: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024.
[68] H. Yuan, Z. Chen, K. Ji, and Q. Gu. Self-play fine-tuning of diffusion models for text-to-image generation. arXiv preprint arXiv:2402.10210, 2024.
[69] Z. Yuan, H. Yuan, C. Tan, W. Wang, S. Huang, and F. Huang. RRHF: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
[70] S. Zhang, J. Wang, Y. Zhang, K. Zhao, H. Yuan, Z. Qin, X. Wang, D. Zhao, and J. Zhou. I2VGen-XL: High-quality image-to-video synthesis via cascaded diffusion models. arXiv preprint arXiv:2311.04145, 2023.
[71] Y. Zhao, R. Joshi, T. Liu, M. Khalman, M. Saleh, and P. J. Liu. SLiC-HF: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.
[72] Y. Zhao, M. Khalman, R. Joshi, S. Narayan, M. Saleh, and P. J. Liu.
Calibrating sequence likelihood improves conditional language generation. arXiv preprint arXiv:2210.00045, 2022.
[73] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[74] T. Zheng, P.-T. Jiang, B. Wan, H. Zhang, J. Chen, J. Wang, and B. Li. Beta-tuned timestep diffusion model. In European Conference on Computer Vision, pages 114–130. Springer, 2024.

A Mathematical Derivations

We present detailed derivations of the key equations introduced in the main paper [see Eq. 11], to offer deeper insight into our method. To highlight the versatility of the proposed algorithm, we include both a flow-based implementation and its extension to large language models (LLMs).

A.1 SDPO

The main optimization objective of RLHF is given by:

\max_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim p_\theta(x_0|c)}[r(c, x_0)] - \beta D_{\mathrm{KL}}[p_\theta(x_0|c) \,\|\, p_{\mathrm{ref}}(x_0|c)].  (13)

We introduce importance sampling. According to its definition, the expectation under the target distribution p_\theta(x_0|c) can be rewritten using samples drawn from a behavior policy q(x_0|c):

\mathbb{E}_{x_0 \sim p_\theta(x_0|c)}[f(x_0)] = \mathbb{E}_{x_0 \sim q(x_0|c)}\left[ \frac{p_\theta(x_0|c)}{q(x_0|c)} f(x_0) \right].  (14)

To simplify
notation, we define the importance weight w_\theta(x_0|c) as:

w_\theta(x_0|c) = \frac{p_\theta(x_0|c)}{q(x_0|c)}.  (15)

Using this notation, the importance-sampled expectation becomes:

\mathbb{E}_{x_0 \sim p_\theta(x_0|c)}[f(x_0)] = \mathbb{E}_{x_0 \sim q(x_0|c)}[w_\theta(x_0|c)\, f(x_0)].  (16)

We can rewrite the original objective as:

\max_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim p_\theta(x_0|c)}[r(c,x_0)] - \beta D_{\mathrm{KL}}[p_\theta(x_0|c) \,\|\, p_{\mathrm{ref}}(x_0|c)]
= \max_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim q(x_0|c)}[w_\theta \cdot r(c,x_0)] - \beta D_{\mathrm{KL}}[p_\theta(x_0|c) \,\|\, p_{\mathrm{ref}}(x_0|c)]
= \min_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim q(x_0|c)}\left[ \log \frac{p_\theta(x_0|c)}{p_{\mathrm{ref}}(x_0|c) \exp\left( \frac{w_\theta}{\beta} r(c,x_0) \right)} \right]
= \min_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim q(x_0|c)}\left[ \log \frac{p_\theta(x_0|c)}{\frac{1}{Z(c)} p_{\mathrm{ref}}(x_0|c) \exp\left( \frac{w_\theta}{\beta} r(c,x_0) \right)} - \log Z(c) \right]  (17)

Here, w_\theta = p_\theta(x_0|c)/q(x_0|c) arises from importance sampling and corrects for the mismatch between the model distribution and the off-policy sampling distribution q. As a result, we arrive at the following form of the objective:

\min_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim q(x_0|c)}\left[ \log \frac{p_\theta(x_0|c)}{\frac{1}{Z(c)} p_{\mathrm{ref}}(x_0|c) \exp\left( \frac{w_\theta}{\beta} r(c,x_0) \right)} - \log Z(c) \right],  (18)

where

Z(c) = \sum_{x_0} p_{\mathrm{ref}}(x_0|c) \exp\left( \frac{1+\epsilon}{\beta} r(c,x_0) \right).  (19)

Here, \epsilon is the clipping threshold used in the importance weight definition (see Eq. 6), and Z(c) is a partition function. We further define the target distribution:

p^*(x_0|c) = \frac{1}{Z(c)}\, p_{\mathrm{ref}}(x_0|c) \exp\left( \frac{w_\theta}{\beta} r(c,x_0) \right),  (20)

under which the optimization reduces to:

\min_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim q(x_0|c)}\left[ \log \frac{p_\theta(x_0|c)}{p^*(x_0|c)} \right] - \log Z(c) = \min_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c}[ D_{\mathrm{KL}}(p_\theta(x_0|c) \,\|\, p^*(x_0|c)) - \log Z(c) ].  (21)

Since Z(c) is independent of p_\theta, minimizing the KL divergence reduces to matching p_\theta to p^*, allowing us to directly optimize toward the target distribution p^*(x_0|c). Based on the definition of p^*(x_0|c), we can rearrange and express the reward as:

r(c,x_0) = \frac{\beta}{w_\theta} \left[ \log \frac{p^*(x_0|c)}{p_{\mathrm{ref}}(x_0|c)} + \log Z(c) \right].  (22)

We now derive the preference probability under the SDPO objective by substituting the reward expression into the Bradley–Terry model:

P(x_0^w \succ x_0^l \mid c) = \frac{\exp(r(c, x_0^w))}{\exp(r(c, x_0^w)) + \exp(r(c, x_0^l))}.  (23)

Substituting Eq. (22) into the above, we obtain:

P(x_0^w \succ x_0^l \mid c) = \frac{\exp\left( \frac{\beta}{w_\theta^w} \left[ \log \frac{p^*(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} + \log Z(c) \right] \right)}{\exp\left( \frac{\beta}{w_\theta^w} \left[ \log \frac{p^*(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} + \log Z(c) \right] \right) + \exp\left( \frac{\beta}{w_\theta^l} \left[ \log \frac{p^*(x_0^l|c)}{p_{\mathrm{ref}}(x_0^l|c)} + \log Z(c) \right] \right)}  (24)

By factoring out \log Z(c) and simplifying, we get:

P(x_0^w \succ x_0^l \mid c) = \frac{1}{1 + \exp\left( \frac{\beta}{w_\theta^l} \log \frac{p^*(x_0^l|c)}{p_{\mathrm{ref}}(x_0^l|c)} - \frac{\beta}{w_\theta^w} \log \frac{p^*(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} \right)}  (25)
= \sigma\left( \frac{\beta}{w_\theta^w} \log \frac{p^*(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} - \frac{\beta}{w_\theta^l} \log \frac{p^*(x_0^l|c)}{p_{\mathrm{ref}}(x_0^l|c)} \right).  (26)

Thus, we arrive at the SDPO loss, which mirrors the form of the DPO loss shown above, but incorporates off-policy correction via importance weights:

\mathcal{L}_{\text{S-DPO}}(\theta) = -\mathbb{E}_{(c, x_0^w, x_0^l) \sim \mathcal{D}}\left[ \log \sigma\left( \frac{\beta}{w_\theta^w} \log \frac{p_\theta(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} - \frac{\beta}{w_\theta^l} \log \frac{p_\theta(x_0^l|c)}{p_{\mathrm{ref}}(x_0^l|c)} \right) \right].  (27)

To improve stability, we define a shared clipped inverse importance weight:

\tilde{w}_\theta = \max\left( \frac{1}{w_\theta^w}, \frac{1}{w_\theta^l} \right), \qquad \tilde{w}_\theta \leftarrow \mathrm{clip}(\tilde{w}_\theta,\, 1-\epsilon,\, 1+\epsilon).  (28)

We then use \tilde{w}_\theta to scale the entire preference score difference, leading to the final SDPO loss:

\mathcal{L}_{\text{SDPO}}(\theta) = -\mathbb{E}_{(c, x_0^w, x_0^l) \sim \mathcal{D}}\left[ \log \sigma\left( \beta \tilde{w}_\theta \cdot \left( \log \frac{p_\theta(x_0^w|c)}{p_{\mathrm{ref}}(x_0^w|c)} - \log \frac{p_\theta(x_0^l|c)}{p_{\mathrm{ref}}(x_0^l|c)} \right) \right) \right].  (29)

A.2 Applicability of SDPO to Large Language Models

Although our experiments are conducted in the diffusion model setting, SDPO is modality-agnostic and can be readily applied to large language models (LLMs); we include such results in a later section to demonstrate its effectiveness.

A.3 Diffusion-Based SDPO Objective

To adapt the SDPO objective to the diffusion setting, we follow the formulation introduced in Diffusion-DPO and express the preference score difference in terms of the diffusion model's transition probabilities. This leads to the following loss:

\mathcal{L}_{\text{SDPO}}(\theta) \leq -\mathbb{E}_{(x_0^w, x_0^l) \sim \mathcal{D},\, t \sim \mathcal{U}(0,T)}\left[ \log \sigma\left( \beta T \tilde{w}_\theta \cdot \left( \log \frac{p_\theta(x_{t-1}^w|x_t^w)}{p_{\mathrm{ref}}(x_{t-1}^w|x_t^w)} - \log \frac{p_\theta(x_{t-1}^l|x_t^l)}{p_{\mathrm{ref}}(x_{t-1}^l|x_t^l)} \right) \right) \right]  (30)

A.4 Reformulating Flows as SDEs for Preference Optimization

While SDPO naturally fits stochastic diffusion models, applying it to deterministic flow-based models poses challenges. Flow models are governed by deterministic ODEs, which lack the stochasticity needed for effective preference-based learning. Inspired by Stochastic Interpolants [1], we instead reformulate the flow model as a stochastic process by constructing a corresponding stochastic differential equation (SDE), enabling stochastic training dynamics compatible with SDPO.

To model a smooth transformation from a base distribution \rho_0 to a target distribution \rho_1, we define a stochastic process x_t \in \mathbb{R}^d over time t \in [0,1] via the following SDE:

dx_t = b(t, x_t)\, dt + \sqrt{2\epsilon(t)}\, dW_t,  (31)

where b(t, x) is a time-dependent drift field, \epsilon(t) is a scalar diffusion schedule, and W_t is standard Brownian motion. Following the path construction in Flow Matching, we assume an interpolated trajectory of the form x_t = \alpha(t) x_1 + \beta(t) z, where x_1 \sim \rho_1, z \sim \mathcal{N}(0, I), and \alpha(t), \beta(t) are scalar interpolation schedules. The drift b(t, x) that follows this trajectory in expectation, with an additional stochastic correction, is given by:

b(t, x) = \dot{\alpha}(t)\, \eta_z^{os}(t, x) + \frac{\dot{\beta}(t)}{\beta(t)} \left( x - \alpha(t)\, \eta_z^{os}(t, x) \right) - \frac{\epsilon(t)}{\alpha(t)}\, \eta_z^{os}(t, x),  (32)

where:
• \eta_z^{os}(t, x) \approx \mathbb{E}[z \mid x_t = x] is a learned denoiser that approximates the conditional expectation;
• \alpha(t), \beta(t) define the linear interpolation schedule from z to x_1;
• in practice, the denoiser replaces the score function \nabla \log \rho(t, x).

Euler–Maruyama Update Rule. To simulate the stochastic trajectory, we discretize Equation (31) via the Euler–Maruyama method:

x_{j+1} = x_j + b(t_j, x_j)\, \Delta t + \sqrt{2\epsilon(t_j)\Delta t}\; \xi_j,  (33)

where \xi_j \sim \mathcal{N}(0, I) and \Delta t = t_{j+1} - t_j is the integration step. We validate the effectiveness of this SDE-based reformulation on the Wan-1.3B flow model, as demonstrated in Tab. 1.
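As a concrete illustration, the final clipped objective of Eq. (29), built from the importance weights of Eq. (15) and the shared clipped inverse weight of Eq. (28), can be sketched in NumPy as follows. This is a minimal sketch operating on scalar log-probabilities; the function name, argument layout, and hyperparameter values are illustrative assumptions, not the training implementation.

```python
import numpy as np

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)) = min(x, 0) - log(1 + exp(-|x|)).
    x = np.asarray(x, dtype=float)
    return np.minimum(x, 0.0) - np.log1p(np.exp(-np.abs(x)))

def sdpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
              q_logp_w, q_logp_l, beta=0.1, eps=0.2):
    """Sketch of the SDPO loss (Eq. 29) for one preference pair.

    logp_*     : log p_theta(x0 | c) for the preferred (w) / dispreferred (l) sample
    ref_logp_* : log p_ref(x0 | c)
    q_logp_*   : log q(x0 | c) under the off-policy sampling distribution
    """
    # Importance weights w_theta = p_theta / q (Eq. 15), computed in log space.
    w_w = np.exp(logp_w - q_logp_w)
    w_l = np.exp(logp_l - q_logp_l)
    # Shared clipped inverse importance weight (Eq. 28).
    w_tilde = np.clip(np.maximum(1.0 / w_w, 1.0 / w_l), 1.0 - eps, 1.0 + eps)
    # Preference score difference, scaled by beta * w_tilde (Eq. 29).
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return float(np.mean(-log_sigmoid(beta * w_tilde * margin)))
```

When the data are on-policy (q = p_theta), the importance weights reduce to 1 and the loss coincides with the standard DPO objective; off-policy, the clipped weight rescales the preference margin.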
B More Qualitative Results

B.1 LLM Preference Alignment with SDPO

To demonstrate the modality-agnostic nature of SDPO, we apply it to preference alignment training for large language models (LLMs). Following SimPO [42], we use two instruction-tuned models as base learners: LLaMA-3-8B-Instruct and Mistral-7B-Instruct, both of which have undergone extensive instruction tuning and are stronger than the SFT-only models used to generate training data.

For supervision, we use prompts from the UltraFeedback dataset and generate 5 candidate responses per prompt using the SFT model with temperature 0.8. We then score these responses using llm-blender/PairRM [31], selecting the highest- and lowest-ranked responses as the preferred y_w and dispreferred y_l, respectively.

We evaluate on two popular benchmarks: MT-Bench [73] and AlpacaEval 2 [37]. For AlpacaEval 2, we report both win rate (WR) and length-controlled win rate (LC) [11], and for MT-Bench, the average GPT-4 evaluation score. As shown in Tab. 2, SDPO significantly outperforms the SFT baseline across all evaluation benchmarks. Moreover, it consistently surpasses existing alignment methods, demonstrating a clear advantage in aligning LLMs with human preferences.

Table 2: Comparison of instruction tuning methods on Mistral-Instruct (7B) and Llama-3-Instruct (8B) using AlpacaEval 2 and MT-Bench (GPT-4 only).

                 Mistral-Instruct (7B)       Llama-3-Instruct (8B)
Method           LC (%)  WR (%)  GPT-4       LC (%)  WR (%)  GPT-4
SFT              17.1    14.7    7.5         26.0    25.3    8.1
SLiC-HF [71]     24.1    24.6    7.8         26.9    27.5    8.1
DPO [50]         26.8    24.9    7.6         40.3    37.9    8.0
IPO [2]          20.3    20.3    7.8         35.6    35.6    8.3
CPO [64]         23.8    28.8    7.5         28.9    32.2    8.0
KTO [19]         24.5    23.6    7.7         33.1    31.8    8.2
ORPO [28]        24.5    24.9    7.7         28.5    27.4    8.0
R-DPO [46]       27.3    24.5    7.5         41.1    37.8    8.0
SimPO [42]       32.1    34.8    7.6         44.7    40.5    8.0
SDPO             31.8    35.1    7.9         43.5    41.6    8.2

B.2 Image-Based Evaluation of Diffusion Alignment Methods

To further compare SDPO with existing diffusion-based alignment approaches, and to demonstrate its compatibility with both the Diffusion-DPO and D3PO frameworks, we conduct experiments on image generation tasks. Due to computational constraints, many alignment algorithms are not directly applicable to video generation; thus, we perform evaluations in the image domain for a fair comparison. Our setup follows that of D3PO [65], using Stable Diffusion v1.5 as the base model. We evaluate aligned models using three widely adopted metrics: CLIP score, LAION Aesthetic score, and ImageReward score, which collectively assess semantic alignment, perceptual quality, and human preference. As shown in Tab. 3, SDPO leads to clear improvements over the base SFT model and consistently outperforms D3PO across all evaluation metrics.

Table 3: Comparison of alignment methods on image generation tasks. Higher is better.

Method         CLIP Score   LAION Aesthetic   ImageReward
SDv1.5         30.7         5.25              0.04
D3PO           31.9         5.68              0.27
SDPO (Ours)    32.33        5.81              0.38

C Qualitative Comparison of Visual Outputs

To further evaluate the effectiveness of SDPO, we present qualitative comparisons of generated video outputs across multiple aspects, including text alignment, motion continuity, and style fidelity.

Figure 8: Visualization of probability density changes during SDPO training across different timestep intervals: (a) t in [0, 100], (b) t in [500, 600], and (c) t in [900, 1000]. Each panel shows how the predicted probability of positive vs. negative samples evolves over training steps within the specified timestep range.

As illustrated in Fig. 9 and Fig. 10, SDPO significantly improves upon the base model (Wan-1.3B) in overall aesthetic quality.
For instance, when generating stylized prompts such as "a cartoon dog" or "a cyberpunk panda," SDPO produces videos that better adhere to the intended artistic style compared to the base model. In terms of motion consistency, SDPO enhances the temporal dynamics of the generated videos: while outputs from the base model tend to be static or lack coherent motion, SDPO produces more fluid and plausible motion sequences. We also compare SDPO with Diffusion-DPO in Fig. ??. Across different evaluation axes (semantics, motion, and style), our method demonstrates consistent visual advantages, highlighting the effectiveness of our modality-agnostic preference optimization approach.

C.1 Timestep-wise Visualization of SDPO

To better understand the behavior of SDPO during training, we visualize the predicted probabilities of positive and negative samples across three timestep intervals: [0, 100], [500, 600], and [900, 1000]. As shown in Figure 8, SDPO gradually widens the gap between positive and negative probabilities, particularly in the early and mid timestep ranges, where positive scores rise while negative scores remain flat or decrease. In contrast, Diffusion-DPO shows an unexpected trend in [0, 100], where both positive and negative probabilities decline simultaneously. This suggests that SDPO better aligns with the intended preference objective and produces more interpretable learning dynamics.

D Limitations

While SDPO significantly improves the stability and alignment quality of diffusion-based preference optimization, several limitations remain. The method relies on proxy importance weights derived from approximated reverse
transitions, which may not fully capture dynamic changes in model behavior during training, limiting its responsiveness in highly non-stationary or long-horizon scenarios. Although SDPO effectively corrects off-policy bias, it assumes access to high-quality offline preference data and may degrade in settings with noisy or sparse feedback. While we demonstrate strong generalization to both score-based and flow-based diffusion models, extending SDPO to complex multimodal tasks (e.g., video-audio-text generation) remains an open challenge. Additionally, although our preliminary experiments show that SDPO is applicable to large language models (LLMs) and particularly suitable for online or continual learning, its effectiveness in such dynamic settings has not been fully explored. Investigating SDPO under streaming feedback, evolving preference distributions, or reinforcement-style update regimes is a promising direction. Finally, the extra computational overhead from importance weighting and masking, while moderate, may impact scalability to very large datasets or ultra-high-resolution generation tasks. Future work may explore adaptive weighting strategies, online preference data integration, and hybrid objectives combining SDPO with supervised fine-tuning or reinforcement learning techniques.

Figure 9: Side-by-side visual comparison of the SDPO-optimized model and the Wanx-1.3B base model.

Figure 10: Side-by-side visual comparison of the SDPO-optimized model and the Wanx-1.3B base model.
arXiv:2505.21895v1 [cs.LG] 28 May 2025

Compressing Sine-Activated Low-Rank Adapters through Post-Training Quantization

Cameron Gordon* (Australian Institute for Machine Learning, University of Adelaide)
Yiping Ji* (Australian Institute for Machine Learning, University of Adelaide; DATA61, CSIRO)
Hemanth Saratchandran* (Australian Institute for Machine Learning, University of Adelaide)
Paul Albert (Australian Institute for Machine Learning, University of Adelaide)
Simon Lucey (Australian Institute for Machine Learning, University of Adelaide)

Abstract

Low-Rank Adaptation (LoRA) has become a standard approach for parameter-efficient fine-tuning, offering substantial reductions in trainable parameters by modeling updates as the product of two low-rank matrices. While effective, the low-rank constraint inherently limits representational capacity, often resulting in reduced performance compared to full-rank fine-tuning. Recent work by Ji et al. (2025) has addressed this limitation by applying a fixed-frequency sinusoidal transformation to low-rank adapters, increasing their stable rank without introducing additional parameters. This raises a crucial question: can the same sine-activated technique be successfully applied within the context of Post-Training Quantization to retain its benefits even after model compression? In this paper, we investigate this question by extending the sinusoidal transformation framework to quantized LoRA adapters. We develop a theoretical analysis showing that the stable rank of a quantized adapter is tightly linked to that of its full-precision counterpart, motivating the use of such rank-enhancing functions even under quantization. Our results demonstrate that the expressivity gains from a sinusoidal non-linearity persist after quantization, yielding highly compressed adapters with negligible loss in performance.
We validate our approach across a range of fine-tuning tasks for language, vision, and text-to-image generation, achieving significant memory savings while maintaining competitive accuracy.

1 Introduction

Parameter-efficient fine-tuning (PEFT) has emerged as a core component of modern machine learning pipelines (Houlsby et al. (2019); Han et al. (2024)). Most PEFT methods adapt a frozen pre-trained backbone by learning a small set of task-specific parameters, often implemented as additive weight updates. Among these approaches, Low-Rank Adapters (LoRA) have become especially prominent, with a rapidly expanding literature (Hu et al. (2022); Mao et al. (2025)). Recent work has sought to further reduce the number of trainable parameters by exploring alternative low-rank decompositions (Karimi Mahabadi et al. (2021); Edalati et al. (2022); Liu et al. (2024b); He et al. (2023); Ding et al. (2023); Albert et al. (2025); Kopiczko et al. (2024); Koohpayegani et al. (2024)).

*Equal contribution. Correspondence to Cameron Gordon <cameron.gordon@adelaide.edu.au> and Hemanth Saratchandran <hemanth.saratchandran@adelaide.edu.au>.

Recently, a new fine-tuning paradigm has emerged that enhances the expressive power of low-rank adapters by applying rank-enhancing functions component-wise. Introduced in Ji et al. (2025), this approach demonstrates that applying a non-linear transformation, specifically a fixed-frequency sinusoidal function, to a low-rank adapter can significantly increase its rank. Notably, this gain in expressivity comes at no additional parameter cost, preserving the efficiency of LoRA while yielding higher-rank representations.

In this paper, we investigate the interaction between rank-enhancing sinusoidal non-linearities and quantization, a technique that maps full-precision parameters to a smaller set of discrete values, ideally with minimal impact on model performance (Han et al. (2015); Gholami et al. (2021); Li et al. (2024a)).
Quantization is a key enabler for deploying large models
https://arxiv.org/abs/2505.21895v1
on resource-constrained hardware, offering improvements in memory efficiency, computational throughput, and energy consumption (Gholami et al. (2021); Dettmers et al. (2024); Xu et al. (2024); Kaushal et al. (2025)).

To study this interaction, we develop a theoretical framework that characterizes how the rank of an adapter changes under quantization, showing that it is tightly controlled by the rank of the original, unquantized adapter. This leads to our key insight: when the adapter has low rank, as is the case with LoRA, quantization preserves this structure. However, by applying a component-wise sinusoidal non-linearity after quantization, we can enrich the representational capacity of the adapter, effectively compensating for the rank limitation and enabling more expressive quantized models.

This insight is particularly relevant in the context of adapter quantization, which has emerged as one of two dominant approaches in quantized fine-tuning. The first, exemplified by QLoRA (Dettmers et al. (2023); Badri and Shaji (2024)), applies quantization to the base model while maintaining high-precision adapters and activations. This approach is primarily motivated by reducing memory overhead during fine-tuning, making it feasible to adapt large language models on a single GPU (Dettmers et al. (2024)). The second approach focuses on quantizing the adapters themselves (Yao and Klimovic (2023); Liu et al. (2024a); Isik et al. (2023); Ping et al. (2024); Jie et al. (2023)), enabling highly compact and transferable fine-tuned models. Our work follows this latter direction, showing that rank-enhancing sinusoidal functions can be easily integrated as a plug-in component into quantized adapters, significantly improving their expressivity while retaining the memory efficiency that makes adapter quantization attractive.
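The adapter-quantization setting above can be made concrete with a minimal simulated quantizer. The sketch below uses a plain 1D k-means codebook, in the spirit of the k-means scheme described in Section 3.1; the function name, quantile initialization, and iteration count are our own illustrative choices, not the k-means1d implementation used in our experiments.

```python
import numpy as np

def kmeans_quantize(weights, bits=4, iters=25):
    """Simulated quantization of a weight tensor to a 2**bits-value codebook.

    Returns the dequantized tensor (same shape, restricted values) and the codebook.
    """
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids on evenly spaced quantiles of the weight distribution.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    # Final assignment with the converged codebook.
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    deq = centroids[assign].reshape(weights.shape)
    return deq, centroids
```

The quantization error is then the residual between the dequantized and original weights, matching the convention of Eq. (3); fewer bits give a coarser codebook and a larger residual.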
To our knowledge, such rank-enhancing functions have not yet been explored in the quantization literature, and we view this work as a first step towards bridging that gap. Our main contributions are as follows:

• We provide the first theoretical analysis showing how quantization affects the rank of a fine-tuning adapter, and show that this rank is tightly governed by the rank of its unquantized counterpart.
• Based on our theoretical results, we demonstrate that the effects of quantization on rank can be mitigated by applying rank-enhancing functions in the form of sinusoids with fixed frequencies.

We validate our approach through extensive experiments on vision and language tasks, including Large Language Model adaptation, Vision-Language Model adaptation, and Text-to-Image Generation. For evaluation on Commonsense Reasoning, we achieve up to a 66% reduction in memory usage relative to full-precision LoRA models, without compromising performance.

2 Related Work

Rank-Enhancing Functions. The most relevant rank-enhancing adapter works related to our approach are Ji et al. (2025) and Li et al. (2024b), who investigate the use of sine non-linearities in low-rank adaptation. We directly extend this approach by considering the effect of quantization on adapter performance.

Parameter-efficient adapters. Parameter-efficient adaptation is a common fine-tuning strategy, in which a pre-trained base model is frozen and a minimal number of adapter weights are trained on new data (Houlsby et al. (2019)). Low-Rank Adapters are a common variant, in which the adapter comprises two low-rank matrices (Hu et al. (2022)). VeRA
Kopiczko et al. (2024), RandLoRA Albert et al. (2025), and NOLA Koohpayegani et al. (2024) use combinations of random projections to reduce the number of parameters contained within the adapters. QA-LoRA Xu et al. (2024) produces adapters that can be merged with the quantized base model, enabling low-precision inference.

Delta compression and quantization. Although adapters represent a trivial proportion of the total number of parameters in a network (typically less than 1%), a recent branch of research has focused on the specific compression of these updates. Termed Delta Compression, this branch recognizes the practical importance of reducing the memory throughput of fine-tuned updates, which may be distributed at scale to many parties with a common base model (Isik et al. (2023); Yao and Klimovic (2023); Brüel-Gabrielsson et al. (2025)). Within this framework it is typical to quantize the adapters by mapping values to a limited set of floating points; this is often combined with lossless entropy compression such as zip. The quantization works most related to our approach are GPT-Zip (Isik et al. (2023)), Delta-Zip (Yao and Klimovic (2023)), BitDelta (Liu et al. (2024a)), and Bi-LoRA (Jie et al. (2023)). Ping et al. (2024) use a mixed-precision strategy, devoting higher precision to larger singular values. Jiang et al. (2024) use group-wise dropout and separate quantization. Ryu et al. (2023) focus on low-rank residuals. Liu et al. (2024a) use binary adapters for delta compression. Our work differs from these models through our specific focus on the rank-increasing properties of a sine adaptation within a quantized framework.

3 Theoretical Framework

3.1 Preliminaries

Sine-Activated Low-Rank Adapters. Recent works by Ji et al. (2025) and Li et al. (2024b) have explored the use of non-linear sine activations in adapter modules.
Unlike common activations such as ReLU, sine functions can increase the effective rank of a matrix without adding additional parameters, offering a simple yet effective means of enhancing low-rank adapters. Specifically, Ji et al. (2025) introduced a sine-activated low-rank adapter of the form:

\frac{\sin(\omega AB)}{\gamma}  (1)

where \omega is a frequency parameter, \gamma is a scaling factor, and A \in \mathbb{R}^{m \times k}, B \in \mathbb{R}^{k \times n} are low-rank matrices with bottleneck dimension k.

Stable Rank. The key insight of Ji et al. (2025) is that applying a sine function with a large enough frequency \omega to the low-rank product AB increases the stable rank of the matrix AB, yielding a high-rank adapter. The stable rank of a matrix A is defined by:

\mathrm{SR}(A) := \frac{\|A\|_F^2}{\sigma_{\max}(A)^2}  (2)

where \|A\|_F denotes the Frobenius norm and \sigma_{\max}(A) the maximum singular value of A. Stable rank provides a softer measure of a matrix's effective dimensionality (Martinsson and Tropp (2020)). Unlike the classical rank, which counts the number of nonzero singular values, the stable rank reflects how evenly the spectral energy is distributed. For instance, two matrices with identical rank can have vastly different stable ranks depending on the decay of their singular values. This nuance is critical when aiming to enhance low-rank adapters: even without increasing the classical rank, we can improve the adapter's expressivity by boosting its stable
rank. This is precisely the property exploited by sine-activated adapters in Ji et al. (2025).

Quantization. A quantization function Q(·) maps values from a less restricted set to a more restricted set, A → B. Practically, this may involve explicit conversion of data types (e.g., 16-bit precision to 4-bit precision), or maintaining the same data type but restricting the set of allowed values (e.g., mapping from 2^16 discrete values to 2^4) (Gholami et al. (2021); Gray and Neuhoff (1998)). This simulated quantization is common for memory compression and often coupled with an integer look-up table or an entropy coder (Han et al. (2015); Jacob et al. (2018)). It is conventional to define quantization error as the residual resulting from a quantization map, which can be treated as a random variable (Gersho and Gray (1991); Gray and Neuhoff (1998)):

\epsilon = Q(A) - A  (3)

For our experiments, we use a k-means quantization scheme due to its theoretical optimality and tractable implementation (Gersho and Gray (1991); Han et al. (2015)). This is implemented using the k-means1d package (Steinberg (2019)), which provides an efficient wrapper for a fast k-means solver that runs in O(kn + n log n) for n 1D data points and k clusters, based on Wu (1991); Grønlund et al. (2018). Further quantization experimental details are included in the Supplementary Materials.

3.2 Main Theorem

In this section, we present our main theoretical result, which establishes that the stable rank of a quantized matrix is quantitatively governed by the stable rank of its unquantized counterpart. We will use the notation \sigma_{\max} to denote the maximum singular value of a matrix, \sigma_{\min} to denote the minimum singular value, and \|\cdot\|_F to denote the Frobenius norm.

Theorem 3.1. Let A be a fixed matrix and let Q denote a quantization operator so that Q(A) = A - \epsilon. Assume that \sigma_{\min}(\epsilon) \leq 1 and \sigma_{\max}(A) \geq \sigma_{\max}(\epsilon), with \sigma_{\max}(A) \gg 1. Then:

\frac{1}{2}\left( \sqrt{\mathrm{SR}(A)} - \frac{\|\epsilon\|_F}{\sigma_{\max}(A)} \right) \leq \sqrt{\mathrm{SR}(Q(A))} \leq 2\left( \sqrt{\mathrm{SR}(A)} + \frac{\|\epsilon\|_F}{\sigma_{\max}(A)} \right)  (4)

where \epsilon is defined by Eq. (3).

Proof.
We recall from Section 3.1 that we can write:

Q(A) = A - \epsilon  (5)

where \epsilon is viewed as a random noise matrix. We then use the triangle inequality to obtain:

\|A\|_F - \|\epsilon\|_F \leq \|Q(A)\|_F \leq \|A\|_F + \|\epsilon\|_F.  (6)

Using inequalities for the maximum singular value of a matrix, we have:

\sigma_{\max}(A) - \sigma_{\min}(\epsilon) \leq \sigma_{\max}(Q(A)) \leq \sigma_{\max}(A) + \sigma_{\max}(\epsilon).  (7)

To prove the upper bound, observe that:

\sqrt{\mathrm{SR}(Q(A))} = \frac{\|Q(A)\|_F}{\sigma_{\max}(Q(A))}  (8)
\leq \frac{\|A\|_F + \|\epsilon\|_F}{\sigma_{\max}(Q(A))} \quad \text{by Eq. (6)}  (9)
\leq \frac{\|A\|_F + \|\epsilon\|_F}{\sigma_{\max}(A) - \sigma_{\min}(\epsilon)} \quad \text{by Eq. (7)}  (10)
\leq 2\, \frac{\|A\|_F + \|\epsilon\|_F}{\sigma_{\max}(A)}  (11)

where to get the last inequality we use the assumption that \sigma_{\min}(\epsilon) \leq 1 and \sigma_{\max}(A) \gg 1, so that \sigma_{\max}(A)/2 \leq \sigma_{\max}(A) - \sigma_{\min}(\epsilon). The upper bound then follows from the definition of the stable rank. To prove the lower bound we proceed in a similar way:

\sqrt{\mathrm{SR}(Q(A))} = \frac{\|Q(A)\|_F}{\sigma_{\max}(Q(A))}  (12)
\geq \frac{\|A\|_F - \|\epsilon\|_F}{\sigma_{\max}(Q(A))} \quad \text{by Eq. (6)}  (13)
\geq \frac{\|A\|_F - \|\epsilon\|_F}{\sigma_{\max}(A) + \sigma_{\max}(\epsilon)} \quad \text{by Eq. (7)}  (14)
\geq \frac{1}{2}\, \frac{\|A\|_F - \|\epsilon\|_F}{\sigma_{\max}(A)}  (15)

where the last inequality comes from the assumption that \sigma_{\max}(A) \geq \sigma_{\max}(\epsilon). The lower bound then follows from the definition of stable rank.

Figure 1: A sine-activated low-rank matrix sin(ωAB) increases the stable rank relative to a low-rank matrix AB. By varying the quantization level, sin(ω Q(A)Q(B)) interpolates the effect on stable rank between these two values. Note that simply quantizing the low-rank matrix Q(A)Q(B) does not lead to an increase in
https://arxiv.org/abs/2505.21895v1
the stable rank regardless of the level of precision.

Theorem 3.1 presents the key insight of this work: the stable rank of a quantized adapter remains low if the original (unquantized) adapter has low stable rank, as the quantized stable rank is controlled by the unquantized one. This observation motivates applying a sinusoidal function, with a large frequency ω, after quantization. By leveraging results from Ji et al. (2025), we note that a sine function with large frequency can increase the stable rank of the quantized adapter, effectively boosting its expressivity without sacrificing quantization efficiency. This yields a high-rank quantized adapter while retaining the compression benefits of quantization. In particular, this makes a sinusoidal activation applied after quantization an effective way to obtain better performance while still retaining compression benefits.

Figure 1 provides an empirical illustration of our main insight. Starting with two low-rank matrices A and B, whose product AB is also low-rank, we apply quantization Q to A and B at varying bit-widths. The figure plots the stable ranks of AB, the quantized product Q(A)Q(B), the sine-activated product sin(ωAB), and sin(ωQ(A)Q(B)). As shown, the stable rank of sin(ωQ(A)Q(B)) increases with higher quantization bits, demonstrating how sinusoidal activation can effectively restore stable rank after quantization.

4 Results

4.1 Large Language Model Adaptation

Configurations. We fine-tune LLaMA 3-8B on commonsense reasoning tasks, training on the 15k dataset for 1 epoch. Following training, we apply Post-Training Quantization at different bits-per-parameter, with each target tensor quantized independently. We then evaluate on each of the test sets directly, without further fine-tuning. We evaluate on a standard suite of benchmarks including BoolQ Clark et al. (2019), PIQA Bisk et al. (2019), SIQA Sap et al. (2019), HellaSwag (HS) Zellers et al. (2019), WinoGrande (WG) Sakaguchi et al.
(2021), ARC-c and ARC-e Clark et al. (2018), and OBQA Mihaylov et al. (2018). We use a frozen LLaMA-3-8B base model from Hugging Face AI@Meta (2024). Each base experiment is run on one H100 GPU using a batch size of 128, and re-used for quantizing to different levels of precision. Low-rank adapters are applied to the weight matrices W_q, W_k, W_v, W_up, and W_down. We use the ω values from Ji et al. (2025), who apply larger ω for lower-rank models. Following Ji et al. (2025) we set γ = √n, where n is the row dimension of the weight matrix.

Analysis. Full results are recorded in Supplementary tables 6 and 7, which show the average performance over commonsense reasoning tasks with a non-quantized base model. We note that SineLoRA consistently outperforms the LoRA and DoRA models with quantized adapters. The improvement is such that the rank-8 SineLoRA at 5 bits outperforms the full-precision LoRA with 33.5% of the memory (9.1 MB vs. 27.1 MB). Interestingly, we find that while both LoRA and SineLoRA are robust to low (2-bit) quantization, the performance of DoRA degrades significantly until the higher 5-bit precision. We suggest that this may relate to the sensitivity of the scaling parameters in DoRA to quantization. We additionally note that while the performance of quantized and non-quantized base models is broadly comparable (see Supplementary Materials), there are some instances where the quantized base model achieves superior performance.

Table 1: Commonsense Reasoning performance for LoRA and SineLoRA under different quantization rates, averaged across tasks. Full refers to the typical 16-bit precision used for adapters.

| Method | Rank 1 | Rank 2 | Rank 4 | Rank 8 | Rank 16 |
|---|---|---|---|---|---|
| LoRA (2-bit) | 69.7 | 71.0 | 74.7 | 75.2 | 77.3 |
| SineLoRA (2-bit) | 70.0 | 73.7 | 75.1 | 76.4 | 77.9 |
| Memory (MB) | 0.6 | 1.1 | 2.2 | 4.3 | 8.6 |
| LoRA (3-bit) | 70.0 | 73.1 | 75.5 | 76.5 | 78.4 |
| SineLoRA (3-bit) | 70.5 | 74.4 | 75.9 | 77.7 | 78.6 |
| Memory (MB) | 0.8 | 1.5 | 3.0 | 6.0 | 11.9 |
| LoRA (5-bit) | 69.4 | 73.1 | 75.6 | 76.7 | 78.6 |
| SineLoRA (5-bit) | 69.8 | 74.4 | 76.1 | 78.1 | 78.8 |
| Memory (MB) | 1.2 | 2.3 | 4.5 | 9.1 | 18.1 |
| LoRA (Full) | 73.7 | 74.8 | 76.5 | 78.0 | 79.0 |
| SineLoRA (Full) | 72.8 | 75.1 | 78.5 | 78.8 | 78.9 |
| Memory (MB) | 3.4 | 6.8 | 13.5 | 27.1 | 54.0 |
| Parameters (M) | 1.8 | 3.5 | 7.1 | 14.2 | 28.3 |

Figure 2: Commonsense Reasoning performance for SineLoRA and LoRA with a frozen non-quantized LLaMA-3-8B base model, shown as accuracy vs. memory (MB) at 2-bit, 3-bit, and 5-bit quantization. SineLoRA exceeds the benchmark LoRA performance for all rank and quantization levels, averaged across all tasks. Dotted lines denote comparison points.

Compression Analysis. The Bjøntegaard Delta is commonly applied for comparing video and image compression codecs Bjøntegaard (2001); Herglotz et al. (2022, 2024), and has occasionally been applied to other modalities such as point cloud Wang et al. (2021a,b); Herglotz et al. (2024); Barman et al. (2022) or Neural Radiance Field compression Ji et al. (2025). The metric evaluates the area under the curve between two algorithms in either the memory (see fig. 2) or performance direction. The BD-Rate denotes the average memory improvement at given accuracy levels, while BD-Accuracy denotes the average accuracy improvement at given memory levels. Table 2 shows this comparison on Commonsense Reasoning, showing memory improvements of 41% between the 2-bit LoRA and SineLoRA models.
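As an illustration, the BD-Rate can be computed with the standard Bjøntegaard procedure: fit a low-order polynomial to each method's (accuracy, log-memory) curve, integrate both fits over the shared accuracy range, and convert the mean log-memory gap back to a percentage. The sketch below is a minimal reimplementation of that recipe with adapter memory standing in for bit-rate; it is not the authors' evaluation code, and the `bd_rate` helper and the example values (the 2-bit rows of Table 1) are for illustration only.

```python
import numpy as np

def bd_rate(mem_anchor, acc_anchor, mem_test, acc_test):
    """Bjøntegaard Delta-Rate: average % change in memory of the test
    method vs. the anchor at matched accuracy (negative = memory saved)."""
    log_anchor = np.log10(mem_anchor)
    log_test = np.log10(mem_test)
    # Fit accuracy -> log-memory for each method (curves should be monotone).
    fit_anchor = np.polyfit(acc_anchor, log_anchor, 3)
    fit_test = np.polyfit(acc_test, log_test, 3)
    # Integrate both fits over the accuracy interval covered by both curves.
    lo = max(np.min(acc_anchor), np.min(acc_test))
    hi = min(np.max(acc_anchor), np.max(acc_test))
    int_anchor = np.diff(np.polyval(np.polyint(fit_anchor), [lo, hi]))[0]
    int_test = np.diff(np.polyval(np.polyint(fit_test), [lo, hi]))[0]
    # Mean log-memory gap, converted back to a percentage rate change.
    avg_diff = (int_test - int_anchor) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0

# Illustrative check with the 2-bit rows of Table 1 (memory in MB, ranks 1-16).
memory = [0.6, 1.1, 2.2, 4.3, 8.6]
lora_acc = [69.7, 71.0, 74.7, 75.2, 77.3]
sinelora_acc = [70.0, 73.7, 75.1, 76.4, 77.9]
print(bd_rate(memory, lora_acc, memory, sinelora_acc))  # negative: memory saved
```

Swapping anchor and test flips the sign of the mean log-memory gap, and identical curves give exactly 0%; BD-Accuracy follows the same recipe with the roles of the two axes exchanged.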
See the Supplementary Materials for more discussion of this metric.

4.2 Vision-Language Model Adaptation

Data. We fine-tune CLIP Radford et al. (2021) on 11 standard image classification datasets, obtained by following Zhang et al. (2024). These include: Cars Krause et al. (2013), DTD Cimpoi et al. (2014), EuroSAT Helber et al. (2018), Food101 Bossard et al. (2014), Caltech101 Fei-Fei et al. (2006), Sun397 Xiao et al. (2016), FGVCAircraft Maji et al. (2013), Flowers102 Nilsback and Zisserman (2008), ImageNet Russakovsky et al. (2015), Oxford Pets Parkhi et al. (2012), and UCF101 Soomro et al. (2012). We compare the performance of LoRA Hu et al. (2022) and SineLoRA Ji et al. (2025) for few-shot adaptation using a ViT-B/32 backbone following Post-Training Quantization.

Table 2: Bjøntegaard Analysis for Table 1, taking the respective LoRA model as the baseline codec. Rate-distortion curves are generated by keeping quantization fixed and varying the number of parameters through rank. SineLoRA demonstrates improved performance at each quantization level.

| Quantization Level | BD-Rate ↓ | BD-Accuracy ↑ |
|---|---|---|
| 2 | -41.60% | 1.29% |
| 3 | -28.51% | 0.88% |
| 5 | -28.04% | 0.96% |
| 16 | -30.46% | 0.69% |

Table 3: Vision-Language Model Adaptation (averaged over 11 tasks) ↑

| Model | Rank | 1-Bit | 2-Bit | 3-Bit | 4-Bit | 5-Bit | 8-Bit | Full | Params ('000) |
|---|---|---|---|---|---|---|---|---|---|
| LoRA | 2 | 67.3 | 70.0 | 74.1 | 76.0 | 76.4 | 76.4 | 76.5 | 123 |
| SineLoRA | 2 | 63.5 | 68.1 | 74.2 | 76.3 | 76.9 | 77.0 | 77.0 | 123 |
| LoRA | 5 | 70.5 | 74.7 | 77.0 | 77.5 | 77.8 | 77.8 | 77.9 | 307 |
| SineLoRA | 5 | 66.9 | 74.1 | 77.5 | 78.6 | 78.7 | 78.9 | 78.9 | 307 |
| LoRA | 10 | 71.6 | 77.2 | 78.3 | 78.8 | 78.7 | 78.8 | 78.9 | 614 |
| SineLoRA | 10 | 68.7 | 76.3 | 78.8 | 79.4 | 79.6 | 79.8 | 79.8 | 614 |
| LoRA | 16 | 72.9 | 78.1 | 79.2 | 79.4 | 79.5 | 79.4 | 79.5 | 983 |
| SineLoRA | 16 | 68.3 | 77.4 | 79.5 | 80.0 | 80.3 | 80.2 | 80.3 | 983 |

Configurations. Experiments are run on an NVIDIA GeForce RTX 4090 GPU with 24GB VRAM, using a batch size of 64, a ViT-B/32 base model, a learning rate of 0.001, weight decay of 0.1, 10 epochs, and the AdamW optimizer Loshchilov and Hutter (2019). Fine-tuning is conducted on the attention layers (W_q, W_k, and W_v) only. We fine-tune on few-shot tasks using 1 and 16 examples, employing different rank levels. We use ω = 200 for all experiments, and γ = √n where n is the weight row dimension.

Analysis. Table 3 shows our results on 1-shot classification averaged over 11 vision tasks. Consistent with the language model experiments, we find that SineLoRA outperforms the baseline LoRA at a comparable number of parameters. Interestingly, we note that SineLoRA only outperforms LoRA at 3 bits and higher, after which improvements are consistent. This is likely explained by recalling fig. 1, in which smaller stable rank improvements are observed for very low precision (1- and 2-bit) quantization. In the Supplementary Materials we include additional ablations, including results on 16-shot classification and a comparison with DoRA Liu et al. (2024b).

4.3 Text-to-Image Generation

Training Details. To investigate how SineLoRA performs on a text-to-image generation task, we adopt a DreamBooth fine-tuning pipeline Ruiz et al. (2023). DreamBooth is a method for adapting text-to-image diffusion models using just a few reference images of a target object. Our experiments are performed on Stable Diffusion 3 Medium Esser et al.
(2024), using the official Hugging Face implementation². For data, we use the DreamBooth dataset comprising 30 objects with 5-6 images per instance. For each object, we train a separate adapter. Following training, we quantize adapters to 1, 2, 3, and 5 bits using k-means quantization. These are evaluated using standard generative text prompts with 2 seeds each. For both LoRA and SineLoRA we train rank-4 adapters for 300 epochs using the AdamW optimizer Loshchilov and Hutter (2019) with a learning rate of 4×10⁻⁴. For SineLoRA we use a sinusoid frequency ω = 200 and γ = 2√n. All experiments are run on NVIDIA H100 GPUs, with each fine-tuning run taking around 7 minutes.

Analysis. Figure 3 shows a qualitative evaluation of SineLoRA and LoRA trained using DreamBooth. Results show increased object fidelity for the SineLoRA models, which is maintained

²https://github.com/huggingface/diffusers/tree/main/examples/dreambooth

Figure 3: DreamBooth Stable Diffusion comparison at 1-bit, 2-bit, 3-bit, and 5-bit quantization for LoRA and SineLoRA, for the prompt "A toy with tree and autumn leaves in the background" in the category robot toy. We find that SineLoRA exhibits greater consistency with target images (left) than LoRA even at low levels of
quantization.

Table 4: Comparison of LoRA and SineLoRA for Text-to-Image Generation. Best scores for each bit-width group and metric are highlighted in bold.

| Bits | Model | CLIP-I ↑ | CLIP-T ↑ | DINO ↑ |
|---|---|---|---|---|
| 1 | LoRA | 0.729 | 0.219 | 0.515 |
| 1 | SineLoRA | 0.746 | 0.219 | 0.554 |
| 2 | LoRA | 0.768 | 0.218 | 0.599 |
| 2 | SineLoRA | 0.780 | 0.219 | 0.616 |
| 3 | LoRA | 0.780 | 0.218 | 0.621 |
| 3 | SineLoRA | 0.785 | 0.219 | 0.625 |
| 5 | LoRA | 0.783 | 0.219 | 0.626 |
| 5 | SineLoRA | 0.787 | 0.219 | 0.629 |
| Full | LoRA | 0.784 | 0.321 | 0.626 |
| Full | SineLoRA | 0.790 | 0.317 | 0.632 |

at lower quantization levels than LoRA. In the Supplementary Materials we include additional qualitative analysis. Curiously, we often find that 1-bit models have less fidelity to the fine-tuned target image, and appear dominated by the prompt. We attribute this to the increased dominance of the base model weights in generation.

Quantitatively, we follow Ruiz et al. (2023) and report the average cosine similarities between CLIP/DINO embeddings of generated images and real images (CLIP-I and DINO), and of generated images and the prompt (CLIP-T) Radford et al. (2021); Caron et al. (2021). Table 4 shows results averaged over all 30 categories evaluated at epoch 300. We find consistent performance improvements at each quantization level for CLIP-I and DINO, which measure similarity to the target object. We find similar performance between the two models for CLIP-T, which measures similarity to the generative prompt. We provide additional analysis on individual category performance and further training ablations in the Supplementary Materials.

5 Discussion

Transfer protocol. The reader may reasonably wonder why we focus on the compression of adapters (measured in MB) when this represents a trivial proportion of total memory relative to a multi-billion-parameter base model (measured in GB). Practically, there is increasing interest in use cases where parties hold standard common base models (e.g., on edge devices such as phones), and wish to distribute task adaptations or model updates at scale Isik et al.
(2023); Yao and Klimovic (2023). Reducing adapter memory and bandwidth requirements therefore has substantial utility in real-world applications where adapters may need to be distributed to thousands or millions of devices.

6 Limitations

Post-Training Quantization. Experimentally, we have explored a Post-Training Quantization pipeline, which compresses models independently from the training procedure. While this is widely employed, and has practical advantages due to the ability to evaluate models rapidly across bit-rates without additional retraining, it may yield suboptimal performance relative to Quantization Aware Training, which incorporates quantization (at fixed bit-rates) into the training procedure Gholami et al. (2021); Rastegari et al. (2016). Exploring a pipeline for Quantization Aware Training with sinusoidal activations remains an important direction for future research.

Inference Precision. The quantization scheme we have employed uses simulated quantization, which maps tensors to a restricted set of float values (e.g. 2^4 values for 4-bit quantization) without recasting tensor data types Gholami et al. (2021). This is commonly employed in model compression, and prioritizes memory compression for efficient data transfer. As both inference and training are conducted in the original data type, it can be easily applied without modified memory types. However, this does not exploit GPU-level optimizations available for alternative data types Gholami et al. (2021); Dettmers et al.
(2024). Combining our approach with methods such as QA-LoRA, which enables INT-4 inference, may lead to additional efficiency improvements Xu et al. (2024).

7 Social and Ethical Considerations

There are well-documented potential harms enabled by fine-tuning for both language and vision models Hsu et al. (2024); Zong et al. (2024). This issue is not particular to our method but is a consideration for the broader field. Quantization involves a small distortion to fine-tuned weights. As a result, there is the possibility that mitigations designed to create aligned models may be affected, or made vulnerable to adversarial attacks, through model compression Dong et al. (2025); Belkhiter et al. (2024). Research in this area is still developing but is an important consideration for practitioners.

8 Conclusion

In this work, we have presented a theoretical and empirical study of compressing low-rank adapters through Post-Training Quantization. Our key insight is that the stable rank of a quantized adapter is tightly controlled by that of its full-precision counterpart, and as a result inherits its rank limitations. To address this, we leverage fixed-frequency sinusoidal transformations as a lightweight, parameter-free mechanism to improve expressivity. This yields a simple yet effective plug-in that improves the performance of quantized adapters across language, vision, and generative image tasks. Although our focus has been the Post-Training Quantization setting, we believe the proposed enhancement can extend to Quantization Aware Training and inference-time compression frameworks, such as QA-LoRA Xu et al. (2024). An interesting direction for future research would be its application in a practical setting for adapter distribution and deployment in bandwidth-controlled or large-scale settings, for example as proposed in Brüel-Gabrielsson et al. (2025).

References

AI@Meta. Llama 3 model card, 2024.

Paul Albert, Hemanth Saratchandran, Frederic Z.
Zhang, Cristian Rodriguez-Opazo, Anton van den Hengel, and Ehsan Abbasnejad. RandLoRA: Full-rank parameter-efficient fine-tuning of large models. In The Thirteenth International Conference on Learning Representations, 2025.

Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L. Croci, Bo Li, Pashmina Cameron, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. QuaRot: Outlier-free 4-bit inference in rotated LLMs. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Hicham Badri and Appu Shaji. Towards 1-bit machine learning models, 2024.

Nabajeet Barman, Maria G Martini, and Yuriy Reznik. Revisiting Bjontegaard delta bitrate (BD-BR) computation for codec compression efficiency comparison. In Proceedings of the 1st Mile-High Video Conference, pages 113–114, New York, NY, USA, 2022. Association for Computing Machinery.

Yannis Belkhiter, Giulio Zizzo, and Sergio Maffeis. HarmLevelBench: Evaluating harm-level compliance and the impact of quantization on model alignment. In NeurIPS Safe Generative AI Workshop 2024, 2024.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language, 2019.

Gisle Bjøntegaard. Calculation of average PSNR differences between RD-curves. Technical report, VCEG-M33, Austin, TX, USA, 2001.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, 2014.

Rickard Brüel-Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald,
Mikhail Yurochkin, and Justin Solomon. Compress then serve: Serving thousands of LoRA adapters with little overhead, 2025.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9630–9640, 2021.

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 3606–3613, 2014.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. In Proceedings of the 37th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2023. Curran Associates Inc.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36, 2024.

Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Sparse low-rank adaptation of pre-trained language models. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.

Peiran Dong, Haowei Li, and Song Guo. Durable quantization conditioned misalignment attack on large language models. In The Thirteenth International Conference on Learning Representations, 2025.
Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J Clark, and Mehdi Rezagholizadeh. KronA: Parameter efficient tuning with Kronecker adapter. arXiv preprint arXiv:2212.10650, 2022.

Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024.

Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594–611, 2006.

Allen Gersho and Robert M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, USA, 1991.

Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference, 2021.

Cameron Gordon, Shin-Fang Chng, Lachlan MacDonald, and Simon Lucey. On quantizing implicit neural representations. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 341–350, 2023.

R.M. Gray and D.L. Neuhoff. Quantization. IEEE Transactions on Information Theory, 44(6):2325–2383, 1998.

Allan Grønlund, Kasper Green Larsen, Alexander Mathiasen, Jesper Sindahl Nielsen, Stefan Schneider, and Mingzhou Song. Fast exact k-means, k-medians and Bregman divergence clustering in 1d, 2018.

Siem Hadish, Velibor Bojković, Moayad Aloqaily, and Mohsen Guizani. Language models at the edge: A survey on techniques, challenges, and applications. In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), pages 262–271, 2024.

Song Han, Huizi Mao, and William J Dally.