text string | source string |
|---|---|
arXiv:2505.21876v1 [cs.CV] 28 May 2025. EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance. Zun Wang, Jaemin Cho, Jialu Li, Han Lin, Jaehong Yoon, Yue Zhang, Mohit Bansal. UNC Chapel Hill. {zunwang, jmincho, jialuli, hanlincs}@cs.unc.edu {jhyoon, yuezhan, mbansal}@cs.unc.edu https://zunwang1.github.i... | https://arxiv.org/abs/2505.21876v1 |
3D point cloud and rendering it along the camera trajectory. Training the camera control module typically requires anchor video and the corresponding full source video as input-output pairs, ideally with perfect geometric alignment. This assumes access to ground-truth 3D point clouds and camera trajectories, which are ... | https://arxiv.org/abs/2505.21876v1 |
synthesis of occluded or invisible regions entirely to the base diffusion model. This clear division of responsibility not only reduces learning difficulty but also improves overall generation quality. Combining these components, we demonstrate that anchor-video-based camera control can be learned in a highly efficient... | https://arxiv.org/abs/2505.21876v1 |
in lower accuracy. Despite these advances, rendered anchor videos are often misaligned due to point-cloud estimation errors and require accurate camera annotations, limiting training to datasets like RealEstate10K. In addition, these methods rely on large-scale data to correct misalignment and address limited diversity... | https://arxiv.org/abs/2505.21876v1 |
temporal dependencies across video frames. Specifically, we use the CogVideoX-5B-I2V variant, which supports both image and text conditions for flexible multimodal control during video generation. Guiding VDMs with Anchor Video as a Structured Prior for Camera Control. Recent methods [75, 74, 11, 77] have leveraged anc... | https://arxiv.org/abs/2505.21876v1 |
EPiC Model Architecture. (a) shows an overview of our EPiC framework. EPiC supports multiple inference scenarios. (b) and (c) illustrate our I2V inference scenarios using full and masked point clouds, respectively. (d) depicts V2V inference scenario employing dynamic point clouds. 4.1 Constructing Precise Anchor Videos... | https://arxiv.org/abs/2505.21876v1 |
backbone frozen during training. Model Architecture. Anchor-ControlNet is a lightweight DiT-based module designed to inject anchor video guidance into the base diffusion model. Given an anchor video A, we encode it using the 3D VAE from the backbone model to obtain latent features z_anchor. During the reverse diffusio... | https://arxiv.org/abs/2505.21876v1 |
and render the anchor video along the specified camera trajectory. However, this approach produces anchor videos where objects remain static, as rendering is performed from a stationary point cloud. For example, the character in Fig. 2 (b) retains the same position and pose throughout the video, limiting its dynamic re... | https://arxiv.org/abs/2505.21876v1 |
1] and ViewCrafter [75]. For consistency, we use similar anchor videos per test sample for both ViewCrafter and EPiC. For the V2V setting, we follow Gen3C [48] to qualitatively evaluate it using Sora videos [10] and provide quantitative results on Kubric4D [20] scenes in the Appendix. Implementation Details. EPiC is tra... | https://arxiv.org/abs/2505.21876v1 |
conditioned on a single image and cannot follow dense source video motions. AC3D and GCD are conditioned on camera embeddings, whereas ViewCrafter, like ours, is conditioned on anchor videos. I2V Camera Control. As shown in Fig. 4 (a), both ViewCrafter (3rd row) and our method (4th row) are capable of following anchor ... | https://arxiv.org/abs/2505.21876v1 |
lines). Effects of Artifact Injection for Constructing Training Anchor Videos. Fig. 5 (b) demonstrates the effectiveness of artifact injection, as described in Sec. 4.1. Due to point cloud estimation errors, flying pixels often appear when rendering from rapidly changing camera poses, resulting in incorrect guidance ev... | https://arxiv.org/abs/2505.21876v1 |
and J. Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966 , 2023. [4]J. Bai, M. Xia, X. Fu, X. Wang, L. Mu, J. Cao, Z. Liu, H. Hu, X. Bai, P. Wan, et al. Recammaster: Camera-controlled generative rendering from a single video. arXi... | https://arxiv.org/abs/2505.21876v1 |
Qi, M. Sun, T. Ma, S. Zhao, S. Zhou, and Q. He. I2vcontrol-camera: Precise video camera control with adjustable motion strength. arXiv preprint arXiv:2411.06525 , 2024. [18] R. Gao, A. Holynski, P. Henzler, A. Brussee, R. Martin-Brualla, P. Srinivasan, J. T. Barron, and B. Poole. Cat3d: Create anything in 3d with multi... | https://arxiv.org/abs/2505.21876v1 |
long durations and structured captions. Advances in Neural Information Processing Systems, 37:48955–48970, 2024. [33] L. Khachatryan, A. Movsisyan, V. Tadevosyan, R. Henschel, Z. Wang, S. Navasardyan, and H. Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEE... | https://arxiv.org/abs/2505.21876v1 |
Shen, J. Huang, H. Ling, Y. Lu, M. Nimier-David, T. Müller, A. Keller, S. Fidler, and J. Gao. Gen3c: 3d-informed world-consistent video generation with precise camera control. arXiv preprint arXiv:2503.03751, 2025. [49] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman. Dreambooth: Fine tunin... | https://arxiv.org/abs/2505.21876v1 |
ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024. [63] D. Watson, S. Saxena, L. Li, A. Tagliasacchi, and D. J. Fleet. Controlling space and time with diffusion models. In The Thirteenth International Conference on Learning Representations, 2024. [64] R. Wu, R. Gao, B. Poole, A. Trevithick, C. Zheng, J. T. Barron... | https://arxiv.org/abs/2505.21876v1 |
Zhao, L. Ran, Y. Gu, D. Gao, and M. Z. Shou. Show-1: Marrying pixel and latent diffusion models for text-to-video generation. International Journal of Computer Vision, pages 1–15, 2024. [79] Z. Zhang, D. Chen, and J. Liao. I2v3d: Controllable image-to-video generation with 3d guidance. arXiv preprint arXiv:2503.09733... | https://arxiv.org/abs/2505.21876v1 |
, and Camera Matrix Consistency (CamMC), following MotionCtrl [61] and CameraCtrl [22]. • Rotation Error (RotErr) measures the angular deviation (in radians) between the predicted and ground-truth camera rotations: $\mathrm{RotErr} = \sum_{i=1}^{n} \arccos\left(\frac{\operatorname{tr}(\tilde{R}_i R_i^\top) - 1}{2}\right)$, where $\tilde{R}_i$ and $R_i$ are the predicted and ground-truth rotation matrices at ... | https://arxiv.org/abs/2505.21876v1 |
able to precisely follow these details by following the anchor video, the model trained on RealEstate10K can only capture a coarse moving direction, failing to reproduce the fine motion in the crab’s legs. This limitation is likely due to the lack of diverse and dynamic videos in the RealEstate10K dataset, which mainly... | https://arxiv.org/abs/2505.21876v1 |
dumbbells neatly arranged on a rack, a yoga mat laid out near the window, and a treadmill in one corner. Text Prompt: (Camera move forward)…The area is then seen with a bed tucked against one wall, a closet near the curtain, and a dresser with a mirror, giving the space a cozy, bedroom-like feel. Text Prompt: (Camera m... | https://arxiv.org/abs/2505.21876v1 |
the video (highlighted in green text and boxes). Object 3D Trajectory Control via Anchor Video Manipulation. We also demonstrate the flexibility of our method in enabling 3D trajectory control for objects. The input is usually a 3D trajectory (e.g., indicating moving backwards by 2 meters) applied to a specific obj... | https://arxiv.org/abs/2505.21876v1 |
Such high-quality and diverse anchor videos further support efficient learning by our model. Examples of I2V Camera Control. Fig. 13 shows additional qualitative examples of I2V camera control. Given diverse image inputs and a variety of camera trajectories, our method consistently generates high-quality videos that a... | https://arxiv.org/abs/2505.21876v1 |
arXiv:2505.21879v1 [cs.SC] 28 May 2025. Symbolic Foundation Regressor on Complex Networks. Weiting Liu1,2, Jiaxu Cui1,2*, Jiao Hu1,2, En Wang1,2*, Bo Yang1,2*. 1College of Computer Science and Technology, Jilin University, Changchun, 130012, China. 2Key Laboratory of Symbolic Computation and Knowledge Engineering of Minist... | https://arxiv.org/abs/2505.21879v1 |
holes in spiral galaxies [11], and aiding in jet background subtraction during heavy-ion collisions [12]. However, when exploring a vast expression space, search-based approaches often require extensive time and yield complex, hard-to-understand outcome expressions, limiting their practical applications. Learning-based... | https://arxiv.org/abs/2505.21879v1 |
context of network dynamics scenarios, our model can reconstruct the network dynamics equation in only 1.2 minutes, which is more than three times faster than the search-based and learning-based approaches, and it shortens the time for discovering more accurate new laws of real-world global infectious disease outbreaks to... | https://arxiv.org/abs/2505.21879v1 |
analyzing, scientists should be able to provide correct responses (R) using only representations rather than raw data, i.e., decompressing (Q × F → R). In this work, we mainly focus on using machine learning to attempt to mimic the human learning process, as shown in Fig. 1(a), especially in the compressing part, wh... | https://arxiv.org/abs/2505.21879v1 |
longer than the age of the universe to reach the desired outcome [20]. To alleviate the curse of dimensionality, we propose using a physical prior, suggesting that network states can be influenced both by their own states and by the states of their neighbors [17, 39, 42, 43]. Specifically, we can decompose the mathemat... | https://arxiv.org/abs/2505.21879v1 |
and the associated validity rules can be found in the Method section. Constructing a set-to-sequence model with dual branches. Since the input O of the model is a set, we encounter a translation problem from a set to a sequence. We thus propose a set-to-sequence model with dual branches for symbolic regression on comple... | https://arxiv.org/abs/2505.21879v1 |
number of test data points on the results. d. A physical equation from the AI-Feynman dataset that describes the relationship between the modulus of rigidity G, modulus of elasticity E, and Poisson’s ratio µ in material science for regression analysis. Our SFR can reconstruct the equation closest to ground truth with the... | https://arxiv.org/abs/2505.21879v1 |
other influencing factors, such as the number of operators and dimensions, as well as additional regression analyses and visualizations, please refer to Appendix B. Validation on symbolic regression on complex networks. To assess the effectiveness of our SFR in performing symbolic regression on complex networks, we have ... | https://arxiv.org/abs/2505.21879v1 |
complexity. d. Data representations (denoted as h) generated by equations with various characteristics are visualized through projection using t-SNE. e. A specific example of symbolic regression from USE, demonstrating the ability of our model to regress high-precision equations on complex networks through local observat... | https://arxiv.org/abs/2505.21879v1 |
by assigning different recovery rates (δ) in the epidemic equation, where x_{i,0} := I_i means the probability of an individual i being susceptible. b. Comparison of the state prediction curves generated by the governing equations inferred from observations at sampling nodes within each community. Our SFR has successfully re... | https://arxiv.org/abs/2505.21879v1 |
displaying countries or regions with populations over 50 million. Node size indicates population, while edge width reflects route flow. b. Comparison of the time spent on discovering transmission laws. c. Comparison of transmission laws discovered by TPSINDy and ours. d-f. Comparison of the number of cases over time in ... | https://arxiv.org/abs/2505.21879v1 |
the accuracy of the recovered equations. By comparing our SFR with state-of-the-art techniques in different scenarios, including non-network symbolic regres- sion, symbolic regression on networks, and varied types of network dynamics, the results demonstrated that our model reconstructs the most accurate equations with... | https://arxiv.org/abs/2505.21879v1 |
whose subtree has a depth smaller than d_depth, insert a new parent node with a unary operator sampled from the occurrence probability distribution of the unary operators, i.e., P_u, and repeat this process u times. 6. Convert the produced expression tree into a prefix expression for generating f(self). 7. Repeat steps 1 to... | https://arxiv.org/abs/2505.21879v1 |
still a set, we apply Set Transformer [45] here to implement emb_all for better capturing the contributions of each data point in the observed set while maintaining the characteristics of both permutation invariance and linear complexity in attention computation overhead. Self and interaction branch models (dec_self and d... | https://arxiv.org/abs/2505.21879v1 |
include a cosine function term, such as cos(x)” , this term can be incorporated into the decoding search process as a token. Methods like constant and formal simplification are used to create equations that offer more scientific significance while ensuring accuracy. More detailed pre-processing and post-processing meth... | https://arxiv.org/abs/2505.21879v1 |
C. Interpretable machine learning methods applied to jet background subtraction in heavy-ion collisions. Physical Review C 108, L021901 (2023). [13] Brunton, S. Discovering governing equations from data by sparse identification of nonlinear dynamics , Vol. 2017, X49–004 (2017). [14] Rudy, S. H., Brunton, S. L., Proctor... | https://arxiv.org/abs/2505.21879v1 |
& Babuška, R. Symformer: End-to-end symbolic regression using transformer-based architecture. IEEE Access (2024). [37] Li, W. et al. Transformer-based model for symbolic regression via joint supervised learning (2022). [38] d’Ascoli, S., Becker, S., Mathis, A., Schwaller, P. & Kilbertus, N. Odeformer: Symbolic regres... | https://arxiv.org/abs/2505.21879v1 |
epidemics. [EB/OL] (2020). https://www.kaggle.com/code/ lnunes/a-brief-comparative-study-of-epidemics Accessed April 1, 2023. [62] Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track covid-19 in real time. The Lancet infectious diseases 20, 533–534 (2020). [63] OpenFlights. OpenFlights: Airport, ... | https://arxiv.org/abs/2505.21879v1 |
26. C More details on symbolic regression on complex networks, 33. C.1 USE, 33. C.2 Topological structures of complex networks, 33. C.3 Det... | https://arxiv.org/abs/2505.21879v1 |
with ×, pow, and the constant 1. Compared to other unary operators, the number of pow is relatively large because √x, x², and 1/x all need to be represented through pow. In terms of dimensions, the number of equations in each dimension is roughly the same. ... | https://arxiv.org/abs/2505.21879v1 |
Standard for Floating-Point Arithmetic [65], converting data {x_i, {x_j}_{j∈N}, y_i} into binary floating-point encodings {x_i^{754}, {x_j^{754}}_{j∈N}, y_i^{754}} to avoid gradient problems during the calculation process. ... | https://arxiv.org/abs/2505.21879v1 |
3: for i to N do: 4: clusters ⇐ GaussianMixture(X_i^{sample} ∈ ℝ^{T_sample×D}); 5: for j to N_clusters do: 6: randomly select T_clusters sampling points X_i^{sample} ∈ ℝ^{T_clusters×D}; 7: end for; 8: end for; 9: integrate clustered data to obtain X^{cluster} ∈ ℝ^{N×(T_clusters×N_clusters)×D}; 10: calculate mean and variance, µ, σ ⇐ X^{cluster}; 11: perform normal di... | https://arxiv.org/abs/2505.21879v1 |
test points varies, and for other experiments, the number of input data points (IN-Domain) and predictions of unknown data (OUT-Domain) are 200 and 1000, sampled from the distributions N(0, 1) and N(0, 10). The parameters for PySR, the library for SINDy, and the models for E2E and NeSymReS follow the configurations specified i... | https://arxiv.org/abs/2505.21879v1 |
equations in AI-Feynman with 2 dimensions (equation surface); the gray surface on the left is the true equation. Table B6: Results of symbolic regression on equations in AI-Feynman with 2 dimensions (equation form). Method | Friction force | Elastic potential energy. True: F = µN, U = (1/2)kx². Ours: F = µN, U = (1/2)... | https://arxiv.org/abs/2505.21879v1 |
R². True: y = 0.53x_{i,0} + x_{i,1} + 2.32x_{i,2} + 10.34 tan(x_{i,2} + 1.28), /. Ours: y = 0.558x_{i,0} + x_{i,1} + x_{i,2} + 10.34 tan(x_{i,2} + 1.28), 0.996. E2E: y = 0.558x_{i,0} + 0.87x_{i,1} + 11 tan(1.105x_{i,2} + 1.288) + 0.117, 0.994. NeSymRes: y = 0.003 tan(x_{i,0} + x_{i,1} − 0.262)/(x_{i,0} − 0.116), >−1. SINDy: −, >−1. PySR: y = 1/(x_{i,2} − 0.297), >−1. Table B12: Results of symbolic regression on equa... | https://arxiv.org/abs/2505.21879v1 |
is 100 (such as 200 equations with dimension 1, 200 equations with length between 5-10, etc.) each time. Each equation will be paired with 5 randomly generated topologies (grid, power law, small world, community and random), and the topology generation rules are as described in Section C.2. For the experiment on test p... | https://arxiv.org/abs/2505.21879v1 |
logarithmic and polynomial types. As shown in Fig. C9(a), even with different constants, equations with the same structure tend to cluster together, such as y = log(x_{i,0}²) + log(3.76x_{i,0}²) + Σ(log(x_{i,0}²) + log(x_{j,0}²)) and y = log(x_{i,0}²) + log(0.97x_{i,0}²) + Σ(log(x_{i,0}²) + log(x_{j,0}²)), and those with similar structural features... | https://arxiv.org/abs/2505.21879v1 |
7.10x_{i,0}x_{j,0}. Ours: (x_{i,1}+1)/x_{i,1} + 4.48x_{i,0} + 5.68x_{i,0}² + Σ A_{ij}(4.28x_{i,1} + x_{i,1}/0.243 + 0.08x_{i,1} + 7.10x_{i,0}x_{j,0}). Fig. C12(b) True: y = x_{i,0} + 0.075x_{i,1} + 0.982e^{x_{i,1}} + Σ A_{ij}(x_{j,0} + 0.575x_{i,0} + 1.458/x_{j,1}). Ours: y = x_{i,0} + 0.075x_{i,1} + 0.982e^{x_{i,1}} + Σ A_{ij}(x_{j,0} + 0.575x_{i,0} + 1.458/x_{j,1}). Table C16: Results of symbolic regression on equations in USE with 3 d... | https://arxiv.org/abs/2505.21879v1 |
(see Fig. D13). A billion-level corpus is key: compared to TPSINDy and GNN+GP, a larger learning space can effectively regress and find suitable dynamic equations. It is worth noting that, in order to compare against the optimal performance of the SOTA method, its input data far exceeds 200, while we still only need 200. To com... | https://arxiv.org/abs/2505.21879v1 |
3.29 + Σ A_{ij} 0.04x_{j,0}; dx_{i,0}/dt = −x_{i,0} + Σ A_{ij}(x_{j,0} − x_{j,0}²). Small World: dx_{i,0}/dt = (−43.027x_{i,0}⁴ + 54.464x_{i,0}³ − 8.113x_{i,0}² + 15.701x_{i,0} + 3.53)/(24.59x_{i,0}² − 29.75x_{i,0} − 4.659) − Σ A_{ij}(0.027x_{i,0} − (0.704x_{i,0} − 1)²(1.164x_{j,0} + 0.722)); dx_{i,0}/dt = −3.65x_{i,0}² + 1.56 + Σ A_{ij} 0; dx_{i,0}/dt = −x_{i,0} + Σ A_{ij}(x_{j,0} − x_{j,0}²). Table D20: Results of symbolic regression on ... | https://arxiv.org/abs/2505.21879v1 |
June 14th will be used. E.2 Details on experimental setting. For experiments on heterogeneous epidemic transmission, we set the number of topology nodes to 360, and the numbers of nodes in the four communities are 120, 120, 90, and 30, respectively. Each community independently samples 200 data points (in IN-Domain), which wi... | https://arxiv.org/abs/2505.21879v1 |
We directly apply the equation regressed only from H1N1 data to SARS and COVID-19 and obtain heterogeneous equations through data from each node. The cumulative infected counts generated based on our equation are closer to the ground truth (see Fig. E18–E19), indicating that our method has stronger generalization ability. N... | https://arxiv.org/abs/2505.21879v1 |
d = 0.075, e = −2.640, f = −38.008, g = −259.349; a = 0.012, b = 123.971. Burkina Faso: a = 1.039, b = 6.626, c = 1.831, d = −0.195, e = 1.524, f = −32.223, g = 64.446; a = 0.011, b = 922.518. Ghana: a = 0.995, b = −10.572, c = −0.805, d = 0.072, e = −0.684, f = 32.287, g = 33.454; a = 0.076, b = 262.872. Cote d’Ivoire: a = 0.647, b = −1.709, c = 1.563, d = 0.816, e = −2.957... | https://arxiv.org/abs/2505.21879v1 |
Incorporating LLMs for Large-Scale Urban Complex Mobility Simulation Yu-Lun Song 1 , Chung-En Tsern 3 , Che-Cheng Wu 2 , Yu-Ming Chang 2 , Syuan-Bo Huang 2 , Wei-Chu Chen 2 , Michael Chia-Liang Lin 1 , Yu-Ta Lin 2 1 Media Lab @ Massachusetts Institute of Technology 2 City Science Lab @ National Taipei University of Tec... | https://arxiv.org/abs/2505.21880v1 |
Figure 2, the LLM processes statistical data inputs, such as age and education level, to generate proportional distributions for each age group. The Iterative Proportional Fitting algorithm ensures that the aggregated educational distribution aligns with real-world population-level statistics. With the LLM’s inherent reco... | https://arxiv.org/abs/2505.21880v1 |
are deployed across Taipei City, Taiwan. Each point on the map represents an agent, with the brightness of the color indicating the density of agents in that area. This comprehensive visualization enables observers or planners to understand agent behavior patterns and environmental impacts during the simulation perio... | https://arxiv.org/abs/2505.21880v1 |
A Scalable Platform to Simulate Urban Activities with Massive LLM Agents”. In: arXiv preprint arXiv:2410.21286. Biographies Yu-Lun Song is a graduate student at the MIT Media Lab and a research assistant in the City Science Lab, specializing in AI and urban mobility simulation. Chung-En Tsern is a graduate student at U... | https://arxiv.org/abs/2505.21880v1 |
arXiv:2505.21887v1 [cs.AI] 28 May 2025. SVRPBench: A Realistic Benchmark for Stochastic Vehicle Routing Problem. Ahmed Heakl1, Yahia Salaheldin Shaaban1, Martin Takáč1, Salem Lahlou1, Zangir Iklassov1. 1MBZUAI, Abu Dhabi, UAE. GitHub: https://github.com/yehias21/vrp-benchmarks. Hugging Face: https://huggingface.co/datasets/MBZUAI/svrp-bench ... | https://arxiv.org/abs/2505.21887v1 |
[Feature-comparison table rows: Medium instances (100–300); Large instances (>300); Varying stochasticity levels — marked per benchmark with ✓ (supported), △ (partial), ✗ (unsupported).] ... account for peak-hour congestion, random incidents like accidents, and diverse delivery preferences across customer types [14, 3, 24]. Ignoring these factors leads to overly optimistic performance asses... | https://arxiv.org/abs/2505.21887v1 |
µ_night = 21 (σ_acc = 2) due to elevated nighttime risks from fatigue and impaired driving [28]. The delay duration is drawn from U(0.5, 2.0) hours, consistent with industry reports on incident clearance times [28]. 2.2 Customer Time Window Sampling. Residential and commercial customers exhibit different temporal availabilit... | https://arxiv.org/abs/2505.21887v1 |
be placed either randomly or aligned with city centers (one per city). A homogeneous fleet of vehicles is used, and vehicle count is configured to balance demand and capacity. All customer time windows are sampled to ensure feasibility under the assigned travel time model [1]. Validation. Each generated instance underg... | https://arxiv.org/abs/2505.21887v1 |
(CVRP), TABU (TWCVRP). [Figure 3 bar chart: feasibility rate, y-axis 0.0–1.0.] Figure 3: Solver Comparison: Overall Performance Metrics. Constraint Violation Rate (CVR) quantifies the proportion of customers whose service violates time windows or exceeds vehicle capacity, capturing solution feasibility: CVR = (#violations / #customers) × 100%. (16... | https://arxiv.org/abs/2505.21887v1 |
overall cost (40,259), followed closely by ACO (40,566; +0.8%) and POMO (40,650; +1.0%), with OR-Tools and NN+2opt maintaining the highest feasibility rates (98.4%) while NN+2opt delivered the fastest runtime (0.697s). Learning-based approaches demonstrated a feasibility-speed tradeoff, with POMO offering better soluti... | https://arxiv.org/abs/2505.21887v1 |
important insights: • OR-Tools is the most reliable choice for large-scale offline optimization, balancing quality and feasibility despite higher runtimes. Table 5: Performance Analysis by Depot Configuration (columns: Cost↓, CVR↓, Feas↑, RT↓ for Single Depot and Multi Depot). NN+2opt: 34978.5, 0.8, 0.992, 686.3, 10625.... | https://arxiv.org/abs/2505.21887v1 |
the importance of flexible depot placement in practical settings. By supporting large-scale, reproducible evaluations via Hugging Face and GitHub, SVRPBench offers a community platform to benchmark solvers across realism axes. We urge the research community to develop adaptive, noise-aware routing algorithms that bridg... | https://arxiv.org/abs/2505.21887v1 |
[21] Mohammadreza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takáč. Reinforcement learning for solving the vehicle routing problem. In Proceedings of Advances in Neural Information Processing Systems, pages 9861–9871, 2018. [22] Jorge Oyola, Halvard Arntzen, and David L. Woodruff. The stochastic vehicl... | https://arxiv.org/abs/2505.21887v1 |
locations based on pheromone intensity and heuristic proximity. The pheromone matrix is updated as: $\tau_{ij} \leftarrow (1-\rho)\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{(k)}, \quad \Delta\tau_{ij}^{(k)} = \begin{cases} Q/L^{(k)} & \text{if } (i,j) \in \mathrm{tour}^{(k)} \\ 0 & \text{otherwise} \end{cases}$ (19), where ρ = 0.5, m = 50 ants, α = 1, and β = 2. Tabu Search. Candidate solutions are evaluated using a penalized cost function: f(S) = Cost(S) + λ·Pe... | https://arxiv.org/abs/2505.21887v1 |
0.00 single depot 500 118279.0 2.2 0.978 929.7 0.00 depots equal city 1000 244956.8 6.1 0.939 3865.2 0.00 single depot 1000 187829.7 2.7 0.973 3911.5 0.00 Table 7: Tabu Search - Detailed Performance Breakdown. Configuration Size Cost CVR Feas Runtime TW Violations single depot single vehicle sumDemands 10 2297.2 0.0 1.... | https://arxiv.org/abs/2505.21887v1 |
equal city 200 89937.2 10.1 0.000 2556.8 0.00 single depot 200 55401.8 1.0 0.000 2327.0 0.00 depots equal city 500 175711.1 7.7 0.000 15299.3 0.00 single depot 500 118280.2 2.2 0.000 14781.5 0.00 depots equal city 1000 244999.0 6.1 0.000 70932.6 0.00 single depot 1000 187332.2 2.8 0.000 54846.8 0.00 Table 9: OR-Tools -... | https://arxiv.org/abs/2505.21887v1 |
– Detailed Performance on TWVRP (runtimes in ms). Solver Configuration Size Cost CVR Feas Runtime (ms) TW Violations. Attention single depot 10 3940.38 0.00 1.000 0.916 0.00; POMO single depot 10 3854.6 0.00 1.000 0.707 0.00; Attention single depot 20 6504.73 0.00 1.000 1.780 0.00; POMO single depot 20 6744.7 0.00 1.00... | https://arxiv.org/abs/2505.21887v1 |
the policy by maximizing the expected return J(θ) = E_{τ∼π_θ}[R(τ)] using two constructive, autoregressive policy-gradient methods. A constructive policy builds a complete solution by sequentially selecting one customer at a time until the tour is finished, while an autoregressive policy conditions each action on the histo... | https://arxiv.org/abs/2505.21887v1 |
SDPO: Importance-Sampled Direct Preference Optimization for Stable Diffusion Training. Xiaomeng Yang1, Zhiyu Tan1,2, Junyan Wang3, Zhijian Zhou2, Hao Li1,2*. 1Shanghai Academy of Artificial Intelligence for Science, 2Fudan University, 3Australian Institute for Machine Learning, The University of Adelaide. yangxlarge@gmail.com Code:... | https://arxiv.org/abs/2505.21893v1 |
second challenge is the off-policy bias inherent in preference optimization. This occurs when gradients are estimated from a fixed dataset that no longer aligns with the model’s current distribution, leading to a mismatch between the optimization objective and the data collection policy. As a result, the model can suff... | https://arxiv.org/abs/2505.21893v1 |
[10], and classifier-free methods [13]. Latent Diffusion Models [15] enhanced efficiency, and further refinements improved FID [16]. In video, diffusion models have been extended to capture temporal dynamics [14], achieving strong results in generation, prediction, and interpolation [17, 24, 53]. Recent text-to-vid... | https://arxiv.org/abs/2505.21893v1 |
where p(x)/q(x) is the importance weight. In diffusion models, importance sampling can be applied by comparing the learned reverse process with either the forward posterior or a previous model iteration: $w(t) = \frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t, x_0)}$ or $w(t) = \frac{p_\theta(x_{t-1}|x_t)}{p_{\mathrm{old}}(x_{t-1}|x_t)}$. (2) These weights enable reweighting transitions ba... | https://arxiv.org/abs/2505.21893v1 |
differences between positive and negative samples. We examine three stages: early (t ∈ [0, 100]), middle (t ∈ [500, 600]), and late (t ∈ [900, 1000]). In early steps, the density fluctuates and lacks clear separation, often decreasing for both sample types, indicating instability. In late steps, both densities increase but ... | https://arxiv.org/abs/2505.21893v1 |
= clip(w(t), 1−ϵ, 1+ϵ). (6) The clipped importance weight $\tilde{w}(t)$ serves two purposes. First, it rescales gradient updates to reflect the reliability of each sample. Second, it acts as a soft mask to suppress gradient flow from noisy regions where the forward and reverse paths diverge significantly. Combining this masked ... | https://arxiv.org/abs/2505.21893v1 |
the influence of unreliable samples and suppressing overly aggressive updates toward noisy rewards. When the model’s performance degrades and the probability of the preferred sample decreases, the corresponding w_θ becomes small, leading to a large 1/w_θ, which slows further degradation. In contrast, when the model ass... | https://arxiv.org/abs/2505.21893v1 |
0–1000) for Diffusion-DPO and SDPO. drop, with the Total Score falling to 67.28, indicating a collapse in generation quality. In contrast, DPO-C&M and SDPO remain stable and continue to improve, demonstrating superior robustness. To further validate the generality of our approach, we extend the comparison to two larger... | https://arxiv.org/abs/2505.21893v1 |
stable VBench scores (right panel of Figure 5), although it requires longer training time and eventually suffers from model collapse. In contrast, SDPO shows negligible difference between mid-timestep and full-range schedules, maintaining stable performance in both settings, with the full-range variant providing a marg... | https://arxiv.org/abs/2505.21893v1 |
[7] H. Chen, Y. Zhang, X. Cun, M. Xia, X. Wang, C. Weng, and Y. Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models, Jan 2024. [8] X. Chen, Y. Wang, L. Zhang, S. Zhuang, X. Ma, J. Yu, Y. Wang, D. Lin, Y. Qiao, and Z. Liu. Seine: Short-to-long video diffusion model for generativ... | https://arxiv.org/abs/2505.21893v1 |
2022. [28] J. Hong, N. Lee, and J. Thorne. Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2024. [29] Z. Huang, Y. He, J. Yu, F. Zhang, C. Si, Y. Jiang, Y. Zhang, T. Wu, Q. Jin, N. Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative m... | https://arxiv.org/abs/2505.21893v1 |
[47] M. Prabhudesai, R. Mendonca, Z. Qin, K. Fragkiadaki, and D. Pathak. Video diffusion alignment via reward gradients. arXiv preprint arXiv:2407.08737, 2024. [48] B. Qi, P. Li, F. Li, J. Gao, K. Zhang, and B. Zhou. Online dpo: Online direct preference optimization with fast-slow chasing. arXiv preprint arXiv:24... | https://arxiv.org/abs/2505.21893v1 |
from natural descriptions. arXiv preprint arXiv:2104.14806, 2021. [63] Y. Xie, A. Goyal, W. Zheng, M.-Y. Kan, T. P. Lillicrap, K. Kawaguchi, and M. Shieh. Monte carlo tree search boosts reasoning via iterative preference learning. arXiv preprint arXiv:2405.00451, 2024. [64] H. Xu, A. Sharaf, Y. Chen, W. Tan, L... | https://arxiv.org/abs/2505.21893v1 |
notation, we define the importance weight w_θ(x_0|c) as: $w_\theta(x_0|c) = \frac{p_\theta(x_0|c)}{q(x_0|c)}$. (15) Using this notation, the importance-sampled expectation becomes: $\mathbb{E}_{x_0 \sim p_\theta(x_0|c)}[f(x_0)] = \mathbb{E}_{x_0 \sim q(x_0|c)}[w_\theta(x_0|c) f(x_0)]$. (16) We can rewrite the original objective as: $\max_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c,\, x_0 \sim p_\theta(x_0|c)}[r(c, x_0)] - \beta D_{\mathrm{KL}}[p_\theta(x_0|c) \| p_{\mathrm{ref}}(x_0|c)] = \max_{p_\theta} \mathbb{E}_{c \sim \mathcal{D}_c}$... | https://arxiv.org/abs/2505.21893v1 |
t)ii (30) A.4 Reformulating Flows as SDEs for Preference Optimization While SDPO naturally fits stochastic diffusion models, applying it to deterministic flow-based models poses challenges. Normalizing flows are governed by deterministic ODEs, which lack the stochasticity needed for effective preference-based learning... | https://arxiv.org/abs/2505.21893v1 |
23.6 7.7 33.1 31.8 8.2 ORPO [28] 24.5 24.9 7.7 28.5 27.4 8.0 R-DPO [46] 27.3 24.5 7.5 41.1 37.8 8.0 SimPO [42] 32.1 34.8 7.6 44.7 40.5 8.0 SDPO 31.8 35.1 7.9 43.5 41.6 8.2 B.2 Image-Based Evaluation of Diffusion Alignment Methods To further compare SDPO with existing diffusion-based alignment approaches, and to demonst... | https://arxiv.org/abs/2505.21893v1 |
transitions, which may not fully capture dynamic changes in model behavior during training, limiting its responsiveness in highly non-stationary or long-horizon scenarios. Although SDPO effectively corrects off-policy bias, it assumes access to high-quality offline preference data and may degrade in settings with n...
arXiv:2505.21895v1 [cs.LG] 28 May 2025

Compressing Sine-Activated Low-Rank Adapters through Post-Training Quantization

Cameron Gordon*, Australian Institute for Machine Learning, University of Adelaide
Yiping Ji*, Australian Institute for Machine Learning, University of Adelaide; DATA61, CSIRO
Hemanth Saratchandran*, Australia...
on resource-constrained hardware, offering improvements in memory efficiency, computational throughput, and energy consumption (Gholami et al., 2021; Dettmers et al., 2024; Xu et al., 2024; Kaushal et al., 2025). To study this interaction, we develop a theoretical framework that characterizes how the rank of an ada...
Kopiczko et al. (2024), RandLoRA Albert et al. (2025), and NOLA Koohpayegani et al. (2024) use combinations of random projections to reduce the number of parameters contained within the adapters. QA-LoRA Xu et al. (2024) produces adapters that can be merged with the quantized base model, enabling low-precision infere...
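For reference, the standard LoRA update that these variants build on can be sketched as follows. The dimensions, scaling factor, and initialization are illustrative; the sine-activated variant of Ji et al. (2025) modifies the low-rank path itself, which this plain-LoRA sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 32, 4, 8.0   # illustrative sizes, rank, and scale

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # zero-init: adapter starts as a no-op

def forward(x):
    # LoRA forward pass: base path plus the scaled low-rank update (alpha/r) * B A x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted model reproduces the base model exactly.
print(np.allclose(forward(x), W @ x))
```

Only A and B are trained, so the adapter adds 2·r·(d parameters in each direction) on top of the frozen base, which is what makes the rank r the central compression knob.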
rank. This is precisely the property exploited by sine-activated adapters in Ji et al. (2025).

Quantization. A quantization function Q(·) maps values from a less restricted set A to a more restricted set B, Q: A → B. Practically, this may involve explicit conversion of data-types (e.g. 16-bit precision to 4-bit precision), or ...
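As a concrete illustration of such a Q(·), here is a minimal sketch of a uniform symmetric quantizer in NumPy. The bit-width, rounding scheme, and per-tensor scale are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def quantize(x, bits):
    # Uniform symmetric quantizer: map floats onto at most 2^bits integer
    # levels spanning [-max|x|, max|x|], then de-quantize back to floats.
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
x4 = quantize(x, 4)
print(np.unique(x4).size)     # at most 16 distinct values for 4 bits
print(np.abs(x - x4).max())   # per-element error is at most scale / 2
```

Mapping to lower precision restricts the codomain to a small finite set, which is the source of both the memory savings and the rounding error analyzed in the paper.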
the stable rank regardless of the level of precision. Theorem 3.1 presents the key insight of this work: the stable rank of a quantized adapter remains low if the original (unquantized) adapter has low stable rank, as the quantized stable rank is controlled by the unquantized one. This observation motivates applying a ... | https://arxiv.org/abs/2505.21895v1 |
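This insight admits a quick numeric check: the stable rank ‖M‖²_F / ‖M‖²₂ of a low-rank product changes little after quantization. The 256×256 size, rank 4, and 4-bit uniform quantizer below are illustrative assumptions for the sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_rank(M):
    # srank(M) = ||M||_F^2 / ||M||_2^2, i.e. sum of squared singular
    # values over the largest squared singular value.
    s = np.linalg.svd(M, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

# A rank-4 adapter product inside a 256x256 weight update.
B = rng.standard_normal((256, 4))
A = rng.standard_normal((4, 256))
W = B @ A

def quantize(x, bits):
    # Uniform symmetric quantizer (per-tensor scale), as a stand-in for Q(.).
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

W_q = quantize(W, 4)
print(stable_rank(W), stable_rank(W_q))  # quantized stable rank stays close
```

The unquantized product has stable rank at most 4 by construction, and the 4-bit version lands nearby, consistent with the claim that the quantized stable rank is controlled by the unquantized one.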
...               ...   ...   ...   76.4  77.9
Memory (MB)       0.6   1.1   2.2   4.3   8.6
LoRA (3-bit)      70.0  73.1  75.5  76.5  78.4
SineLoRA (3-bit)  70.5  74.4  75.9  77.7  78.6
Memory (MB)       0.8   1.5   3.0   6.0   11.9
LoRA (5-bit)      69.4  73.1  75.6  76.7  78.6
SineLoRA (5-bit)  69.8  74.4  76.1  78.1  78.8
Memory (MB)       1.2   2.3   4.5   9.1   18.1
LoRA (Full)       73.7  74.8  76.5  78.0  79.0
SineLoRA (Full)...
...9   307
SineLoRA   5   66.9  74.1  77.5  78.6  78.7  78.9  78.9  307
LoRA      10   71.6  77.2  78.3  78.8  78.7  78.8  78.9  614
SineLoRA  10   68.7  76.3  78.8  79.4  79.6  79.8  79.8  614
LoRA      16   72.9  78.1  79.2  79.4  79.5  79.4  79.5  983
SineLoRA  16   68.3  77.4  79.5  80.0  80.3  80.2  80.3  983

UCF101 Soomro et al. (2012). We compare the performan...
quantization.

Table 4: Comparison of LoRA and SineLoRA for Text-to-Image Generation. Best scores for each bit-width group and metric are highlighted in bold.

Bits  Model     CLIP-I↑  CLIP-T↑  DINO↑
1     LoRA      0.729    0.219    0.515
      SineLoRA  0.746    0.219    0.554
2     LoRA      0.768    0.218    0.599
      SineLoRA  0.780    0.219    0.616
3     LoRA      0.780    0.218    0.621
      S...
(2024). Combining our approach with methods such as QA-LoRA, which enables INT-4 inference, may lead to additional efficiency improvements Xu et al. (2024).

7 Social and Ethical Considerations

There are well-documented potential harms enabled by fine-tuning for both language and vision models Hsu et al. (2024); Zong et al...
Mikhail Yurochkin, and Justin Solomon. Compress then serve: Serving thousands of lora adapters with little overhead, 2025. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In 2021 IEEE/CVF Internationa... | https://arxiv.org/abs/2505.21895v1 |