| text string | source string |
|---|---|
and Antonia M Villarruel. Identifying credible sources of health information in social media: principles and attributes. NAM Perspectives, 2021:10.31478, 2021. [28] Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimo... | https://arxiv.org/abs/2505.21724v1
learn to listen? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10083–10093, 2023. [45] Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R Costa-Jussa, Maha Elbayad, Sravya Popuri, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, et al. Spirit-lm: Interleaved spoken... | https://arxiv.org/abs/2505.21724v1
[58] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024. [59] Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo. Emo: Emote portrait alive: generating expressive portrait videos with audio2video diffusion model under weak conditions. In European Conference on Com... | https://arxiv.org/abs/2505.21724v1
Ping Luo, and Xiaogang Wang. Talking face generation by adversarially disentangled audio-visual representation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9299–9306, 2019. [75] Mohan Zhou, Yalong Bai, Wei Zhang, Ting Yao, Tiejun Zhao, and Tao Mei. Responsive listening head generation: a be... | https://arxiv.org/abs/2505.21724v1
the predicted visual token, while the TempoVoice module converts textual embeddings into audio waveforms. Vision Projection Layer. The Vision Projection Layer, denoted as M_vis-proj(·), encodes the previously predicted visual frames of the listener F̂^l_{τ:t−1} together with the speaker’s visual frames F^s_{τ:t−1}, and proje... | https://arxiv.org/abs/2505.21724v1
frame timestamp: If neither [PAUSE] (×16) Why [LASTING] (×15) [LASTIN... | https://arxiv.org/abs/2505.21724v1
embeddings. • Text. Timestamped speaker- and listener-side tokens annotated with Chrono-Markup. Causal omni-attention. All tokens enter a shared attention block that enforces strict chronology both within and across modalities: • Visual tokens attend to earlier visual tokens, to text tokens whose timestamps precede the cu... | https://arxiv.org/abs/2505.21724v1
Subsequently, we extract facial behavior features using MediaPipe [36], yielding per-frame ARKit blendshape coefficients and 3D head pose transformation matrices for both speaker and listener tracks. Step 3: Data Refinement. To ensure privacy and label accuracy, we conduct multi-level cleaning. First, in the De-identi... | https://arxiv.org/abs/2505.21724v1
speech representations; the resulting model achieves state-of-the-art correlation with human ratings of naturalness and intelligibility. •LSE-D (Lip–Speech Error Distance) [49,13]: Measures the temporal alignment and synchronization between generated audio and corresponding lip movements, reflecting audio-visual cohere... | https://arxiv.org/abs/2505.21724v1 |
work contributes to the development of more intuitive and responsive multi-modal dialogue systems, with potential applications in education, healthcare, assistive communication, and companionship. These technologies may improve access to information, support inclusive interaction, and enhance user experience across diver... | https://arxiv.org/abs/2505.21724v1
arXiv:2505.21731v1 [cs.LG] 27 May 2025 Deep Reinforcement Learning Agents are not even close to Human Intelligence∗ Quentin Delfosse1,2† quentin.delfosse@tu-darmstadt.de Jannis Blüml1,3† jannis.blueml@tu-darmstadt.de Fabian Tatai4,5 Théo Vincent1,6 Bjarne Gregori1 Elisabeth Dillies7 Jan Peters1,3,4,6 Constantin Rothkopf1,3,4... | https://arxiv.org/abs/2505.21731v1
rather than learning robust, causal strategies (Ilyas et al., 2019; Geirhos et al., 2020; Chan et al., 2020; Koch et al., 2021; Delfosse et al., 2024b). This reliance on shortcuts has recently been uncovered in the simplest Atari Pong game (depicted in Figure 1). In this game, the agent’s enemy follows a deterministic be... | https://arxiv.org/abs/2505.21731v1
RL agents and neurosymbolic agents. Task-agnostic deep agents often struggle with overfitting and generalization to task variations (Farebrother et al., 2018). In contrast, neurosymbolic agents introduce inductive biases by representing environments in terms of objects and their interactions, supporting abstract and tr... | https://arxiv.org/abs/2505.21731v1
their policies within black-box neural networks, we cannot make explicit the reasons behind their action selections. While existing explainable techniques, such as importance maps, help identify the decisive input zones, they do not explain the core reasoning. As outlined by Delfosse et al. (2024b), deep PPO agents traine... | https://arxiv.org/abs/2505.21731v1
the enemy’s vertical position with its previous value. This makes the enemy remain static whenever the ball approaches the agent. We can thus evaluate potential RL agent misalignments. Let us now provide further examples of task variations included in HackAtari, most of which are illustrated in Figure 2. NoDanger (... | https://arxiv.org/abs/2505.21731v1
questions: (Q1) Does RL agents’ performance drop on HackAtari task variations? (Q2) Can humans easily adapt to such task variations? (Q3) Are deep agents systematically learning shortcuts on relational reasoning tasks? (Q4) Do human inductive biases help align RL agents? Experimental Setup. We evaluate a diverse set ... | https://arxiv.org/abs/2505.21731v1
variations (over 17 games). IQMs are computed over 3 seeded trained agents (30 evaluations each). Expert-human scores are borrowed from Badia et al. (2020a). Performance in the original environment is plotted filled, while the performance in the modified environment is plotted hatched. Raw IQM scores (with CIs) for each... | https://arxiv.org/abs/2505.21731v1
This metric does not allow for comparing the performances of the different agents, neither on the original task nor on the variation, but measures individual performance variations. Figure 4 shows that human performance drastically increases on 11 games, notably increases on 2 games, slightly decreases on Bankheist, an... | https://arxiv.org/abs/2505.21731v1
position, color, size) from the pixel states. Such representations have been shown to improve transferability and interpretability in structured environments (Shindo et al., 2025). We thus evaluate whether introducing object-centric representations enhances the agents’ ability to generalize to simplifications. Figure 5 ... | https://arxiv.org/abs/2505.21731v1
learning task-aligned policies. This phenomenon, known as shortcut learning (Geirhos et al., 2020), has been documented in supervised vision (Ilyas et al., 2019; Stammer et al., 2021) and increasingly in RL (Zhang et al., 2018; Cobbe et al., 2020; Koch et al., 2021; Delfosse et al., 2024b). Recent interpretability effo... | https://arxiv.org/abs/2505.21731v1 |
Further, humans also decompose complex tasks into a sequence of high-level actions, and learn skills that correspond to the different sub-goals of such tasks. Hierarchical RL has been shown to improve adaptation to task variations by learning reusable sub-policies or skill hierarchies (Bacon et al., 2017; Hausman et al... | https://arxiv.org/abs/2505.21731v1 |
most widely used evaluation suite in deep RL. HackAtari is designed to go beyond in-distribution evaluation by allowing researchers to test agents on slight but targeted modifications of familiar tasks. These variations, such as changes in color schemes or simplified game dynamics, are typically trivial for humans to a... | https://arxiv.org/abs/2505.21731v1 |
Representations, 2020b. Baker, C. L., Saxe, R., and Tenenbaum, J. B. Action understanding as inverse planning. Cognition, 2009. Barbara, N. H., Wang, R., and Manchester, I. On robust reinforcement learning with Lipschitz-bounded policy networks. In ICML Workshop: Foundations of Reinforcement Learning and Control–Con... | https://arxiv.org/abs/2505.21731v1
C. The Intentional Stance. 1989. di Langosco, L. L., Koch, J., Sharkey, L. D., Pfau, J., and Krueger, D. Goal misgeneralization in deep reinforcement learning. In International Conference on Machine Learning, 2022. Dillies, E., Delfosse, Q., Blüml, J., Emunds, R., Busch, F. P., and Kersting, K. Better decisions throu... | https://arxiv.org/abs/2505.21731v1
in Neural Information Processing Systems, 2019. Jiang, Z. and Luo, S. Neural logic reinforcement learning. In International Conference on Machine Learning, 2019. Kaufmann, T., Blüml, J., Wüst, A., Delfosse, Q., Kersting, K., and Hüllermeier, E. Ocalm: Object-centric assessment with language models. arXiv, 2024. Kes... | https://arxiv.org/abs/2505.21731v1
Gupta, A. Robust adversarial reinforcement learning. In International Conference on Machine Learning, 2017. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. 2017. Schultz, W., Dayan, P., and Montague, P. R. A neural substrate of prediction and reward. Science... | https://arxiv.org/abs/2505.21731v1
Overview • Appendix A: Metrics. Definitions of Human-Normalized Score (HNS), Interquartile Mean (IQM), and performance change metrics. • Appendix B: Agent Architectures and Training Setup. Describes all agents used in the study, their implementation details, training protocols, and sources. • Appendix C: Computational Resou... | https://arxiv.org/abs/2505.21731v1
Atari are frequently highly variable, skewed, and heavy-tailed, often due to a small number of runs achieving unusually high scores. These outliers can inflate the sample mean, resulting in a metric that overstates the agent’s typical performance. Additionally, pairing the mean with standard error (SE) implicitly assum... | https://arxiv.org/abs/2505.21731v1 |
evaluate a diverse set of reinforcement learning (RL) agents, covering both standard baselines and object-centric architectures. Most agents are based on publicly available implementations or pretrained models, while PPO was trained by us specifically for this study. Below, we summarize the agents used, their foundatio... | https://arxiv.org/abs/2505.21731v1 |
policy and value function. Training is conducted with CleanRL (Huang et al., 2022), with full training settings provided in Table 1. The agent is trained to maximize the sum of undiscounted episodic returns. In addition to pixel-based PPO, we train a Semantic Vector agent using object-centric observations derived from ... | https://arxiv.org/abs/2505.21731v1 |
Container nvcr.io/nvidia/pytorch:23.05-py3 GPU-Driver CUDA 12.2 CPU Dual Intel Xeon Platinum 8168 Operating System Ubuntu 23.02 LTS Table 2: Hardware and software configuration for our experimental section. D Extended Results Evaluation Setup Our evaluation benchmarks RL agents across 17 Atari environments, each tested ... | https://arxiv.org/abs/2505.21731v1
Deep Agents solve simplifications? We present the raw scores obtained by all evaluated deep RL agents across a set of Atari games and their corresponding HackAtari variations. Each agent was trained solely on the original environment and evaluated on both the unmodified and modified versions without any fine-tuning or ... | https://arxiv.org/abs/2505.21731v1 |
[3208, 4579] set level 1 602 [500, 718] – 357 [291, 438] 631 [527, 744] 328 [283, 414] 1412 [1216, 1555] set level 2 333 [240, 426] – 326 [229, 428] 786 [641, 979] 408 [368, 465] 432 [307, 513] set level 3 356 [306, 412] – 347 [295, 419] 738 [477, 986] 176 [139, 214] 231 [158, 302] Pong 19 [17, 19] 5 [4, 6] 18 [14, 19] 9 [7, 11] 19 [19, 20] 20 [18, 20] lazy en... | https://arxiv.org/abs/2505.21731v1
as a baseline. Games and Modifications in pink were used in Figure 5 and are identical to the human study. Game (Variant) PPO Object Masks Binary Masks Class Masks Planes Semantic Vector ScoBots Amidar 1052 [1017, 1111] 554 [493, 615] 525 [430, 605] 479 [442, 513] 527 [509, 552] 357 [325, 407] 116 [94, 128] paint roller player 271 [... | https://arxiv.org/abs/2505.21731v1
8406 [3938, 14106] 962 [900, 1000] 20812 [10075, 32231] 88787 [80131, 94862] 10166 [5991, 18597] 11550 [7357, 15600] remove mountains 15781 [8693, 23206] 9943 [5144, 15150] 1000 [1000, 1000] 23656 [13000, 33488] 87287 [78219, 95475] 13950 [9640, 18069] 11250 [7700, 14447] static bomber 41475 [24381, 55043] 13375 [6512, 20819] 100... | https://arxiv.org/abs/2505.21731v1
static mountains 706.25 [588, 831] – – – E Code and Data To support reproducibility and further research, we will release all code, task variations, evaluation scripts, and selected model checkpoints as part of the supplementary materials and the code base through an anonymized repository upon acceptance. HackAtari En... | https://arxiv.org/abs/2505.21731v1
participant was assigned to a specific game and one of several predefined modification conditions. Each participant could only participate once. We used 15 games (cf. Appendix D.3) with one modification each. Assignment of participants to conditions followed a round-robin strategy over available game-modification pairs... | https://arxiv.org/abs/2505.21731v1 |
sation and performance-based bonus structure, ensuring transparency around incentives and ethical compensation. Again, an explicit agreement was required before participation. Figure 8: Example game description page shown to participants before gameplay. It provides an overview of the given Atari game, including the o... | https://arxiv.org/abs/2505.21731v1
human generalization to simplified modifications. G Game Descriptions and Modifications G.1 Alien Description: You are stuck in a maze-like spaceship with three aliens. Your goal is to destroy their eggs, which are scattered all over the ship, while simultaneously avoiding the aliens (they are trying to kill you). You ... | https://arxiv.org/abs/2505.21731v1
are caught by the police, or run over some dynamite you have previously dropped. Modification Effect unlimited_gas Unlimited gas for the player. no_police Removes police from the game. only_police No banks, only police. two_police_cars Replaces 2 banks with police cars; robbed banks give 50 points. random_city Random... | https://arxiv.org/abs/2505.21731v1
hit them in time. Modification Effect no_flying_ducks Ducks in the last row disappear instead of turning into flying ducks. unlimited_ammo Ammunition doesn’t decrease. missile_speed_small_increase The projectiles fired from the players are slightly faster. missile_speed_medium_increase The projectiles fired from the... | https://arxiv.org/abs/2505.21731v1
on the bottom road (the fastest becomes the slowest and vice versa). reverse_car_speed_top Reverses the speed order of the cars on the top road (the fastest becomes the slowest and vice versa). speed_mode Increases the speed of all cars. invisible_mode Makes the cars invisible. phantom_mode Each car changes color from ... | https://arxiv.org/abs/2505.21731v1 |
1. change_level2 Changes the level to 2. change_level3 Changes the level to 3. unlimited_time Provides unlimited time to clear the level. Modification Effect no_damage Player does not take damage. unlimited_time Provides unlimited time to clear the level. unlimited_lives Player has an unlimited number of lives. G.19 M... | https://arxiv.org/abs/2505.21731v1
lose your jet. You also lose a jet when it collides with the river bank or one of the enemy objects (except fuel depots). The game begins with a squadron of three jets in reserve, and you’re given an additional jet (up to 9) for each 10,000 points you score. Modification Effect no_fuel Removes the fuel deposits from ... | https://arxiv.org/abs/2505.21731v1
gate or a tree, your skier will jump back up and keep going. Modification Effect invert_flags Switches the flag color from blue to red. moguls_to_trees Replaces all moguls with trees. moving_flags Flags move to the left and right. G.27 SpaceInvaders Description: Your objective is to destroy the space invaders by shooti... | https://arxiv.org/abs/2505.21731v1 |
arXiv:2505.21740v1 [cs.CL] 27 May 2025 Preprint. Under review. Counterfactual Simulatability of LLM Explanations for Generation Tasks Marvin Limpijankit, Yanda Chen, Melanie Subbiah, Nicholas Deas & Kathleen McKeown Department of Computer Science Columbia University New York, NY 10027, USA {ml4431, m.subbiah}@columbi... | https://arxiv.org/abs/2505.21740v1
the ability of LLMs to accurately Figure 1: Our evaluation pipeline. Given a model’s explanation, an LLM is prompted to generate relevant counterfactuals (right) and decompose the explanation into atomic units (left). For each unit, a human annotator verifies whether the element appears in the... | https://arxiv.org/abs/2505.21740v1
(Huang et al., 2023; Madsen et al., 2024) as well as unconstrained explanations (Turpin et al., 2023) have focused on faithfulness measures. In particular, work has proposed metrics for dimensions including comprehensiveness (DeYoung et al., 2020), sufficiency (DeYoung et al., 2020), alignment with human rationales (Fa... | https://arxiv.org/abs/2505.21740v1 |
of generation, |O| may be arbitrarily large. A human observes x, e_x, and forms a one-to-many mental model h_{x,e_x}: X → P(O), where P(O) denotes the power set of O, and h_{x,e_x}(x′) denotes what the human infers to be M’s possible outputs on a counterfactual x′. For simplicity, h_{e_x}(x′) is used to denote h_{x,e_x}(x′). 3.2 Simulatabilit... | https://arxiv.org/abs/2505.21740v1
figure 2. In contrast to summarization, generating medical suggestions is more knowledge-based, requiring the LLM to identify key elements of the user’s query (e.g. their expressed symptoms, any relevant medical history) and relate them to potential suggestions using knowledge encoded in the model. As such, the explana... | https://arxiv.org/abs/2505.21740v1 |
generality and precision scores across explanations. A key distinction between the tasks is that while each atomic unit of a summarization explanation is linked to an expected item in the input (for simulatability) and its reference in the output (for precision), medical suggestion explanations do not follow this one-t... | https://arxiv.org/abs/2505.21740v1 |
breakdown of parsing errors is provided in appendix C. GPT-4 Turbo is able to generate simulatable counterfactuals for summarization but not for medical suggestion. While almost all (74/76) generated counterfactuals are deemed simulatable in the summarization setting, only slightly more than half are for medical sugg... | https://arxiv.org/abs/2505.21740v1
models, explanation types, and tasks for automatic evaluation. The results of the automatic evaluation, presented in table 3, further support our findings from the human evaluation. Namely, models are much better able to accurately explain their behavior for summarization as opposed to medical suggestion while remainin... | https://arxiv.org/abs/2505.21740v1 |
introduces more complexity, making the LLM a less effective tool in the evaluation pipeline. Additionally, we found that the LLM is unable to produce as many simulatable counterfactuals in the medical domain compared to summarization. There is significant room for improvement towards adapting counterfactual simulatabil... | https://arxiv.org/abs/2505.21740v1 |
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue (eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 14567–14578, Torino, Italia, May 2024. ELRA and ICCL. URL https... | https://arxiv.org/abs/2505.21740v1
M. Bauer, Marc Carrier, Aurelien Delluc, Grégoire Le Gal, Tzu-Fei Wang, Deborah Siegal, and Wojtek Michalowski. Manually-curated versus llm-generated explanations for complex patient cases: An exploratory study with physicians. In Joseph Finkelstein, Robert Moskovitch, and Enea Parimbelli (eds.), Artificial Intellige... | https://arxiv.org/abs/2505.21740v1
ISSN 1046-8188. doi: 10.1145/3605357. URL https://doi.org/10.1145/3605357 . Pei Zhou, Pegah Jandaghi, Hyundong Cho, Bill Yuchen Lin, Jay Pujara, and Xiang Ren. Probing commonsense explanation in dialogue response generation. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings o... | https://arxiv.org/abs/2505.21740v1 |
represent a few popular LLM options as well as a mix of proprietary and open-source models. C Explanation Parsing Quality Results We show a categorized breakdown of the reported errors for explanation parsing in both tasks. Parsed Explanations News Summarization Medical Suggestion (n=26) (n... | https://arxiv.org/abs/2505.21740v1
Simulating the Unseen: Crash Prediction Must Learn from What Did Not Happen Zihao Li1∗ Xinyuan Cao2∗ Xiangbo Gao1 Kexin Tian1 Keshu Wu1 Mohammad Anis1 Hao Zhang1 Keke Long3 Jiwan Jiang3 Xiaopeng Li3 Yunlong Zhang1 Tianbao Yang1 Dominique Lord1 Zhengzhong Tu1 Yang Zhou1† 1Texas A&M University, 2Georgia Tech, 3University of Wisconsin-... | https://arxiv.org/abs/2505.21743v1
scenarios. Rare-Event Metaphor Imagine a field strewn with a billion identical keys and one hidden landmine that explodes only when its exact key is tried. Thousands of harmless picks give the illusion of safety, yet each trial leaves the catastrophic pairing essentially untested. True safety, then, cannot rely on coun... | https://arxiv.org/abs/2505.21743v1 |
noise, leading to out-of-distribution edge cases. These factors together challenge standard supervised learning approaches. Traditional crash-frequency analysis requires many years of observations to obtain stable estimates. During this time, vehicle occupants and vulnerable users (e.g., pedestrians, cyclists) get in... | https://arxiv.org/abs/2505.21743v1
appears [37]. Such differences in human response can tip the scale between a collision and a narrow escape. Yet capturing these nuances in models is challenging. Simple rules or distributions may not reflect how humans behave in rare panic situations. In essence, human-in-the-loop uncertainty is a major hurdle: realis... | https://arxiv.org/abs/2505.21743v1
lacking ground-truth crash dynamics. We know how people drive, but not how they crash within seconds. Even the context-rich SHRP-2 study [59] captured only 1–2k mostly minor crashes across millions of miles, with many events missing synchronized multi-modal data (Table 2). Crash datasets are widely captured but shal... | https://arxiv.org/abs/2505.21743v1
jump from a moderately low SSM value to an actual collision is tenuous, as countless sub-1.5 s TTC events resolve safely. This occurs because SSMs depend critically on the chosen vehicle-dynamics model, assumed driver-behavior parameters, and encoded interaction scenarios. Consequently, correlations between SSM counts ... | https://arxiv.org/abs/2505.21743v1 |
system-level benefits, or to prioritize which micro-level scenarios matter most based on macro-level crash data. Closing this loop requires a unified, bidirectional framework in which macro crash patterns inform the generation of critical micro-level scenarios, and micro-level causal evidence is fed back to refine and ... | https://arxiv.org/abs/2505.21743v1 |
high-risk maneuvers [99, 100]. As a result, AI drivers exhibit varied reaction-time distributions, gap-acceptance thresholds, and lane-change propensities, escalating to panic responses under stressors such as phantom braking or adversarial disturbances [101–103]. This fusion of generative realism with adversarial foc... | https://arxiv.org/abs/2505.21743v1
to simulate physical interactions such as collisions and surface friction. These elements are essential for accurate traffic simulations. To bridge this gap between visual fidelity and physical plausibility, geometric-aware reconstruction algorithms have emerged as promising solutions [122–124], facilitating more phys... | https://arxiv.org/abs/2505.21743v1
simulator integrated with AI components, RL agents explore and refine control policies by interacting with a wide spectrum of crash-prone scenarios. Variants such as adversarial RL [141, 142], robust RL [143], and hierarchical/hybrid RL [144, 145] offer specialized mechanisms for handling uncertainties and maintaining... | https://arxiv.org/abs/2505.21743v1
Dominique Lord, and Yang Zhou. Virtual roads, smarter safety: A digital twin framework for mixed autonomous traffic safety analysis. arXiv preprint arXiv:2504.17968, 2025. [5] Zhuoning Yuan, Xun Zhou, and Tianbao Yang. Hetero-convlstm: A deep learning approach to traffic accident prediction on heterogeneous spatio-temp... | https://arxiv.org/abs/2505.21743v1
[19] Tianqi Wang, Sukmin Kim, Ji Wenxuan, Enze Xie, Chongjian Ge, Junsong Chen, Zhenguo Li, and Ping Luo. Deepaccident: A motion and accident prediction benchmark for v2x autonomous driving. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 5599–5606, 2024. [20] Krešimir Kušić, René ... | https://arxiv.org/abs/2505.21743v1
behaviors under different levels of situational urgency. Transportation Research Part C: Emerging Technologies, 71:419–433, 2016. [38] Siddharth Singi, Zhanpeng He, Alvin Pan, Sandip Patel, Gunnar A Sigurdsson, Robinson Piramuthu, Shuran Song, and Matei Ciocarlie. Decision making for human-in-the-loop robotic agents v... | https://arxiv.org/abs/2505.21743v1
in a mixed traffic lane change condition. IEEE Internet of Things Journal, 2025. [52] Hao Zhang, Sixu Li, Zihao Li, Mohammad Anis, Dominique Lord, and Yang Zhou. Why anticipatory sensing matters in commercial acc systems under cut-in scenarios: A perspective from stochastic safety analysis. Accident Analysis & Prevent... | https://arxiv.org/abs/2505.21743v1
Pan Liu. A review of surrogate safety measures and their applications in connected and automated vehicles safety modeling. Accident Analysis & Prevention, 157:106157, 2021. [68] Sixu Li, Mohammad Anis, Dominique Lord, Hao Zhang, Yang Zhou, and Xinyue Ye. Beyond 1d and oversimplified kinematics: A generic analytical ... | https://arxiv.org/abs/2505.21743v1
Saud Alsaif, Ritchie Lee, and Mykel J Kochenderfer. Adaptive stress testing for autonomous vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1–7. IEEE, 2018. [84] Qing Cai, Mohamed Abdel-Aty, Jaeyoung Lee, and Helai Huang. Integrating macro- and micro-level safety analyses: a bayesian approach incorpor... | https://arxiv.org/abs/2505.21743v1
on Tools with Artificial Intelligence (ICTAI), pages 717–722. IEEE, 2023. [98] Yunchao Zhang, Yanyan Chen, Xin Gu, NN Sze, and Jianling Huang. A proactive crash risk prediction framework for lane-changing behavior incorporating individual driving styles. Accident Analysis & Prevention, 188:107072, 2023. [99] Lars Mes... | https://arxiv.org/abs/2505.21743v1
prediction in the crash scenario. arXiv preprint arXiv:2501.16349, 2025. [113] Yuting Xie, Xianda Guo, Cong Wang, Kunhua Liu, and Long Chen. Advdiffuser: Generating adversarial safety-critical driving scenarios via guided diffusion. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),... | https://arxiv.org/abs/2505.21743v1
regression for safety-critical systems with an application to highway crash prediction. Engineering Applications of Artificial Intelligence, 117:105534, 2023. [130] R Timothy Marler and Jasbir S Arora. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26:36... | https://arxiv.org/abs/2505.21743v1
driving via personalized safety-critical curriculum learning with vision-language models. arXiv preprint arXiv:2502.15119, 2025. [149] Zilin Huang, Zihao Sheng, Yansong Qu, Junwei You, and Sikai Chen. Vlm-rl: A unified vision language models and reinforcement learning framework for safe autonomous driving. arXiv prepr... | https://arxiv.org/abs/2505.21743v1
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11621–11631, 2020. [162] Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R Qi, Yin Zhou, et al. Large scale interactive motion forecasting for autonomous driving: The waymo ... | https://arxiv.org/abs/2505.21743v1
Proceedings of the 4th Middle East Symposium on Simulation and Modelling (MESM2002), pages 183–187, 2002. [175] PTV Planung Transport Verkehr GmbH, Karlsruhe, Germany. PTV Vissim 2025 [Computer software], 2025. Version 25.0. User manual available from PTV Group. [176] Aimsun SLU, Barcelona, Spain. Aimsun Next 24 [Co... | https://arxiv.org/abs/2505.21743v1
in simulation of urban mobility. IEEE Transactions on Intelligent Vehicles, 2024. [191] NVIDIA Corporation. kit-extension-sample-airoomgenerator. https://github.com/NVIDIA-Omniverse/kit-extension-sample-airoomgenerator, 2025. Git commit 78a4b7c, released 25 Apr 2025. [192] Bo-Kai Ruan, Hao-Tang Tsui, Yung-Hui Li, an... | https://arxiv.org/abs/2505.21743v1
crash-only estimation, we advocate a shift toward counterfactual augmentation. Many traffic samples do not result in crashes but exhibit high-risk behaviors that correspond to elevated latent crash probability. We therefore augment the dataset with near-miss samples satisfying Pr(Y_t = 1 | Z_{t−∆:t}) > τ, where Z_{t−∆:t} denotes ... | https://arxiv.org/abs/2505.21743v1
[160] Cars, truck, motorcycle; 192; 21 months (2012–2017); 1.43 M; Europe, 6 countries (France, UK, Spain, Poland, Germany, Netherlands). SHRP 2: Second Strategic Highway Research Program. 100car–NDS: 100-Car Naturalistic Driving Study. CNDS: Canadian Naturalistic Driving Study. ANDS: Australian Naturalistic Driving Study. SH... | https://arxiv.org/abs/2505.21743v1
Paramics Discovery [177] ✓ ✓∗✗ ✗ ✓∗✗ CORSIM [178] ✓ ✗ ✗ ✗ ✓∗✗ High-Fidelity Vehicle / Driving & A V Simulators CARLA [117] ✗ ✓ ✓∗✓ ✓∗✓ Simcenter PreScan [179] ✗ ✓ ✓∗✓ ✓∗✗ IPG CarMaker [180] ✗ ✓ ✓∗✓ ✓∗✗ VIRES VTD [181] ✓ ✓ ✓∗✓ ✓∗✗ LG SVL Simulator [182] ✗ ✓ ✓∗✓ ✓∗✓ Gazebo [183] ✗ ✓ ✓∗✓ ✗ ✓ Project Chrono [184] ✗ ✓ ✓ ✓ ✗... | https://arxiv.org/abs/2505.21743v1 |
arXiv:2505.21746v1 [cs.CV] 27 May 2025 Learning to See More: UAS-Guided Super-Resolution of Satellite Imagery for Precision Agriculture Arif Masrur∗ Esri, New York, NY; Peder A. Olsen, Microsoft Research, Redmond, WA; Paul R. Adler, USDA - Agricultural Research Service, University Park, PA; Carlan Jackson, Dept. of Electrical Engi... | https://arxiv.org/abs/2505.21746v1
, Zhang and Kovacs, 2012], enabling timely, high-resolution field-level assessments of crop biomass [Wang et al., 2021] and nitrogen (N) status [Argento et al., 2021, Grüner et al., 2021], as well as detection of weeds, diseases, and pest infestations [Dash et al., 2018, Watt et al., 2017, Zhu et al., 2024]. H... | https://arxiv.org/abs/2505.21746v1
– produces very high-fidelity image reconstructions at sub-meter spatial resolution. Combining UAS RGB with satellite imagery in this way unlocks access to critical remote sensing indices, such as those based on vegetation red edge (VRE) and near-infrared (NIR) bands, which are not available from RGB sensors alone. A... | https://arxiv.org/abs/2505.21746v1
2018, Salgueiro Romero et al., 2020, Tarasiewicz et al., 2023] and historic Landsat imagery [Kong et al., 2023], and are increasingly being applied in precision agriculture [Jonak et al., 2024, Meng et al., 2024]. Image colorization is another technique that augments a gray-scale or single-channel image [Wu et al... | https://arxiv.org/abs/2505.21746v1
chosen UAS resolution (12.5 cm in our dataset, derived from data originally collected at 3 cm resolution). We refer to this as a spectral extension model, which adds spectral richness from the satellite bands (i.e., spectral bands in the 700–900 nm range) not available to a UAS RGB sensor. In this context, we explore ... | https://arxiv.org/abs/2505.21746v1
expensive hyperspectral equipment, which can cost as much as $175,000, while achieving accuracy that meets or surpasses what is possible with UAS RGB imagery alone. The novelty of this study lies in the development of an end-to-end, spatially and temporally scalable system that integrates spectral simu...
], dominated by the use of Sentinel-2 [do Nascimento Bendini et al., 2024, Fan et al., 2020, Gao et al., 2020, Goffart et al., 2021, Thieme et al., 2020, Xia et al., 2021] or UAS datasets [Holzhauser et al., 2022, Roth and Streit, 2018, Yuan et al., 2019, Yuan et al., 2021], limiting precision and scalability of t...
each group (brassica, legume, and cereal). In the UCB, a dairy crop rotation is common and includes corn and harvested cover crops such as rye and triticale. We also included Miscanthus, switchgrass, and a mix of plant species used in the Conservation Reserve Program (CRP) [Adler et al., 2024].
Figure 3: (A) Cover crop...
corn, wheat, and miscanthus are shown in Figure 4. These hyperspectral flights took place between 2018 and 2024. We list the sites used in this paper, along with their approximate locations, in Table 1.
2.2.2 Satellite image spectral alignment
To train a super-resolution model for Sentinel-2 we n...
higher spatial resolution image based on the hyperspectral UAS image corresponding to the Sentinel-2 MSI sensor, but does so in the geometry of the original UAS image. For the purpose of training neural network models, we also need to align the pixels of the UAS image with the Sentinel-2 image. We do so in two stages: (1) tr...
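The spectral side of this alignment amounts to a response-weighted average over the hyperspectral bands. A sketch of the idea follows; the Gaussian response curve is a simplification for illustration (the paper uses the published Sentinel-2 MSI spectral response functions), and the 400-1000 nm band placement is an assumption:

```python
import numpy as np

def gaussian_srf(wavelengths, center, fwhm):
    """Approximate a sensor band's spectral response as a Gaussian."""
    sigma = fwhm / 2.355  # FWHM -> standard deviation
    return np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)

def simulate_band(cube, srf):
    """Response-weighted average of hyperspectral bands: cube (H, W, B), srf (B,)."""
    w = srf / srf.sum()
    return cube @ w  # (H, W)

# e.g. simulate a broad NIR band (center ~842 nm) from a 269-band cube
wl = np.linspace(400, 1000, 269)           # band centers (illustrative placement)
srf_nir = gaussian_srf(wl, 842.0, 115.0)
cube = np.full((4, 4, 269), 0.5)           # flat 50% reflectance cube
nir_sim = simulate_band(cube, srf_nir)     # flat input -> flat 0.5 output
```

Because the weights are normalized, a spectrally flat surface maps to the same reflectance in the simulated band, which is a useful sanity check for any SRF implementation.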
accuracy of the location of our hyperspectral images, it is reasonable to assume that
Figure 7: (A) Transforming and aligning the UAS image to the Sentinel-2 image. The middle image shows that simply changing the coordinate reference system (reprojecting) may leave the Sentinel-...
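The residual shift left after reprojection can be estimated, for instance, by phase correlation between the reprojected UAS band and the matching Sentinel-2 band. This is a standard registration technique offered as an illustration; the paper's exact correction method may differ:

```python
import numpy as np

def phase_correlation(moved, ref):
    """Estimate the integer (row, col) circular shift s such that moved = roll(ref, s)."""
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12                     # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = list(np.unravel_index(np.argmax(corr), corr.shape))
    # peaks in the upper half of an axis correspond to negative shifts
    for ax, n in enumerate(corr.shape):
        if peak[ax] > n // 2:
            peak[ax] -= n
    return tuple(int(p) for p in peak)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))     # simulate a known misalignment
print(phase_correlation(moved, ref))           # → (3, -5)
```

Phase correlation recovers only integer translations; sub-pixel refinement or a full affine fit would build on the same correlation surface.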
survey the entire field. In the temporal extension scenario, we generate high-resolution imagery for the same field at a different time when no recent UAS data are available. Table 2 details the data splits used for each scenario, capturing spatial and temporal variability across different fields and time periods. To tr...
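The two evaluation scenarios correspond to two ways of holding data out. A toy sketch of the split logic (field and date labels are hypothetical, not the paper's identifiers):

```python
# Hypothetical image records; in the paper these are paired UAS/Sentinel-2 acquisitions.
records = [
    {"field": "A", "date": "2023-05"},
    {"field": "A", "date": "2023-08"},
    {"field": "B", "date": "2023-05"},
    {"field": "B", "date": "2023-08"},
]

def spatial_split(records, holdout_field):
    """Spatial extension: train on other fields, test on a field never seen in training."""
    train = [r for r in records if r["field"] != holdout_field]
    test = [r for r in records if r["field"] == holdout_field]
    return train, test

def temporal_split(records, holdout_date):
    """Temporal extension: train on other dates, test on a date never seen in training."""
    train = [r for r in records if r["date"] != holdout_date]
    test = [r for r in records if r["date"] == holdout_date]
    return train, test

train, test = temporal_split(records, "2023-08")
print(len(train), len(test))  # → 2 2
```

Keeping the held-out field (or date) entirely out of training is what makes the reported accuracy an estimate of performance on genuinely unseen ground or times.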
handle dual image input. For our spatial and temporal extension models, we also used SRCNN, but with different kernel sizes.
Figure 10: The proposed end-to-end system for data preparation, super-resolution fusion and predictive modeling workflow to assess complementarity of mul...
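SRCNN's three-layer structure, adapted to a dual input by channel-concatenating the upsampled satellite bands with the UAS RGB, can be sketched as a plain forward pass. Weights are random here, and the 9-1-5 kernel sizes follow the original SRCNN rather than the paper's modified variants:

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2D convolution: x (C_in, H, W), w (C_out, C_in, k, k), b (C_out,)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
        out[o] += b[o]
    return out

def srcnn(x, params):
    """Three stages: patch extraction, non-linear mapping, reconstruction."""
    h = np.maximum(conv2d(x, *params[0]), 0.0)   # 9x9 conv + ReLU
    h = np.maximum(conv2d(h, *params[1]), 0.0)   # 1x1 conv + ReLU
    return conv2d(h, *params[2])                 # 5x5 conv, linear output

rng = np.random.default_rng(0)
uas_rgb = rng.random((3, 32, 32))                # high-resolution UAS RGB patch
s2_up = rng.random((8, 32, 32))                  # upsampled Sentinel-2 bands
x = np.concatenate([uas_rgb, s2_up])             # dual input: (11, 32, 32)
params = [
    (rng.normal(0, 0.01, (16, 11, 9, 9)), np.zeros(16)),
    (rng.normal(0, 0.01, (16, 16, 1, 1)), np.zeros(16)),
    (rng.normal(0, 0.01, (8, 16, 5, 5)), np.zeros(8)),
]
y = srcnn(x, params)                             # reconstructed 8 bands, (8, 32, 32)
```

In practice the convolutions would come from a deep-learning framework and the weights from training against the SRF-aligned targets; the sketch only fixes the input/output shapes and layer ordering.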
that combines features from DenseNet and uses kernel sizes inspired by our SRCNN model. The interested reader can find the full model in Figure A.3.
2.3.2 Apply reconstructed imagery to predict cover crop biomass yield and quality
We developed two sets of cross-validated random forest (RF)-based regression models to ...
with five bands (RGB and three VRE) for N (M5-M6). With 8-band data derived by weighting the 269 Headwall hyperspectral bands (using the Sentinel-2 MSI spectral response function, as discussed in Section 2.2.2), we observed the same pattern of spectral-range effects as with the original Sentinel-2; however, RMSEs were much low...
for Micasense RedEdge MX and Hyperspectral images is ∼3 cm.

Tag | Sensor | Biomass R² (%) | Biomass RMSE (Mg/ha) | Nitrogen R² (%) | Nitrogen RMSE (kg/ha)
M1 | Sentinel-2A RGB | 18.5 | 1.33 | -1.4 | 36.79
M2 | Sentinel-2A 5-band | 51.7 | 1.04 | 37.5 | 28.81
M3 | Sentinel-2A 8-band | 69.1 | 0.87 | 61.1 | 23.88
M4 | Sentinel-2A 8-band & SWIRs | 67.9 | 0.88 | 64.1 | 23.09
M5 | RedEdge-MX RGB | 84.2 | 0....
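The R² values above are reported as percentages and can be negative when a model predicts worse than the mean of the observations (as for M1's nitrogen score). Both metrics reduce to a few lines (the sample values below are illustrative, not the paper's data):

```python
import numpy as np

def r2_percent(y_true, y_pred):
    """Coefficient of determination, in percent; negative if worse than the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 100.0 * (1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    """Root mean squared error, in the units of y (e.g. Mg/ha or kg/ha)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y = np.array([2.0, 4.0, 6.0])      # e.g. observed biomass (illustrative)
yhat = np.array([2.5, 3.5, 6.0])   # cross-validated RF predictions
print(r2_percent(y, yhat), rmse(y, yhat))
```

Evaluating these within the spatial and temporal splits described earlier is what makes the table's per-sensor comparison meaningful.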