and Antonia M Villarruel. Identifying credible sources of health information in social media: principles and attributes. NAM Perspectives, 2021:10–31478, 2021.
[28] Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimodal capabilities in the wild. https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/, May 2024.
[29] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008, 2023.
[30] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
[31] Xiang Li, Hao Chen, Kai Qiu, Jason Kuen, Jiuxiang Gu, Bhiksha Raj, and Zhe Lin. Imagefolder: Autoregressive image generation with folded tokens. arXiv preprint arXiv:2410.01756, 2024.
[32] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.
[33] Christine Lisetti, Reza Amini, Ugan Yasavur, and Naphtali Rishe. I can help you change! an empathic virtual agent delivers behavior change health interventions. ACM Transactions on Management Information Systems, 4(4):1–28, 2013.
[34] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36, 2024.
[35] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[36] Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. Mediapipe: A framework for building perception pipelines. arXiv preprint arXiv:1906.08172, 2019.
[37] Cheng Luo, Siyang Song, Weicheng Xie, Micol Spitale, Zongyuan Ge, Linlin Shen, and Hatice Gunes. Reactface: Online multiple appropriate facial reaction generation in dyadic interactions. IEEE Transactions on Visualization and Computer Graphics, pages 1–18, 2024.
[38] Fabian Mentzer, David Minnen, Eirikur Agustsson, and Michael Tschannen. Finite scalar quantization: Vq-vae made simple. arXiv preprint arXiv:2309.15505, 2023.
[39] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[40] Kentaro Mitsui, Koh Mitsuda, Toshiaki Wakatsuki, Yukiya Hono, and Kei Sawada. Pslm: Parallel generation of text and speech with llms for low-latency spoken dialogue systems. arXiv preprint arXiv:2406.12428, 2024.
[41] David Mizrahi, Roman Bachmann, Oguzhan Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, and Amir Zamir. 4m: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36, 2024.
[42] Eliya Nachmani, Alon Levkovitch, Roy Hirsch, Julian Salazar, Chulayuth Asawaroengchai, Soroosh Mariooryad, Ehud Rivlin, RJ Skerry-Ryan, and Michelle Tadmor Ramanovich. Spoken question answering and speech continuation using spectrogram-powered llm. arXiv preprint arXiv:2305.15255, 2023.
[43] Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, and Shiry Ginosar. Learning to listen: Modeling non-deterministic dyadic facial motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20395–20405, 2022.
[44] Evonne Ng, Sanjay Subramanian, Dan Klein, Angjoo Kanazawa, Trevor Darrell, and Shiry Ginosar. Can language models learn to listen? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10083–10093, 2023.
[45] Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R Costa-Jussa, Maha Elbayad, Sravya Popuri, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, et al. Spirit-lm: Interleaved spoken and written language model. arXiv preprint arXiv:2402.05755, 2024.
[46] Cristina Palmero, Javier Selva, Sorina Smeureanu, Julio CS Jacques Junior, Albert Clapés, Alexa Moseguí, Zejian Zhang, David Gallardo, Georgina Guilera, et al. Context-aware personality inference in dyadic scenarios: Introducing the udiva dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1–12, 2021.
[47] Se Jin Park, Chae Won Kim, Hyeongseop Rha, Minsu Kim, Joanna Hong, Jeong Hun Yeo, and Yong Man Ro. Let's go real talk: Spoken dialogue model for face-to-face conversation. arXiv preprint arXiv:2406.07867, 2024.
[48] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
[49] KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. A lip sync expert is all you need for speech to lip generation in the wild. In Proceedings of the ACM International Conference on Multimedia, pages 484–492, 2020.
[50] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518, 2023.
[51] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1(2):3, 2022.
[52] Fabien Ringeval, Andreas Sonderegger, Juergen Sauer, and Denis Lalanne. Introducing the recola multimodal corpus of remote collaborative and affective interactions. In IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, pages 1–8, 2013.
[53] Paul K Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, et al. Audiopalm: A large language model that can speak and listen. arXiv preprint arXiv:2306.12925, 2023.
[54] Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, et al. React2023: The first multiple appropriate facial reaction generation challenge. In Proceedings of the ACM International Conference on Multimedia, pages 9620–9624, 2023.
[55] Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, German Barquero, Hengde Zhu, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, and Hatice Gunes. React 2024: the second multiple appropriate facial reaction generation challenge. In IEEE International Conference on Automatic Face and Gesture Recognition, 2024.
[56] Siyang Song, Micol Spitale, Yiming Luo, Batuhan Bal, and Hatice Gunes. Multiple appropriate facial reaction generation in dyadic interaction settings: What, why and how? arXiv preprint arXiv:2302.06514, 2023.
[57] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020.
[58] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024.
[59] Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo. Emo: Emote portrait alive: generating expressive portrait videos with audio2video diffusion model under weak conditions. In European Conference on Computer Vision, pages 244–260. Springer, 2024.
[60] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[61] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018.
[62] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017.
[63] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
[64] Xinsheng Wang, Mingqi Jiang, Ziyang Ma, Ziyu Zhang, Songxiang Liu, Linqin Li, Zheng Liang, Qixi Zheng, Rui Wang, Xiaoqin Feng, et al. Spark-tts: An efficient llm-based text-to-speech model with single-stream decoupled speech tokens. arXiv preprint arXiv:2503.01710, 2025.
[65] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024.
[66] Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Shuting Wang, Tiejun Huang, and Zheng Liu. Omnigen: Unified image generation. arXiv preprint arXiv:2409.11340, 2024.
[67] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024.
[68] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022.
[69] Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. arXiv preprint arXiv:2305.11000, 2023.
[70] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675, 2019.
[71] Yiyuan Zhang, Kaixiong Gong, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Wanli Ouyang, and Xiangyu Yue. Meta-transformer: A unified framework for multimodal learning. arXiv preprint arXiv:2307.10802, 2023.
[72] S Zhao, Y Ma, C Ni, C Zhang, H Wang, TH Nguyen, K Zhou, J Yip, D Ng, and B Ma. Mossformer2: Combining transformer and rnn-free recurrent network for enhanced time-domain monaural speech separation. arXiv preprint arXiv:2312.11825, 2025.
[73] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039, 2024.
[74] Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, and Xiaogang Wang. Talking face generation by adversarially disentangled audio-visual representation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9299–9306, 2019.
[75] Mohan Zhou, Yalong Bai, Wei Zhang, Ting Yao, Tiejun Zhao, and Tao Mei. Responsive listening head generation: a benchmark dataset and baseline. In European Conference on Computer Vision, pages 124–142, 2022.
[76] Hengde Zhu, Xiangyu Kong, Weicheng Xie, Xin Huang, Linlin Shen, Lu Liu, Hatice Gunes, and Siyang Song. Perfrdiff: Personalised weight editing for multiple appropriate facial reaction generation. In Proceedings of the ACM International Conference on Multimedia, 2024.

Appendices

A Implementation Details and Hyperparameters

Table 4: Implementation details and hyperparameters.

| Setting | Value |
|---|---|
| Batch Size | 1 |
| Training Epochs for the Unified Stage | 1500 |
| Training Epochs for A/V Finetuning | 500 |
| Warmup Epochs | 100 |
| Large Language Model | Phi-3.5 Mini-Instruct [1] (3.8B) |
| Text Tokenizer | Phi-3.5 Mini-Instruct [1] tokenizer |
| Audio Tokenizer | Spark-TTS [64] BiCodec |
| Facial Coefficients | MediaPipe [36] facial blendshapes + transformation matrix |
| LoRA Rank | 64 |
| LoRA Alpha | 16 |
| Optimizer | AdamW |
| Learning Rate | 2.0×10⁻⁵ |
| Model Parameters N_param | 4.5B |
| β₁ | 0.9 |
| β₂ | 0.999 |
| λ_vision | 1.0 |
| λ_audio | 100 |

Table 4 summarizes the key hyperparameters used in our experiments. For the core architecture, we employ the Phi-3.5 Mini-Instruct large language model [1] for multimodal fusion and dialogue reasoning. Input modalities are processed as follows: the audio waveform is tokenized into discrete representations using the BiCodec of Spark-TTS [64], while text is tokenized using the Phi-3.5 Mini-Instruct tokenizer, augmented with special tokens such as [PAUSE] and [LASTING]. Visual features are extracted using the widely adopted MediaPipe toolkit [36], yielding 52-dimensional facial blendshape coefficients to capture local facial movements and a 12-dimensional transformation matrix representing head pose dynamics.

Model optimization is performed using the AdamW optimizer [26], with an initial learning rate of 2.0×10⁻⁵, β₁ = 0.9, β₂ = 0.999, and a weight decay of 1.0×10⁻⁴. The batch size is set to 1, and a cosine learning rate scheduler is applied throughout training. The model is first trained end-to-end, including all components (i.e., LLM, vision projection, decoder, and TempoVoice), for 1,500 epochs, with a 100-epoch warmup phase. To enable efficient adaptation of the large language model, we employ the LoRA fine-tuning strategy [20] (rank 64, alpha 16), while all other parameters of OmniResponse are jointly optimized. Subsequently, a dedicated fine-tuning stage is performed for the audio and visual components (i.e., vision projection, decoder, and TempoVoice) over an additional 500 epochs.
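For illustration, the LoRA and optimizer settings in Table 4 could be wired up as in the following minimal sketch, assuming the Hugging Face `transformers` and `peft` libraries; the target module names and the scheduler horizon are our assumptions, not the authors' released code.

```python
# Hypothetical sketch of the Table 4 setup (not the released implementation).
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

lora_config = LoraConfig(
    r=64,                                   # LoRA rank (Table 4)
    lora_alpha=16,                          # LoRA alpha (Table 4)
    target_modules=["qkv_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only LoRA weights are trainable

# AdamW with the paper's hyperparameters and a cosine schedule.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=2.0e-5, betas=(0.9, 0.999), weight_decay=1.0e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1500)
```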
B Methodological Details

In this section, we provide a comprehensive overview of OmniResponse, highlighting its architectural design and its key technical innovations, namely Chrono-Text Markup and TempoVoice.

B.1 Network Architecture

OmniResponse is composed of several interconnected modules: a vision projection layer for encoding the visual frames of both the speaker and listener; a large language model [1] for fusing visual features, textual instructions, conversational history, and dynamic texts; and a Chrono-Text Markup module for temporal alignment of text tokens. The model jointly predicts the next visual token, text token, and audio response token. A vision decoder layer reconstructs the listener's visual frame from the predicted visual token, while the TempoVoice module converts textual embeddings into audio waveforms.

Vision Projection Layer. The Vision Projection Layer, denoted as M_vis-proj(·), encodes the previously predicted visual frames of the listener F̂^l_{τ:t−1} together with the speaker's visual frames F^s_{τ:t−1}, and projects them into a sequence of embedding features V_{τ:t−1} over the temporal interval [τ, t−1]. Here, τ is the starting index of the considered time window, which limits the number of temporal visual tokens and reduces computational overhead. The process is formulated as follows:

V_{τ:t−1} = M_vis-proj([F̂^l_{τ:t−1}, F^s_{τ:t−1}])    (5)

The projection module M_vis-proj can be instantiated either as a multilayer perceptron that processes the concatenated visual features of the speaker and listener [F̂^l_{τ:t−1}, F^s_{τ:t−1}] (where [·] denotes concatenation), or as a transformer-based layer, where the listener's visual features serve as queries and the speaker's visual features act as keys and values within a cross-attention mechanism. This architecture enables effective temporal fusion of visual information from both conversational participants, providing context for subsequent response generation.

Vision Decoder. The vision decoder consists of a two-layer Transformer decoder that processes the predicted embeddings V̂^l_{τ+1:t} generated by the large language model for the first t−τ positions, and maps them to the facial coefficient space F̂^l_{τ+1:t}. Subsequently, a pre-trained visual renderer converts these facial coefficients into 2D facial frames, conditioned on a given portrait image. The renderer is trained on a large-scale web video dataset and is utilized as a tool to synthesize photorealistic images by mapping the predicted facial expression and head pose coefficients to high-quality 2D visuals.

Static Text. The large language model accepts both visual and textual inputs. The textual inputs include static text. Specifically, static text contains the instruction prompt W_instruct and the conversation history W_history,<τ. The construction process for the instruction prompt is illustrated in Figure 6. The final prompt comprises the static system message (serving as the assistant's instruction) and the conversation history between the speaker (user) and the listener (assistant) up to time τ. This static text is provided to the LLM following the visual coefficients.

```python
messages = [
    {"role": "system", "content": "You are an active participant in a face-to-face dyadic interaction, and you are responding to the other speaker with speech content that aligns with your facial expressions."}
]
messages.append({"role": "user", "content": history["user"]})
messages.append({"role": "assistant", "content": history["assistant"]})
messages.append({"role": "user", "content": dynamic_user_text})
messages.append({"role": "assistant", "content": dynamic_assist_text})
```

Figure 6: Illustration of Prompt Construction. The final prompt (message) is composed of a system prompt, the conversation history from the speaker (user) and listener (assistant), and dynamic speaker/listener text processed by our Chrono-Text Markup module.
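As a concrete illustration of the transformer-based instantiation of M_vis-proj described above, the following is a minimal sketch, not the released implementation; the feature dimension (64 = 52 blendshapes + 12 pose values per frame) and the embedding width (3072, the Phi-3.5 Mini hidden size) are our assumptions.

```python
# Sketch of the cross-attention variant of M_vis-proj:
# listener features attend to speaker features.
import torch
import torch.nn as nn

class VisionProjection(nn.Module):
    def __init__(self, feat_dim: int = 64, embed_dim: int = 3072, num_heads: int = 8):
        super().__init__()
        self.listener_in = nn.Linear(feat_dim, embed_dim)  # listener frames -> queries
        self.speaker_in = nn.Linear(feat_dim, embed_dim)   # speaker frames -> keys/values
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, listener_feats, speaker_feats):
        # both inputs: (B, T, feat_dim) over the window [tau, t-1]
        q = self.listener_in(listener_feats)
        kv = self.speaker_in(speaker_feats)
        fused, _ = self.cross_attn(query=q, key=kv, value=kv)
        return fused  # V_{tau:t-1}: one fused embedding per frame

# Example: a 32-frame window of 64-dim facial coefficients.
proj = VisionProjection()
v = proj(torch.randn(1, 32, 64), torch.randn(1, 32, 64))
print(v.shape)  # torch.Size([1, 32, 3072])
```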
B.2 Chrono-Text Markup

In addition to the static instruction and conversation history, we also supply the model with dynamic text annotated with precise timing information. Figure 6 illustrates how we interleave static and dynamic text when constructing each prompt: the static text preserves long-term context, while the dynamic text encodes exactly when each word occurs and how long silences last. To achieve this, Chrono-Text Markup introduces two special tokens, [PAUSE] and [LASTING], which are inserted into the token stream according to the timestamps in our dataset. At each frame timestamp, if neither the speaker nor the listener is uttering a word, we insert a [PAUSE] token; when speech is present, we emit the actual word tokens (e.g., "do", "think") and then append one or more [LASTING] tokens to occupy the remainder of that word's duration in the timeline. An example is shown in Figure 7.

Figure 7: Example of Dynamic Text, e.g., "[PAUSE] … [PAUSE] Why [LASTING] … do [LASTING] … I [LASTING] … think [LASTING] you're [LASTING] … in [LASTING] my [LASTING] life? [LASTING] … [PAUSE] … Okay. [LASTING] … [PAUSE] … That [LASTING] … brought [LASTING] … up [LASTING] … two [LASTING] things [LASTING] … in [LASTING] my [LASTING] mind. [LASTING]".

The large language model also generates dynamic text predictions. By encoding precise timing information into these text embeddings, the subsequent audio synthesis produces segments that are more tightly synchronized with the spoken content.
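The markup itself is simple enough to sketch in a few lines. The following reflects our reading of the procedure above and is not the released code; the frame rate and the tie-breaking rules at word boundaries are assumptions.

```python
# Sketch of Chrono-Text Markup: given word timings, emit one token per video
# frame, filling silence with [PAUSE] and the remainder of each word's
# duration with [LASTING].
def chrono_text_markup(words, num_frames, fps=25.0):
    """words: list of {"text": str, "start": float, "end": float} in seconds."""
    stream, word_idx, emitted = [], 0, False
    for frame in range(num_frames):
        t = frame / fps
        # advance past words that have already ended
        while word_idx < len(words) and words[word_idx]["end"] <= t:
            word_idx, emitted = word_idx + 1, False
        if word_idx < len(words) and words[word_idx]["start"] <= t:
            if not emitted:            # first frame of the word: emit the word itself
                stream.append(words[word_idx]["text"])
                emitted = True
            else:                      # word still ongoing: pad its duration
                stream.append("[LASTING]")
        else:
            stream.append("[PAUSE]")   # nobody is speaking at this frame
    return stream

words = [{"text": "Why", "start": 0.6, "end": 1.4},
         {"text": "do", "start": 1.4, "end": 1.8}]
print(chrono_text_markup(words, num_frames=50))
```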
Figure 8: Illustration of Multimodal Context Modeling. Each visual token attends to all preceding visual tokens and to static and dynamic text tokens annotated by Chrono-Text markers at earlier timestamps. Similarly, each dynamic text token attends to all past visual and textual tokens, enabling rich cross-modal context integration.

B.3 Multimodal Context Modeling

We partition the input sequence into two streams, static and dynamic, and fuse them with a single causal omni-attention layer (Figure 8).

Static stream. The instruction prompt and the complete conversation history are encoded as global tokens. These tokens are never masked and therefore remain visible to every other token throughout the sequence.

Dynamic stream.
• Visual. Frame-aligned visual embeddings.
• Text. Timestamped speaker- and listener-side tokens annotated with Chrono-Text Markup.

Causal omni-attention. All tokens enter a shared attention block that enforces strict chronology both within and across modalities (a sketch of the resulting mask follows this list):
• Visual tokens attend to earlier visual tokens, to text tokens whose timestamps precede the current frame, and to all static tokens.
• Dynamic text tokens attend to past visual and text tokens, plus all static tokens.
• Future dynamic tokens are masked to preserve temporal integrity.
• The static tokens remain unmasked, providing global semantic guidance at each step.

This design yields temporally coherent cross-modal interactions while maintaining a consistent global context, key qualities for online conversational generation.
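The mask implied by these rules can be derived from two per-token attributes, a static flag and a timestamp. The sketch below is an illustration of this attention pattern under those assumptions, not the released implementation.

```python
# Sketch of the causal omni-attention mask described in B.3.
import torch

def omni_attention_mask(is_static: torch.Tensor, timestamp: torch.Tensor) -> torch.Tensor:
    """is_static: (L,) bool; timestamp: (L,) float (arbitrary for static tokens).
    Returns (L, L) bool where True means "query i may attend to key j"."""
    L = is_static.shape[0]
    # every token sees all static tokens (global semantic context)
    allowed = is_static.unsqueeze(0).expand(L, L).clone()
    # dynamic tokens additionally see dynamic tokens at earlier-or-equal timestamps
    past = timestamp.unsqueeze(0) <= timestamp.unsqueeze(1)  # key_time <= query_time
    dynamic_keys = ~is_static.unsqueeze(0)
    allowed |= past & dynamic_keys
    return allowed

# Two static tokens followed by interleaved visual/text tokens at frames 0 and 1.
is_static = torch.tensor([True, True, False, False, False, False])
ts = torch.tensor([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])
print(omni_attention_mask(is_static, ts).int())
```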
B.4 TempoVoice

TempoVoice is designed to transform generated textual tokens into temporally synchronized audio waveforms. Given the hidden representations corresponding to the listener's text tokens, H_{τ:t}, TempoVoice generates audio tokens A_{⌊τ/k⌋:μ}, which are then converted into continuous audio waveforms using an audio tokenizer. The process is defined as follows:

A_{⌊τ/k⌋:μ} = TempoVoice(P_{⌊τ/k⌋:μ}, [A_voiceprint, H_{τ:t}])    (6)

where P_{⌊τ/k⌋:μ} denotes the positional encodings for positions ⌊τ/k⌋ to μ, A_voiceprint represents the voiceprint embeddings, and H_{τ:t} are the generated textual embeddings over the interval [τ, t]. Here, [·] indicates concatenation along the temporal axis. The resulting audio tokens A_{⌊τ/k⌋:μ} are subsequently transformed into audio waveforms using the BiCodec module from Spark-TTS.

C ResponseNet Dataset

Figure 9: Illustration of Dataset Construction Pipeline (Step 1: data collection with automatic tools and human labor; Step 2: data processing, including video segmentation and filtering, camera-view alignment, face cropping and separation, audio separation, transcription with word-level timestamps, and facial feature extraction; Step 3: data refinement, including de-identification and sanitization, human validation and correction, and quality control and filtering).

C.1 Construction Pipeline

To build the ResponseNet dataset, we design a three-stage pipeline encompassing data collection, data processing, and data refinement, as illustrated in Figure 9. This structured process ensures high-quality, temporally aligned multimodal data suitable for online conversational response generation tasks.

Step 1: Data Collection. We begin by sourcing dyadic conversational videos from diverse public domains, including interviews, podcasts, and online discussions. Candidate videos are selected through a combination of automatic filtering tools and human curation to ensure conversational structure and speaker clarity. These videos include one speaker and one listener. The data is manually labeled for speaker turns, and high-resolution videos are retained for downstream visual analysis.

Step 2: Data Processing. This step extracts synchronized multimodal data from the raw videos. First, Video Segmentation and Filtering isolates segments with clear speaker-listener interaction using face detection and quality heuristics. We apply Camera-view Alignment to standardize the perspective, especially in multi-camera recordings. Next, we conduct Face Cropping and Separation to isolate individual speaker and listener views.
In parallel, the audio track is separated and segmented using speaker diarization and voice activity detection. Because these automatic tools occasionally produce faulty separations, we correct the separated audio tracks manually. We then apply an ASR system (Whisper [50]) for Transcription to obtain timestamped word-level alignments.
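A minimal sketch of this transcription step with the open-source openai-whisper package is shown below; its word_timestamps option yields the per-word start and end times shown in Figure 9. The model size and file name are placeholders, and this is our illustration rather than the authors' exact pipeline.

```python
# Sketch of word-level transcription with openai-whisper.
import whisper

model = whisper.load_model("base")  # placeholder model size
result = model.transcribe("speaker_track.wav", word_timestamps=True)

words = []
for segment in result["segments"]:
    for w in segment["words"]:
        words.append({"text": w["word"].strip(), "start": w["start"], "end": w["end"]})
print(words[:3])  # e.g. [{'text': 'How', 'start': 0.0, 'end': 0.34}, ...]
```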
Subsequently, we extract facial behavior features using MediaPipe [36], yielding per-frame ARKit blendshape coefficients and 3D head pose transformation matrices for both speaker and listener tracks.
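A sketch of how such per-frame features can be obtained with MediaPipe's Face Landmarker task follows; the model asset path is a placeholder, and the paper does not release its exact extraction code, so this only illustrates the general setup.

```python
# Sketch of blendshape and head-pose extraction with MediaPipe Face Landmarker.
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,               # 52 ARKit-style coefficients
    output_facial_transformation_matrixes=True  # 4x4 head-pose matrix
)
landmarker = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("frame_0001.png")
result = landmarker.detect(image)

blendshapes = [c.score for c in result.face_blendshapes[0]]  # length 52
pose = result.facial_transformation_matrixes[0]              # 4x4 matrix
print(len(blendshapes), pose.shape)
```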
Step 3: Data Refinement. To ensure privacy and label accuracy, we conduct multi-level cleaning. First, in the De-identification and Sanitization stage, we mask personally identifiable information (PII) and redact sensitive content from transcripts and audio. Then, Human Validation and Correction is performed to manually inspect and correct transcription errors, alignment mismatches, and feature inconsistencies. Finally, a Quality Control and Filtering phase discards corrupted or ambiguous segments, yielding a clean, high-quality dataset with tightly aligned audio, visual, and textual modalities. Overall, the pipeline enables reliable construction of multimodal dialogue samples with rich facial dynamics and accurate verbal content, supporting the development of real-time response generation models.

C.2 Dataset Statistics

The dataset is partitioned into training, validation, and test splits following the standard ratio of 6:2:2. Specifically, we ensure that the distributions of conversation topics, speaker identities, and recording conditions are balanced across each subset to avoid potential biases and to facilitate robust evaluation. The detailed statistics of the video stream pairs in each split are summarized in Table 5.

Table 5: Data split of video stream pairs in our dataset.

| Split | Number of Video Stream Pairs | Proportion (%) |
|---|---|---|
| Train | 417 | 59.9 |
| Validation | 139 | 20.0 |
| Test | 140 | 20.1 |
| Total | 696 | 100.0 |

Each data sample consists of a synchronized pair of video streams representing a dyadic conversational interaction. The train, validation, and test splits are disjoint with respect to participant pairs to ensure fair evaluation and to prevent data leakage. This stratified partitioning enables comprehensive benchmarking of model performance across diverse conversational scenarios.

C.3 Privacy Considerations

The YouTube platform enforces strict content moderation policies to prevent the dissemination of violent or harmful material [27]. In addition, according to YouTube's copyright guidelines², the use of copyrighted material for research purposes typically qualifies as fair use, permitting reuse without the need for explicit permission from the copyright holder. Together, these factors ensure that our dataset collection and usage align with established privacy and ethical standards.

² https://www.youtube.com/howyoutubeworks/policies/copyright/#fair-use

D Evaluation Protocol

D.1 Evaluation Metrics

Quantitative evaluation of multimodal response generation is inherently challenging due to the need to assess multiple aspects of quality across different modalities. To provide a comprehensive assessment, we employ a suite of metrics spanning text, audio, and visual outputs.

Text Metrics.
• METEOR [7]: Measures the alignment between generated and reference responses by considering synonymy, stemming, and word order, providing a nuanced evaluation of semantic adequacy.
• BERTScore F1 [70]: Calculates the F1 score of semantic similarity between generated and reference texts by comparing their contextual embeddings from a pretrained RoBERTa model [35], providing a robust measure of semantic fidelity.
• ROUGE-L [32]: Evaluates the longest common subsequence between generated and reference responses, reflecting fluency and content overlap.
• Distinct-2 [30]: Calculates the proportion of unique bi-grams in the generated responses, serving as an indicator of output diversity and lexical richness (see the sketch after this list).
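Distinct-2 is simple enough to state exactly. The following sketch is one standard reading of the metric (a corpus-level unique-bigram ratio); implementations may differ in tokenization details, so treat it as illustrative.

```python
# Distinct-2: ratio of unique bigrams to total bigrams across responses.
def distinct_2(responses):
    bigrams, total = set(), 0
    for response in responses:
        tokens = response.split()
        for pair in zip(tokens, tokens[1:]):
            bigrams.add(pair)
            total += 1
    return len(bigrams) / total if total else 0.0

print(distinct_2(["that is great", "that is true"]))  # 3 unique / 4 total = 0.75
```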
Audio Metrics.
• UTMOSv2 [5]: A two-stage neural MOS predictor for high-quality synthetic speech that fuses EfficientNetV2 spectrogram embeddings with SSL-based speech representations; the resulting model achieves state-of-the-art correlation with human ratings of naturalness and intelligibility.
• LSE-D (Lip-Speech Error Distance) [49, 13]: Measures the temporal alignment and synchronization between generated audio and corresponding lip movements, reflecting audio-visual coherence.

Visual Metrics.
• Fréchet Distance (FD) [4]: Computes the distributional distance between real and generated facial feature embeddings, assessing the realism of static visual features.
• Fréchet Video Distance (FVD) [61]: Quantifies the spatial-temporal quality of generated video sequences by comparing their feature distributions to those of real videos, thus evaluating overall video realism and consistency.

By leveraging these complementary metrics, we are able to rigorously assess the naturalness, diversity, and synchronization of generated responses across modalities, enabling a thorough benchmarking of model performance on the ResponseNet test set.

D.2 Baseline Methods

As this is the first work addressing the OMCRG task, we compare OmniResponse with a diverse set of prior methods that target single-modality generation, as well as several multimodal baselines. Specifically, we include the following baselines:

• Offline Text Dialogue Generation Systems: State-of-the-art large language models, including GPT-4o, GPT-4, and GPT-o1 [2], are evaluated for their ability to generate text responses in offline settings. These models only produce text outputs, without audio or visual generation.
• Online Auditory Dialogue Generation System: Moshi [14] is adopted as a representative model for generating spoken responses in real time, focusing exclusively on audio outputs.
• Facial Reaction Generation Systems: ReactFace [37] and ViCo [75] serve as facial reaction generation baselines, producing only visual (facial) responses based on the conversational context.
• Online Multimodal Conversational Baselines: To provide a direct comparison for OMCRG, we construct two multimodal baselines:
  1. An LSTM-based model [19] employing a recurrent neural network for temporal sequence modeling across modalities. The LSTM takes the speaker's visual, audio, and text modalities as inputs, and outputs the listener's visual and audio modalities.
  2. An Audio-visual LLM baseline that takes both speaker and listener audio-visual inputs and autoregressively generates the listener's audio-visual responses via a pre-trained large language model [1].

While previous approaches focus primarily on generating a single modality, OmniResponse is designed to produce synchronized and coherent responses across text, audio, and visual channels in an online setting.

E Risks and Potential Misuse

This system is developed for multi-modal conversational AI, but certain risks should be acknowledged. For instance, realistic synthetic content could be misused for impersonation or the spread of misleading information. During real-time human-user interactions, users may also develop misunderstandings or an excessive reliance on the system without proper content control. To mitigate these risks, we recommend clear labeling of generated content, appropriate usage monitoring, and the inclusion of protective measures against potential misuse.
F Limitations

While our approach performs well on the evaluated datasets, several challenges remain. Its results may depend heavily on the quality and diversity of the training data, and it relies on accurate speaker-listener segmentation, so it can be negatively affected by noisy or overlapping conversations. Additionally, generating well-aligned multi-modal responses remains difficult in fast-changing or emotionally rich interactions, and our paper lacks a fairness analysis. Future work will focus on improving these aspects.
G Broader Impacts

Our work contributes to the development of more intuitive and responsive multi-modal dialogue systems, with potential applications in education, healthcare, assistive communication, and companionship. These technologies may improve access to information, support inclusive interaction, and enhance user experience across diverse contexts and scenarios. We encourage responsible research practices that prioritize transparency, user safety, and alignment with social values to ensure that such systems serve the public good.

H Responsibility and License

We acknowledge full responsibility in the event of any rights infringement. The dataset is distributed under the Creative Commons CC BY-NC-SA license, permitting use with attribution for non-commercial purposes and requiring derivative works to be shared alike.
Deep Reinforcement Learning Agents are not even close to Human Intelligence*

Quentin Delfosse¹,²† (quentin.delfosse@tu-darmstadt.de), Jannis Blüml¹,³† (jannis.blueml@tu-darmstadt.de), Fabian Tatai⁴,⁵, Théo Vincent¹,⁶, Bjarne Gregori¹, Elisabeth Dillies⁷, Jan Peters¹,³,⁴,⁶, Constantin Rothkopf¹,³,⁴,⁵, Kristian Kersting¹,³,⁴,⁶

¹Department of Computer Science, Technical University Darmstadt, Germany
²National Research Center for Applied Cybersecurity (ATHENE), Germany
³Hessian Center for Artificial Intelligence (hessian.AI)
⁴Centre for Cognitive Science, Darmstadt
⁵Institute of Psychology, Technical University Darmstadt, Germany
⁶German Research Center for Artificial Intelligence (DFKI)
⁷Sorbonne Université, Paris, France

* In this paper, "intelligence" specifically refers to the ability to adapt to structural simplifications in relational reasoning tasks, which humans handle with ease but deep RL agents systematically fail to generalize to.
† Equal contribution.
Preprint. Under review.

Abstract

Deep reinforcement learning (RL) agents achieve impressive results in a wide variety of tasks, but they lack zero-shot adaptation capabilities. While most robustness evaluations focus on task complexifications, on which humans also struggle to maintain their performance, no evaluation has been performed on task simplifications. To tackle this issue, we introduce HackAtari, a set of task variations of the Arcade Learning Environments. We use it to demonstrate that, contrary to humans, RL agents systematically exhibit huge performance drops on simpler versions of their training tasks, uncovering agents' consistent reliance on shortcuts. Our analysis across multiple algorithms and architectures highlights the persistent gap between RL agents and human behavioral intelligence, underscoring the need for new benchmarks and methodologies that enforce systematic generalization testing beyond static evaluation protocols. Training and testing in the same environment is not enough to obtain agents equipped with human-like intelligence.

1 Introduction

Deep reinforcement learning (RL) has become a key technique for training agents to solve relational reasoning tasks directly from high-dimensional sensory inputs (Zambaldi et al., 2018). In these tasks, agents must identify distinct entities, infer their relationships, and model their dynamics to derive effective decision-making policies. The Arcade Learning Environment (ALE) (Bellemare et al., 2013) is the most widely used benchmark for evaluating RL algorithms in this setting, offering a diverse collection of Atari 2600 games that span a broad range of perceptual and strategic challenges, including spatial reasoning, long-term planning, and real-time reaction. In their seminal work, Mnih et al. (2015) introduced the first deep RL method able to solve many ALE games and claimed that "agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations". Inspired by the brain's visual processing mechanisms, they incorporated convolutional neural networks to extract spatial features from raw pixels. Deep RL algorithms have then been improved and finally achieved superhuman performance in all Atari games (Badia et al., 2020a,b).
However, the capacity of these agents to "generalize experiences from past situations to novel scenarios", which is core to relational reasoning, is rarely assessed by RL practitioners. Traditional benchmarks typically involve training and evaluating agents within identical environments, thereby masking their reliance on shortcuts and spurious correlations. This is a particularly troubling issue, given recent evidence that RL agents often exploit superficial shortcuts
rather than learning robust, causal strategies (Ilyas et al., 2019; Geirhos et al., 2020; Chan et al., 2020; Koch et al., 2021; Delfosse et al., 2024b). This reliance on shortcuts has lately been uncovered in the simplest Atari game, Pong (depicted in Figure 1). In this game, the agent's enemy follows a deterministic behavior based on the position of the ball. Delfosse et al. (2024b) have thus exposed that deep and symbolic RL agents learn to rely on the enemy's position to catch and return the ball optimally. Hiding the enemy or altering its behavior thus leads to a performance drop. While the inability of DQN agents to zero-shot generalize to complexified versions of their training environment has been showcased (Farebrother et al., 2018), there has not yet been any systematic evaluation of the potential misalignments of RL agents. To assess whether RL agents learn the right behaviors, we evaluate them on simplified task variations that humans can easily adapt to, such as color changes or gameplay simplifications, ensuring that performance drops reveal flawed reasoning rather than increased task difficulty.

In this article, we demonstrate that current deep and symbolic RL agents systematically fail to solve simplified variants of their training tasks, supporting that human-level performance in training settings does not imply human-like reasoning capabilities. We present a systematic investigation of generalization failures in RL using controlled environment modifications, most of which simplify the original tasks. We release HackAtari³, a suite of environment variations for the most widely used RL benchmark: the Arcade Learning Environments (Bellemare et al., 2013). We argue that RL agents should be evaluated on held-out task variations, a common practice in machine learning, to enable meaningful comparisons with human intelligence, particularly in relational reasoning domains. Our main contributions can be summarized as follows:
(i) We show that deep RL agents fail to generalize to task simplifications and consistently rely on shortcut learning, selecting the right actions for the wrong reasons, regardless of the algorithm.
(ii) We show that symbolic/object-centric agents exhibit better adaptation capabilities, but are not yet able to match those of humans.
(iii) We conduct a user study to assess users' abilities to maintain their performance on many of these variations, and thus provide performance references for these simplifications.
(iv) We open-source a wide range of variations for the Arcade Learning Environments, opening up new possibilities for designing and testing RL agents with supposed human-like intelligence.

In the following, we present the different RL agents' architectures and algorithms used in this article. We then introduce our HackAtari framework and use it to evaluate RL agents' generalization ability.

2 The Quest for General RL algorithms

The objective of relational reasoning RL is to develop general agents with human-comparable problem-solving skills. This goal encompasses two key subgoals: (1) designing a single, versatile algorithm that can be applied across a wide range of tasks, and (2) enabling trained agents to generalize effectively to variations of their training tasks. This work focuses on the second aspect of generality. We here evaluate two prominent classes of agents: deep RL agents and neurosymbolic agents.
Task agnostic deep agents often struggle with overfitting and generalization to task variations (Farebrother et al., 2018). In contrast, neurosymbolic agents introduce inductive biases by representing environments in terms of objects and their interactions, supporting abstract and transferable representations.

³ HackAtari is available at https://github.com/k4ntz/HackAtari.

Task agnostic deep RL agents have been introduced with the Deep Q-Networks (DQN) (Mnih et al., 2015) algorithm, demonstrating that a single convolutional architecture could learn to play a variety of Atari games directly from pixel inputs, establishing a foundation for general-purpose deep RL. Subsequent improvements extended DQN along several dimensions: C51 (Bellemare et al., 2017) returns distributions to better capture uncertainty. M-DQN (Vieillard et al., 2020) introduces entropy-regularized rewards to improve training stability. Rainbow (Hessel et al., 2018) combines several of these enhancements into a widely used baseline. i-DQN (Vincent et al., 2025) enables multiple consecutive Bellman updates through a sequence of learned action-value functions. On-policy methods like PPO (Schulman et al., 2017) and distributed frameworks such as IMPALA (Espeholt et al., 2018) offer scalable alternatives with strong empirical performance. More recently, model-based agents like Dreamer (Hafner et al., 2020) learn latent dynamics models and perform planning in imagination, yielding greater sample efficiency and improved extrapolation in high-dimensional visual environments. The ease of adaptation of deep RL algorithms to novel tasks, requiring no expert knowledge injection, explains their widespread adoption in relational RL (Shaheen et al., 2025).

Object-centric and neurosymbolic agents incorporate stronger inductive biases through structured representations but require specific domain adaptation. Early work on object-oriented MDPs (Diuk et al., 2008) introduced state decompositions based on object attributes and relations, enabling policies to reason abstractly over entities rather than raw pixel arrays. Recent methods build on this idea to improve both interpretability and generalization. Unsupervised object extraction methods (Lin et al., 2020; Delfosse et al., 2023b) have allowed RL practitioners to develop symbolic policies. Jiang & Luo (2019) and Delfosse et al. (2023a) represent policies as sets of logic rules over symbolic object states, showing transfer to environments with varying object types and counts. Shindo et al. (2025) leverage large language models to autonomously generate predicates from visual inputs and integrate symbolic reasoning with deep learning in a hybrid policy architecture. Other approaches constrain object-centric policy representations to interpretable structures such as polynomials (Luo et al., 2024), decision trees (Bastani et al., 2018; Delfosse et al., 2024b; Marton et al., 2025), or programs (Cao et al., 2022; Liu et al., 2023; Kohler et al., 2024). These methods are usually also tested in tasks that require extrapolating to unseen object configurations or recombining learned behaviors across novel layouts. Finally, object-centric deep agents have been developed. They make use of object-centric masking (Davidson & Lake, 2020; Blüml et al., 2025), removing background information to force attention on dynamic entities without relying on explicit symbolic conversion.
3 Creating ALE variations

Our main objective is to show that relational reasoning RL agents consistently learn misaligned policies, i.e., that they rely on shortcuts.
As most RL agents trained on such tasks encode their policies within black-box neural networks, we cannot make the reasons behind their action selections explicit. While existing explainability techniques, such as importance maps, help identify the decisive input zones, they do not explain the core reasoning. As outlined by Delfosse et al. (2024b), deep PPO agents trained on Pong highlight the ball as well as both paddles. This lures external viewers into thinking that the agent attends to all the relevant objects. In contrast, it uses the position of the enemy paddle to estimate the vertical position of the ball. The only way to explicitly exhibit these agents' reliance on shortcuts is to alter the environment. Evaluations on task variations have been conducted in the past on more complex environments (Farebrother et al., 2018). However, such performance drops do not imply the agent's misalignment, as they could stem from poor adaptation capabilities. To assert that relational reasoning agents learn incorrect policies, and have not mastered the desired set of skills necessary to solve the game, we need to test them on task simplifications, i.e., tasks on which humans increase or maintain their overall performance.

We here introduce HackAtari, an extension of the Arcade Learning Environment (ALE) that provides different task modifications (mainly simplifications) to evaluate RL agents' true relational reasoning abilities. Since the source code of the Atari games used in the ALE is proprietary and not publicly available, direct modifications to game logic or mechanics at the code level are not feasible. As a result, the only practical method to alter or simplify these environments for experimental purposes is through direct manipulation of their Random Access Memory (RAM) states. Thus, modifications are implemented through direct alterations of the game's RAM, allowing diverse controlled perturbations. Specifically, we created visual alterations, such as changing or obscuring object colors, forcing agents to rely less on superficial visual features. Additionally, we adjusted gameplay dynamics, including modifying the speed or presence of objects and enemies, or introducing new mechanics such as gravity effects.

Figure 2: Examples of HackAtari simple task variations. Top: the original Atari games used to train RL agents. Bottom: simplifications (i.e., variations for which human performances do not drop). These include color changes and gameplay shifts. Superposed frames show the game dynamics. Descriptions of more environments and their variations are provided in Appendix G.

Figure 1: RAM alteration allows for modified environments, here exemplified on Pong. Altering specific RAM cells leads to an enemy remaining static after it returned the ball.

This is exemplified in Figure 1 on Pong. In the default game, the brown enemy is programmed to go down if the ball is below its paddle and up if the ball is above. There is thus a high correlation between the enemy's and the ball's vertical positions, hence a misalignment opportunity. To create the LazyEnemy variation, we first identify the RAM cell that controls the ball's horizontal speed. When this value is positive, indicating that the ball is moving toward the green agent, we overwrite the enemy's vertical position with its previous value. This makes the enemy remain static whenever the ball approaches the agent. We can thus evaluate potential RL agent misalignments (a simplified sketch of this mechanism follows below).
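The general mechanism can be illustrated with ale-py's RAM accessors through Gymnasium. The sketch below is a simplified illustration, not HackAtari's actual code: the RAM address is hypothetical, and the real LazyEnemy variation only freezes the enemy while the ball moves toward the agent.

```python
# Illustration of the RAM-editing idea behind HackAtari (simplified).
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the ALE/... environments
env = gym.make("ALE/Pong-v5")
env.reset(seed=0)

ENEMY_Y = 54  # hypothetical RAM cell holding the enemy paddle's vertical position
frozen_y = int(env.unwrapped.ale.getRAM()[ENEMY_Y])

for _ in range(1000):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    # LazyEnemy-style hack: keep overwriting the enemy's position with its
    # previous value so that the enemy paddle remains static.
    env.unwrapped.ale.setRAM(ENEMY_Y, frozen_y)
    if terminated or truncated:
        env.reset()
```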
the enemy’s vertical po- sition with its previous value. This makes the enemy remain static whenever the ball approaches the agent. We can thus evaluate potential RL agents misalignments. Let us now provide further examples of tasks variations included in HackAtari, most of which are illustrated in Figure 2. NoDanger (Kangaroo). In Kangaroo, the player controls a mother Kangaroo that needs to reach her joey (top left). Monkeys will come to punch her and throw coconuts. To collect points, mother Kangaroo can punch the enemies, collect fruits and reach her baby. In the NoDanger variations, deadly monkeys and coconuts are deactivated. The player can safely navigate to the joey. StableBlocks (Frostbite). In Frostbite, the player builds an igloo by jumping on moving floating ice blocks while avoiding deadly predators. In the modified task, the ice blocks are aligned and static. StoppedCars (Freeway). In Freeway, the agent’s goal is to have the (left) controlled chicken cross the highway. If the chicken collides with a car, it is sent down and immobilized for a few seconds. In this variation, cars are completely stopped, making crossing trivial. Maze2 (MsPacman). In MsPacman, Pacman needs to navigate within a maze to consuming all the dots while avoiding the four pursuing ghosts. In this variation, the maze layout is modified. RestrictedFire (RiverRaid). Players here pilot a fighter jet over a river, aiming to destroy enemy targets while avoiding collisions with riverbanks and obstacles. In this variation, the agent can only shoot in case of unavoidable obstacles. It can collects points by dodging and overcoming the enemies. 4 ShiftedShields (SpaceInvaders). In SpaceInvaders, we shift the protective shields horizontally by 1 or by 3pixels to the right. This type of change adjusts the environment in a very minimal way, which is often not even noticeable by humans. It does not significantly alter the core gameplay or difficulty. At the time of writing, HackAtari incorporates more than 224variations in total, spanning over more than 33games of the original Arcade Learning Environments. We provide illustrations and descriptions of the variations in Appendix G. As HackAtari creates tasks variations by altering the RAM, these modify or ablate a specific part of the gameplay. Thus, most variations simplify (i.e.should lead humans to at least maintain their performances) the original tasks, which is necessary for showing that RL agents learn misaligned behaviors (using shortcuts). Further, modifying the RAM values adds a negligible time overhead on top of the Atari emulator, that makes training and testing on the variations as fast as on the original games. Finally, ALE already incorporates some variations that augment some games’ difficulty levels, that are of course integrated in HackAtari. Overall, these capabilities collectively position HackAtari as a valuable resource for developing relational reasoning RL agents that are robust, adaptable, and capable of generalizing their policies beyond their initial training conditions. 4 Experimental Evaluation Let us now empirically evaluate several aspects of deep and object-centric RL agents’ generalization capabilities, focusing specifically on their ability to handle simplified variations of their original training tasks. We here aim to answer the following research
|
https://arxiv.org/abs/2505.21731v1
|
(Q1) Do RL agents' performances drop on HackAtari task variations? (Q2) Can humans easily adapt to such task variations? (Q3) Are deep agents systematically learning shortcuts on relational reasoning tasks? (Q4) Do human inductive biases help align RL agents?

Experimental Setup. We evaluate a diverse set of RL agents, including standard value-based methods: DQN (Mnih et al., 2015), C51 (Bellemare et al., 2017), and i-DQN (Vincent et al., 2025), and policy-gradient methods: PPO (Schulman et al., 2017) and IMPALA (Espeholt et al., 2018). All agents use the nature-CNN neural network, except for IMPALA, whose authors have come up with a more complex neural network using neural architecture search. We also evaluate several object-centric variants: OCCAM (Blüml et al., 2025), object-centric agents using neural networks (Delfosse et al., 2024a), and SCoBots (Delfosse et al., 2024b). Most tested agents have been collected from their official repositories, with only the deep PPO and OCNN variants trained in-house using the CleanRL framework (Huang et al., 2022). Detailed training settings, architectures, and source references for each agent are provided in Appendix B. As commonly done, all agents are trained for 200 million frames on the Atari environments (v5) using a fixed frameskip of 4 and a repeat action probability of 0.25, following established evaluation protocols (Machado et al., 2018). A complete hyperparameter list can be found in the extended experimental setup (cf. Appendix D).

To assess generalization, we evaluate each agent on HackAtari's controlled environment variations, some of which are described in Section 3. The evaluation is conducted over 30 episodes per game per agent, using at least 3 different evaluation seeds per agent. This resource-limited evaluation suffices to assert the consistent performance drops of the agents (as shown by the confidence intervals (CI) in Appendix D.1). For metrics, we report the Expert-Human Normalized Score (E-HNS), with expert scores borrowed from Badia et al. (2020a) and random scores evaluated in-house both on the original tasks and on the task variations (cf. Table D.3), following Agarwal et al. (2021), as well as the Performance Drop, aggregated using the inter-quartile mean (IQM) and 95% CIs. The Performance Drop is positive if the agent's performance increases, null if the performance does not change, and negative if the performance drops. Note that in the computation of these metrics, the average score obtained by random agents (on the original task or on the task variation) is ablated. These metrics enable robust comparisons of the ability of agents to generalize to task variations. For further details on the metrics, cf. Appendix A.
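For reference, the random-ablated normalization we read from this paragraph can be written as follows; the function and variable names, as well as the example numbers, are ours, not the paper's.

```python
# Expert-human normalized score and performance drop, as we read Section 4.
def hns(agent, random, human):
    """E-HNS: 1.0 corresponds to expert-human play, 0.0 to random play."""
    return (agent - random) / (human - random)

def performance_drop(score_orig, score_variant, random_orig, random_variant):
    """Relative change of the random-ablated score between the original task
    and its HackAtari variation; negative values indicate a drop."""
    base = score_orig - random_orig
    return ((score_variant - random_variant) - base) / abs(base)

print(hns(agent=15.0, random=-20.0, human=20.0))        # 0.875
print(performance_drop(15.0, -5.0, -20.0, -20.0))       # ~ -0.57
```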
To evaluate the effect of the game variations on human performance, we conducted an online study with 128 subjects on 15 games and one variation for each (on the Prolific platform). We included English-speaking participants from around the world and all age groups (mean age: 33, range: [18-73]).

Figure 3: Deep and symbolic RL agents' performances drop on HackAtari variations, illustrated by the IQM (following rliable (Agarwal et al., 2021)) over the human normalized scores (HNS) of the evaluated RL agents (DQN, M-DQN, C51, i-DQN, IMPALA, PPO, Binary Mask, Planes, OC-NN, SCoBots) on a total set of 32 task variations (over 17 games). IQMs are computed over 3 seeded trained agents (30 evaluations each). Expert-human scores are borrowed from Badia et al. (2020a). Performance in the original environment is plotted filled, while the performance in the modified environment is plotted hatched. Raw IQM scores (with CIs) for each agent on each game (original and variations) and extended results are provided in Appendix D.1 and D.2.

The subjects were allowed to train for 10 to 15 minutes on the original game. They were then evaluated for 15 minutes on the original task, and finally for 15 minutes on a HackAtari variation (for the game and variation list, cf. Appendix D.3). Each subject was only trained and evaluated on a single game and its variation. Unlike Badia et al. (2020a), we have not selected professional gamers, to avoid biased evaluations potentially stemming from such users' ability to perform well on any video game task (which would not directly measure their ability to adapt to the variations), but rather measured their overall zero-shot (or no-training) performances. For more details on the evaluation protocol and demographics, cf. Appendix F. Let us now demonstrate that current RL agents consistently learn shortcuts and fail to adapt to task simplifications.

RL agents' performances drop on most HackAtari task variations (Q1). We first evaluate all the available RL algorithms on a subset of 17 ALE games (the ones with publicly available agents; C51 was only available for 12 games), with 1 to 4 variations per game. Figure 3 depicts the expert-human normalized scores, averaged using IQMs, for each algorithm, on the 17 original games and on the 30 modifications. Every RL agent exhibits a significant performance drop over the complete set of tested variations. Remarkably, SCoBots agents exhibit the lowest E-HNS performance drop (20% overall), and IMPALA maintains an average superhuman performance on the game variations. This outlines that changing the architecture or inference paradigm has a bigger impact on the ability of agents to adapt to task variations than varying the algorithm. However, we still need to demonstrate that these agents learn misaligned policies that prevent them from adapting to simplifications, and do not fail on the variations because of potentially increased game complexity.

Human adaptation to simplifications far exceeds RL agents' (Q2). Our claim that RL agents consistently learn misaligned policies when trained on relational reasoning tasks relies on the fact that, contrary to previous work, we evaluate these agents on task simplifications (here meant as variations for which human performances do not deteriorate). We thus first need to evaluate human users' ability to adapt their learned relational reasoning policies to the HackAtari task variations. We thus selected 15 games, with one variation for each, for which most agents exhibit performance drops while we expect humans to improve or maintain their performances. We randomly selected 134 users (at least 8 per game), trained on the original ALE games, then evaluated them on the original version and on the variations. We evaluate the performance changes of each agent type on each game. Note that the performance change depicts the change in performance (between the original ALE task and the selected HackAtari variation) of each agent.
This metric does not allow for comparing the performances of the different agents, either on the original task or on the variation, but measures individual performance variations. Figure 4 shows that human performance drastically increases on 11 games, notably increases on 2 more, slightly decreases on BankHeist, and notably decreases on MsPacman. The modification used for BankHeist spawns police cars that constantly chase the player, but increases the reward obtained by successfully robbing a bank. Depending on the humans' ability to escape the chasing police, this can lead to either a performance increase or a drop (cf. Appendix F.6). For MsPacman, humans exhibit performance drops on the maze layout changes.

Figure 4: While humans easily adapt to task simplifications, deep agents' performances drop, illustrated on 15 ALE games. Non-expert users and deep RL agents are trained and evaluated on the original ALE environment, then presented with a variation of the task. Left: variations considered as task simplifications by design. Right: variations for which little or no performance increase is expected. Games for which no C51 agent is publicly available are marked with ×. For the exact performances of humans and deep agents, cf. Appendix D.1 and D.3.

This is likely due to some fatigue faced by the participants (MsPacman is one of the most complex and stressful games), as most users' performance already slightly decreases between the training phase and the first evaluation. It could also stem from users overfitting their strategies: they may learn to favor a secure path that does not transfer across maze layout changes. Further investigation is needed to determine how much each factor impacts the performance changes. However, the users' performance drop is still significantly lower than that of all the available deep agents on MsPacman. Overall, the descriptions of the applied changes provided in Section 3 and Appendix G, together with the user study, theoretically and experimentally support that these 15 task variations are simplifications.

Deep agents fail to adapt to task simplifications (Q3). Let us now investigate whether deep agents can adapt to task simplifications. We evaluate several widely-used algorithms, including DQN, PPO, C51 (not available for all games), IMPALA, i-DQN, and MDQN, across many original Atari environments and some simplified variations included in HackAtari. Figure 4 shows that, contrary to humans, all agents exhibit severe performance drops on most of the task variations. Even IMPALA's performance drops by more than 50% on 10 out of 15 games. All other algorithms exhibit even higher performance drops (also confirmed by Figure 3). Deep RL agents thus cannot maintain their performance on most task simplifications, demonstrating their inability to learn meaningful, correctly aligned policies.

The object-centric inductive bias is not enough (Q4). Equipping RL agents with more human inductive biases could help narrow the gap between deep agents and humans. The most common bias is object-centricity, with agents that first extract the depicted objects and their properties (such as position, color, size) from the pixel states.
Such representations have been shown to improve transferability and interpretability in structured environments (Shindo et al., 2025). We thus evaluate whether introducing object-centric representations enhances the agents' ability to generalize to simplifications. Figure 5 shows the performance change for the object-centric agents. As these agents are all trained using PPO, we also include a deep (i.e., Nature-CNN-based) PPO baseline. As expected, these agents are insensitive to visual perturbations (e.g., color changes). Notably, the decision-tree-based SCoBots agents have limited performance drops (<30%) on 9 out of 15 games. However, even on most tasks with gameplay alterations, all object-centric agents exhibit high performance drops. Overall, even if these symbolic architectures generally adapt better than the deep pixel-based PPO baseline, they still exhibit notable performance drops. These findings suggest that while object-centricity helps, it alone is insufficient for robust generalization to simpler tasks. Overall, our analysis demonstrates the global incapacity of both deep and symbolic agents to adapt to task simplifications, as they often completely overfit to their training tasks. RL agents do not "derive efficient representations allowing to generalize past experience to new situations" (Mnih et al., 2015).

Figure 5: Object-centric RL agents also fail to adapt to simplified environments. Different object-centric approaches (all using PPO) are compared to the classical CNN baseline on the same 15 variations (as in Figure 4). Visual perturbations (e.g., color changes, left) have a very limited impact on the symbolic agents, while most gameplay modifications (right) still cannot be solved by these agents. Extended results are available in Appendix D.2.

5 Related Work

Generalization Benchmarks and Failures. As shown here, generalization remains a core unsolved problem in reinforcement learning. Traditional evaluations often focus on performance in fixed environments, leading to overfitting and a poor understanding of agents' robustness. Extensions to the Arcade Learning Environment (ALE) such as sticky actions (Machado et al., 2018), unseen modes (Cobbe et al., 2019) and action noise (Koch et al., 2021) introduced limited variability to probe robustness. Zhang et al. (2018) also focus on perturbations of the observation space of the ALE by evaluating trained agents against abrupt background changes. More ambitious frameworks like CoinRun (Cobbe et al., 2019) and Procgen (Cobbe et al., 2020) procedurally generate diverse levels, but this does not prevent misalignment (di Langosco et al., 2022). Crafter (Hafner, 2022) and MineRL (Guss et al., 2019; Milani et al., 2020) further emphasize compositionality and long-term credit assignment. However, these benchmarks increase perceptual or structural complexity, making it difficult to disentangle poor adaptation from underlying misalignment. In contrast, our work mainly introduces simplifications that preserve task semantics while reducing difficulty, revealing performance failures even in settings where humans readily adapt.

Misalignment and Shortcut Learning.
We show that RL agents often succeed by exploiting spurious correlations or shallow heuristics, rather than learning task-aligned policies.
This phenomenon, known as shortcut learning (Geirhos et al., 2020), has been documented in supervised vision (Ilyas et al., 2019; Stammer et al., 2021) and increasingly in RL (Zhang et al., 2018; Cobbe et al., 2020; Koch et al., 2021; Delfosse et al., 2024b). Recent interpretability efforts, such as saliency maps and attention overlays, suggest that agents attend to task-relevant regions (Greydanus et al., 2018), yet fail to capture the latent reasoning process. In Pong, for instance, agents may appear to track the ball but instead use the opponent's behavior as a proxy (Delfosse et al., 2024b). While robustness techniques such as adversarial regularization (Pinto et al., 2017) and domain randomization (Tobin et al., 2017) offer partial defenses, they do not prevent agents from failing when irrelevant cues are removed. By evaluating agents on simplified tasks, where human performance is stable or improves, we isolate misalignment not as a failure to adapt, but as a failure to learn to select actions for the right reasons.

6 Towards RL policies aligned with the true task goals

Evaluating agents' robustness through simplifications to better reflect human-like reasoning. Robust reinforcement learning is often framed as worst-case optimization (Tamar et al., 2014; Moos et al., 2022) or adversarial training (Pinto et al., 2017), with recent work addressing observation perturbations through network sparsity (Grooten et al., 2023) or Lipschitz constraints (Barbara et al., 2024). However, RL agents often fail on simplified versions of their training tasks, settings where humans adapt effortlessly. This reveals a reliance on superficial cues rather than robust relational reasoning. We argue that testing agents on logically simpler variations is essential for evaluating true generalization. HackAtari allows for testing whether agents truly understand the task, challenging the common assumption, reflected in metrics like the Human Normalized Score, that achieving human-level performance implies human-like reasoning. We advocate for benchmarks that explicitly test relational understanding, as those appearing in supervised learning settings (Helff et al., 2025; Wüst et al., 2025), and for integrating human inductive biases to align RL agents with task structure.

Incorporating human inductive biases is crucial for the rise of aligned RL agents. Our work demonstrates that relying on extensive compute, large numbers of parameters, and long training does not guarantee that deep RL agents learn coherent representations of the tasks, aligned with the intended task goals. While the dopaminergic system in humans partly resembles model-free reinforcement learning (Schultz et al., 1997), humans make use of rich model-based representations of their environment (Tenenbaum et al., 2011; Daw et al., 2011). Several studies provide guidance on how these findings from cognitive science about human intelligence could be incorporated into intelligent agents (Lake et al., 2017; Zhu et al., 2020). Besides using structured representations and planning in navigation (Kessler et al., 2024), humans have also been shown to have coherent but noisy models of their physical environment (Kubricht et al., 2017; Battaglia et al., 2013), aiding inference over object properties (Neupärtl et al., 2020), learning tool use (Allen et al., 2020), and choosing actions (Tatai et al., 2025).
Further, humans also decompose complex tasks into sequences of high-level actions, and learn skills that correspond to the different sub-goals of such tasks. Hierarchical RL has been shown to improve adaptation to task variations by learning reusable sub-policies or skill hierarchies (Bacon et al., 2017; Hausman et al., 2018; Li et al., 2020; Cannon & Şimşek, 2025). Humans also model causal object interactions (Michotte, 1963) and can employ abstract and qualitative reasoning about physical situations (Forbus, 1988). Recent approaches try to incorporate causal world models within RL agents (Yang et al., 2025; Dillies et al., 2025), applicable to multi-objective tasks (Bhatija et al., 2025). Additionally, humans represent the beliefs and desires of other agents (Dennett, 1989), mathematically formalizable as inverse reinforcement learning (IRL) (Baker et al., 2009). IRL can infer uncertainties about others' behaviors and the consequential reward signals (Dimitrakakis & Rothkopf, 2011; Straub et al., 2023), relevant in, e.g., Kangaroo, where agents prioritize enemy-killing over child-rescue, showcasing the misalignment between RL agents' and human goals. Incorporating the common reasoning abilities of language models to extract reward signals from object-centric states could help tackle this problem (Kaufmann et al., 2024).

7 Limitations and Future Work

While HackAtari offers valuable insights into RL generalization, it is limited by its reliance on ALE, which restricts evaluation to visually stylized environments that may not transfer to more complex or realistic domains. As the benchmark gains popularity, there is also a risk that agents will overfit to its specific perturbations, undermining its diagnostic value, even if we believe that, with enough variations for a given task, agents able to solve all variations while learning only on the original task are likely to be aligned with the primary desired task objectives. Finally, the absence of standardized difficulty scaling across variations can make it difficult to interpret the significance of observed performance drops. We hope that, overall, this analysis and the release of the HackAtari variations will motivate RL practitioners to develop agents that can maintain performance on task simplifications, thus learning correctly aligned policies. We believe that systematic evaluations should also be run on held-out test sets for other, potentially more complex RL benchmarks. Towards obtaining correctly aligned agents, IMPALA stands out for maintaining superhuman performance under task variations, likely due to its off-policy V-trace correction, distributed data collection, and smoother policy updates. Understanding how these elements support generalization may guide the development of more task-aligned deep RL agents. However, we believe that object-centricity, and in general the inclusion of human inductive biases, represents a better avenue towards RL agents that learn to select the right action for the right reasons.

8 Conclusion

While Mnih et al. (2015) emphasized that "reinforcement learning agents must learn efficient representations from high-dimensional sensory inputs and generalize from past experience to new situations", our results demonstrate that current deep and symbolic RL agents largely fail to meet this objective. To investigate this issue systematically, we introduce HackAtari, an open-source benchmark comprising over 200 task variations based on the Arcade Learning Environment, the most widely used evaluation suite in deep RL.
HackAtari is designed to go beyond in-distribution evaluation by allowing researchers to test agents on slight but targeted modifications of familiar tasks. These variations, such as changes in color schemes or simplified game dynamics, are typically trivial for humans to adapt to but often reveal the brittleness of learned policies. We hope that HackAtari will serve as a diagnostic tool for evaluating whether RL agents have truly internalized the relational structure of their environments and whether they select actions for the right, generalizable reasons. Finally, Silver et al. (2021) have hypothesized that reward is enough, but our results suggest that, without the right biases and evaluations, reward alone is not enough to build agents that truly understand.

9 Impact statement

Our work aims at developing correctly aligned RL agents, which base their action selection on the correct intended goal. We believe that such algorithms are critical to uncover and mitigate potential misalignments of AI systems. A malicious user could, however, utilize such approaches to align agents in a harmful way, thereby potentially leading to a negative impact on other users or society as a whole. Even so, we believe that checking that RL agents follow the intended goal could greatly help prevent the deployment of agents whose behavior could unexpectedly diverge from the one intended by their creators, thus reducing the potential harm caused by such agents.

Acknowledgments

We would like to express our sincere gratitude to our students for their dedication and feedback, as well as to all people participating in our study. This research work has been primarily funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the National Research Center for Applied Cybersecurity ATHENE, as well as by the Excellence Program of the Hessian Ministry of Higher Education through their cluster projects within the Hessian Center for AI (hessian.AI) "The Third Wave of Artificial Intelligence - 3AI" and "The Adaptive Mind". Further, we acknowledge support and funding by the German Research Center for AI (DFKI).

References

Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., and Bellemare, M. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 2021.

Allen, K. R., Smith, K. A., and Tenenbaum, J. B. Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Proceedings of the National Academy of Sciences, 2020.

Bacon, P., Harb, J., and Precup, D. The option-critic architecture. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017.

Badia, A. P., Piot, B., Kapturowski, S., Sprechmann, P., Vitvitskyi, A., Guo, Z. D., and Blundell, C. Agent57: Outperforming the Atari human benchmark. In Proceedings of the 37th International Conference on Machine Learning, 2020a.

Badia, A. P., Sprechmann, P., Vitvitskyi, A., Guo, Z. D., Piot, B., Kapturowski, S., Tieleman, O., Arjovsky, M., Pritzel, A., Bolt, A., and Blundell, C. Never give up: Learning directed exploration strategies. In 8th International Conference on Learning Representations, 2020b.
Baker, C. L., Saxe, R., and Tenenbaum, J. B. Action understanding as inverse planning. Cognition, 2009.

Barbara, N. H., Wang, R., and Manchester, I. On robust reinforcement learning with Lipschitz-bounded policy networks. In ICML Workshop: Foundations of Reinforcement Learning and Control – Connections and Perspectives, 2024.

Bastani, O., Pu, Y., and Solar-Lezama, A. Verifiable reinforcement learning via policy extraction. In Advances in Neural Information Processing Systems, 2018.

Battaglia, P. W., Hamrick, J. B., and Tenenbaum, J. B. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 2013.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2013.

Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, 2017.

Bhatija, S., Zuercher, P.-D., Thumm, J., and Bohné, T. Multi-objective causal Bayesian optimization. arXiv, 2025.

Blüml, J., Derstroff, C., Gregori, B., Dillies, E., Delfosse, Q., and Kersting, K. Deep reinforcement learning via object-centric attention. arXiv, 2025.

Cannon, T. P. and Şimşek, Ö. Accelerating task generalisation with multi-level hierarchical options. In The Thirteenth International Conference on Learning Representations, 2025.

Cao, Y., Li, Z., Yang, T., Zhang, H., Zheng, Y., Li, Y., Hao, J., and Liu, Y. Galois: Boosting deep reinforcement learning via generalizable logic synthesis. Advances in Neural Information Processing Systems, 2022.

Chan, S. C. Y., Fishman, S., Korattikara, A., Canny, J. F., and Guadarrama, S. Measuring the reliability of reinforcement learning algorithms. In 8th International Conference on Learning Representations, 2020.

Cobbe, K., Klimov, O., Hesse, C., Kim, T., and Schulman, J. Quantifying generalization in reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, 2019.

Cobbe, K., Hesse, C., Hilton, J., and Schulman, J. Leveraging procedural generation to benchmark reinforcement learning. In International Conference on Machine Learning, 2020.

Davidson, G. and Lake, B. M. Investigating simple object representations in model-free deep reinforcement learning. In Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020.

Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., and Dolan, R. J. Model-based influences on humans' choices and striatal prediction errors. Neuron, 2011.

Delfosse, Q., Shindo, H., Dhami, D. S., and Kersting, K. Interpretable and explainable logical policies via neurally guided symbolic abstraction. Advances in Neural Information Processing Systems (NeurIPS), 2023a.

Delfosse, Q., Stammer, W., Rothenbacher, T., Vittal, D., and Kersting, K. Boosting object representation learning via motion and object continuity. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML), 2023b.

Delfosse, Q., Blüml, J., Gregori, B., Sztwiertnia, S., and Kersting, K. OCAtari: Object-centric Atari 2600 reinforcement learning environments. Reinforcement Learning Journal, 2024a.

Delfosse, Q., Sztwiertnia, S., Stammer, W., Rothermel, M., and Kersting, K. Interpretable concept bottlenecks to align reinforcement learning agents. Advances in Neural Information Processing Systems, 2024b.

Dennett, D. C. The intentional stance. 1989.
di Langosco, L. L., Koch, J., Sharkey, L. D., Pfau, J., and Krueger, D. Goal misgeneralization in deep reinforcement learning. In International Conference on Machine Learning, 2022.

Dillies, E., Delfosse, Q., Blüml, J., Emunds, R., Busch, F. P., and Kersting, K. Better decisions through the right causal world model. arXiv, 2025.

Dimitrakakis, C. and Rothkopf, C. A. Bayesian multitask inverse reinforcement learning. In European Workshop on Reinforcement Learning, 2011.

Diuk, C., Cohen, A., and Littman, M. L. An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, 2008.

Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., Legg, S., and Kavukcuoglu, K. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In Proceedings of the 35th International Conference on Machine Learning, 2018.

Farebrother, J., Machado, M. C., and Bowling, M. Generalization and regularization in DQN. 2018.

Forbus, K. D. Qualitative physics: Past, present, and future. In Exploring Artificial Intelligence. 1988.

Geirhos, R., Jacobsen, J., Michaelis, C., Zemel, R. S., Brendel, W., Bethge, M., and Wichmann, F. A. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2020.

Gogianu, F., Berariu, T., Buşoniu, L., and Burceanu, E. Atari agents, 2022.

Greydanus, S., Koul, A., Dodge, J., and Fern, A. Visualizing and understanding Atari agents, 2018.

Grooten, B., Sokar, G., Dohare, S., Mocanu, E., Taylor, M. E., Pechenizkiy, M., and Mocanu, D. C. Automatic noise filtering with dynamic sparse training in deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems, 2023.

Guss, W. H., Codel, C., Hofmann, K., Houghton, B., Kuno, N., Milani, S., Mohanty, S., Liebana, D. P., Salakhutdinov, R., Topin, N., et al. The MineRL competition on sample efficient reinforcement learning using human priors. arXiv, 2019.

Hafner, D. Benchmarking the spectrum of agent capabilities. In International Conference on Learning Representations, 2022.

Hafner, D., Lillicrap, T. P., Ba, J., and Norouzi, M. Dream to control: Learning behaviors by latent imagination. In 8th International Conference on Learning Representations, 2020.

Hausman, K., Springenberg, J. T., Wang, Z., Heess, N., and Riedmiller, M. Learning an embedding space for transferable robot skills. In International Conference on Learning Representations, 2018.

Helff, L., Stammer, W., Shindo, H., Dhami, D. S., and Kersting, K. V-LoL: A diagnostic dataset for visual logical learning. Journal of Data-centric Machine Learning Research, 2025.

Hessel, M., Modayil, J., Van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. Rainbow: Combining improvements in deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

Huang, S., Dossa, R. F. J., Ye, C., Braga, J., Chakraborty, D., Mehta, K., and Araújo, J. G. CleanRL: High-quality single-file implementations of deep reinforcement learning algorithms. Journal of Machine Learning Research, 2022.

Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems, 2019.
Jiang, Z. and Luo, S. Neural logic reinforcement learning. In International Conference on Machine Learning, 2019.

Kaufmann, T., Blüml, J., Wüst, A., Delfosse, Q., Kersting, K., and Hüllermeier, E. OCALM: Object-centric assessment with language models. arXiv, 2024.

Kessler, F., Frankenstein, J., and Rothkopf, C. A. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nature Communications, 2024.

Koch, J., Langosco, L., Pfau, J., Le, J., and Sharkey, L. Objective robustness in deep reinforcement learning. arXiv, 2021.

Kohler, H., Delfosse, Q., Akrour, R., Kersting, K., and Preux, P. Interpretable and editable programmatic tree policies for reinforcement learning. arXiv, 2024.

Kubricht, J. R., Holyoak, K. J., and Lu, H. Intuitive physics: Current research and controversies. Trends in Cognitive Sciences, 2017.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. Building machines that learn and think like people. Behavioral and Brain Sciences, 2017.

Li, A., Florensa, C., Clavera, I., and Abbeel, P. Sub-policy adaptation for hierarchical reinforcement learning. In International Conference on Learning Representations, 2020.

Lin, Z., Wu, Y., Peri, S. V., Sun, W., Singh, G., Deng, F., Jiang, J., and Ahn, S. SPACE: Unsupervised object-oriented scene representation via spatial attention and decomposition. In International Conference on Learning Representations, 2020.

Liu, G.-T., Hu, E.-P., Cheng, P.-J., Lee, H.-Y., and Sun, S.-H. Hierarchical programmatic reinforcement learning via learning to compose programs. In International Conference on Machine Learning, 2023.

Luo, L., Zhang, G., Xu, H., Yang, Y., Fang, C., and Li, Q. Insight: End-to-end neuro-symbolic visual reinforcement learning with language explanations. arXiv, 2024.

Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., and Bowling, M. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 2018.

Marton, S., Grams, T., Vogt, F., Lüdtke, S., Bartelt, C., and Stuckenschmidt, H. SYMPOL: Symbolic tree-based on-policy reinforcement learning. In International Conference on Learning Representations, 2025.

Michotte, A. The perception of causality. 1963.

Milani, S., Topin, N., Houghton, B., Guss, W. H., Mohanty, S. P., Nakata, K., Vinyals, O., and Kuno, N. S. Retrospective analysis of the 2019 MineRL competition on sample efficient reinforcement learning. In NeurIPS 2019 Competition and Demonstration Track, 2020.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M. A., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 2015.

Moos, J., Hansel, K., Abdulsamad, H., Stark, S., Clever, D., and Peters, J. Robust reinforcement learning: A review of foundations and recent advances. Machine Learning and Knowledge Extraction, 2022.

Neupärtl, N., Tatai, F., and Rothkopf, C. A. Intuitive physical reasoning about objects' masses transfers to a visuomotor decision task consistent with Newtonian physics. PLoS Computational Biology, 2020.

Pinto, L., Davidson, J., Sukthankar, R., and Gupta, A. Robust adversarial reinforcement learning. In International Conference on Machine Learning, 2017.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. 2017.

Schultz, W., Dayan, P., and Montague, P. R. A neural substrate of prediction and reward. Science, 1997.

Shaheen, A., Badr, A., Abohendy, A., Alsaadawy, H., and Alsayad, N. Reinforcement learning in strategy-based and Atari games: A review of Google DeepMind's innovations. arXiv, 2025.

Shindo, H., Delfosse, Q., Dhami, D. S., and Kersting, K. BlendRL: A framework for merging symbolic and neural policy learning. In International Conference on Learning Representations, 2025.

Silver, D., Singh, S., Precup, D., and Sutton, R. S. Reward is enough. Artificial Intelligence, 2021.

Stammer, W., Schramowski, P., and Kersting, K. Right for the right concept: Revising neuro-symbolic concepts by interacting with their explanations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

Straub, D., Schultheis, M., Koeppl, H., and Rothkopf, C. A. Probabilistic inverse optimal control for non-linear partially observable systems disentangles perceptual uncertainty and behavioral costs. Advances in Neural Information Processing Systems, 2023.

Tamar, A., Mannor, S., and Xu, H. Scaling up robust MDPs using function approximation. In Proceedings of the 31st International Conference on Machine Learning, 2014.

Tatai, F., Straub, D., and Rothkopf, C. A. Intuitive sensorimotor decisions under risk take Newtonian physics into account. 2025.

Tenenbaum, J. B., Kemp, C., Griffiths, T. L., and Goodman, N. D. How to grow a mind: Statistics, structure, and abstraction. Science, 2011.

Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In International Conference on Intelligent Robots and Systems, 2017.

Vieillard, N., Pietquin, O., and Geist, M. Munchausen reinforcement learning. Advances in Neural Information Processing Systems, 2020.

Vincent, T., Palenicek, D., Belousov, B., Peters, J., and D'Eramo, C. Iterated Q-network: Beyond one-step Bellman updates in deep reinforcement learning. Transactions on Machine Learning Research, 2025.

Wüst, A., Tobiasch, T., Helff, L., Ibs, I., Stammer, W., Dhami, D. S., Rothkopf, C. A., and Kersting, K. Bongard in wonderland: Visual puzzles that still make AI go mad? arXiv, 2025.

Yang, Y., Huang, B., Feng, F., Wang, X., Tu, S., and Xu, L. Towards generalizable reinforcement learning via causality-guided self-adaptive representations. In International Conference on Learning Representations, 2025.

Zambaldi, V. F., Raposo, D., Santoro, A., Bapst, V., Li, Y., Babuschkin, I., Tuyls, K., Reichert, D. P., Lillicrap, T. P., Lockhart, E., Shanahan, M., Langston, V., Pascanu, R., Botvinick, M. M., Vinyals, O., and Battaglia, P. W. Relational deep reinforcement learning. arXiv, 2018.

Zhang, A., Wu, Y., and Pineau, J. Natural environment benchmarks for reinforcement learning. arXiv, 2018.

Zhu, Y., Gao, T., Fan, L., Huang, S., Edmonds, M., Liu, H., Gao, F., Zhang, C., Qi, S., Wu, Y. N., et al. Dark, beyond deep: A paradigm shift to cognitive AI with humanlike common sense. Engineering, 2020.

Appendix
Overview

• Appendix A: Metrics. Definitions of the Human-Normalized Score (HNS), the Interquartile Mean (IQM), and the performance change metrics.
• Appendix B: Agent Architectures and Training Setup. Describes all agents used in the study, their implementation details, training protocols, and sources.
• Appendix C: Computational Resources. Details the hardware setup, GPU usage, training time, and total inference budget required for evaluation.
• Appendix D: Evaluation Setup and Extended Results. Provides extended results and tables for deep agents, object-centric models, and human baselines.
• Appendix E: Code and Data. Describes access to the HackAtari environments, RAM modifications, and the data logging pipeline.
• Appendix F: Human Study Details. Includes the full study protocol, consent materials, payment information, interface screenshots, and questionnaire.
• Appendix G: HackAtari Game Descriptions and Variants. Full documentation of the game environments and all implemented task simplifications used for evaluation.

A Metrics

Throughout this article, we have used two complementary metrics to evaluate agents in Atari environments: the Interquartile Mean (IQM) over Human-Normalized Scores (HNS), which measures the absolute performance of an agent normalized by the performance of a human, and the Performance Change, which measures the relative impact of a task variation. Metrics are aggregated using the rliable library (Agarwal et al., 2021) to provide statistically reliable evaluations across diverse environments.

(Expert) Human-Normalized Score (HNS)

To assess agent performance relative to both naive and expert baselines, we report the Human-Normalized Score (HNS), a standard evaluation metric in Atari benchmarks (Mnih et al., 2015; Machado et al., 2018). Let A denote the average score achieved by the agent, H the score of a human expert, and R the score of a random policy. The HNS is defined as:

HNS = \frac{A - R}{\lvert H - R \rvert} \qquad (1)

An HNS of 1.0 (or 100%) indicates human-level performance, while values above 1.0 suggest superhuman ability. Values near 0 imply the agent performs comparably to a random policy, and negative values indicate performance worse than random. This metric normalizes across games with different score scales and dynamics, enabling fair comparison and aggregation across tasks. HNS is computed using raw episodic returns averaged over multiple seeds. We use the human and random reference scores from Badia et al. (2020a). Since Badia et al. (2020a) used "professional human game testers", we call this the Expert HNS. We report HNS alongside other metrics such as the interquartile mean (IQM) and the performance change to provide a robust and nuanced picture of agent behavior across original and modified environments.

IQM over Multiple Games and Modifications

To evaluate the robustness of each model across multiple environments and their corresponding modifications, we compute the Expert HNS over all available games, cf. Figure 3. For modified environments, each game's (multiple) variants are first aggregated by computing the mean HNS across modifications, ensuring that games with more variants do not exert a disproportionate influence on the result. To aggregate across games, we follow Agarwal et al. (2021), using rliable. This means we calculate the IQM over all results of all seeds for all games.

Using the Interquartile Mean (IQM) + 95% Confidence Interval (CI) over Mean + StdErr

Following Agarwal et al. (2021), we report the interquartile mean (IQM) along with 95% stratified bootstrap confidence intervals (CIs) as our primary performance metric. Return distributions in Atari are frequently highly variable, skewed, and heavy-tailed, often due to a small number of runs achieving unusually high scores. These outliers can inflate the sample mean, resulting in a metric that overstates the agent's typical performance. Additionally, pairing the mean with the standard error (SE) implicitly assumes independent, identically distributed (i.i.d.) samples from a light-tailed, symmetric distribution, assumptions that are frequently violated in RL settings. The IQM addresses these issues by computing the mean over the interquartile range (25th to 75th percentiles), discarding both the lowest and highest quartiles. This yields a robust point estimate that emphasizes consistent behavior across seeds and reduces the influence of extreme values. To quantify uncertainty, we use non-parametric bootstrap resampling to construct 95% confidence intervals, which make no distributional assumptions and better reflect the empirical variability of the data. This combination of IQM with bootstrap CIs provides a statistically sound and robust summary of performance, particularly well-suited for environments like Atari where outcomes can vary substantially across random seeds. It enables more reliable comparisons between agents and guards against misleading conclusions driven by a few atypical trajectories.
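As a concrete illustration of these definitions, the following is a minimal sketch of how Equation (1) and the IQM aggregation with bootstrap CIs can be computed with NumPy and the rliable library (Agarwal et al., 2021). The reference scores and run matrix below are placeholders rather than the values used in the paper, and the rliable calls follow its public API at the time of writing; the actual evaluation code may differ.

```python
import numpy as np
from rliable import library as rly, metrics

def hns(agent_score, random_score, expert_score):
    """Equation (1): (Expert) Human-Normalized Score."""
    return (agent_score - random_score) / np.abs(expert_score - random_score)

# Placeholder raw returns: rows = runs (seeds), columns = games.
# In the paper, scores of a game's variants are first averaged per game.
raw = np.array([[1200.0, 35.0],
                [ 900.0, 30.0],
                [1100.0, 28.0]])
random_ref = np.array([100.0, 1.0])    # random-policy reference scores
expert_ref = np.array([1500.0, 40.0])  # expert-human reference scores
score_dict = {"agent": hns(raw, random_ref, expert_ref)}

# IQM point estimate with 95% stratified bootstrap CIs via rliable.
iqm = lambda scores: np.array([metrics.aggregate_iqm(scores)])
point, interval = rly.get_interval_estimates(score_dict, iqm, reps=2000)
print(point["agent"], interval["agent"])  # IQM and its [lower, upper] CI
```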
Aggregating Human Study Results Using the Mean over Per-Participant IQMs

To evaluate human performance in a robust and statistically sound manner, we report the mean over per-participant IQMs as our primary summary statistic. Each participant completed multiple episodes per evaluation condition (e.g., original vs. modified environment), resulting in a distribution of scores that may be highly variable due to factors such as learning effects, momentary lapses in attention, or individual familiarity with similar tasks. To reduce the impact of intra-participant variance and episodic outliers, we first compute the IQM across each participant's episodes. This yields a robust estimate of central performance for each individual, effectively filtering out unusually high or low scores that may not be representative of typical behavior. After obtaining one IQM per participant per condition, we compute the mean across participants to summarize overall group-level performance. The goal is to obtain a more stable and representative estimate of average human performance, where all participants have the same influence on the results, independent of their performance or of how many episodes they play.

Performance Change

To quantify the impact of environment modifications on agent performance, we compute the performance change (PC) as the relative change in performance between the original and modified environments. For a given metric M (e.g., HNS or raw return), let M_original denote the score in the original environment and M_modif the corresponding score in the modified one. In this work, we used the raw scores for the computations; we therefore subtract the respective random-agent scores (R_original and R_modif) from each term, so that random-level play maps to zero. The performance change is defined as:

PC = \frac{(M_\text{modif} - R_\text{modif}) - (M_\text{original} - R_\text{original})}{\lvert M_\text{original} - R_\text{original} \rvert} \qquad (2)

A value of 0 indicates no performance change, while a value of 100% means the agent reaches twice the (random-adjusted) score. A value close to -100% means the agent performs similarly to or worse than a random agent. This normalized form facilitates comparison across tasks with different score ranges and highlights robustness failures that may be masked by absolute performance metrics.
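The following is a small NumPy sketch of Equation (2), including the random-score subtraction described above; the example numbers are illustrative only and not taken from the paper's results.

```python
import numpy as np

def performance_change(m_orig, m_modif, r_orig, r_modif):
    """Equation (2): random-adjusted relative performance change (in %).
    0 means no change; -100 means the agent dropped to (or below)
    random-agent level; +100 means the adjusted score doubled."""
    adj_orig = m_orig - r_orig
    adj_modif = m_modif - r_modif
    return 100.0 * (adj_modif - adj_orig) / np.abs(adj_orig)

# Illustrative example: an agent scoring 1000 on the original task
# (random agent: 100) drops to 550 on the variation (random agent: 100):
# adjusted scores go from 900 to 450, i.e., a -50% performance change.
print(performance_change(1000.0, 550.0, 100.0, 100.0))  # -50.0
```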
B Agents

We evaluate a diverse set of reinforcement learning (RL) agents, covering both standard baselines and object-centric architectures. Most agents are based on publicly available implementations or pretrained models, while PPO was trained by us specifically for this study. Below, we summarize the agents used, their foundational works, and where to find their implementations or pretrained models.

• DQN, MDQN – Standard value-based deep RL models. DQN is the canonical deep Q-learning agent introduced by Mnih et al. (2015), while MDQN is its Munchausen variant (Vieillard et al., 2020), which incorporates entropy regularization into Q-learning. We use the pretrained models from Gogianu et al. (2022) [4], originally built for analyzing agent robustness.

• C51 – A distributional RL method that models return distributions instead of expected values (Bellemare et al., 2017). C51 models were also taken from Gogianu et al. (2022), though pretrained models are not available for all games included in our benchmark.

• i-DQN – Another strong and relatively new baseline, based on the idea of iterated Q-Networks (i-QN) (Vincent et al., 2025), enabling multiple consecutive Bellman updates by learning a tailored sequence of action-value functions, where each serves as the target for the next one. We used the models by Vincent et al. (2025), which can be found on Hugging Face [5].

• IMPALA – A scalable actor-critic architecture designed for distributed training, introduced by Espeholt et al. (2018). We evaluate pretrained models by the CleanRL team, available on Hugging Face [6].

• PPO – A widely used policy-gradient baseline introduced by Schulman et al. (2017). We trained our own PPO models using the CleanRL framework (Huang et al., 2022), following their recommended hyperparameters and reproducibility practices. Training details and hyperparameters are provided in Table 1. Models will be made available after acceptance.

• Binary Mask PPO, Planes PPO – Object-centric PPO variants introduced by Blüml et al. (2025), incorporating structured visual representations (e.g., binary object masks, spatial planes) instead of raw pixels. These models were trained by us using modified observation wrappers. Code and trained models can be found on GitHub [7] or were provided by the authors. Note that these models were trained for only 40 million frames instead of 200 million.

• Semantic Vector PPO, ScoBots Vector – Agents that operate on symbolic or vector-based representations of objects in the environment, pretrained using the OCAtari and ScoBots frameworks (Delfosse et al., 2024a,b). Both Semantic Vector PPO and ScoBots Vector were provided by the authors. Final model checkpoints will be made available after acceptance.

[4] https://github.com/floringogianu/atari-agents
[5] https://huggingface.co/TheoVincent/Atari_i-QN
[6] https://huggingface.co/cleanrl
[7] https://github.com/VanillaWhey/OCAtariWrappers

B.1 Training PPO Agents

We train PPO agents for 200 million environment frames using the v5 version of the ALE, following the evaluation protocol established by Machado et al. (2018). A fixed frameskip of 4 and a repeat action probability of 0.25 introduce environment stochasticity and align with established Atari benchmarks. Observations are preprocessed by converting RGB frames to grayscale, resizing to 84×84 pixels, and stacking the last four frames to capture short-term temporal dynamics. Each agent is trained using 3 random seeds to ensure robustness across initialization. A minimal sketch of this preprocessing pipeline is given below.
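The sketch below builds the described observation pipeline with standard Gymnasium wrappers. It is illustrative rather than a reproduction of the exact CleanRL training code, and wrapper names vary slightly across Gymnasium versions (e.g., FrameStack vs. FrameStackObservation); the game id is an arbitrary example.

```python
import numpy as np
import gymnasium as gym  # requires: pip install "gymnasium[atari]"

def make_preprocessed_env(game: str = "ALE/Seaquest-v5"):
    """ALE environment with the preprocessing described above:
    grayscale conversion, 84x84 resizing, and 4-frame stacking.
    The v5 ALE defaults already use frameskip=4 and
    repeat_action_probability=0.25, matching Machado et al. (2018)."""
    env = gym.make(game, frameskip=4, repeat_action_probability=0.25)
    env = gym.wrappers.GrayScaleObservation(env)         # RGB -> 1 channel
    env = gym.wrappers.ResizeObservation(env, (84, 84))  # downsample
    env = gym.wrappers.FrameStack(env, 4)                # last 4 frames
    return env

env = make_preprocessed_env()
obs, info = env.reset(seed=0)
print(np.asarray(obs).shape)  # expected: (4, 84, 84)
```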
Our PPO policy uses an IMPALA-style convolutional encoder (Espeholt et al., 2018) with separate heads for the policy and value function.
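For reference, the following is a compact PyTorch sketch of such an IMPALA-style encoder with separate policy and value heads, assuming the 4×84×84 stacked grayscale inputs described above. The channel sizes follow the small IMPALA network of Espeholt et al. (2018); the exact CleanRL implementation may differ in details.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (Espeholt et al., 2018)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = self.conv1(torch.relu(x))
        out = self.conv2(torch.relu(out))
        return x + out

class ImpalaEncoder(nn.Module):
    """Three conv sequences (conv + max-pool + two residual blocks),
    followed by a fully connected layer, as in the IMPALA architecture."""
    def __init__(self, in_channels: int = 4, hidden: int = 256):
        super().__init__()
        blocks, channels = [], in_channels
        for out_channels in (16, 32, 32):
            blocks += [
                nn.Conv2d(channels, out_channels, 3, padding=1),
                nn.MaxPool2d(3, stride=2, padding=1),
                ResidualBlock(out_channels),
                ResidualBlock(out_channels),
            ]
            channels = out_channels
        self.convs = nn.Sequential(*blocks)
        # 84x84 inputs are pooled three times (stride 2): 84 -> 42 -> 21 -> 11.
        self.fc = nn.Linear(32 * 11 * 11, hidden)

    def forward(self, x):
        x = self.convs(x / 255.0)  # scale raw pixel values to [0, 1]
        x = torch.relu(torch.flatten(x, 1))
        return torch.relu(self.fc(x))

class ActorCritic(nn.Module):
    """Shared encoder with separate policy and value heads."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.encoder = ImpalaEncoder()
        self.policy_head = nn.Linear(256, n_actions)  # action logits
        self.value_head = nn.Linear(256, 1)           # state-value estimate

    def forward(self, obs):
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h)

logits, value = ActorCritic(n_actions=18)(torch.zeros(1, 4, 84, 84))
```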
Training is conducted with CleanRL (Huang et al., 2022), with the full training settings provided in Table 1. The agent is trained to maximize the sum of undiscounted episodic returns. In addition to the pixel-based PPO, we train a Semantic Vector agent using object-centric observations derived from OCAtari (Delfosse et al., 2024a). Instead of raw images, this agent receives a structured vector representation extracted from two consecutive frames. Each observation encodes object-level information, including positions, bounding boxes, and class labels for the entities present in the scene. These vector inputs are passed through a multilayer perceptron encoder before being processed by the PPO policy. Training follows the same protocol as our pixel-based PPO agents, using CleanRL with adjusted hyperparameters for vector inputs. This setup enables a direct comparison between pixel-level and object-centric agents under identical training conditions and stochastic environment settings, allowing us to assess how input structure affects generalization and robustness.

Table 1: Hyperparameters used in our PPO training to ensure reproducibility and consistency.
Hyperparameter | Value
Seeds | {0, 1, 2}
Learning Rate (α) | 2.5×10⁻⁴
Total Timesteps | 2×10⁸
Number of Environments | 10
Batch Size (B) | 1280
Minibatch Size (b) | 320
Update Epochs | 4
GAE Lambda (λ) | 0.95
Discount Factor (γ) | 0.99
Value Loss Coefficient (c_v) | 0.5
Entropy Coefficient (c_e) | 0.01
Clipping Coefficient (ϵ) | 0.1
Clip Value Loss | True
Max Gradient Norm (∥g∥_max) | 0.5

C Computational Resources

Although many of the agents used in this study were obtained from publicly available repositories and did not require retraining (cf. Appendix B), substantial computing resources were required to conduct large-scale evaluations across our proposed HackAtari benchmark.

Evaluation Scale. Each of our 11-12 agents was evaluated on 17 games with 1-4 simplified or modified variations per game (50 game configurations in total), across 3 random seeds and 10 episodes per configuration. This resulted in around 1,500 gameplay rollouts per agent. While these are individually inexpensive, the amount still leads to non-trivial inference costs. The total evaluation workload included both deep RL baselines (e.g., DQN, PPO, IMPALA, C51, i-DQN, ...) and object-centric agents (e.g., binary masks, planes, OC-NN, ScoBots, ...), for a total of 12 unique agent configurations. This results in around 18,000 evaluation runs. The results can be seen in Appendix D. Evaluation used deterministic seeds and fixed emulator settings (e.g., sticky actions) to ensure reproducibility (also cf. Appendix D).

Training Resources. A subset of agents, namely the pixel-based PPO agents, were trained in-house using the CleanRL framework (cf. Appendix B.1). Most PPO agents took around 4-6 hours per instance. In total, we trained around 120 PPO models. The total training time for these agents amounted to approximately 500-700 GPU hours.

Hardware. All training was conducted on a cluster of NVIDIA DGX systems, each equipped with A100 GPUs and sufficient RAM (greater than 40 GB per process) to support parallelized Atari rollouts, see Table 2. All evaluations, except IMPALA, were conducted using Python 3.11, Gym 0.26.2, and the ALE-py Atari interface (v0.8.1), on macOS 15.3.1. IMPALA, due to its requirement of the envpool package, was evaluated on a separate device, using Python 3.9 and Arch 2025.02.01.

Table 2: Hardware and software configuration for our experimental section.
Hardware/Software | Description
GPU | 8 × NVIDIA® Tesla A100
NGC Container | nvcr.io/nvidia/pytorch:23.05-py3
GPU Driver | CUDA 12.2
CPU | Dual Intel Xeon Platinum 8168
Operating System | Ubuntu 23.02 LTS
D Extended Results

Evaluation Setup

Our evaluation benchmarks RL agents across 17 Atari environments, each tested under 1 to 4 environment modifications, depending on the environment. All agents are trained (not always by ourselves) exclusively on the original, unmodified versions of the games using 200 million environment frames and the hyperparameters listed in Table 1, unless otherwise specified in the figure or table captions. Evaluation is performed using our own evaluation setup, based on HackAtari. Each experiment is conducted over 3 seeds (0, 1, 2) with 10 episodes per game per seed, totaling 30 episodes per configuration. We follow the evaluation guidelines by Machado et al. (2018), e.g., using a repeat action probability of 0.25 and a maximum of 30 NOOP actions after each reset, meaning initial states are sampled by taking a random number of no-ops on reset. No-op is assumed to be action 0. The evaluated agent architectures and training settings are described in detail in Appendix B. For performance metrics, we report raw episodic returns, the interquartile mean (IQM), and the performance change, as defined in Appendix A. The selected environments and the corresponding modifications applied to each are introduced in Appendix G. A sketch of this evaluation loop is shown after Table 4.

Table 3: Evaluation Parameters
Hyperparameter | Value
#Episodes | 30
#Seeds | 3
Epsilon | 0.001
Frameskip | 4
Frames stacked | 4
Repeat Action Probability | 0.25
Full Actionspace | False
Max. NOOP Actions after Reset | 30

Mapping Results to Figures

Table 4: Mapping of HackAtari game variations to figures in the main paper. Each entry indicates which figure(s) include evaluation results for the given game and modification pair. Games in pink are used in the human study (cf. Appendix F).

Game | Variant | Appears In
Amidar | paint roller player | Fig. 3, 4, 5
Amidar | pig enemies | Fig. 3
Asterix | obelix | Fig. 3, 4, 5
BankHeist | two police cars | Fig. 3, 4, 5
Bowling | shift player | Fig. 3
Bowling | top pins | Fig. 3, 4, 5
Boxing | color player red | Fig. 3, 4, 5
Boxing | switch positions | Fig. 3
Breakout | color all blocks red | Fig. 3
Breakout | color player and ball red | Fig. 3, 4, 5
FishingDerby | fish on different sides | Fig. 3
Freeway | all black cars | Fig. 3
Freeway | stop all cars edge | Fig. 3
Freeway | stop all cars | Fig. 3, 4, 5
Freeway | stop random car | Fig. 3
Frostbite | reposition floes easy | Fig. 3, 4, 5
Jamesbond | straight shots | Fig. 3
Kangaroo | no danger | Fig. 3, 4, 5
MsPacman | set level 1 | Fig. 3, 4, 5
MsPacman | set level 2 | Fig. 3
MsPacman | set level 3 | Fig. 3
Pong | lazy enemy | Fig. 3, 4, 5
RiverRaid | exploding fuels | Fig. 3
RiverRaid | game color change01 | Fig. 3
RiverRaid | restricted firing | Fig. 3, 4, 5
Seaquest | disable enemies | Fig. 3, 4, 5
SpaceInvaders | relocate shields off by one | Fig. 3
SpaceInvaders | relocate shields off by three | Fig. 3, 4, 5
StarGunner | remove mountains | Fig. 3, 4, 5
StarGunner | static bomber | Fig. 3
StarGunner | static flyers | Fig. 3
StarGunner | static mountains | Fig. 3
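To make the protocol in Table 3 concrete, the following is a minimal evaluation-loop sketch using the Gymnasium ALE interface. It is an illustration under stated assumptions, not the paper's actual evaluation code: HackAtari's own API for applying a variation is not reproduced here (a variation would be applied by wrapping `env`), `policy` is a placeholder, and the game id is an arbitrary example.

```python
import random
import numpy as np
import gymnasium as gym

# Evaluation parameters from Table 3.
EPSILON, SEEDS, EPISODES, MAX_NOOPS, NOOP = 0.001, (0, 1, 2), 10, 30, 0

def evaluate(policy, game="ALE/Freeway-v5"):
    """Per-seed episodic scores following the evaluation protocol:
    3 seeds x 10 episodes, repeat_action_probability=0.25, frameskip=4,
    epsilon-greedy action selection, and up to 30 no-ops after reset."""
    scores = []
    for seed in SEEDS:
        rng = random.Random(seed)
        env = gym.make(game, frameskip=4, repeat_action_probability=0.25)
        for episode in range(EPISODES):
            obs, info = env.reset(seed=seed * EPISODES + episode)
            for _ in range(rng.randint(1, MAX_NOOPS)):  # random no-op start
                obs, *_ = env.step(NOOP)
            done, ret = False, 0.0
            while not done:
                if rng.random() < EPSILON:              # epsilon-greedy
                    action = env.action_space.sample()
                else:
                    action = policy(obs)
                obs, reward, terminated, truncated, info = env.step(action)
                ret += reward
                done = terminated or truncated
            scores.append(ret)
        env.close()
    return np.array(scores).reshape(len(SEEDS), EPISODES)

random_scores = evaluate(lambda obs: NOOP)  # placeholder "always no-op" policy
```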
D.1 Can Deep Agents solve simplifications?

We present the raw scores obtained by all evaluated deep RL agents across a set of Atari games and their corresponding HackAtari variations. Each agent was trained solely on the original environment and evaluated on both the unmodified and modified versions without any fine-tuning or adaptation. Performance is measured over 30 episodes per configuration, averaged across 3 random seeds, and reported as interquartile means (IQM) with 95% confidence intervals. The task variations are primarily designed as simplifications: modifications that preserve core game mechanics while reducing visual or strategic complexity, or modifications that should not influence the performance of an agent (e.g., color changes). They provide a strong test for determining whether agents have learned task-aligned policies or are relying on superficial cues. A reliable agent should maintain or improve performance under these conditions. The table also indicates which task variations were used in our human study (cf. Appendix F). These entries are highlighted in pink.

Table 5: Raw episodic scores for all evaluated deep RL agents across selected Atari games and their HackAtari task variations (cf. Figure 3). Each entry shows the interquartile mean (IQM) and 95% confidence interval across 30 episodes and 3 random seeds. Variants marked in pink were used in the human study (cf. Appendix F) and in Figure 4.

Game (Variant) | DQN | C51 | PPO | IMPALA | i-DQN | MDQN
Amidar | 867 [771,956] | – | 1052 [1017,1111] | 1215 [1131,1298] | 666 [452,901] | 1481 [1225,1928]
paint roller player | 71 [54,94] | – | 271 [189,377] | 205 [166,255] | 529 [439,630] | 160 [94,300]
pig enemies | 480 [316,664] | – | 1313 [1177,1454] | 1122 [890,1369] | 70 [53,86] | 708 [555,1248]
Asterix | 10212 [6734,17884] | 7766 [5262,17937] | 8588 [6915,11412] | 3543 [2084,7491] | 4017 [3194,4888] | 5116 [3046,9275]
obelix | 3812 [3062,4593] | 81812 [52250,121593] | 5594 [4687,6750] | 240218 [136250,346501] | 6269 [5288,7365] | 4594 [3437,5906]
BankHeist | 1064 [1011,1113] | – | 1062 [1028,1092] | 388 [222,636] | 1291 [1226,1366] | 1319 [1245,1413]
two police cars | 0 [0,0] | – | 0 [0,0] | 0 [0,0] | 0 [0,0] | 0 [0,0]
Bowling | 34 [30,39] | 46 [40,53] | 67 [65,69] | 45 [42,49] | 42 [40,45] | 33 [33,35]
shift player | 36 [31,41] | 45 [39,50] | 63 [54,65] | 39 [35,44] | 43 [39,48] | 37 [34,40]
top pins | 40.88 [9,94] | 37.38 [23,56] | 80.00 [60,96] | 62 [38,87] | 26 [21,41] | 181.38 [139,213]
Boxing | 92 [88,94] | 77 [68,85] | 99 [97,99] | 94 [92,97] | 96 [95,98] | 95 [93,97]
color player red | -2 [-5,0] | -7 [-12,-2] | -1 [-2,0] | -3 [-8,0] | -2 [-6,-0] | -2 [-3,0]
switch positions | 47 [21,68] | -16 [-29,-1] | 57 [34,68] | 94 [92,97] | 67 [60,72] | 46 [23,68]
Breakout | 123 [85,178] | 41 [27,64] | 391 [366,409] | 303 [245,350] | 259 [219,295] | 284 [202,349]
color all blocks red | 144 [91,215] | 23 [12,37] | 413 [396,424] | 247 [175,322] | 103 [63,173] | 261 [176,331]
color player and ball red | 4 [2,6] | 2 [1,3] | 4 [3,6] | 76 [59,121] | 8 [7,10] | 15 [11,19]
FishingDerby | 32 [24,40] | 5 [0,13] | 41 [37,44] | 34 [31,38] | 32 [28,38] | 42 [32,50]
fish on different sides | -90 [-92,-88] | -84 [-87,-82] | -98 [-98,-97] | 27 [25,30] | 20 [16,24] | -94 [-96,-90]
Freeway | 27 [16,33] | 32 [31,32] | 31 [30,31] | 33 [33,33] | 33 [33,33] | 34 [33,34]
all black cars | 10 [7,13] | 18 [16,20] | 25 [23,26] | 28 [27,29] | 26 [24,29] | 25 [24,26]
stop all cars edge | 6 [1,13] | 34 [22,40] | 39 [37,39] | 11 [6,19] | 0 [0,6] | 40 [39,40]
stop all cars | 0 [0,0] | 0 [0,0] | 7.56 [0,20] | 35 [29,38] | 0 [0,1] | 0.38 [0,1]
stop random car | 16 [9,20] | 22 [20,22] | 21 [20,21] | 22 [21,22] | 16 [16,18] | 24 [22,24]
Frostbite | 5431 [4411,6624] | 5165 [4466,6055] | 301 [288,311] | 281 [262,299] | 6198 [5196,7193] | 8349 [6961,8951]
reposition floes easy | 198 [43,461] | 22 [0,63] | 9 [0,28] | 0 [0,2] | 0 [0,0] | 28 [12,58]
Jamesbond | 972 [887,1046] | 3056 [1099,5134] | 2216 [1850,2596] | 11778 [9775,13891] | 600 [571,637] | 722 [662,778]
straight shots | 22 [6,46] | 94 [53,762] | 481 [215,987] | 1772 [734,3319] | 63 [46,77] | 91 [65,118]
Kangaroo | 9669 [7962,11487] | 7456 [5231,10012] | 13525 [12275,14243] | 6244 [4200,8181] | 14269 [13581,14550] | 13588 [12581,14012]
no danger | 19 [0,50] | 6 [0,37] | 0 [0,0] | 0 [0,0] | 15 [0,54] | 81 [50,100]
MsPacman | 4232 [3632,4636] | – | 7225 [6684,7600] | 3652 [3089,4212] | 3667 [3142,4180] | 3961 [3208,4579]
set level 1 | 602 [500,718] | – | 357 [291,438] | 631 [527,744] | 328 [283,414] | 1412 [1216,1555]
set level 2 | 333 [240,426] | – | 326 [229,428] | 786 [641,979] | 408 [368,465] | 432 [307,513]
set level 3 | 356 [306,412] | – | 347 [295,419] | 738 [477,986] | 176 [139,214] | 231 [158,302]
Pong | 19 [17,19] | 5 [4,6] | 18 [14,19] | 9 [7,11] | 19 [19,20] | 20 [18,20]
lazy enemy | -15 [-17,-12] | -13 [-15,-8] | -15 [-17,-13] | 6 [1,10] | -18 [-19,-17] | -16 [-18,-12]
Riverraid | 14030 [12366,14900] | – | 16186 [15134,16920] | 17421 [14467,20599] | 11251 [10166,12375] | 15340 [14739,15953]
exploding fuels | 5530 [5129,5847] | – | 7972 [7710,8233] | 6756 [5249,8446] | 4763 [4229,5383] | 6094 [5454,6671]
game color change01 | 439 [405,475] | – | 368 [256,480] | 721 [592,888] | 379 [302,461] | 234 [138,427]
restricted firing | 757 [576,984] | – | 933 [520,1820] | 1656 [1313,2114] | 1573 [1227,1928] | 1155 [883,1553]
Seaquest | 5916 [5281,6750] | 123622.50 [38904,222808] | 1836 [1826,1840] | 951 [945,958] | 6336 [4995,7672] | 18170 [15821,21457]
disable enemies | 268 [0,717] | 0 [0,0] | 0 [0,0] | 0 [0,0] | 1134 [409,2030] | 1392 [676,5015]
SpaceInvaders | 4173 [2305,7020] | – | 1406 [1179,1668] | 11514 [5439,19878] | 3225 [2222,4798] | 6183 [3195,10738]
relocate shields off by one | 1314 [949,2302] | – | 1533 [1320,1725] | 10415 [6222,15419] | 3853 [2041,6291] | 2863 [1983,6178]
relocate shields off by three | 475 [322,684] | – | 1158 [919,1385] | 8252 [4306,13849] | 1037 [864,1454] | 820 [578,1344]
StarGunner | 59269 [55750,61300] | 31894 [22831,42906] | 29288 [15737,42162] | 166881 [159112,177275] | 55411 [51977,58388] | 63181 [61068,64781]
remove mountains | 50838 [43012,56243] | 24688 [15562,35025] | 15781 [8693,23206] | 171169 [162438,180062] | 9726 [7177,12427] | 58931 [56487,60975]
static bomber | 64506 [62362,66925] | 25250 [17175,36750] | 41475 [24381,55043] | 172731 [163138,181425] | 55146 [51323,57812] | 66475 [65006,68143]
static flyers | 235031 [209081,260543] | 24856 [14675,36993] | 4244 [950,13562] | 878106 [869850,884031] | 68684 [40065,112265] | 190569 [133318,246019]
static mountains | 46950 [37356,54575] | 32769 [23600,42875] | 19562 [10293,29275] | 159169 [152875,165269] | 53965 [49888,56515] | 61769 [59487,63681]

D.2 Can Object-centricity solve misalignment?

This section presents the raw performance scores of object-centric agents evaluated on the same HackAtari task variations described in Appendix D.1. These agents incorporate structured inductive biases through symbolic, object-based, or mask-based representations. While such representations are often credited with improved interpretability and generalization in structured environments, our results assess whether they also support better alignment with simplified task variants. All object-centric agents were trained using Proximal Policy Optimization (PPO) and receive object-based inputs, either through learned visual decompositions (e.g., binary masks or spatial planes) or symbolic representations (e.g., object class vectors from OCAtari and ScoBots). For comparison, we include the standard PPO baseline trained on raw pixel inputs. Each entry reports the interquartile mean (IQM) and 95% confidence interval across 30 episodes and 3 random seeds. A reliable object-centric agent should demonstrate resilience to both visual and gameplay simplifications, avoiding performance drops that indicate an overreliance on spurious correlations. However, as our findings show, object-centricity improves robustness primarily to visual perturbations, but fails to fully eliminate shortcut behavior in more complex, dynamics-altering variations.

Table 6: Episodic scores for object-centric RL agents on HackAtari task variations.
All agents use PPO and differ only in their input representation (e.g., binary masks, planes, or symbolic vectors). Models were taken from Blüml et al. (2025) and Delfosse et al. (2024a,b) or self-trained. Scores are reported as interquartile means (IQM) with 95% confidence intervals, evaluated over 30 episodes and 3 seeds. PPO with raw pixels, as done by Mnih et al. (2015), is included as a baseline. Games and modifications in pink were used in Figure 5 and are identical to the human study.
as a baseline. Games and modifications in pink were used in Figure 5 and are identical to the human study.

Game (Variant): PPO, Object Masks, Binary Masks, Class Masks, Planes, Semantic Vector, ScoBots
Amidar: 1052[1017,1111] 554[493,615] 525[430,605] 479[442,513] 527[509,552] 357[325,407] 116[94,128]
paint roller player: 271[189,377] 315[251,369] 535[435,610] 492[458,526] 510[491,528] 301[252,350] 116[91,129]
pig enemies: 1313[1177,1454] 384[332,426] 513[415,597] 516[465,554] 108[91,118] 25[16,34] 18[17,20]
Asterix: 8588[6915,11412] 4065[3522,4497] 171[116,241] 4181[3528,4900] 91753[83844,98809] 733[358,1066] 883[624,1133]
obelix: 5594[4687,6750] 4562[3812,5406] 2062[1719,2594] 3375[2750,4469] 30343[21312,46312] 4583[3039,7543] 4000[3333,5500]
BankHeist: 1062[1028,1092] 781[769,794] 1192[1158,1211] 1181[1151,1205] 1154[1013,1302] 1163[1155,1173] 720[670,741]
two police cars: 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0] 61[42,91] 108[80,133]
Bowling: 67[65,69] 65[62,67] 70[67,71] 64[61,67] 99[99,99] 99[97,99] 51[41,60]
shift player: 63[54,65] 63[61,66] 67[64,70] 62[60,66] 84[53,85] 0[0,0] 51[41,61]
top pins: 80[60,96] 58[37,78] 98[49,159] 184[171,195] 98[73,124] 8[8,8] 0[0,0]
Boxing: 99[97,99] 94[92,96] 96[95,97] 94[93,96] 97[96,98] 87[78,96] 51[41,60]
color player red: −1[−2,0] 80[76,83] 96[95,97] 95[93,96] 96[95,98] 87[78,95] 51[41,61]
switch positions: 57[34,68] −53[−77,−11] −65[−97,−18] 29[−12,65] 37[−14,77] −35[−42,−31] 63[45,78]
Breakout: 391[366,409] 216[156,279] 259[212,295] 222[171,267] 371[348,390] 38[25,51] 21[14,28]
color all blocks red: 413[396,424] 299[247,328] 278[229,312] 259[207,290] 372[330,403] 38[25,51] 21[14,28]
color player and ball red: 4[3,6] 99[72,155] 256[201,287] 216[165,266] 390[352,407] 38[25,51] 21[14,29]
FishingDerby: 41[37,44] 20[13,28] −81[−84,−79] −62[−70,−56] 47[42,52] 18[11,26] 23[18,27]
fish on different sides: −98[−98,−97] 23[9,30] −86[−89,−83] −60[−67,−53] 30[27,32] 3[−4,21] −4[−12,4]
Freeway: 31[30,31] 33[32,33] 33[33,33] 32[32,32] 33[33,34] 31[31,32] 30[28,31]
all black cars: 25[23,26] 23[21,25] 33[32,33] 32[32,33] 33[33,34] 31[31,32] 30[28,31]
stop all cars edge: 39[37,39] 0[0,0] 0[0,0] 7[0,19] 22[6,37] 0[0,0] 41[41,41]
stop all cars: 7[0,20] 0[0,0] 0[0,0] 32[20,40] 19[4,36] 0[0,0] 41[41,41]
stop random car: 21[20,21] 18[17,20] 19[18,20] 19[18,20] 20[20,21] 15[13,16] 21[20,22]
Frostbite: 301[288,311] 278[270,290] 290[278,304] 265[258,270] 278[271,286] 2821[2488,3134] 1931[1806,2125]
reposition floes easy: 9[0,28] 11[0,32] 4[0,11] 1[0,24] 28[0,79] 60[43,75] 70[60,80]
Jamesbond: 2216[1850,2596] – – – – 591[525,1583] 175[125,241]
straight shots: 481[215,987] – – – – 108[58,200] 0[0,21]
Kangaroo: 13525[12275,14243] 112[0,300] 8425[5550,10625] 112[0,300] 1800[1800,1800] 0[0,0] 0[0,0]
no danger: 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0]
MsPacman: 7225[6684,7600] 5195[4476,5724] 4227[3646,4896] 4313[3790,4899] 6211[5471,6794] 4583[3650,5298] 3620[3086,3620]
set level 1: 357[291,438] 281[236,319] 548[400,778] 378[256,553] 294[234,366] 210[210,210] 90[90,90]
set level 2: 326[229,428] 285[258,317] 376[323,439] 695[446,969] 126[104,161] 170[170,170] 90[90,90]
set level 3: 347[295,419] 259[222,301] 628[512,775] 228[110,497] 124[98,172] 1610[1610,1610] 90[90,90]
Pong: 18[14,19] 18[18,19] 19[18,20] 19[19,20] 19[19,20] 19[19,20] 16[15,19]
lazy enemy: −15[−17,−13] 12[10,15] 12[5,16] 3[−4,9] 17[16,18] −20[−20,−19] 17[15,19]
Riverraid: 16186[15134,16920] 7948[7814,8092] 7958[7739,8204] 7803[7552,7993] 7990[7824,8198] 2175[2086,2350] 2230[2074,2368]
exploding fuels: 7972[7710,8233] 4111[3886,4201] 3651[3071,4038] 4039[3801,4279] 4136[3999,4272] 1068[1010,1241] 1186[952,1221]
game color change01: 368[256,480] 7638[7474,7811] 7850[7745,8011] 7867[7567,8105] 8042[7841,8242] 2175[2091,2353] 2230[2075,2353]
restricted firing: 933[520,1820] 523[511,539] 1677[1460,1964] 2261[1850,2594] 1864[1524,2359] 200[200,200] 2741[2216,3058]
Seaquest: 1836[1826,1840] 1656[1388,1820] 732[465,1141] 1832[1815,1840] 2013[1840,2308] 293[254,353] 293[256,360]
disable enemies: 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0] 0[0,0]
SpaceInvaders: 1406[1179,1668] 688[600,806] 826[763,909] 554[475,634] 1557[1247,1851] 278[205,321] 335[215,451]
relocate shields off by one: 1533[1320,1725] 621[560,693] 803[730,884] 483[362,588] 1787[1416,2217] 290[256,358] 241[208,318]
relocate shields off by three: 1158[919,1385] 469[389,571] 589[474,683] 288[245,325] 1555[1189,1899] 239[186,347] 266[252,330]
StarGunner: 29288[15737,42162] 8406[3938,14106] 962[900,1000] 20812[10075,32231] 88787[80131,94862] 10166[5991,18597] 11550[7357,15600]
remove mountains: 15781[8693,23206] 9943[5144,15150] 1000[1000,1000] 23656[13000,33488] 87287[78219,95475] 13950[9640,18069] 11250[7700,14447]
static bomber: 41475[24381,55043] 13375[6512,20819] 1000[1000,1000] 43031[26325,54312] 83268[68862,102025] 24766[21231,28208] 11550[6815,15821]
static flyers: 4244[950,13562] 11625[4881,27369] 343[275,438] 83881[27206,178088] 577606[505392,646894] 23950[7666,59957] 11550[7524,15760]
static mountains: 19562[10293,29275] 7325[3888,11600] 1000[1000,1000] 13718[6506,25269] 84487[77275,92850] 13516[8041,17467] 11550[7323,15592]

D.3 Can Humans Solve these Simplifications?

To validate that our selected HackAtari variations constitute genuine simplifications rather than increased difficulty, we conducted a user study evaluating human adaptation to a subset of modified environments (cf. Appendix F). Table 7 presents the raw scores for each game and its corresponding variation. We report the mean IQM (cf. Appendix A) with 95% confidence intervals across participants, alongside the random agent baseline and expert human scores (from Badia et al. (2020a)) when available. This enables a robust comparison between deep RL agents and non-expert human players.

Table 7: Raw scores for humans, random agents, and expert human baselines on original and modified tasks. Variants used in the human study are marked and match those highlighted in Table 5. Scores are reported with 95% confidence intervals and were used in Figure 4.

Game (Variant): Random, Random (Badia et al.), Human, Human (Badia et al.)
Amidar: 1.62[0,3] 5.8 114.58[46,220] 7127.7
paint roller player: 1.31[0,2] – 360.43[52,930] –
pig enemies: 1.06[0,3] – – –
Asterix: 218.75[162,275] 210.0 1357.54[458,2462] 1719.5
obelix: 2062.50[1688,2500] – 20384.03[4624,39712] –
BankHeist: 13.12[10,17] 14.2 268.67[98,506] 753.1
two police cars: 0.00[0,29] – 242.08[72,470] –
Bowling: 63.12[54,65] 23.1 109.74[88,129] 160.7
shift player: 24.25[22,27] – – –
top pins: 90.44[80,104] – 192.17[157,220] –
Boxing: 1.50[0,3] 0.1 −2.75[−5,0] 12.1
color player red: 1.06[0,3] – −0.93[−7,6] –
switch positions: 1.06[0,3] – – –
Breakout: 1.31[1,2] 1.7 8.35[3,14] 30.5
color all blocks red: 0.94[0,1] – – –
color player and ball red: 1.00[0,2] – 10.31[4,19] –
Freeway: 0.00[0,0] 0.0 17.74[12,22] 29.6
all black cars: 0.00[0,0] – – –
stop all cars edge: 0.31[0,1] – – –
stop all cars: 0.53[0,1] – 35.83[32,39] –
stop random car: 0.00[0,0] – – –
Frostbite: 71.25[58,84] 65.2 1372.81[167,2834] 4334.7
reposition floes easy: 7.50[1,28] – 7540.21[1369,15306] –
Kangaroo: 0.00[0,50] 52.0 459.97[200,791] 3035.0
no danger: 0.00[0,0] – 2431.48[1616,3414] –
MsPacman: 243.75[219,270] 307.3 996.9[518,1492] 6951.6
set level 1: 201.88[174,229] – 673.6[305,1206] –
set level 2: 231.88[203,262] – – –
set level 3: 187.50[165,219] – – –
Pong: −20.62[−21,−20] −20.7 −15.19[−18,−10] 14.6
lazy enemy: −20.62[−21,−20] – −12.2[−17,−6] –
Riverraid: 1533.75[1348,1688] 1338.5 2406.56[1428,3584] 17118.0
exploding fuels: 860.62[659,946] – – –
game color change01: 1475.62[1341,1637] – – –
restricted firing: 520.00[520,579] – 13222.81[8391,18407] –
Seaquest: 70.00[54,88] 68.4 2626.39[331,5397] 42054.7
disable enemies: 0.00[0,0] – 12191.67[6836,17288] –
SpaceInvaders: 137.81[115,169] 148.0 237.8[148,338] 1668.7
relocate shields off by one: 117.50[92,148] – – –
relocate shields off by three: 125.62[96,160] – 286.96[175,436] –
StarGunner: 675.00[550,800] 664.0 2435.83[846,5056] 10250.0
remove mountains: 656.25[550,781] – 2936.67[1200,5197] –
static bomber: 468.75[356,594] – – –
static flyers: 512.50[419,650] – – –
static mountains: 706.25[588,831] – – –

E Code and Data

To support reproducibility and further research, we will release all code, task variations, evaluation scripts, and selected model checkpoints as part of the supplementary materials and the code base through an anonymized repository upon acceptance.

HackAtari Environment. The HackAtari benchmark is implemented as a lightweight extension of the Gymnasium Atari framework. All modifications are applied via deterministic RAM-based wrappers, allowing reproducible control over game dynamics, visuals, and semantics. The environment includes:
• Loading any of the task variants by name (cf. Appendix G).
• Supporting scripts to visualize and change the RAM of the current state on the fly, as well as other tools that help create new variants.
• Compatibility with Gym-style RL pipelines, wrappers, and vectorized rollouts.
• Compatibility with the StableBaselines3 and CleanRL training frameworks.
A minimal sketch of the underlying RAM-wrapper idea is shown below.
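Since the wrappers themselves are only released upon acceptance, the following sketch illustrates the general RAM-patching idea with Gymnasium and ale-py; the wrapper class, the RAM address, and the patched value are placeholders of our own, not HackAtari's actual interface.

```python
# Illustrative only: a wrapper that pins chosen Atari RAM cells to fixed values
# after every step, which is one way deterministic game variants can be enforced.
import gymnasium as gym

class RamPatchWrapper(gym.Wrapper):
    def __init__(self, env, patches):
        super().__init__(env)
        self.patches = patches  # dict mapping RAM index -> forced value

    def _apply(self):
        ale = self.env.unwrapped.ale  # low-level ALE interface from ale-py
        for index, value in self.patches.items():
            ale.setRAM(index, value)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._apply()
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._apply()
        return obs, reward, terminated, truncated, info

# Hypothetical usage: RAM index 42 and value 0 are dummy numbers.
env = RamPatchWrapper(gym.make("ALE/Pong-v5"), patches={42: 0})
```

Because the result is an ordinary gym.Wrapper, such an environment plugs directly into StableBaselines3 or CleanRL training loops.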
Agent Models and Evaluation. We include pretrained checkpoints for selected deep and object-centric agents in the supplementary materials, covering a subset of games and variants.

Human Study Infrastructure. To facilitate future user studies and make our experiments easy to reproduce, we will release the full source code for the web-based evaluation interface used in our human experiments (see Appendix F).

Anonymized Access and Deanonymization. For the review phase, we include all evaluation results and selected models in the supplementary materials. Deanonymized versions of HackAtari, HackTari-Web, the models, and the dataset will be provided upon paper acceptance.

F Human Study: Evaluating Human Performance on Modified Atari Environments

F.1 Motivation and Research Question

The primary motivation of this study is to understand how visual and semantic modifications to Atari game environments affect human gameplay performance. Our central research question is: How do different types of modifications influence human players' performance in Atari games? By comparing performance across original and modified versions of the same game, we aim to identify which transformations preserve or degrade task-relevant understanding for humans. This helps evaluate the alignment between human representations of the games and machine abstractions. It also tells us whether the proposed environmental changes are feasible for a human, as a point of comparison for our machine learning agents.

F.2 Study Design

Participants. We recruited participants through Prolific (https://www.prolific.com/). A total of 160 participants completed the study. All participants were at least 18 years old and reported fluent English skills. Participation was voluntary, and individuals were monetarily compensated. Communication with the participants, if required, was done over Prolific.

Procedure. Each participant completed the study in three phases:
1. Free Training (10-15 minutes): Participants played an unmodified version of a randomly assigned Atari game. This phase allowed them to familiarize themselves with the core mechanics. Participants could proceed early by clicking an "I'm ready" button after 10 min.
2. Evaluation Phase 1 (15 minutes): Participants played the original version of the same game.
3. Evaluation Phase 2 (15 minutes): Participants then played a modified version of the game. The modification applied was determined by a token-to-variant mapping and involved visual or semantic changes.
After completing the evaluation rounds, participants answered short questions about the task (e.g., reward identification) and reported their confidence. Each
participant was assigned to a specific game and one of several predefined modification conditions. Each participant could only participate once. We used 15 games (cf. Appendix D.3) with one modification each. Assignment of participants to conditions followed a round-robin strategy over the available game-modification pairs.

Tasks. Participants were asked to:
• Play the game and maximize their score across both the original and modified versions, further incentivized with a bonus payment based on performance.
• Identify what they believed was the modification.
• Rate their confidence in their understanding of the task.
• Provide free-text feedback when prompted.

F.3 Implementation Details

The study was conducted using a custom web-based platform called HackTari, which was built as a demonstration tool for HackAtari modifications, enabling real-time human interaction with modified Atari game environments. The system consisted of the following components:
• A Flask backend that served the web interface and managed participant sessions.
• Socket.IO was used to support low-latency, bidirectional communication between the browser and game server, allowing responsive gameplay (a sketch of this loop is given after the list).
• Games were instantiated using OpenAI Gym environments, modified with custom wrappers to support HackAtari modifications.
• Each participant was assigned a unique token that mapped to a specific game-modification variant. This ensured that gameplay sessions and logging were isolated per user.
• All gameplay logs, including actions, rewards, observations, and survey responses, were stored as session-specific .csv files for later analysis.
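As the HackTari source is also only released upon acceptance, the following is a heavily simplified sketch of such a Flask + Socket.IO game loop; the event names, payload fields, and the single global environment are our own assumptions, not the study's implementation.

```python
# Hypothetical single-player server loop: the browser emits an "action" event,
# the server steps the environment and emits the rendered frame back.
import base64
import gymnasium as gym
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)
env = gym.make("ALE/Pong-v5", render_mode="rgb_array")
env.reset(seed=0)

@socketio.on("action")
def handle_action(data):
    obs, reward, terminated, truncated, info = env.step(int(data["action"]))
    frame = env.render()  # numpy rgb_array for the browser to draw
    emit("frame", {"reward": float(reward),
                   "done": bool(terminated or truncated),
                   "pixels": base64.b64encode(frame.tobytes()).decode()})
    if terminated or truncated:
        env.reset()

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=8000)
```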
The full study workflow was hosted using AWS, requiring no software installation by participants. This setup allowed scalable, remote data collection while maintaining consistency across game conditions. Screenshots of the study can be found in Figures 6 to 11.

F.4 Compensation

Participants were compensated with a fixed payment of £7 (equal to ~£8.50 per hour) for completing the study. Further, they were paid a bonus of up to £3 based on their performance. Importantly, this bonus was tied not to the performance drop but to their actual performance in both phases, Eval1 and Eval2. This was done in accordance with institutional guidelines and ethical research standards.

F.5 Ethical Considerations

This study was conducted in accordance with ethical research standards. Participants provided informed consent prior to beginning the experiment. They were informed about the study design, the compensation, and general information about the game they would play. All data was anonymized, and no personally identifiable information was collected. Compensation was provided according to Prolific recommendations. The study was approved by the local ethics board.

F.6 Results

Participants were excluded from our data analysis if they did not complete all phases of the experiment, if it was clearly apparent that they had stopped actively playing the game, or if they reported technical difficulties. Therefore, our final dataset includes 128 participants. We analyzed the quantitative performance metrics across the original and modified game conditions. Findings can be seen in Figures 12 to 14.

Figure 6: Consent form shown to participants at the beginning of the study. It includes a brief description of the study, use of data, confidentiality statement, and explicit agreement requirement before participation.

Figure 7: Payment information screen shown before the start of the task. It outlines base compensation
and performance-based bonus structure, ensuring transparency around incentives and ethical compensation. Again, an explicit agreement was required before participation.

Figure 8: Example game description page shown to participants before gameplay. It provides an overview of the given Atari game, including the objective, scoring mechanics, and available controls.

Figure 9: Pre-task questionnaire collecting basic information about participants' experience to support the quantitative evaluation, together with a short reminder of how the study will be run before starting it.

Figure 10: Screenshot of the interactive HackTari gameplay interface. Participants use this interface during Free Training, Evaluation Phase 1 (original game), and Evaluation Phase 2 (modified game). The interface includes the game window, control guide, and timer. In free play, the average human score, taken from Badia et al. (2020a), is shown as a reference.

Figure 11: Post-task questionnaire to collect feedback.

[Figure 12 shows fifteen subplots, (a) Amidar through (o) Stargunner, each plotting scores over the phases Free Training, Eval1, and Eval2.]

Figure 12: Raw human scores across three phases: Free Training, Evaluation on the original task (Eval1), and Evaluation on the modified task (Eval2). Each subplot shows mean scores across all participants for a specific game. Most participants improved or maintained performance on Eval2, supporting the claim that the selected HackAtari variants are true task simplifications.

[Figure 13 shows the same fifteen per-game subplots, here with the results of all individual users.]

Figure 13: Visualization of all results of all users.
[Figure 14 shows the same fifteen per-game subplots, (a) Amidar through (o) Stargunner, over Free Training, Eval1, and Eval2.]

Figure 14: This presentation emphasizes the robustness of
human generalization to simplified modifications.

G Game Descriptions and Modifications

G.1 Alien
Description: You are stuck in a maze-like spaceship with three aliens. Your goal is to destroy their eggs, which are scattered all over the ship, while simultaneously avoiding the aliens (they are trying to kill you). You have a flamethrower that can help you turn them away in tricky situations. Moreover, you can occasionally collect a power-up (pulsar) that gives you the temporary ability to kill aliens.
Modification Effect
last_egg Removes all eggs but one.

G.2 Amidar
Description: This game is similar to Pac-Man: You are trying to visit all places on a 2-dimensional grid while simultaneously avoiding your enemies. You can turn the tables at one point in the game: Your enemies turn into chickens and you can catch them.
Modification Effect
pig_enemies Replaces all enemies with the pig enemies.
paint_roller_player Changes the player to the paint roller character.
unlimited_lives The player has an unlimited number of lives.

G.3 Asterix
Description: You are Asterix and can move horizontally (continuously) and vertically (discretely). Objects move horizontally across the screen: lyres and other (more useful) objects. Your goal is to guide Asterix in such a way as to avoid lyres and collect as many other objects as possible. You score points by collecting objects and lose a life whenever you collect a lyre. You have three lives available at the beginning. If you score sufficiently many points, you will be awarded additional points.
Modification Effect
obelix Changes the playable character to Obelix.
set_consumable_1 Changes all consumables to pink consumables (100 points).
set_consumable_2 Changes all consumables to shields (200 points).
unlimited_lives The player has an unlimited number of lives.
even_lines_free Clears the even-numbered lines (1 being the topmost line).
odd_lines_free Clears the odd-numbered lines (1 being the topmost line).

G.4 Atlantis
Description: Your job is to defend the submerged city of Atlantis. Your enemies slowly descend towards the city and you must destroy them before they reach striking distance. To this end, you control turrets stationed on three defense posts. You lose if your enemies manage to destroy all seven of Atlantis' installations. You may rebuild installations after you have fought off a wave of enemies and scored a sufficient number of points.
Modification Effect
no_last_line Removes enemies from the lowest line.
jets_only All enemy ships are jets.
random_enemies Randomly assigns enemy types, instead of following the standardized pattern.
speed_mode_slow Sets the enemy speed to slow.
speed_mode_medium Sets the enemy speed to medium.
speed_mode_fast Sets the enemy speed to fast.
speed_mode_ultrafast Sets the enemy speed to ultrafast.

G.5 BankHeist
Description: You are a bank robber and (naturally) want to rob as many banks as possible. You control your getaway car and must navigate maze-like cities. The police chase you and will appear whenever you rob a bank. You may destroy police cars by dropping sticks of dynamite. You can fill up your gas tank by entering a new city. At the beginning of the game you have four lives. Lives are lost if you run out of gas,
are caught by the police, or run over some dynamite you have previously dropped.
Modification Effect
unlimited_gas Unlimited gas for the player.
no_police Removes the police from the game.
only_police No banks, only police.
two_police_cars Replaces 2 banks with police cars; robbed banks give 50 points.
random_city Randomizes which city is entered next.
revisit_city Allows the player to go back one city.

G.6 Bowling
Description: Your goal is to score as many points as possible in the game of Bowling. A game consists of 10 frames and you have two tries per frame. Knocking down all pins on the first try is called a "strike". Knocking down all pins on the second roll is called a "spare". Otherwise, the frame is called "open".
Modification Effect
shift_player Shifts the player to the right.
horizontal_pins Draws the pins horizontally instead of vertically.
small_pins Decreases the pin size from 2 pixels to 1 pixel.
moving_pins Moves the pins up and down.
top_pins The top two pins are in-game.
middle_pins Two middle pins are in-game.
bottom_pins The bottom two pins are in-game.

G.7 Boxing
Description: You fight an opponent in a boxing ring. You score points for hitting the opponent. If you score 100 points, your opponent is knocked out.
Modification Effect
gravity Adds a permanent downwards movement.
antigravity Adds a permanent upwards movement.
offensive Moves the player character forward in the game environment.
defensive Moves the player character backward in the game environment.
down Moves the player character down in the game environment.
one_armed Permanently disables the "hitting motion" with the right arm.
drunken_boxing Applies random movements to the player's input.
color_player_X Changes the color of the player to color X ∈ [Black, White, Red, Blue, Green].
color_enemy_X Changes the color of the enemy to color X ∈ [Black, White, Red, Blue, Green] by choosing a value 0-4.
switch_positions Switches the positions of player and enemy.
classic_colors Uses a red player and a blue enemy.

G.8 Breakout
Description: Another famous Atari game. The dynamics are similar to Pong: You move a paddle and hit the ball at a brick wall at the top of the screen. Your goal is to destroy the brick wall. You can try to break through the wall and let the ball wreak havoc on the other side, all on its own! If the ball falls below your paddle, you lose one of your five lives.
Modification Effect
right_drift Applies a right drift to the ball.
left_drift Applies a left drift to the ball.
gravity Applies a downward force to the ball.
inverse_gravity Applies an upward force to the ball.
color_player_and_ball_X Changes the color of the player to color X ∈ [Black, White, Red, Blue, Green].
color_all_blocks_X Changes the color of all blocks to color X ∈ [Black, White, Red, Blue, Green].

G.9 Carnival
Description: This is a "shoot 'em up" game. Targets move horizontally across the screen and you must shoot them. You control the gun at the bottom of the screen, which can be moved horizontally. The supply of ammunition is limited, and chickens may steal some bullets from you if you don't
hit them in time.
Modification Effect
no_flying_ducks Ducks in the last row disappear instead of turning into flying ducks.
unlimited_ammo Ammunition doesn't decrease.
missile_speed_small_increase The projectiles fired by the player are slightly faster.
missile_speed_medium_increase The projectiles fired by the player are notably faster.
missile_speed_faster_increase The projectiles fired by the player are a lot faster.

G.10 ChopperCommand
Description: You control a helicopter and must protect truck convoys. To that end, you need to shoot down all enemy aircraft. A mini-map is displayed at the bottom of the screen that shows all truck and aircraft positions.
Modification Effect
delay_shots Puts a time delay between shots.
no_enemies Removes all enemies from the game.
no_radar Removes the radar content.
invisible_player Makes the player invisible.
color_X Changes the color of the background to color X ∈ [black, white, red, blue, green]. This also affects the enemies' colors.

G.11 DemonAttack
Description: You are facing waves of demons on the ice planet of Krybor. Points are accumulated by destroying demons. You begin with 3 reserve bunkers and can increase their number (up to 6) by avoiding enemy attacks. Each attack wave you survive without any hits grants you a new bunker. Every time an enemy hits you, a bunker is destroyed. When the last bunker falls, the next enemy hit will destroy you and the game ends.
Modification Effect
static_enemies Makes the enemies horizontally static (static x).
one_missile Enemies only shoot one missile, instead of three.

G.12 DonkeyKong
Description: You play as Mario trying to save Pauline from the hands of Donkey Kong. Remove rivets and jump over the barrels, with a score that starts high and counts down throughout the game.
Modification Effect
no_barrel Removes barrels from the game.
unlimited_time Provides unlimited time for the player.
random_start Sets the player's start position to a random predefined start position.

G.13 FishingDerby
Description: Your objective is to catch more sunfish than your opponent. Watch out for the shark, as it tries to snatch the fish caught on your hook.
Modification Effect
fish_on_player_side Spawns all fish on the player's side.
fish_in_middle Spawns all fish in the middle.
fished_on_each_sides1 Swaps fish sides: the fish that were on the player's side are now on the enemy's and vice versa.
fished_on_each_sides2 Swaps fish sides: the fish that were on the player's side are now on the enemy's and vice versa.
shark_no_movement_easy The shark will not move and stays on the opponent's side.
shark_no_movement_middle The shark will not move and is placed in the middle.
shark_teleport The shark will teleport between the player's and the enemy's side at a set interval.
shark_speed_mode Increases the shark's movement speed.

G.14 Freeway
Description: Your objective is to guide your chicken across lane after lane of busy rush hour traffic. You receive a point for every chicken that makes it to the other side at the top of the screen.
Modification Effect
stop_random_car Stops a random car with a biased probability.
stop_all_cars Stops all cars and repositions some to predefined positions.
align_all_cars Aligns the x position of all cars.
reverse_car_speed_bottom Reverses the speed order of the cars on the bottom road (the fastest becomes the slowest and vice versa).
reverse_car_speed_top Reverses the speed order of the cars on the top road (the fastest becomes the slowest and vice versa).
speed_mode Increases the speed of all cars.
invisible_mode Makes the cars invisible.
phantom_mode Each car changes color from black to invisible approximately every second.
blinking_mode Each car changes color randomly approximately every second.
strobo_mode Each car changes color randomly every timestep.
all_X_cars Sets the color of all cars to X ∈ [black, white, red, blue, green].

G.15 Frostbite
Description: In Frostbite, the player controls "Frostbite Bailey", who hops back and forth across an Arctic river, changing the color of the ice shards from white to blue. Each time he does so, a block is added to his igloo. Hurry before the polar bear gets you, but watch out for birds that throw you off the shards and crabs that snap at you. Look for fish jumping out of the water to receive some bonus points.
Modification Effect
ui_color_X Adjusts the color of the UI to color X ∈ [black, red].
reposition_floes_easy Adjusts the position of the ice floes to an easy layout.
reposition_floes_medium Adjusts the position of the ice floes to a medium layout.
reposition_floes_hard Adjusts the position of the ice floes to a hard layout.
no_birds Removes the birds.
few_enemies Reduces the number of enemies spawning in.
many_enemies Increases the number of enemies spawning in.

G.16 Jamesbond
Description: Your mission is to control Mr. Bond's specially designed multipurpose craft to complete a variety of missions. The craft moves forward with a right input and slightly back with a left input. An up or down input causes the craft to jump or dive. You can also fire, either by lobbing a bomb to the bottom of the screen or by firing a fixed-angle shot to the top of the screen.
Modification Effect
constant_jump Makes the player character jump constantly.
fast_backward Increases the reversing speed.
mobile_player Makes the player character jump constantly and increases the reversing speed.
straight_shots The player's shots go straight up, instead of diagonally.
unlimited_lives The player has an unlimited number of lives.

G.17 Kangaroo
Description: The object of the game is to score as many points as you can while controlling Mother Kangaroo to rescue her precious baby. You start the game with three lives. During this rescue mission, Mother Kangaroo encounters many obstacles. You need to help her climb ladders, pick bonus fruit, and throw punches at monkeys, while avoiding the falling coconuts.
Modification Effect
disable_monkeys Disables the monkeys in the game.
disable_coconut Disables the coconuts in the game.
disable_thrown_coconut Disables the thrown coconut in the game.
no_danger Combines the three modifications above, disabling all hazards in the game.
set_kangaroo_position_floor1 Sets the kangaroo's position for floor 1.
set_kangaroo_position_floor2 Sets the kangaroo's position for floor 2.
randomize_kangaroo_position Randomizes the floor on which the kangaroo starts.
change_level1 Changes the level to 1.
change_level2 Changes the level to 2.
change_level3 Changes the level to 3.
unlimited_time Provides unlimited time to clear the level.

G.18 KungFuMaster
Description: You are a Kung-Fu master on a mission to rescue your girlfriend from the evil Mr. X. Find your way through the castle, fighting his lackeys and dodging traps.
Modification Effect
no_damage The player does not take damage.
unlimited_time Provides unlimited time to clear the level.
unlimited_lives The player has an unlimited number of lives.

G.19 MontezumaRevenge
Description: Your goal is to acquire Montezuma's treasure by making your way through a maze of chambers within the emperor's fortress. You must avoid deadly creatures while collecting valuables and tools which can help you escape with the treasure.
Modification Effect
random_position_start Randomizes the start position within the room.
set_level_0 Sets the level to 0.
set_level_1 Sets the level to 1.
set_level_2 Sets the level to 2.
randomize_items Randomizes which item is found in which room.
full_inventory Adds all items to the inventory.

G.20 MsPacman
Description: Your goal is to collect all of the pellets on the screen while avoiding the ghosts.
Modification Effect
caged_ghosts Fixes the position of all ghosts inside the square in the middle of the screen.
disable_orange Fixes the position of the orange ghost only.
disable_red Fixes the position of the red ghost only.
disable_cyan Fixes the position of the cyan ghost only.
disable_pink Fixes the position of the pink ghost only.
edible_ghosts All ghosts are edible for the entire game.
set_level_X Changes the level to X ∈ [0, 1, 2].
end_game Simulates an almost finished game, with 15 pills remaining in the level.
maze_man Changes the game to a maze-solving task. Only one pill will spawn at a time. After the player collects it, a new pill will spawn. The game is won when the player has collected 20 pills.

G.21 NameThisGame
Description: You, the diver, need to defend your treasure from the giant octopus at the top of the screen. Defend yourself by shooting his tentacles and the shark that is looking to bite you. Remember to refill your oxygen via the boat's tube before you run out.
Modification Effect
unlimited_oxygen Provides the player with an unlimited supply of oxygen.
unlimited_lives The player has an unlimited number of lives.
double_wave_length Doubles the amount of time it takes to get into the next phase.
quick_start Skips the intro.

G.22 Pong
Description: You control the right paddle and compete against the left paddle controlled by the computer. You each try to keep deflecting the ball away from your goal and into your opponent's goal.
Modification Effect
lazy_enemy The enemy does not move after returning the shot until the player hits the ball.
hidden_enemy Removes the enemy from the OCAtari object list (making it invisible to object-based agents).
up_drift Makes the ball drift upwards.
down_drift Makes the ball drift down.
left_drift Makes the ball drift to the left.
right_drift Makes the ball drift to the right.

G.23 RiverRaid
Description: You control a jet that flies over a river: you can move it sideways and fire missiles to destroy enemy objects. Each time an enemy object is destroyed, you score points (i.e., rewards). Fly over a fuel depot when you begin to run low, before you
lose your jet. You also lose a jet when it collides with the river bank or one of the enemy objects (except fuel depots). The game begins with a squadron of three jets in reserve, and you're given an additional jet (up to 9) for each 10,000 points you score.
Modification Effect
no_fuel Removes the fuel deposits from the game.
red_river Turns the river red.
linear_river Makes the river straight; however, objects still spawn at their normal positions, making them unreachable in the worst case.
exploding_fuels Shooting the fuel deposits now provides −80 points (instead of 20).
unlimited_lives The player has an unlimited number of lives.
restricted_firing The player is only able to shoot in critical situations, facing a bridge or in a corridor.
game_color_change_X Changes the colors of the game to scheme X ∈ [01, 02, 03].
object_color_change_X Changes the colors of the objects to scheme X ∈ [01, 02, 03].

G.24 RoboTank
Description: You are playing a tank looking to defeat the enemy squadrons of tanks. Each squadron consists of 12 tanks. Use the radar to find enemy tanks, even at night or during foggy weather. Being hit by the enemy requires you to switch to one of your reserve tanks. Enemy stray shots may destroy vital sensors.
Modification Effect
fog The weather condition is always set to fog.
snow The weather condition is always set to snow.
rain The weather condition is always set to rain.
tread_damage The tread sensor is damaged.
canon_damage The cannon is damaged.
vision_damage The vision sensor is damaged.
no_radar Disables the radar.

G.25 Seaquest
Description: You control a submarine able to move in all directions and fire torpedoes. The goal is to rescue as many divers as you can, while dodging and blasting enemy subs and killer sharks; points will be awarded accordingly. The game begins with one sub and three waiting on the horizon. Each time you score 10,000 points, an extra sub will be delivered to your base. You can only have six reserve subs at one time. Your sub will explode if it collides with anything except your own divers. The sub has a limited amount of oxygen that decreases at a constant rate during the game. When the oxygen tank is almost empty, you need to surface, and if you don't do it in time, your sub will blow up and you'll lose one diver. Each time you're forced to surface with less than six divers, you lose one diver as well.
Modification Effect
unlimited_oxygen Changes the behavior of the oxygen bar to remain filled.
gravity Enables gravity for the player.
disable_enemies Disables all the enemies.
random_color_enemies The enemies have new random colors each time they go across the screen.

G.26 Skiing
Description: You control a skier who can move sideways. The goal is to pass through all the gates (between the poles) in the fastest time. You are penalized five seconds for each gate you miss. If you hit a
gate or a tree, your skier will jump back up and keep going.
Modification Effect
invert_flags Switches the flag color from blue to red.
moguls_to_trees Replaces all moguls with trees.
moving_flags Flags move to the left and right.

G.27 SpaceInvaders
Description: Your objective is to destroy the space invaders by shooting your laser cannon at them before they reach the Earth. The game ends when all your lives are lost after taking enemy fire, or when they reach the Earth.
Modification Effect
disable_shield_left Disables the left shield.
disable_shield_middle Disables the middle shield.
disable_shield_right Disables the right shield.
disable_shields Disables all shields.
relocate_shields_right Relocates the shields to a more right position.
relocate_shields_slight_left Relocates the shields to a slightly more left position.
relocate_shields_off_by_one Relocates the shields to the right by 1 pixel.
relocate_shields_off_by_three Relocates the shields to the right by 3 pixels.
controlable_missile The player can control the missile by moving left and right.
no_danger Disables the enemies' shots and removes the shields.

G.28 StarGunner
Description: You are playing as a stargunner from the Yarthae Empire. The Empire is being invaded by the Sphyzygi. You, the stargunner, must destroy their invading ships while dodging the bombs dropped by the Sphyzygi droid. The droid cannot be destroyed.
Modification Effect
static_bomber Stops the bomber at the top from moving.
static_flyers Stops the flying enemies in place.
remove_mountains Removes the mountains from the game.
static_mountains The mountains stay the same, even if the player moves.

G.29 Tennis
Description: You control the orange player playing against a computer-controlled blue player. The game follows the rules of tennis. The first player to win at least 6 games with a margin of at least two games wins the match. If the score is tied at 6-6, the first player to go 2 games up wins the match.
Modification Effect
wind_effect Pushes the ball up and to the right by 3 pixels every single RAM step to simulate the effect of wind.
upper_pitches Changes the RAM so that it is always the upper player's turn to pitch.
lower_pitches Changes the RAM so that it is always the lower player's turn to pitch.
upper_player Changes the RAM so that the player is always in the upper field.
lower_player Changes the RAM so that the player is always in the lower field.

G.30 TimePilot
Description: You control an aircraft. Destroy your enemies by shooting them down. As you progress through the game, you will encounter enemies with increasingly advanced technology.
Modification Effect
level_X Changes the level to X ∈ [1, 2, 3, 4, 5].
random_orientation Randomizes the orientation of enemies. They are no longer aligned.

G.31 Venture
Description: Your goal is to capture the treasure in every chamber of the dungeon while avoiding the monsters.
Modification Effect
enemy_color_X Changes the color of all enemies to the color X ∈ [black, white, red, blue, green].
random_enemy_colors Changes the color of all enemies to a random color.

G.32 YarsRevenge
Description: The objective is to break a path through the shield and destroy the Qotile with a blast from the Zorlon Cannon.
Modification Effect
arXiv:2505.21740v1 [cs.CL] 27 May 2025

Counterfactual Simulatability of LLM Explanations for Generation Tasks

Marvin Limpijankit, Yanda Chen, Melanie Subbiah, Nicholas Deas & Kathleen McKeown
Department of Computer Science, Columbia University, New York, NY 10027, USA
{ml4431, m.subbiah}@columbia.edu, {yanda.chen, ndeas, kathy}@cs.columbia.edu

Abstract

LLMs can be unpredictable, as even slight alterations to the prompt can cause the output to change in unexpected ways. Thus, the ability of models to accurately explain their behavior is critical, especially in high-stakes settings. One approach for evaluating explanations is counterfactual simulatability: how well an explanation allows users to infer the model's output on related counterfactuals. Counterfactual simulatability has been previously studied for yes/no question answering tasks. We provide a general framework for extending this method to generation tasks, using news summarization and medical suggestion as example use cases. We find that while LLM explanations do enable users to better predict LLM outputs on counterfactuals in the summarization setting, there is significant room for improvement for medical suggestion. Furthermore, our results suggest that the evaluation of counterfactual simulatability may be more appropriate for skill-based tasks as opposed to knowledge-based tasks.

1 Introduction

While large language models (LLMs) have proven effective for a diverse range of applications, their outputs still often contain hallucinations of unsupported information (Ji et al., 2023) or biases (Sheng et al., 2021) that hinder their reliability on critical language generation tasks. At the same time, it is important that users themselves can accurately evaluate the model's capabilities, knowledge, and reliability (Steyvers et al., 2025). Particularly in high-stakes domains, such as medical applications, misunderstandings or the inability to predict model behavior on unseen inputs can pose disastrous risks to users (Michalowski et al., 2024).

To anticipate these risks, recent work has turned attention to evaluating the reliability of LLM explanations (Madsen et al., 2024; Turpin et al., 2023; Kunz & Kuhlmann, 2024). In particular, Chen et al. (2024) evaluates the counterfactual simulatability of LLM explanations in yes/no question answering settings. Counterfactual simulatability is a measure of how well a model's explanation allows humans to correctly infer the model's predictions on simulatable counterfactuals (unseen inputs where the explanation enables the user to confidently guess the model's output). Furthermore, according to Chen et al. (2024), the counterfactual simulatability of an explanation can be decomposed into simulation generality, a measure of the diversity of simulatable counterfactuals, and simulation precision, a measure of the proportion of these counterfactuals for which humans correctly infer the model's output. Ideal model explanations should balance both generality and precision.

Counterfactual simulatability, however, is equally critical to generation tasks, where the larger space of possible outputs makes understanding a model's decision process more challenging. To fill this gap, we formalize a framework for evaluating counterfactual simulatability in language generation tasks (figure 1). We first use LLMs to produce counterfactuals and decompose explanations into atomic units.
Then, human annotators determine the simulatability and precision of each unit of a model's explanations. Rather than evaluating the factual correctness of explanations, our framework measures
the ability of LLMs to accurately describe their behavior in a generative setting. The framework evaluates whether LLMs' explanations lead to reliable human mental models consistent with their outputs.

Figure 1: Our evaluation pipeline. Given a model's explanation, an LLM is prompted to generate relevant counterfactuals (right) and decompose the explanation into atomic units (left). For each unit, a human annotator verifies whether the element appears in the counterfactual (simulatability) and the counterfactual output (precision).
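To make the decomposition step in this pipeline concrete, the snippet below sketches how an LLM can be prompted to split an explanation into atomic units. The prompt wording, the JSON-list convention, and the model name are our own stand-ins; the paper's actual prompts are given in its appendix F.

```python
# Hedged sketch of the "decompose into atomic units" step from Figure 1,
# using the OpenAI client; a real pipeline would also validate the output.
import json
from openai import OpenAI

client = OpenAI()

def decompose_explanation(explanation: str) -> list[str]:
    prompt = (
        "Break the following explanation into a JSON list of atomic units, "
        "each a single self-contained claim. Return only the JSON list.\n\n"
        + explanation
    )
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # stand-in model name
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

units = decompose_explanation(
    "The summary should include the key event and any quotes."
)
```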
We apply the framework to two tasks: news summarization using CNN/DM (Nallapati et al., 2016) and medical suggestion generation using the Taiwan e-Hospital Dataset (Chen et al., 2022). These specific tasks involve different trade-offs between generality and precision. Explanations for news summarization can be highly general, as models may employ similar approaches towards news documents that contain common elements (e.g., dates, quotes) while differing in content. In contrast, medical suggestion explanations can be less general, as the model's suggestion may be highly dependent on the details described by the user (i.e., slight differences in symptoms can lead to completely different suggestions). This leads to a more limited set of counterfactuals the explanations apply to, though they are potentially more precise in their outputs. With these generation tasks, we conduct an initial evaluation of LLMs' explanations and assess where models' explanations in these generation tasks fail.

Our contributions are summarized as follows:
1. We propose a framework to measure the ability of LLMs to accurately describe their behavior in a generative setting by evaluating whether LLMs' explanations lead to mental models consistent with their outputs (counterfactual simulatability).
2. We assess the feasibility of using LLMs to automate human annotation in our evaluation pipeline and find that an LLM achieves significant agreement with humans.
3. Using our framework, we evaluate Chain-of-thought and Post-hoc explanations for multiple LLMs on two complementary tasks: news summarization and medical suggestion generation. We show that LLMs generate explanations that lead to reliable mental models in news summarization, but not medical suggestion.

2 Related work

Human mental models. Humans form mental models of the physical world as a whole (Gentner & Stevens, 2014) as well as of specific technologies (e.g., Payne (1991); Du et al. (2018); Lei et al. (2016)) through their past experiences and observations. Specifically with regard to artificial intelligence systems, explanations of model predictions have been considered high quality if they provide users with an accurate and generalizable understanding of the system in the form of mental models (Rutjes et al., 2019; Merry et al., 2021). When such explanations are successful, they can improve users' ability to effectively use AI models (Vasconcelos et al., 2022; Senoner et al., 2024) as well as to anticipate and correct undesirable model behaviors, such as biases and incorrect predictions (Bansal et al., 2019). We specifically evaluate LLMs' explanations in generation tasks considering their ability to help form mental models.

Explanation evaluation. A variety of different approaches have been used to evaluate the quality and utility of model-generated natural language explanations and rationales. Studies evaluating natural language explanations similar to word attributions
(Huang et al., 2023; Madsen et al., 2024) as well as unconstrained explanations (Turpin et al., 2023) have focused on faithfulness measures. In particular, work has proposed metrics for dimensions including comprehensiveness (DeYoung et al., 2020), sufficiency (DeYoung et al., 2020), alignment with human rationales (Fayyaz et al., 2024), and scrutability (Xu et al., 2023), among others. In contrast to intrinsic measures of model explanations, other work evaluates explanations extrinsically through their impact on task performance (e.g., Camburu et al. (2018); Wei et al. (2022); Krishna et al. (2023)). Among these topics, relatively few works have investigated natural language explanations in language generation tasks; such studies include dialogue responses (Zhou et al., 2021), dialogue understanding (Gao et al., 2024), and more prominently, open-ended question answering (Ho et al., 2023; Fragkathoulas & Chlapanis, 2024; Lyu et al., 2023). While Chen et al. (2024) introduces counterfactual simulatability in classification settings, we propose a novel framework for counterfactual simulatability in language generation settings.

3 Counterfactual simulatability for generation tasks

Explanations can be evaluated by considering to what extent an observer, having seen a model's explanation for some input, can infer (i.e., simulate) the model's output for a counterfactual input. For instance, if a user asks an LLM "do dolphins swim?" and a model answers "yes" with the explanation "all aquatic animals swim", then the user would infer that when asked the counterfactual "do starfish swim?", the model will similarly answer "yes". If, in reality, the model answers "no" to this question, then, as Chen et al. (2024) notes, the explanation is ineffective because it creates a mental model that is inconsistent with the model's behavior, despite the answer being factually correct. Counterfactual simulatability measures how accurate these mental models are on simulatable counterfactuals, unseen inputs where the explanation should allow the user to confidently guess the model's output (Chen et al., 2024).

However, in contrast to this classification example, generation tasks have a much larger output space (e.g., there exist many possible summaries for a news document or free-text responses to an open-ended medical question). This makes simulation extremely challenging, as it is nearly impossible to precisely identify a single possible output based on its explanation. For instance, in news summarization, if a user is shown the explanation "the summary should include the key event" and a counterfactual document on "the opening ceremony of the 2024 Paris Olympics", while the user can logically infer that the summary will include the opening ceremony, it is impossible to predict the exact wording in which it will appear. Explanations typically cannot enable humans to pinpoint a single model output. However, they are still very useful if they help humans narrow down the possible outputs (e.g., refining "all possible summaries" to "summaries that mention the opening ceremony").

3.1 Notation

For a given generation task, a model M takes an input x ∈ X and produces an output o_x ∈ O and a corresponding explanation e_x. Here, the input, output, and explanation are all natural language and, in the case
of generation, |O| may be arbitrarily large. A human observes x and e_x, and forms a one-to-many mental model h_{x,e_x}: X → P(O), where P(O) denotes the power set of O, and h_{x,e_x}(x′) denotes what the human infers to be M's possible outputs on a counterfactual x′. For simplicity, h_{e_x}(x′) is used to denote h_{x,e_x}(x′).

3.2 Simulatability

Given a mental model, a counterfactual input is deemed simulatable if the observer can refine their expectation of the model's output on the counterfactual (i.e., |h_{x,e_x}(x′)| ≪ |O|). Our approach to determining whether a counterfactual is simulatable is outlined further in section 4.3.

3.3 Simulation generality and precision

The generality of an explanation is defined as the diversity of simulatable counterfactuals it applies to. In the previous example, the explanation "the summary should include the key event" is more general compared to "the summary should mention the opening ceremony of the 2024 Paris Olympics", since the former can be applied to documents covering a broader range of topics. Generality is calculated as one minus the average pairwise similarity between simulatable counterfactuals:

generality = 1 − E_{x′,x′′∼p, x′≠x′′}[α(x′, x′′)],  (1)

where p is the distribution of simulatable counterfactuals and α is a similarity metric such as cosine similarity. Precision measures whether the model's actual output M(x′) is within the observer's inferred output space:

precision = E_{x′∼p}[1[M(x′) ∈ h_{e_x}(x′)]].  (2)

In practice, it is infeasible for humans to enumerate all valid outputs in h_{e_x}(x′) due to the large output space, and so determining whether the model's output is a member of the human's inferred output space is difficult. In section 4.3, we propose a method to estimate mental models by evaluating the presence of inferred atomic units of information in the model's output.
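Equation (1) translates directly into code. The sketch below computes generality from embedding vectors of the simulatable counterfactuals using cosine similarity; the embeddings are dummy placeholders, since the metric is not tied to a specific embedding model here.

```python
# Generality per equation (1): one minus the mean pairwise cosine similarity
# over simulatable counterfactuals (excluding the x' == x'' pairs).
import numpy as np

def generality(embeddings: np.ndarray) -> float:
    # embeddings: (n, d) array, one row per simulatable counterfactual
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(embeddings)
    off_diag = sims[~np.eye(n, dtype=bool)]
    return 1.0 - float(off_diag.mean())

vecs = np.random.default_rng(0).normal(size=(5, 384))  # 5 dummy counterfactuals
print(generality(vecs))
```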
4 Methods

4.1 Datasets

We select two generative tasks that represent practical LLM use cases and whose natural language explanations differ in relation to generality and precision. For the first task of news summarization, we use the CNN/DM dataset (Nallapati et al., 2016). Summarization is chosen as it is a well-studied application for language generation. Additionally, this task allows us to study high-level, abstract explanations, since summarization does not rely on facts learned during the LLM's pre-training but rather on its ability to identify key elements from the input document.

Conversely, we also investigate medical suggestion, a domain where faithful explanations are critical as they allow humans to accurately predict the model's behavior. For this task, we use the Taiwan e-Hospital Dataset, a collection of 86,399 Mandarin question-answer pairs from the online health website Taiwan e-Hospital (Chen et al., 2022). We select this specific dataset because each sample consists of a question, suggestion, explanation triple, ensuring that the questions are complex enough that explanations are helpful. Furthermore, this dataset mirrors the practical setting we aimed to study, where everyday users interact with LLMs to seek general medical advice without any specialized knowledge.

Figure 2: Example explanations, counterfactuals, counterfactual outputs, and annotations for news summarization and medical suggestion. Atomic units of the explanation are highlighted (for medical suggestion, blue: patient information, orange: suggestions).

The differences between explanations for these tasks are illustrated in
figure 2. In contrast to summarization, generating medical suggestions is more knowledge-based, requiring the LLM to identify key elements of the user's query (e.g., their expressed symptoms, any relevant medical history) and relate them to potential suggestions using knowledge encoded in the model. As such, the explanations are highly specific to the input. For instance, in figure 2, whereas the summarization explanation applies to counterfactuals where abstract elements such as a "key event" or "quotes or statements" are mentioned, the medical suggestion explanation can only be used to infer the model's output when a user expresses "persistent nasal discharge", "frequent sneezing", and "symptoms [that] have lasted for over three months". While explanations for medical suggestion are likely to apply to a more limited range of counterfactuals, they allow the user to infer specific pieces of information in the output, as opposed to the high-level elements in summarization. Therefore, these two tasks are complementary for studying counterfactual simulatability, as they represent different requirements for generality and precision.

4.2 Explanations

Through prompting (instructions, few-shot examples), we guide the LLMs to generate explanations that emphasize this difference in generality and precision across tasks. For news summarization, to be highly generalizable, we encourage the model to explain its decision process at a high level, identifying abstract elements while avoiding reference to specific topics. For medical suggestion, we instruct the model to first identify important details in the user's question, propose possible underlying causes, and suggest recommended actions accordingly. These approaches reflect the skill-based vs. knowledge-based nature of our tasks. Additionally, following Chen et al. (2024), we experiment with both Chain-of-thought and Post-hoc prompting to produce explanations (Camburu et al., 2018). The prompts used are provided in appendix F.

4.3 Estimating mental models

Generation tasks are particularly challenging for counterfactual simulatability because mental models are extremely difficult to estimate given the large space of possible outputs. To this end, we introduce a method for estimating mental models by considering atomic units of information in the explanation as a proxy.

Intuitively, explanations e_x identify key pieces of information that the model considers essential for the output. Thus, an observer may simulate that if the counterfactual input x′ includes a piece of information a deemed important by the model in e_x, the model should similarly take into account a in its output for x′. Based on this explanation e_x, an observer's mental model h_{e_x} can be formalized as:

h_{e_x}(x′) = {o ∈ O | ∀a ∈ e_x ∩ x′, a ∈ o},  (3)

where the space of possible outputs on the counterfactual is refined to only outputs that contain the elements {a}. With this, we formulate simulatability and precision as:

simulatability = 1{∀a ∈ e_x, a ∈ x′}  (4)

precision = |{a ∈ e_x ∩ M(x′)}| / |{a ∈ e_x}|  (5)

First, we determine whether a counterfactual is simulatable by verifying whether all atomic units of the explanation appear in the counterfactual (equation 4). Then, for simulatable counterfactuals, precision is measured as the proportion of these atomic units that appear in the model's counterfactual output (equation 5). Generality is calculated using the approach outlined in section 3.3 using cosine similarity.
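As a toy realization of equations (4) and (5), the sketch below uses naive case-insensitive substring matching as the "unit a appears in text" test; the paper instead relies on human (or GPT-4 Turbo) judgments for this membership check, so this only illustrates the shape of the computation.

```python
# simulatable: equation (4), all atomic units must appear in the counterfactual.
# precision: equation (5), fraction of atomic units recovered in the output.
def simulatable(units: list[str], counterfactual: str) -> bool:
    return all(u.lower() in counterfactual.lower() for u in units)

def precision(units: list[str], counterfactual_output: str) -> float:
    hits = sum(u.lower() in counterfactual_output.lower() for u in units)
    return hits / len(units)

units = ["opening ceremony", "Paris"]
cf = "A document about the opening ceremony of the 2024 Paris Olympics."
out = "The summary covers the opening ceremony in Paris."
print(simulatable(units, cf), precision(units, out))
```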
A key distinction between the tasks is that while each atomic unit of a summarization explanation is linked to an expected item in the input (for simulatability) and its reference in the output (for precision), medical suggestion explanations do not follow this one-to-one mapping of units between input and output. For instance, using the example provided in figure 2, while "persistent nasal discharge" is identified as important, it is unclear which suggested action the model has generated in response to this symptom. To address this, for the medical domain, we further instruct the LLM, when parsing the explanation, to classify units into two categories: patient information and suggestions. These categories are then used to evaluate simulatability and precision, respectively, during annotation (see figure 2).

4.4 Experimental setup

Human evaluation. Figure 1 provides a high-level overview of our evaluation framework. First, using randomly sampled inputs from our dataset, we prompt GPT-4 Turbo (see section 4.2) to generate outputs and explanations. For each of these explanations, we use GPT-4 Turbo to generate 3 relevant counterfactuals, providing the explanation in the prompt as context. Then, we use GPT-4 Turbo to decompose the explanation into atomic units (i.e., $e_x \to \{a\}$) and, in the medical suggestion use case, instruct it to extract units into two groups, for patient details and suggested actions. We opted for GPT-4 Turbo as the LLM in our evaluation framework due to its performance and relatively low cost. Finally, we generate counterfactual outputs using the same prompt as in the first step and have humans examine the counterfactual and counterfactual output to annotate simulatability and precision for each atomic unit of the explanation. To evaluate the ability of GPT-4 Turbo to break down explanations, we also instruct the annotators to indicate whether each unit was extracted correctly and whether any details that should have been extracted are missing. Furthermore, we investigate whether increasing the number of counterfactuals generated per explanation affects generality, but find no noticeable differences (appendix E). Additional details for the human evaluation and the prompts used are provided in appendix A and appendix F, respectively.

Comparison of human-LLM annotations. We evaluate whether using an LLM to automate human annotation is feasible by prompting GPT-4 Turbo with similar instructions and assessing the inter-annotator agreement between human-human and human-LLM pairs.

Automatic evaluation. Finally, using GPT-4 Turbo for annotation, we repeat our framework and conduct a larger evaluation of three popular LLMs using more data and five counterfactuals per explanation. Additional details for the automatic evaluation are provided in appendix B.
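The agreement comparison above reduces to pairwise Cohen's Kappa over per-unit binary labels ("does this atomic unit appear in the output?"). A minimal sketch using scikit-learn follows; the toy labels are illustrative, not data from the study.

```python
# Pairwise Cohen's Kappa between annotators; toy labels for illustration only.
from sklearn.metrics import cohen_kappa_score

human_1 = [1, 0, 1, 1, 0, 1, 0, 1]
human_2 = [1, 0, 1, 0, 0, 1, 0, 1]
llm     = [1, 0, 1, 1, 0, 1, 1, 1]

print("human-human kappa:", cohen_kappa_score(human_1, human_2))
print("human-LLM kappa:  ", cohen_kappa_score(human_1, llm))
```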
5 Results

5.1 Intermediate results

GPT-4 Turbo is able to parse explanations well for summarization but not for medical suggestion. Annotators find that the LLM successfully breaks down explanations with 96% and 57% accuracy for news summarization and medical suggestion, respectively. For medical suggestion errors, the LLM either fails to identify a key detail or extracts an atomic unit incorrectly (extracting a unit it should not have, or classifying it as the incorrect type of information). A detailed breakdown of parsing errors is provided in appendix C.

GPT-4 Turbo is able to generate simulatable counterfactuals for summarization but not for medical suggestion. While almost all (74/76) generated counterfactuals are deemed simulatable in the summarization setting, only slightly more than half are for medical suggestion (52/90). Most non-simulatable counterfactuals for medical suggestion contain only 40-80% of the atomic units of the explanation, indicating that counterfactual generation is more difficult in this setting. The distribution of generated counterfactuals, bucketed by the proportion of atomic units that appear, is provided in appendix D.

5.2 Human evaluation

Task               | Explanation Method | Generality | Precision
News Summarization | Chain of Thought   | 0.52       | 0.81
News Summarization | Post Hoc           | 0.49       | 0.89
Medical Suggestion | Chain of Thought   | 0.20       | 0.51
Medical Suggestion | Post Hoc           | 0.26       | 0.59

Table 1: Generality and precision across explanation types and tasks for human evaluation.

GPT-4 Turbo explanations lead to generalizable counterfactuals and consistent mental models in the case of summarization, with approximately 0.8 of the inferred information appearing in the counterfactual output on average. In contrast, explanations in the medical suggestion setting are less generalizable and less precise (approximately 0.5), indicating that LLMs may struggle more to reliably explain their behavior for this task. Chain-of-thought and Post-hoc explanations lead to similar results (see table 1). Additionally, as a sanity check, we generate counterfactual outputs conditioned on the original explanation and verify that these result in a precision score of 1.00. Since we expect the model to perfectly follow the explanation in this case, this precision score indicates that our framework does indeed capture how well models follow their explanations. Thus, the low precision scores for medical suggestion are due to the model's behavior not adhering to its explanation on that task, rather than to the evaluation setup.

5.3 Automatic evaluation

          | News Summarization                    | Medical Suggestion
Annotator | 1    | 2    | 3    | GPT-4 Turbo | n   | 1    | 2    | 3    | GPT-4 Turbo | n
1         | -    | 0.35 | 0.71 | 0.64        | 263 | -    | 0.76 | 0.74 | 0.68        | 273
2         | 0.35 | -    | 0.57 | 0.48        | 213 | 0.76 | -    | 0.73 | 0.54        | 261
3         | 0.71 | 0.57 | -    | 0.71        | 73  | 0.74 | 0.73 | -    | 0.74        | 258

Table 2: Inter-annotator agreement (Cohen's Kappa) between human annotators and GPT-4 Turbo for news summarization and medical suggestion.

GPT-4 Turbo is able to approximate human annotation for our tasks. Table 2 reports the pairwise Cohen's Kappa between each pair of annotators and the LLM. We find that, overall, GPT-4 Turbo achieves similar inter-annotator agreement to humans, with an average of 0.61 for human-LLM pairs compared to 0.54 for human-human pairs in news summarization, and 0.65 for human-LLM pairs compared to 0.74 for human-human pairs in medical suggestion.

Task: News Summarization
Model             | CoT: # Expl | # Samples | Generality | Precision | Post-hoc: # Expl | # Samples | Generality | Precision
Claude 3.7 Sonnet | 48          | 189       | 0.67       | 0.93      | 45               | 173       | 0.62       | 0.84
GPT-4             | 46          | 160       | 0.59       | 0.84      | 48               | 193       | 0.65       | 0.78
Llama 3           | 47          | 172       | 0.67       | 0.74      | 43               | 153       | 0.67       | 0.66

Task: Medical Suggestion
Model             | CoT: # Expl | # Samples | Generality | Precision | Post-hoc: # Expl | # Samples | Generality | Precision
Claude 3.7 Sonnet | 24          | 55        | 0.21       | 0.48      | 24               | 78        | 0.20       | 0.66
GPT-4             | 36          | 103       | 0.20       | 0.46      | 36               | 122       | 0.19       | 0.65
Llama 3           | 24          | 69        | 0.19       | 0.56      | 30               | 110       | 0.20       | 0.66

Table 3: Generality and precision across models, explanation types, and tasks for automatic evaluation.
The results of the automatic evaluation, presented in table 3, further support our findings from the human evaluation. Namely, models are much better able to accurately explain their behavior for summarization than for medical suggestion, while remaining general enough that these explanations apply to a diverse set of counterfactuals. Chain-of-thought explanations tend to lead to better mental models for summarization, which may reflect the skill-based nature of this task. On the other hand, Post-hoc explanations lead to more precise explanations for medical suggestion. Models may also differ in their ability to describe their behavior in a generative setting; for instance, Claude 3.7 Sonnet demonstrates noticeably better precision scores for news summarization than other models.

6 Discussion

LLM explanations are helpful for skill-based generation tasks but struggle for knowledge-based generation tasks. While model explanations do enable users to infer pieces of information that will appear in counterfactual outputs in both settings, the mental models they produce are more accurate in summarization than in medical suggestion. Furthermore, summarization explanations, which take a high-level approach, also lead to counterfactuals that are more generalizable. This may suggest that LLMs are better able to describe their behavior for skill-based tasks, where users can reliably infer the elements that will appear in the outputs, than for knowledge-based tasks, where users are less capable of reliably inferring specific points (e.g., suggestions). Additionally, using Chain-of-thought prompting, which aligns with the skill-based nature of summarization, may lead to more precise explanations.

These findings may reflect the predictability of model behavior on different task types. For medical questions, minor alterations to the question may lead to very different answers. For instance, if a user describes "I experience chest pain sometimes when I exercise.", a model might respond "consider reducing exercise intensity, try warming up thoroughly before exercise ...". However, if the user changes their question slightly to "I experience chest pain sometimes when I exercise. I started taking a new pre-workout supplement with high caffeine recently.", then although they are expressing the same symptoms, because of the additional information the model might respond "discontinue the supplement or use smaller dosages; the chest pain is likely related to caffeine-induced heart palpitations ...". These differences in the sensitivity of the model's output to the input may explain the differences in precision and generality observed in our experiments.

Our counterfactual simulatability evaluation framework is effective in the summarization setting but less suited to medical suggestion. Although LLMs are able to automate the human annotation steps in our evaluation, they demonstrate issues in other aspects of the medical suggestion task. Specifically, annotators identify many errors in the parsing of explanations into atomic units, where the LLM misses key information or misclassifies it. In one instance, "possible heat-induced asthma" is incorrectly extracted as an expressed symptom when in reality it is a potential cause. Unlike the summarization setting, this additional requirement of classifying the type of information
introduces more complexity, making the LLM a less effective tool in the evaluation pipeline. Additionally, we found that the LLM is unable to produce as many simulatable counterfactuals in the medical domain as in summarization. There is significant room for improvement in adapting counterfactual simulatability to knowledge-based tasks like medical suggestion, where the explanation and decision process of an LLM is less explicit and relies heavily on knowledge encoded in the model.

References

Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, and Eric Horvitz. Beyond accuracy: The role of mental models in human-AI team performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7:2-11, October 2019. ISSN 2769-1330. doi: 10.1609/hcomp.v7i1.5285. URL http://dx.doi.org/10.1609/hcomp.v7i1.5285.

Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. e-SNLI: Natural language inference with natural language explanations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/4c7a167bb329bd92580a99ce422d6fa6-Paper.pdf.

Wei-Lin Chen, An-Zi Yen, Hen-Hsen Huang, and Hsin-Hsi Chen. Learning to generate explanation from e-hospital services for medical suggestion. In Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, and Seung-Hoon Na (eds.), Proceedings of the 29th International Conference on Computational Linguistics, pp. 2946-2951, Gyeongju, Republic of Korea, October 2022. International Committee on Computational Linguistics. URL https://aclanthology.org/2022.coling-1.260/.

Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, and Kathleen McKeown. Do models explain themselves? Counterfactual simulatability of natural language explanations. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.

Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. ERASER: A benchmark to evaluate rationalized NLP models. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4443-4458, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.408. URL https://aclanthology.org/2020.acl-main.408/.

Yuemeng Du, Jingyan Qin, Shujing Zhang, Sha Cao, and Jinhua Dou. Voice User Interface Interaction Design Research Based on User Mental Model in Autonomous Vehicle, pp. 117-132. Springer International Publishing, 2018. ISBN 9783319912509. doi: 10.1007/978-3-319-91250-9_10. URL http://dx.doi.org/10.1007/978-3-319-91250-9_10.

Mohsen Fayyaz, Fan Yin, Jiao Sun, and Nanyun Peng. Evaluating human alignment and model faithfulness of LLM rationale, 2024. URL https://arxiv.org/abs/2407.00219.

Christos Fragkathoulas and Odysseas Spyridon Chlapanis. Local explanations and self-explanations for assessing faithfulness in black-box LLMs.
In Proceedings of the 13th Hellenic Conference on Artificial Intelligence, SETN '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400709821. doi: 10.1145/3688671.3688775. URL https://doi.org/10.1145/3688671.3688775.

Haoyu Gao, Ting-En Lin, Hangyu Li, Min Yang, Yuchuan Wu, Wentao Ma, Fei Huang, and Yongbin Li. Self-explanation prompting improves dialogue understanding in large language models. In
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue (eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 14567-14578, Torino, Italia, May 2024. ELRA and ICCL. URL https://aclanthology.org/2024.lrec-main.1269/.

Dedre Gentner and Albert L. Stevens. Mental Models. Psychology Press, January 2014. ISBN 9781317769408. doi: 10.4324/9781315802725. URL http://dx.doi.org/10.4324/9781315802725.

Matthew Ho, Aditya Sharma, Justin Chang, Michael Saxon, Sharon Levy, Yujie Lu, and William Yang Wang. WikiWhy: Answering and explaining cause-and-effect questions. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=vaxnu-Utr4l.

Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, and Leilani H. Gilpin. Can large language models explain themselves? A study of LLM-generated self-explanations, 2023. URL https://arxiv.org/abs/2310.11207.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38, 2023.

Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. Post hoc explanations of language models can improve language models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 65468-65483. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/ce65173b994cf7c925c71b482ee14a8d-Paper-Conference.pdf.

Jenny Kunz and Marco Kuhlmann. Properties and challenges of LLM-generated explanations. In Su Lin Blodgett, Amanda Cercas Curry, Sunipa Dev, Michael Madaio, Ani Nenkova, Diyi Yang, and Ziang Xiao (eds.), Proceedings of the Third Workshop on Bridging Human-Computer Interaction and Natural Language Processing, pp. 13-27, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.hcinlp-1.2. URL https://aclanthology.org/2024.hcinlp-1.2/.

Tian Lei, Xu Liu, Lei Wu, Ziliang Jin, Yuhui Wang, and Shuaili Wei. The Influence of Matching Degree of the User's Inherent Mental Model and the Product's Embedded Mental Model on the Mobile User Experience, pp. 320-329. Springer International Publishing, 2016. ISBN 9783319395166. doi: 10.1007/978-3-319-39516-6_31. URL http://dx.doi.org/10.1007/978-3-319-39516-6_31.

Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. Faithful chain-of-thought reasoning. In Jong C. Park, Yuki Arase, Baotian Hu, Wei Lu, Derry Wijaya, Ayu Purwarianti, and Adila Alfa Krisnadhi (eds.), Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 305-329, Nusa Dua, Bali, November 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.ijcnlp-main.20. URL https://aclanthology.org/2023.ijcnlp-main.20/.

Andreas Madsen, Sarath Chandar, and Siva Reddy. Are self-explanations from large language models faithful? In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp.
295-337, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.19. URL https://aclanthology.org/2024.findings-acl.19/.

Michael Merry, Pat Riddle, and Jim Warren. A mental models approach for defining explainable artificial intelligence. BMC Medical Informatics and Decision Making, 21(1), December 2021. ISSN 1472-6947. doi: 10.1186/s12911-021-01703-7. URL http://dx.doi.org/10.1186/s12911-021-01703-7.

Martin Michalowski, Szymon Wilk, Jenny
M. Bauer, Marc Carrier, Aurelien Delluc, Grégoire Le Gal, Tzu-Fei Wang, Deborah Siegal, and Wojtek Michalowski. Manually-curated versus LLM-generated explanations for complex patient cases: An exploratory study with physicians. In Joseph Finkelstein, Robert Moskovitch, and Enea Parimbelli (eds.), Artificial Intelligence in Medicine, pp. 313-323, Cham, 2024. Springer Nature Switzerland. ISBN 978-3-031-66535-6.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Stefan Riezler and Yoav Goldberg (eds.), Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280-290, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/K16-1028. URL https://aclanthology.org/K16-1028/.

Stephen J. Payne. A descriptive study of mental models. Behaviour & Information Technology, 10(1):3-21, 1991. doi: 10.1080/01449299108924268. URL https://doi.org/10.1080/01449299108924268.

Heleen Rutjes, Martijn Willemsen, and Wijnand IJsselsteijn. Considerations on explainable AI and users' mental models. In Where is the Human? Bridging the Gap Between AI and HCI, United States, May 2019. Association for Computing Machinery, Inc. CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI; conference date: 04-05-2019 through 04-05-2019.

Julian Senoner, Simon Schallmoser, Bernhard Kratzwald, Stefan Feuerriegel, and Torbjørn Netland. Explainable AI improves task performance in human-AI collaboration. Scientific Reports, 14(1), December 2024. ISSN 2045-2322. doi: 10.1038/s41598-024-82501-9. URL http://dx.doi.org/10.1038/s41598-024-82501-9.

Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. Societal biases in language generation: Progress and challenges. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4275-4293, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.330. URL https://aclanthology.org/2021.acl-long.330/.

Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas W. Mayer, and Padhraic Smyth. What large language models know and what people think they know. Nature Machine Intelligence, January 2025. ISSN 2522-5839. doi: 10.1038/s42256-024-00976-7. URL http://dx.doi.org/10.1038/s42256-024-00976-7.

Miles Turpin, Julian Michael, Ethan Perez, and Samuel R. Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA, 2023. Curran Associates Inc.

Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin, Ranjay Krishna, Tobias Gerstenberg, and Michael S. Bernstein. When do XAI methods work? A cost-benefit approach to human-AI collaboration. In CHI Workshop on Trust and Reliance in AI-Human Teams, pp. 1-15, 2022.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models.
In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.

Zhichao Xu, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang, and Qingyao Ai. A reusable model-agnostic framework for faithfully explainable recommendation and system scrutability. ACM Trans. Inf. Syst., 42(1), August 2023.
ISSN 1046-8188. doi: 10.1145/3605357. URL https://doi.org/10.1145/3605357.

Pei Zhou, Pegah Jandaghi, Hyundong Cho, Bill Yuchen Lin, Jay Pujara, and Xiang Ren. Probing commonsense explanation in dialogue response generation. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 4132-4146, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.349. URL https://aclanthology.org/2021.findings-emnlp.349/.

A Human Evaluation

Annotators: Two groups of 3 student annotators each were recruited from the Columbia NLP lab for the human evaluation (one group per task). Since these tasks were meant to replicate LLMs in a practical setting, where a general user is asking for a brief summary of an article or seeking quick medical advice, the annotators were not required to have any specialized domain knowledge.

Distribution of Annotations: For each task and explanation type (Chain-of-thought and Post-hoc), annotators were asked to annotate a total of 15 explanations with 3 counterfactuals each, resulting in 45 samples (explanation, counterfactual pairs). In order to compare human-human agreement against human-LLM agreement, each annotator was assigned the same set of 3 samples plus an additional 4. Not all annotations were completed for news summarization, leading to a slight discrepancy in annotation counts across tasks.

News Summarization Instructions:
• Parsed Explanation Annotations: Check whether the explanation (Column B) is parsed correctly in Column C. If yes, put "ok". If not, write which information in the explanation is missing / which items in Column C are hallucinated.
• Document Annotations: Check whether each item in the parsed explanation appears in the document or not. Put either "no" or "yes".
• Summary Annotations: Check whether each item in the parsed explanation appears in the summary or not. Put either "no" or "yes".

Medical Suggestion Instructions:

Tasks 1, 3:
• For each of the "Extracted Points", label "Y" or "N" according to whether the point was parsed correctly from the AI explanation.
• If there are missing points or other comments, feel free to note them in the EXTRA QUESTION annotation cell.
• Task 1 focuses on extracting patient details (e.g., symptoms, medical history).
• Task 3 focuses on extracting suggested next steps (e.g., recommended treatments).
• Since breakdowns are the same across rows, feel free to complete just one per example ID.

Tasks 2, 4:
• For each of the "Extracted Points", label "Y" or "N" according to whether the point is reflected in the counterfactual (task 2) or AI suggestion (task 4).
• The matching is flexible, i.e., a point does not necessarily need to appear in exactly the same wording to be marked "Y".
• For example, in example 3 the point mentions "hemoglobin level of 10.5, slightly below normal"; these specific scores do not need to be mentioned, rather "below normal levels" is sufficient.

B Automatic Evaluation

The automatic evaluation consisted of 50 explanations per task, with 5 counterfactuals generated for each (250 explanation, counterfactual pairs in total). The models evaluated were: Anthropic's Claude 3.7 Sonnet, OpenAI's GPT-4, and Meta's Llama 3.3 70B Instruct. These models were chosen as they
represent a few popular LLM options as well as a mix of proprietary and open-source models.

C Explanation Parsing Quality Results

We show a categorized breakdown of the reported errors in explanation parsing for both tasks.

Parsed Explanations              | News Summarization (n=26) | Medical Suggestion (n=30)
Accuracy                         | 0.96                      | 0.57
Breakdown of Incorrect Examples:
Missing Extraction               | 1                         | 8
Incorrect Extraction             | 0                         | 4
Missing and Incorrect Extraction | 0                         | 1

Table 4: Observed errors in explanation parsing across tasks.

D Counterfactual Generation: Simulatability

We calculate the proportion of atomic units that appear in each counterfactual for the human evaluation (where 1.00 indicates a simulatable counterfactual), bucket them, and display the results below.

Proportion of atomic units present in counterfactual (human annotated) | News Summarization | Medical Suggestion
1.00*     | 74 | 52
0.80-0.99 | 2  | 0
0.60-0.79 | 2  | 20
0.40-0.59 | 0  | 17
0.20-0.39 | 0  | 1
0.00-0.19 | 0  | 0
Total     | 76 | 90

Table 5: Distribution of the proportion of atomic units present in GPT-4 Turbo generated counterfactuals across tasks. *Indicates the set of simulatable counterfactuals.

E Counterfactual Generation: Generality

We show how generality and the number of simulatable counterfactuals change as we adjust the number of counterfactuals generated per explanation. For news summarization with 10 counterfactuals per explanation, not all 100 counterfactuals were generated, due to errors in parsing the LLM's output. Since news summarizations are more expensive to produce (longer context prompt), we use only 10 explanations, compared to 30 for medical suggestion.

Metric (counterfactuals generated per explanation) | 3     | 5     | 10
News Summarization
Generality                                         | 0.497 | 0.512 | 0.595
Simulatable counterfactuals                        | 18    | 30    | 38
Total generated counterfactuals                    | 30    | 50    | 80
Medical Suggestion
Generality                                         | 0.187 | 0.218 | 0.227
Simulatable counterfactuals                        | 41    | 66    | 95
Total generated counterfactuals                    | 90    | 150   | 300

Table 6: Generality and simulatability as the number of generated counterfactuals per explanation increases.

F LLM Prompts

We provide the prompts used in our evaluation pipeline.

Figure 3: News Summarization: Chain-of-thought explanations
Figure 4: News Summarization: Post-hoc explanations
Figure 5: News Summarization: Counterfactual generation
Figure 6: News Summarization: Explanation extraction
Figure 7: News Summarization: Simulatability annotation
Figure 8: News Summarization: Precision annotation
Figure 9: Medical Suggestion: Chain-of-thought explanations
Figure 10: Medical Suggestion: Post-hoc explanations
Figure 11: Medical Suggestion: Counterfactual generation
Figure 12: Medical Suggestion: Explanation extraction
Figure 13: Medical Suggestion: Simulatability annotation
Figure 14: Medical Suggestion: Precision annotation
Simulating the Unseen: Crash Prediction Must Learn from What Did Not Happen

Zihao Li1* Xinyuan Cao2* Xiangbo Gao1 Kexin Tian1 Keshu Wu1 Mohammad Anis1 Hao Zhang1 Keke Long3 Jiwan Jiang3 Xiaopeng Li3 Yunlong Zhang1 Tianbao Yang1 Dominique Lord1 Zhengzhong Tu1 Yang Zhou1†

1Texas A&M University, 2Georgia Tech, 3University of Wisconsin-Madison

Abstract

Traffic safety science has long been hindered by a fundamental data paradox: the crashes we most wish to prevent are precisely the events we most rarely observe. Existing crash-frequency models and surrogate safety metrics rely heavily on sparse, noisy, and under-reported records, while even sophisticated, high-fidelity simulations undersample the long-tailed situations that trigger catastrophic outcomes such as fatalities. We argue that the path to achieving Vision Zero, i.e., the complete elimination of traffic fatalities and severe injuries, requires a paradigm shift from traditional crash-only learning to a new form of counterfactual safety learning: reasoning not only about what happened, but also about the vast set of plausible yet perilous scenarios that could have happened under slightly different circumstances. To operationalize this shift, our proposed agenda bridges macro to micro. Guided by crash-rate priors, generative scene engines, diverse driver models, and causal learning, near-miss events are synthesized and explained. A crash-focused digital twin testbed links micro scenes to macro patterns, while a multi-objective validator ensures that simulations maintain statistical realism. This pipeline transforms sparse crash data into rich signals for crash prediction, enabling the stress-testing of vehicles, roads, and policies before deployment. By learning from crashes that almost happened, we can shift traffic safety from reactive forensics to proactive prevention, advancing Vision Zero.

1 Introduction

Road transportation systems worldwide face a persistent safety challenge, with traffic crashes claiming about 1.35 million lives annually, according to the World Health Organization. In the United States alone, 40,901 fatal crashes occurred in 2023 [1], imposing an estimated societal cost of $1.85 trillion, including $460 billion in direct economic costs and $1.4 trillion in quality-of-life losses [2]. Despite decades of investment in road safety research and infrastructure improvements, severe crashes remain stubbornly difficult to predict and prevent, primarily because these catastrophic events are statistically rare relative to the vast scale of driving activity and inherently random. In fact, the US fatality rate stood at just 1.26 deaths per 100 million vehicle-miles traveled (VMT) in 2023 [2]. This illustrates that even a tiny fraction of driving situations, occurring in mere seconds and meters, can cause tremendous harm, yet pinpointing these events is like finding a needle in a haystack across billions of driving hours [3]. Thus, achieving the ambitious goal of "Vision Zero", the elimination of traffic fatalities and serious injuries caused by traffic crashes, demands overcoming a fundamental paradox: the events we most urgently need to learn from and avert are those we seldom observe, making traditional reactive approaches fundamentally inadequate [4], as they require years of driving data to surface and then analyze these "rare events".

* The first two authors contributed equally.
† Corresponding author: Yang Zhou (yangzhou295@tamu.edu).
This has underscored the urgent need for innovative predictive frameworks capable of reasoning effectively beyond these statistically rare but devastating traffic
scenarios.

Rare-Event Metaphor: Imagine a field strewn with a billion identical keys and one hidden landmine that explodes only when its exact key is tried. Thousands of harmless picks give the illusion of safety, yet each trial leaves the catastrophic pairing essentially untested. True safety, then, cannot rely on counting uneventful trials; it requires reasoning explicitly about the unseen key-mine match. So it is with crash prediction: rare, disastrous couplings hide among countless benign moments, demanding insight beyond observed data.

To map where and when crashes are most likely, artificial intelligence (AI) has emerged as a promising approach that can proactively mine large crash and exposure datasets with spatial-temporal deep networks. Researchers have increasingly utilized advanced machine learning methods, including spatio-temporal networks [5,6], Transformers [7,8], and diffusion models [9], to learn region-level risk surfaces and patterns that account for traffic flow, road topology, census data, and weather dynamics [10]. Beyond region-level analyses, surrogate safety measures, such as time-to-collision (TTC) [11], conflict counts [12], post-encroachment time, and other near-miss indicators [13], enable per-vehicle, second-by-second risk assessment. Existing vision and trajectory-mining algorithms [14-18] automatically extract and label safety-critical signals from video streams and probe-vehicle logs. These metrics are usually embedded in high-fidelity digital-twin simulations, which fuse detailed road geometry, traffic sensors, and weather feeds with learned components, such as GAN-based scenario generators [19], to create virtual testbeds for evaluating vehicle safety functions and traffic interventions under realistic conditions [4,20,21]. Despite these advances, two critical gaps remain: surrogate-based models capture correlations but fail on rare or out-of-distribution events, and simulation pipelines often omit essential real-world details, limiting their fidelity.

This paper begins with a mathematical analysis showing how the rarity and randomness of crashes cripple data-hungry AI models. We argue that effective "crash AI" must therefore learn from the vastly richer universe of near-misses, events that almost became crashes. After mapping the strengths and blind spots of current methods, we chart a path to overcome data sparsity, embrace high-dimensional behavioral complexity, and uncover causal insight. The roadmap is built on four mutually reinforcing pillars enabled by suites of AI techniques. Together, these pillars fuse digital twins with generative scenarios, embed multimodal behavior and hypergraph interactions, impose multi-scale causal and symbolic consistency checks, and enable reasoning-driven safety interventions.

2 Core Challenges in Crash Prediction and Reproduction

Accurately predicting crashes and reproducing them in experimental settings is intrinsically challenging due to several interrelated factors.
We highlight four core challenges that any comprehensive safety analysis framework must address: (1) rarity and randomness of traffic crashes, (2) diverse and context-sensitive human behavior, (3) group-wise interactions and heterogeneous environments, and (4) lack of high-resolution crash data for validation.

Figure 1: Challenges in the crash-centric safety analysis framework.

2.1 Rarity and Randomness of Traffic Crashes

Severe crashes are statistically infrequent and often appear random, which makes it hard to gather sufficient data and validate models [22,23]. In machine learning terms, rarity corresponds to extreme data imbalance or long-tailed distributions, where crashes make up only a tiny fraction of all driving scenarios. Randomness introduces distributional noise, leading to out-of-distribution edge cases.
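One standard way to make the cost of rarity precise, sketched here under a simple Bernoulli crash model (the paper's own Appendix A-B derivation may differ in detail):

```latex
% Bernoulli crash model: each exposure unit crashes with probability p.
% With n i.i.d. observations, the Fisher information and the Cramer-Rao
% bound on any unbiased estimator \hat{p} are
\[
  \mathcal{I}_n(p) \;=\; \frac{n}{p(1-p)}, \qquad
  \operatorname{Var}(\hat{p}) \;\ge\; \frac{p(1-p)}{n}.
\]
% The relative standard error of the maximum-likelihood estimate therefore
% scales as
\[
  \frac{\sqrt{\operatorname{Var}(\hat{p})}}{p}
  \;\approx\; \sqrt{\frac{1-p}{np}},
\]
% which blows up as p -> 0: holding relative accuracy fixed, the required
% sample size grows roughly like 1/p as crashes become rarer.
```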
These factors together challenge standard supervised learning approaches. Traditional crash-frequency analysis requires many years of observations to obtain stable estimates. During this time, vehicle occupants and vulnerable road users (e.g., pedestrians, cyclists) get injured, sometimes fatally, and vehicles and other property get damaged. Even large-scale naturalistic driving studies capture relatively few actual crashes compared to near-misses. For instance, the SHRP2 naturalistic study (with thousands of vehicle-hours of data) recorded on the order of only hundreds of crashes versus thousands of near-crashes. This scarcity means that models calibrated on historical crashes risk being either underpowered or overfit to anomalous conditions [3]. Moreover, the stochastic variability is high: as [24] noted, crash occurrence exhibits large random fluctuations due to the stochasticity of human nature. Thus, distinguishing meaningful patterns (signal) from randomness (noise) is inherently difficult. Rare-event statistics also challenge model validation: a predictor may perform well on surrogate or aggregated metrics yet still fail to capture truly rare extreme cases. Although AUC maximization [25-27] offers a promising approach to addressing class imbalance in such settings, AUC-based objectives remain difficult to optimize under significant data randomness and out-of-distribution conditions [28]. In summary, crash data are "small data" in a big-data world; any reliable prediction strategy must somehow amplify information from non-crash surrogates or simulations to overcome the rarity of the events of ultimate interest.

To our knowledge, this is the first systematic mathematical analysis of safety learning that explicitly accounts for both randomness and rarity. To further illustrate the inherent difficulty of learning from such rare, stochastic events, we present a simplified mathematical formulation using Fisher information [29] to quantify the limits of estimating crash probabilities from crash-only data (Appendix A-B). As a result, rare-event estimators remain highly sensitive to sampling noise, limiting reliability. Crash-only learning is therefore statistically inefficient. To address this, we advocate a shift toward counterfactual reasoning (Appendix C). Many non-crash scenes exhibit high-risk behaviors, such as unsafe following or delayed reactions, that reflect elevated latent risk. We propose augmenting the dataset with near-miss cases where the model-estimated crash probability exceeds a threshold. High-risk near misses densify informative samples, sharpen decision boundaries, boost Fisher information, and lower estimator variance, enabling efficient learning without extra crash data.

Hungry Models: Crash-only learning starves on rare events, and randomness only deepens its hunger.

2.2 Diverse and Context-Sensitive Human Behavior

Beyond the aforementioned curse of rarity [3,30], human drivers exhibit extremely complex and varied behavior, which becomes even less predictable in emergency or time-critical situations. Driving decisions are diverse [31-35]: a driver may choose to brake, swerve, or accelerate in response to a hazard, and this choice depends on a host of factors (individual skill, reaction time, attention state, surrounding traffic, road condition, etc.), as suggested by the Yerkes-Dodson Law [36].
Critically, in emergency, imminent-crash scenarios, human reactions can be highly context-sensitive and sometimes suboptimal. For example, some drivers instinctively brake hard while others may swerve or do nothing (freezing up) when a sudden obstacle appears [37].
Such differences in human response can tip the scale between a collision and a narrow escape. Yet capturing these nuances in models is challenging. Simple rules or distributions may not reflect how humans behave in rare panic situations. In essence, human-in-the-loop uncertainty is a major hurdle: realistic safety analysis needs to model not just an average driver but the whole spectrum of possible driver behaviors and errors, especially in critical moments [38-40].

Stochastic Human Nature: Human unpredictability is the hurdle; true safety comes from modeling every behavior, not just the average one.

2.3 Group-wise Interactions and Heterogeneous Environments

Traffic crashes frequently result from the collective dynamics of multiple road users, vehicles, pedestrians, and cyclists navigating complex and variable environments [41,42]. Group-wise interactions, defined as the dynamic interplay among numerous actors whose behaviors mutually influence one another over time, occur both within each user class (e.g., vehicles responding to other vehicles [18,43,44]; pedestrians adjusting to fellow pedestrians [45]) and across classes (e.g., vehicle-infrastructure exchanges [46] or mixed vehicle-pedestrian flows [47]). In transportation engineering, "human-vehicle-environment" interaction is the foundational concept taught in the very first course [48], yet traditional model-based approaches often fall short in capturing this high-dimensional complexity [18]. For example, a single lane-change maneuver in dense traffic can trigger oscillatory shockwaves: as one vehicle shifts lanes, following vehicles brake and accelerate in response, generating stop-and-go waves that propagate upstream and amplify risk [49-53]. Furthermore, in dense traffic, a sudden deceleration by one vehicle within a tightly packed convoy may precipitate a multi-vehicle collision through the rapid propagation of traffic waves [54,55]. Roadway characteristics, including lane geometry, intersection control mechanisms, sight-distance limitations, and surface conditions, vary substantially between urban, suburban, and rural contexts, further shaping how these group interactions evolve [5]. Macro-scale context therefore matters: an emergency stop on a congested freeway differs fundamentally from one on an empty rural road [56]. Effective crash prediction must be context-aware, capturing how risk emerges from the interaction of many agents under specific environmental conditions [57,58].

Figure 2: Context-aware group-wise interactions and both intra-type and cross-type relations in traffic, illustrating higher-order graph dynamics dependencies.

Emergent Multi-Agent Risk: Traffic risk emerges from drivers interacting with their specific environmental context and, when other road users are present, from interactions within that context, underscoring the need for context-aware, group-wise, multi-agent modeling.

2.4 Lack of High-Resolution Crash Data for Validation

Macroscopic crash databases (Table 1), including state and national compilations of police reports and roadway inventories, offer millions of crash records describing basic information such as location and time. Yet these sources are inherently coarse and often under-reported.
Moreover, they typically lack contextual information. Microscale resources, by contrast, stream high-rate, multi-modal sensor data that precisely track vehicle trajectories, driver inputs, and environmental context (Tables 2-3). However, serious crashes are rare in these collections, leaving models to train on plentiful surrogate or near-miss events while
lacking ground-truth crash dynamics. We know how people drive, but not how they crash within seconds. Even the context-rich SHRP-2 study [59] captured only 1-2k mostly minor crashes across millions of miles, with many events missing synchronized multi-modal data (Table 2). Crash datasets are widely captured but shallow, recording millions of events with minimal detail. Without richer multi-modal crash records to support AI reasoning, it remains difficult to validate that simulations or analytical models accurately reproduce real-world impacts.

Summary of Challenges: Crash rarity, behavioral complexity, multi-agent dynamics, and data scarcity constrain prevailing safety analysis. AI methods that fuse heterogeneous real and simulated data offer great potential.

3 Limitations of Current Methods

While recent studies have advanced both macro- and micro-level crash modeling, limitations persist at each scale. We begin by examining macroscopic crash-frequency models, then turn to micro-level approaches such as surrogate safety metrics and simulation. We conclude by highlighting the disconnect between these levels, which hinders a unified understanding of traffic safety.

3.1 Limitations of Macroscopic Crash-Frequency Learning

Over the past decade, AI-based crash-frequency research has reframed safety prediction as a spatio-temporal learning problem on road graphs. Convolutional and graph-neural architectures ingest high-resolution traffic probes, weather data, and points of interest, and output fine-grained risk maps [5,6,60]. Yet they learn purely correlational patterns from police reports that under-report 30-40% of minor or non-injury crashes [61], and from crash-count datasets dominated by zeros due to event rarity over short time frames or road segments [62,63]. Table 1 summarizes the traditional crash datasets.

Macroscopic crash-frequency statistical modeling remains rooted in count-data formulations, such as Poisson, negative binomial, Poisson-lognormal, negative binomial-Lindley, and their spatio-temporal or random-parameters extensions [64,22]. While these models yield interpretable estimates and support more defensible causal inference, they struggle to scale to massive, high-dimensional road-time grids due to computational bottlenecks, and their linear link functions cannot natively capture complex, multimodal interactions without extensive manual feature engineering [22,23]. To bridge this trade-off, hybrid frameworks merge statistical models with deep learning [8,65,66], gaining the non-linear flexibility absent from classical approaches and offering a clearer path toward causal inference. Yet, because they still rely on aggregated crash counts, the insights remain macro-level and often do not generalize to specific locations or events, limiting their usefulness for targeted safety interventions.

Correlation Is Not Enough: Correlation is useful for prioritization, but causality is indispensable for intervention design.

3.2 Reliance on Surrogate Safety Metrics with Uncertain Crash Correlation

Surrogate safety measures (SSMs) and other "near-miss" indices provide an anticipatory view of crash risk [67]. Typically, SSMs are computed by solving ordinary differential equations (ODEs) that model vehicle kinematics, flagging an event whenever inter-vehicle separation falls below physical dimensions [52,68,69]. These metrics excel at highlighting hazardous interactions, which occur far more often than actual crashes. Yet mapping moderately low TTC values (e.g., 1.0-2.5 s) to true collision risk remains ambiguous: unless TTC falls into a very low regime, there is no consensus on what threshold signifies danger. The jump from a moderately low SSM value to an actual collision is tenuous, as countless sub-1.5 s TTC events resolve safely.
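For concreteness, below is a constant-velocity TTC sketch for a car-following pair; production SSM pipelines integrate richer vehicle-dynamics ODEs, so treat this as the simplest possible instance.

```python
# Constant-velocity time-to-collision (TTC) for a leader-follower pair.
# Real SSM pipelines solve richer kinematic ODEs; this is the minimal case.
def ttc(gap_m: float, v_follower: float, v_leader: float) -> float:
    """Gap in meters, speeds in m/s; returns TTC in seconds (inf if opening)."""
    closing_speed = v_follower - v_leader
    if closing_speed <= 0.0:
        return float("inf")  # gap constant or growing: no projected collision
    return gap_m / closing_speed


# A 1.2 s TTC event: alarming under most thresholds, yet such events
# frequently resolve without a crash, which is exactly the ambiguity above.
print(ttc(gap_m=12.0, v_follower=25.0, v_leader=15.0))  # -> 1.2
```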
This ambiguity arises because SSMs depend critically on the chosen vehicle-dynamics model, the assumed driver-behavior parameters, and the encoded interaction scenarios. Consequently, correlations between SSM counts and crash counts fluctuate with context, threshold choice, and study design, making SSMs reliable for ranking relative severity but unreliable for quantifying absolute risk. Although Extreme Value Theory has been used within hierarchical Bayesian frameworks to extrapolate crash likelihoods from SSM tails [70-72], it still relies on strong, hard-to-verify assumptions about tail shape and does not systematically resolve these fundamental ambiguities.

SSMs May Not Work: Put simply, SSMs only earn our trust as they near zero. At that point they are critical alarms, but at moderately low values their indication of true crash likelihood remains murky.

3.3 High-fidelity simulation still under-samples the crash tail

State-of-the-art simulators, combining HD maps, multi-body dynamics, and sensor emulation, reproduce everyday traffic with impressive detail. The details of existing simulators are summarized in Tables 4-5 of the Appendix. Extensions such as stochastic driver models [69,73,74], domain-randomized physics [75], reinforcement-learning agents [76-78], and adversarial [79-81] or importance-sampling searches [82,83] have improved fidelity for typical maneuvers. Yet critical edge cases remain under-represented. Driver models, even when randomly perturbed, fail to capture the full spectrum of human error, and physical parameters are almost always randomized independently. As a result, some dangerous joint conditions (e.g., low tire-road friction + delayed braking + poor visibility), which may lie well outside any single marginal tail, occur neither realistically nor with controllable frequency. While marginal long-tail events in any one parameter (such as extremely low friction or severe sensor noise) are themselves high-risk, validation typically measures only aggregate exposure (total simulated kilometers) rather than explicit coverage of these high-risk combinations, offering no guarantee that the rare-event manifold has been explored. Consequently, simulators can score well on overall statistics yet still miss the critical collision modes that drive real-world fatalities, fostering unwarranted confidence in their safety assessments.

3.4 Macro-micro inconsistency

A persistent limitation in current traffic safety research is the disconnect between macro-level and micro-level analyses [84]. Road safety management often emphasizes system-wide indicators, such as crash rates per million vehicle-miles or the identification of high-risk locations (black spots), but overlooks the fine-grained dynamics that cause individual crashes [60,85-87]. Conversely, micro-level studies, such as simulations or driving simulator experiments [14,17,76,88,89], typically examine specific behaviors or events in isolation, without accounting for their broader implications for network-level safety. Furthermore, many simulation platforms are primarily designed for analyzing traffic efficiency and do not explicitly model collisions or safety-critical outcomes.
This disconnect between purpose and application introduces inconsistencies: micro-level insights may fail to generalize or to translate into measurable improvements in real-world safety, especially when the tested conditions are rare or the operational environment differs significantly, a phenomenon known as distribution shift [76,81,90]. Without a mechanism to bridge these scales, we lack the ability to verify whether micro-level innovations produce
system-level benefits, or to prioritize which micro-level scenarios matter most based on macro-level crash data. Closing this loop requires a unified, bidirectional framework in which macro crash patterns inform the generation of critical micro-level scenarios, and micro-level causal evidence is fed back to refine and calibrate macro-level risk models.

Macro-Micro Divide: Accurate micro-level behavior models do not ensure macro-level safety gains. Without integrating insights across scales, detailed fidelity stays inconsequential.

4 Toward Vision Zero: An AI-Centered Agenda

To systematically overcome the core challenges inherent in traditional methods, we propose an integrated AI-driven framework structured around four foundational pillars: (1) New Simulation Platform (a high-fidelity digital twin for perturbation-driven safety evaluation), (2) New Scenario Engine (an advanced generative AI system for realistic and rare-event scenario synthesis), (3) New Validation Suite (multi-scale robustness metrics and rare-event validation), and (4) New Intervention Platform (actionable risk insights powered by interpretable AI reasoning and reinforcement learning). This unified approach explicitly quantifies and identifies true crash vulnerabilities, transitioning from ambiguous surrogate measures to direct, scenario-specific risk quantification through systematic perturbation, generation, validation, and intervention.

Figure 3: A pipeline integrating GenAI-driven scenario creation, digital twin perturbation testing, and robustness evaluation to uncover rare-event risks and support Vision Zero interventions.

4.1 Generative AI Techniques for Realistic Scenario Creation and Behavior Emulation

Generative AI for Scenario Creation. Recent progress in deep generative modeling offers a data-driven alternative to hand-crafted test suites. By learning directly from large-scale real-world driving datasets, modern models can reproduce the entire spectrum of traffic states in multiple formats, including time-series trajectories, rasterized semantic maps, video-like image sequences, and simulated sensor streams. The generated "AI worlds" jointly sample agents (vehicle positions, velocities, goals), environment (road geometry, signage, occlusions), and conditions (lighting, weather, work zones), providing variability that manual scripting struggles to reach. Current research follows three converging threads. First, reinforcement learning is used with a reward function that includes plausibility objectives [91]. Second, conditional generative models, ranging from GANs to diffusion processes, use structured priors such as lane topology [92], initial traffic layouts [93], or free-form natural-language prompts [94,95] to yield controllable, diversified scenarios. Third, large transformer-based diffusion models scale this idea further, capturing macro traffic context and micro interaction details in a single pass and allowing fine-grained scene editing or interpolation [96,97]. Together, these advances recast scenario generation as principled sampling over rich spatio-temporal and sensory data spaces, supplying diverse, realistic, and tunable test cases for both simulation and on-road evaluation.
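Purely as an interface-level illustration, the sketch below shows what jointly sampling agents, environment, and conditions can look like; the dataclass fields and the uniform/choice distributions are placeholder assumptions standing in for a learned generative model.

```python
# Illustrative interface for joint scenario sampling. The distributions
# below are placeholders for a generative model trained on real driving data.
import random
from dataclasses import dataclass


@dataclass
class Agent:
    x_m: float        # longitudinal position
    speed_mps: float
    goal: str         # e.g., "keep_lane", "merge_left"


@dataclass
class Scenario:
    agents: list[Agent]
    road_curvature: float  # environment
    lighting: str          # conditions
    weather: str


def sample_scenario(n_agents: int = 5, seed: int | None = None) -> Scenario:
    rng = random.Random(seed)
    agents = [
        Agent(
            x_m=rng.uniform(0.0, 300.0),
            speed_mps=rng.uniform(10.0, 35.0),
            goal=rng.choice(["keep_lane", "merge_left", "merge_right"]),
        )
        for _ in range(n_agents)
    ]
    return Scenario(
        agents=agents,
        road_curvature=rng.uniform(0.0, 0.02),
        lighting=rng.choice(["day", "dusk", "night"]),
        weather=rng.choice(["clear", "rain", "fog"]),
    )
```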
Personalized and Multi-modal Driver Behavior through AI. Generative AI now supports driver models spanning the full spectrum of human behaviors rather than a single rule set. By training on large naturalistic-trajectory datasets, these models learn latent embeddings that capture diverse behaviors, such as cautious, aggressive, distracted, and highly responsive driving [98]. Sampling from these embeddings in simulation populates scenes with heterogeneous agents, enhancing realism and behavioral diversity [39,40]. Variational autoencoders encode continuous intention manifolds, while GAN/GAIL variants emphasize rare, high-risk maneuvers [99,100].
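A minimal sketch of this sampling step follows; the linear "decoder" mapping the latent embedding to driver parameters is a placeholder for a trained VAE or GAIL decoder, and the parameter ranges are illustrative assumptions.

```python
# Populate a scene with heterogeneous drivers by sampling latent behavior
# embeddings and decoding them into driver parameters. The linear decode
# below is a placeholder for a learned decoder.
import numpy as np

rng = np.random.default_rng(0)


def sample_driver(n_latent: int = 4) -> dict:
    z = rng.standard_normal(n_latent)  # latent behavior embedding
    # Placeholder decode: reaction time [s], desired time gap [s],
    # and lane-change propensity in [0, 1].
    reaction_time = float(np.clip(1.0 + 0.4 * z[0], 0.3, 3.0))
    desired_gap = float(np.clip(1.5 + 0.5 * z[1], 0.5, 4.0))
    lane_change_prop = float(1.0 / (1.0 + np.exp(-z[2])))  # sigmoid
    return {
        "reaction_time_s": reaction_time,
        "desired_gap_s": desired_gap,
        "lane_change_propensity": lane_change_prop,
    }


fleet = [sample_driver() for _ in range(100)]  # cautious-to-aggressive mix
```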
Sampled this way, AI drivers exhibit varied reaction-time distributions, gap-acceptance thresholds, and lane-change propensities, escalating to panic responses under stressors such as phantom braking or adversarial disturbances [101-103]. This fusion of generative realism with adversarial focus yields traffic streams that stress-test both everyday flow and low-probability hazards.

Hypergraph-based modeling of interactive environments. Graph neural networks (GNNs) excel at encoding pair-wise agent dependencies, yet real traffic risk emerges from group-wise interactions involving vehicles, VRUs, and infrastructure [104]. Hypergraphs generalize GNNs by allowing one edge to bind multiple heterogeneous nodes, thereby capturing collective manoeuvres such as platoon oscillations or pedestrian-vehicle negotiation at crossings [44,105]. To represent these rich relations faithfully, a heterogeneous, temporally evolving hypergraph is needed: nodes of different types (vehicle, pedestrian, traffic control device, roadway cell) participate in hyperedges whose composition and strength change over time, learned via dynamic attention mechanisms [106-108]. This structure lets the model reason over high-order dependencies while adapting online to the ever-shifting topology of real traffic streams.

Generative AI for Crash and Near-Crash Events. One core promise of generative modeling lies in its ability to overcome the rarity of crash data by creating synthetic yet plausible near-crash or crash scenarios. These outputs can take the form of time-stamped sequences of agent states, simulated video frames, or multimodal logs consistent with real crash precursors. Current approaches include class resampling to rebalance rare outcomes [109], adversarial perturbation of real-world scenes [83,110,111], diffusion-based generative modeling of rare events [112-114], and reinforcement learning with failure-driven objectives [115]. To maintain the realism and statistical validity of generated data, post-processing steps such as rejection sampling are employed to align the synthetic data with real-world crash distributions, for example by matching empirical annual crash counts per vehicle-kilometer [82]. This step ensures that the overall datasets remain statistically consistent with macroscopic safety data.

Figure 4: Mind-map of generative AI techniques for scenario creation and behavior emulation.

4.2 New Platform: AI-Driven Digital Twin and Human-in-the-Loop (HITL) Design

Leveraging the AI techniques outlined above, our proposed high-fidelity digital twin directly addresses crash rarity, behavioral unpredictability, and multi-agent interaction. Built on PhysX [116] and CARLA [117], the platform fuses precise road geometries, high-order vehicle dynamics, and realistic environmental factors (e.g., weather, visibility, and infrastructure states) into a single unified virtual environment [4].
Figure 4: Mind-map of generative AI techniques for scenario creation and behavior emulation.

4.2 New Platform: AI-Driven Digital Twin and Human-in-the-Loop (HITL) Design

Leveraging the AI techniques outlined above, our proposed high-fidelity digital twin directly addresses crash rarity, behavioral unpredictability, and multi-agent interaction. Built on PhysX [116] and CARLA [117], the platform fuses precise road geometries, high-order vehicle dynamics, and realistic environmental factors (e.g., weather, visibility, and infrastructure states) into a single unified virtual environment [4].
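As one concrete (and deliberately simplified) illustration, the snippet below uses CARLA's Python API to impose degraded weather and populate a scene with background traffic. It assumes a locally running CARLA server; all parameter values are arbitrary choices rather than settings from the proposed platform.

```python
# Sketch: configuring degraded visibility and populating a CARLA scene.
# Assumes a CARLA server is running on localhost; values are illustrative.
import random
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Environmental stressors: heavy rain, fog, and a wet road surface.
weather = carla.WeatherParameters(
    cloudiness=90.0, precipitation=70.0,
    precipitation_deposits=60.0, fog_density=40.0,
)
world.set_weather(weather)

# Populate the scene with autopilot vehicles as background traffic.
blueprints = world.get_blueprint_library().filter("vehicle.*")
spawn_points = world.get_map().get_spawn_points()
for sp in random.sample(spawn_points, k=min(20, len(spawn_points))):
    actor = world.try_spawn_actor(random.choice(blueprints), sp)
    if actor is not None:
        actor.set_autopilot(True)
```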
Beyond the digital twin described above, traffic simulators coupled with physical driving simulators can further probe human behavior, and such human-in-the-loop studies require photorealism: the quality of the visual representation directly affects the validity of human behavioral responses within simulated environments. Recent neural 3D reconstruction methods offer efficient routes to this fidelity. Neural Radiance Fields [118, 119] and Gaussian Splatting [120, 121] enable the creation of visually convincing digital replicas from captured images. Although these methods excel at generating photorealistic visual representations, they often lack the geometric information needed to simulate physical interactions such as collisions and surface friction, elements that are essential for accurate traffic simulation. To bridge this gap between visual fidelity and physical plausibility, geometry-aware reconstruction algorithms have emerged as promising solutions [122–124], enabling more physically accurate simulations while maintaining visual quality. To further enhance the visual fidelity of digital assets, advanced generative AI models can predict and fill the residual gaps between simulation and reality [125, 126], producing more coherent and complete representations.

4.3 New Suites of Evaluation and Reasoning: Multi-Scale Evaluation and Causal Reasoning

The new framework must also rethink how we validate and evaluate safety analysis methods. Instead of relying on traditional validation that focuses on a single domain, we propose a suite of multi-scale hybrid validation metrics that combine indicators of micro-level realism [127] with macro-level safety outcomes [128]. This ensures micro-level realism and enables two additional metrics: (1) crash surrogate alignment, measuring how well surrogate-identified high-risk areas match historical crash hotspots; and (2) a rare-event reproduction index, the statistical similarity between the distribution of outcomes from simulations seeded with real-crash scenarios and the distribution of observed crash events. A multi-objective Pareto search balances fine-grained simulation fidelity, surrogate alignment, and rare-event reproduction, stopping when no further Pareto gain is possible [129, 130].

Causal reasoning. While the aforementioned metrics provide direct quantitative measurement, they lack interpretability and the ability to enforce known constraints, both of which are especially important in crash prediction. Causal reasoning can therefore be incorporated to corroborate the AI engines' understanding: (i) Structural causal models (SCMs). An SCM is a directed acyclic graph whose nodes are traffic variables (speed, friction, gap, etc.) and whose structural equations define how interventions propagate; do-calculus enables counterfactual queries, e.g., "Had friction been higher, would the crash still have occurred?" [131–133]. (ii) SCM-conditioned LLMs generate counterfactual "what-if" scenario reconstructions, making such counterfactual queries scalable to high-dimensional multimodal traffic states. Each explanation is paired with natural-language rationales generated by a Vision Language Model (VLM) fine-tuned on traffic-narration data, allowing stakeholders to inspect causal chains without deciphering raw tensors [134–136]. (iii) Symbolic and neuro-symbolic validators. Symbolic AI encodes high-level traffic rules, such as right-of-way laws, reaction-time limits, and common failure modes, as formal rules [137]. These rules act as scenario filters, or merge with neural modules via differentiable logic layers to create hybrid neuro-symbolic checks [138–140]. Embedding structured reasoning in this way boosts both the robustness and the interpretability of the system.
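To ground the SCM discussion, here is a minimal counterfactual query on a toy two-equation model (friction determines braking distance, which determines the crash outcome) using the standard abduction-action-prediction recipe. The coefficients and observed values are invented for illustration; a real SCM would be fit to traffic data.

```python
# Toy SCM: friction -> braking distance -> crash indicator.
# Counterfactual query via abduction-action-prediction (illustrative
# numbers; a real SCM would be estimated from traffic data).
import math

def braking_distance(speed, friction, noise):
    # Structural equation: kinematic stopping distance plus exogenous noise.
    return speed**2 / (2 * 9.81 * friction) + noise

# Factual world: a crash was observed on a wet, low-friction road.
speed, friction_f, gap = 25.0, 0.4, 75.0        # m/s, unitless, m
dist_f = 85.0                                   # observed stopping distance (m)

# Abduction: recover the exogenous noise consistent with the observation.
noise = dist_f - speed**2 / (2 * 9.81 * friction_f)
crash_f = dist_f > gap

# Action + prediction: do(friction = 0.8), keeping the recovered noise fixed.
dist_cf = braking_distance(speed, 0.8, noise)
print(f"factual crash: {crash_f}, counterfactual crash: {dist_cf > gap}")
```

Run as written, the factual world crashes (85 m of stopping distance against a 75 m gap) while the counterfactual high-friction world does not, which is exactly the "had friction been higher" query posed above.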
4.4 New Intervention Brain: Intervention Design with Reasoning

Building upon the risk insights and causal validation obtained from the multi-scale evaluation suite, the final step is to design intelligent, reasoning-driven interventions that mitigate the identified crash scenarios. At the core of our intervention platform lies reinforcement learning (RL), which frames intervention design as an optimization problem: minimizing the probability of crashes under high-dimensional, uncertain conditions. The digital twin environment (Sections 4.1–4.2) and rare-event reward shaping (Section 4.3) naturally support this formulation, making RL a foundational component of reasoning-driven decision making.
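A minimal sketch of this RL formulation is given below: a toy longitudinal-control environment in the Gymnasium interface whose reward combines a large crash penalty with a graded surrogate-based term, a crude form of rare-event reward shaping. The dynamics, thresholds, and weights are illustrative assumptions, not the platform's actual design.

```python
# Minimal sketch of the RL formulation: a toy car-following environment
# whose reward penalizes crashes and proximity to a crash surrogate (gap).
# Dynamics, thresholds, and reward weights are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class CutInEnv(gym.Env):
    """Ego chooses acceleration; the lead vehicle brakes at random."""

    def __init__(self):
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(
            low=-3.0, high=2.0, shape=(1,), dtype=np.float32)  # m/s^2

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.gap, self.rel_v = 30.0, 0.0   # gap (m), lead-minus-ego speed (m/s)
        return np.array([self.gap, self.rel_v], np.float32), {}

    def step(self, action):
        # Rare hard-braking event by the lead vehicle (2% per 0.1 s step).
        lead_accel = self.np_random.choice([0.0, -4.0], p=[0.98, 0.02])
        self.rel_v += (lead_accel - float(action[0])) * 0.1
        self.gap += self.rel_v * 0.1
        crashed = self.gap <= 0.0
        # Reward shaping: large crash penalty plus a graded penalty as the
        # surrogate (gap) shrinks below a 10 m safety threshold.
        reward = -100.0 if crashed else -max(0.0, 10.0 - self.gap) * 0.1
        obs = np.array([self.gap, self.rel_v], np.float32)
        return obs, reward, crashed, False, {}
```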
Within this high-fidelity simulator integrated with AI components, RL agents explore and refine control policies by interacting with a wide spectrum of crash-prone scenarios. Variants such as adversarial RL [141, 142], robust RL [143], and hierarchical/hybrid RL [144, 145] offer specialized mechanisms for handling uncertainty and maintaining performance under worst-case disturbances. However, conventional RL lacks semantic grounding, interpretability [146], and generalization to previously unseen rare events. To address these limitations, reasoning-enhanced extensions that augment RL with VLMs are needed. VLM-augmented RL integrates high-level semantic cues and prompt-based guidance, using memory retrieved from historical scenarios to inform exploration and action selection [147, 148]. The VLM supports RL by offering interpretable rationales [149], context-aware priors [150], and natural-language feedback during training [151] to improve policy stability and correct unsafe behavior. Beyond the RL paradigm, Vision-Language-Action (VLA) models [152–154], which map perceptual inputs directly to intervention actions through pretrained models, offer an interpretable alternative. The efforts in Sections 4.1 to 4.4 together form a unified framework that advances our mission toward Vision Zero by transforming identified risk into intelligent, actionable policy.

5 Conclusions

Crash prediction will remain inconsistent and data-starved until we couple network-level statistics with trajectory-level details and augment both with plausible counterfactual scenarios. In this position paper, we argue that counterfactual augmentation, causal learning, and chain-of-thought reasoning with multimodal large models offer a path forward. Generative "what-if" engines simulate unobserved crash scenarios, causal graphs reveal why a near-miss becomes a crash or remains safe, and large language models translate these causal pathways into clear, human-readable explanations. Together, they transform sparse crash records into rich training signals for crash prediction. Implementing this pipeline requires interdisciplinary collaboration: AI researchers lead the development of predictive modeling, generative scenario synthesis, and causal reasoning techniques; automotive and transportation engineers contribute domain knowledge; human factors specialists incorporate realistic cognitive processes; and policymakers ensure that interventions are practical, reliable, and equitable. If pursued, this agenda would transform traffic safety from reactive forensics into proactive prevention. By exposing emerging technologies to millions of synthetic yet plausible crash scenarios before deployment, it enables early identification of vulnerabilities. Real-time risk alerts can inform driver or system intervention, while causal insights guide targeted infrastructure improvements. Such science is essential to any credible path toward Vision Zero.

References

[1] National Center for Statistics and Analysis. Early estimate of motor vehicle traffic fatalities in 2024. Traffic Safety Facts Crash Stats Brief Statistical Summary DOT HS 813 710, National Highway Traffic Safety Administration, April 2025.
[2] L. Blincoe, T. Miller, J.-S. Wang, D. Swedler, T. Coughlin, B. Lawrence, F. Guo, S. Klauer, and T. Dingus. The economic and societal impact of motor vehicle crashes, 2019 (revised). Technical Report DOT HS 813 403, National Highway Traffic Safety Administration, February 2023.
[3] Dominique Lord, Simon P Washington, and John N Ivan. Poisson, poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory. Accident Analysis & Prevention, 37(1):35–46, 2005.
[4] Hao Zhang, Ximin Yue, Kexin Tian, Sixu Li, Keshu Wu, Zihao Li,
Dominique Lord, and Yang Zhou. Virtual roads, smarter safety: A digital twin framework for mixed autonomous traffic safety analysis. arXiv preprint arXiv:2504.17968, 2025.
[5] Zhuoning Yuan, Xun Zhou, and Tianbao Yang. Hetero-convlstm: A deep learning approach to traffic accident prediction on heterogeneous spatio-temporal data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 984–992, 2018.
[6] Zhengyang Zhou, Yang Wang, Xike Xie, Lianliang Chen, and Hengchang Liu. Riskoracle: A minute-level citywide traffic accident forecasting framework. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1258–1265, 2020.
[7] Amin Karimi Monsefi, Pouya Shiri, Ahmad Mohammadshirazi, Nastaran Karimi Monsefi, Ron Davies, Sobhan Moosavi, and Rajiv Ramnath. Crashformer: A multimodal architecture to predict the risk of crash. In Proceedings of the 1st ACM SIGSPATIAL International Workshop on Advances in Urban-AI, pages 42–51, 2023.
[8] Zihao Li, Chaolun Ma, Yang Zhou, Dominique Lord, and Yunlong Zhang. Leveraging textual description and structured data for estimating crash risks of traffic violation: A multimodal learning approach. IEEE Transactions on Intelligent Transportation Systems, pages 1–13, 2025. Early Access.
[9] Junlan Chen, Qijie He, Pei Liu, Wei Ma, and Ziyuan Pu. Enhancing crash frequency modeling based on augmented multi-type data by hybrid vae-diffusion-based generative neural networks. arXiv preprint arXiv:2501.10017, 2025.
[10] Zihang Wei, Yang Zhou, Zihao Li, Mihir Kulkarni, and Yunlong Zhang. Supporting equitable and responsible highway safety improvement funding allocation strategies: why AI prediction biases matter. Accident Analysis & Prevention, 202:107585, 2024.
[11] Errol R. Hoffmann and Rugolf G. Mortimer. Drivers' estimates of time to collision. Accident Analysis & Prevention, 26(4):511–520, 1994.
[12] PJ Cooper. Experience with traffic conflicts in Canada with emphasis on "post encroachment time" techniques. In International Calibration Study of Traffic Conflict Techniques, pages 75–96. Springer, 1984.
[13] Md Mohasin Howlader, Fred Mannering, and Md Mazharul Haque. Estimating crash risk and injury severity considering multiple traffic conflict and crash types: A bivariate extreme value approach. Analytic Methods in Accident Research, 42:100331, 2024.
[14] Tomoyuki Suzuki, Hirokatsu Kataoka, Yoshimitsu Aoki, and Yutaka Satoh. Anticipating traffic accidents with adaptive loss and large-scale incident db. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3521–3529, 2018.
[15] Muhammad Monjurul Karim, Yu Li, Ruwen Qin, and Zhaozheng Yin. A system of vision sensor based deep neural networks for complex driving scene analysis in support of crash risk assessment and prevention. arXiv preprint arXiv:2106.10319, 2021.
[16] Kequan Chen, Chengcheng Xu, Pan Liu, Zhibin Li, and Yuxuan Wang. Evaluating the performance of traffic conflict measures in real-time crash risk prediction using pre-crash vehicle trajectories. Accident Analysis & Prevention, 203:107640, 2024.
[17] Jianwu Fang, Leilei Li, Junfei Zhou, Junbin Xiao, Hongkai Yu, Chen Lv, Jianru Xue, and Tat-Seng Chua. Abductive ego-view accident video understanding for safe driving perception. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[18] Keshu Wu, Zihao Li, Sixu Li, Xinyue Ye, Dominique Lord, and Yang Zhou.
Ai2-active safety: AI-enabled interaction-aware active safety analysis with vehicle dynamics. arXiv preprint arXiv:2505.00322, 2025.
[19] Tianqi Wang, Sukmin Kim, Ji Wenxuan, Enze Xie, Chongjian Ge, Junsong Chen, Zhenguo Li, and Ping Luo. Deepaccident: A motion and accident prediction benchmark for v2x autonomous driving. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 5599–5606, 2024.
[20] Krešimir Kušić, René Schumann, and Edouard Ivanjko. A digital twin in transportation: Real-time synergy of traffic data streams and simulation for virtualizing motorway dynamics. Advanced Engineering Informatics, 55:101858, 2023.
[21] Keshu Wu, Pei Li, Yang Cheng, Steven T Parker, Bin Ran, David A Noyce, and Xinyue Ye. A digital twin framework for physical-virtual integration in v2x-enabled connected vehicle corridors. IEEE Transactions on Intelligent Transportation Systems, 2025.
[22] Dominique Lord, Xiao Qin, and Srinivas R Geedipally. Highway Safety Analytics and Modeling. Elsevier, 2021.
[23] Fred Mannering, Chandra R Bhat, Venky Shankar, and Mohamed Abdel-Aty. Big data, traditional data and the tradeoffs between prediction and causality in highway-safety analysis. Analytic Methods in Accident Research, 25:100113, 2020.
[24] Ezra Hauer. Statistical road safety modeling. Transportation Research Record, 1897(1):81–87, 2004.
[25] Zhuoning Yuan, Zhishuai Guo, Nitesh Chawla, and Tianbao Yang. Compositional training for end-to-end deep auc maximization. In International Conference on Learning Representations, 2021.
[26] Tianbao Yang. Deep auc maximization for medical image classification: Challenges and opportunities. arXiv preprint arXiv:2111.02400, 2021.
[27] Zhuoning Yuan, Yan Yan, Milan Sonka, and Tianbao Yang. Large-scale robust deep auc maximization: A new surrogate loss and empirical studies on medical image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3040–3049, 2021.
[28] Tianbao Yang and Yiming Ying. Auc maximization in the era of big data and ai: A survey. ACM Computing Surveys, 55(8):1–37, 2022.
[29] Ronald Aylmer Fisher. Theory of statistical estimation. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 22, pages 700–725. Cambridge University Press, 1925.
[30] Henry X Liu and Shuo Feng. Curse of rarity for autonomous vehicles. Nature Communications, 15(1):4808, 2024.
[31] Vineet Kosaraju, Amir Sadeghian, Roberto Martín-Martín, Ian Reid, Hamid Rezatofighi, and Silvio Savarese. Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks. In Advances in Neural Information Processing Systems (NeurIPS), pages 13772–13782, 2019.
[32] Shreya Gong, Mark Hoogendoorn, Yike Lu, and Matthew Turk. Social-stgcnn: Social spatio-temporal graph convolutional neural networks for human trajectory prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10534–10543, 2020.
[33] Anonymous Zhora. What truly matters in trajectory prediction for autonomous driving?, 2023. NeurIPS Poster.
[34] Manoj Bhat and Jonathan Francis. Multi-modal agent trajectory prediction with local self-attention contexts. In NeurIPS Workshop on Machine Learning for Autonomous Driving, 2020.
[35] Rowan McAllister. Multimodal trajectory prediction for autonomous driving with semantic map and dynamic graph attention network. In NeurIPS Workshop on Machine Learning for Autonomous Driving, 2020.
[36] Robert M Yerkes, John D Dodson, et al. The relation of strength of stimulus to rapidity of habit-formation.
Journal of Comparative Neurology and Psychology, 18(5):459–482, 1908.
[37] Xuesong Wang, Meixin Zhu, Ming Chen, and Paul Tremont. Drivers' rear-end collision avoidance
behaviors under different levels of situational urgency. Transportation Research Part C: Emerging Technologies, 71:419–433, 2016.
[38] Siddharth Singi, Zhanpeng He, Alvin Pan, Sandip Patel, Gunnar A Sigurdsson, Robinson Piramuthu, Shuran Song, and Matei Ciocarlie. Decision making for human-in-the-loop robotic agents via uncertainty-aware reinforcement learning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 7939–7945. IEEE, 2024.
[39] Hao Shao, Yuxuan Hu, Letian Wang, Guanglu Song, Steven L Waslander, Yu Liu, and Hongsheng Li. Lmdrive: Closed-loop end-to-end driving with large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15120–15130, 2024.
[40] Zelin Qian, Kun Jiang, Weitao Zhou, Junze Wen, Cheng Jing, Zhong Cao, and Diange Yang. An end-to-end autonomous driving pre-trained transformer model for multi-behavior-optimal trajectory generation. In 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), pages 4730–4737. IEEE, 2023.
[41] Saransh Sahu, Yasir Ali, Sebastien Glaser, and Md Mazharul Haque. A physics-informed risk force theory for estimating pedestrian crash risk by severity using artificial intelligence-based video analytics. Analytic Methods in Accident Research, 46, 2025.
[42] Stefan Zernetsch, Viktor Kress, Maarten Bieshaar, Jan Schneegans, Günther Reitberger, Erich Fuchs, Bernhard Sick, and Konrad Doll. Detecting intentions of vulnerable road users based on collective intelligence as a basis for automated driving. In Cooperatively Interacting Vehicles: Methods and Effects of Automated Cooperation in Traffic, pages 35–87. Springer International Publishing, Cham, 2024.
[43] Keshu Wu, Yang Zhou, Haotian Shi, Xiaopeng Li, and Bin Ran. Graph-based interaction-aware multimodal 2d vehicle trajectory prediction using diffusion graph convolutional networks. IEEE Transactions on Intelligent Vehicles, 9(2):3630–3643, 2023.
[44] Keshu Wu, Yang Zhou, Haotian Shi, Dominique Lord, Bin Ran, and Xinyue Ye. Hypergraph-based motion generation with multi-modal interaction relational reasoning. arXiv preprint arXiv:2409.11676, 2024.
[45] Raphael Trumpp, Harald Bayerlein, and David Gesbert. Modeling interactions of autonomous vehicles and pedestrians with deep multi-agent reinforcement learning for collision avoidance. In 2022 IEEE Intelligent Vehicles Symposium (IV), pages 331–336. IEEE, 2022.
[46] Mark Colley, Julian Czymmeck, Mustafa Kücükkocak, Pascal Jansen, and Enrico Rukzio. Pedsumo: Simulacra of automated vehicle-pedestrian interaction using sumo to study large-scale effects. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pages 890–895, 2024.
[47] Chi Zhang and Christian Berger. Learning the pedestrian-vehicle interaction for pedestrian trajectory prediction. In 2022 8th International Conference on Control, Automation and Robotics (ICCAR), pages 230–236. IEEE, 2022.
[48] Highway Capacity Manual. Highway capacity manual. Washington, DC, 2(1):1, 2000.
[49] Kunsong Shi, Yuankai Wu, Haotian Shi, Yang Zhou, and Bin Ran. An integrated car-following and lane changing vehicle trajectory prediction algorithm based on a deep neural network. Physica A: Statistical Mechanics and its Applications, 599:127303, 2022.
[50] Zihao Li, Yang Zhou, Danjue Chen, and Yunlong Zhang. Enhancing vehicular platoon stability in the presence of communication cyberattacks: A reliable longitudinal cooperative control strategy.
Transportation Research Part C: Emerging Technologies, 163:104660, 2024.
[51] Haotian Shi, Kunsong Shi, Keshu Wu, Wan Li, Yang Zhou, and Bin Ran. A predictive deep reinforcement learning based connected automated vehicle anticipatory longitudinal control
in a mixed traffic lane change condition. IEEE Internet of Things Journal, 2025.
[52] Hao Zhang, Sixu Li, Zihao Li, Mohammad Anis, Dominique Lord, and Yang Zhou. Why anticipatory sensing matters in commercial acc systems under cut-in scenarios: A perspective from stochastic safety analysis. Accident Analysis & Prevention, 218:108064, 2025.
[53] Kexin Tian, Haotian Shi, Yang Zhou, and Sixu Li. Physically analyzable ai-based nonlinear platoon dynamics modeling during traffic oscillation: A koopman approach. IEEE Transactions on Intelligent Transportation Systems, 2025.
[54] Naoki Sugiyama and Takashi Nagatani. Multiple-vehicle collision induced by a sudden stop in traffic flow. Physics Letters A, 376(22):1803–1806, 2012.
[55] Tim Salzmann, Boris Ivanovic, Punarjay Chakravarty, and Marco Pavone. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16, pages 683–700. Springer, 2020.
[56] Haicheng Liao, Haoyu Sun, Huanming Shen, Chengyue Wang, Chunlin Tian, KaHou Tam, Li Li, Chengzhong Xu, and Zhenning Li. Crash: Crash recognition and anticipation system harnessing with context-aware and temporal focus attentions. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 11041–11050, 2024.
[57] Meng Wang, Zach Noonan, Pnina Gershon, and Shannon C Roberts. Multimodal crash likelihood prediction: A complexity-infused approach integrating semantic, contextual, and driving features. arXiv preprint arXiv:2411.17886, 2024.
[58] Artur Grigorev, Khaled Saleh, and Adriana-Simona Mihaita. Traffic accident risk forecasting using contextual vision transformers with static map generation and coarse-fine-coarse transformers. In 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), pages 4762–4769. IEEE, 2023.
[59] Jon Antin. Design of the In-Vehicle Driving Behavior and Crash Risk Study: In Support of the SHRP 2 Naturalistic Driving Study. Transportation Research Board, 2011.
[60] Abhinav Nippani, Dongyue Li, Haotian Ju, Haris Koutsopoulos, and Hongyang Zhang. Graph neural networks for road safety modeling: Datasets and evaluations for accident analysis. Advances in Neural Information Processing Systems, 36:52009–52032, 2023.
[61] Kira H Janstrup, Sigal Kaplan, Tove Hels, Jens Lauritsen, and Carlo G Prato. Understanding traffic crash under-reporting: linking police and medical records to individual and crash characteristics. Traffic Injury Prevention, 17(6):580–584, 2016.
[62] Dominique Lord and Srinivas Reddy Geedipally. Safety prediction with datasets characterised with excess zero responses and long tails. In Safe Mobility: Challenges, Methodology and Solutions, volume 11, pages 297–323. Emerald Publishing Limited, 2018.
[63] Dominique Lord and Simon Washington. Safe Mobility: Challenges, Methodology and Solutions. Emerald Publishing Limited, 2018.
[64] Dominique Lord and Fred Mannering. The statistical analysis of crash-frequency data: A review and assessment of methodological alternatives. Transportation Research Part A: Policy and Practice, 44(5):291–305, 2010.
[65] Chunjiao Dong, Chunfu Shao, Juan Li, and Zhihua Xiong. An improved deep learning model for traffic crash prediction. Journal of Advanced Transportation, 2018(1):3869106, 2018.
[66] Jieling Jin, Helai Huang, Chen Yuan, Ye Li, Guoqing Zou, and Hongli Xue.
Real-time crash risk prediction in freeway tunnels considering features interaction and unobserved heterogeneity: A two-stage deep learning modeling framework. Analytic Methods in Accident Research, 40:100306, 2023.
[67] Chen Wang, Yuanchang Xie, Helai Huang, and
Pan Liu. A review of surrogate safety measures and their applications in connected and automated vehicles safety modeling. Accident Analysis & Prevention, 157:106157, 2021.
[68] Sixu Li, Mohammad Anis, Dominique Lord, Hao Zhang, Yang Zhou, and Xinyue Ye. Beyond 1d and oversimplified kinematics: A generic analytical framework for surrogate safety measures. Accident Analysis & Prevention, 204:107649, 2024.
[69] Zihao Li, Yang Zhou, Danjue Chen, and Yunlong Zhang. Disturbances and safety analysis of linear adaptive cruise control for cut-in scenarios: A theoretical framework. Transportation Research Part C: Emerging Technologies, 168:104576, 2024.
[70] Andrew P Tarko. Estimating the expected number of crashes with traffic conflicts and the lomax distribution: a theoretical and numerical exploration. Accident Analysis & Prevention, 113:63–73, 2018.
[71] Lai Zheng, Karim Ismail, and Xianghai Meng. Freeway safety estimation using extreme value theory approaches: A comparative study. Accident Analysis & Prevention, 62:32–41, 2014.
[72] Mohammad Anis, Sixu Li, Srinivas R Geedipally, Yang Zhou, and Dominique Lord. Real-time risk estimation for active road safety: Leveraging waymo av sensor data with hierarchical bayesian extreme value models. Accident Analysis & Prevention, 211:107880, 2025.
[73] Mert Albaba, Yildiray Yildiz, Nan Li, Ilya Kolmanovsky, and Anouck Girard. Stochastic driver modeling and validation with traffic data. In 2019 American Control Conference (ACC), pages 4198–4203. IEEE, 2019.
[74] Zihao Li, Yang Zhou, Jiwan Jiang, Yunlong Zhang, and Mihir Mandar Kulkarni. Adaptive cruise control under threat: A stochastic active safety analysis of sensing attacks in mixed traffic. Accident Analysis & Prevention, 209:107813, 2025.
[75] Alexander Amini, Igor Gilitschenski, Jacob Phillips, Julia Moseyko, Rohan Banerjee, Sertac Karaman, and Daniela Rus. Learning robust control policies for end-to-end autonomous driving from data-driven simulation. IEEE Robotics and Automation Letters, 5(2):1143–1150, 2020.
[76] Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10400–10409, 2021.
[77] Quanyi Li, Zhenghao Peng, Lan Feng, Qihang Zhang, Zhenghai Xue, and Bolei Zhou. Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3):3461–3475, 2022.
[78] Xiaoxue Yang, Yajie Zou, Hao Zhang, Xiaobo Qu, and Lei Chen. Improved deep reinforcement learning for car-following decision-making. Physica A: Statistical Mechanics and Its Applications, 624:128912, 2023.
[79] Wei Li, CW Pan, Rong Zhang, JP Ren, YX Ma, Jin Fang, FL Yan, QC Geng, XY Huang, HJ Gong, et al. Aads: Augmented autonomous driving simulation using data-driven algorithms. Science Robotics, 4(28):eaaw0863, 2019.
[80] Shuo Feng, Xintao Yan, Haowei Sun, Yiheng Feng, and Henry X Liu. Intelligent driving intelligence test for autonomous vehicles with naturalistic and adversarial environment. Nature Communications, 12(1):748, 2021.
[81] Xintao Yan, Zhengxia Zou, Shuo Feng, Haojie Zhu, Haowei Sun, and Henry X Liu. Learning naturalistic driving environment with statistical realism. Nature Communications, 14(1):2037, 2023.
[82] Shuo Feng, Haowei Sun, Xintao Yan, Haojie Zhu, Zhengxia Zou, Shengyin Shen, and Henry X Liu.
Dense reinforcement learning for safety validation of autonomous vehicles. Nature, 615(7953):620–627, 2023.
[83] Mark Koren,
Saud Alsaif, Ritchie Lee, and Mykel J Kochenderfer. Adaptive stress testing for autonomous vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1–7. IEEE, 2018.
[84] Qing Cai, Mohamed Abdel-Aty, Jaeyoung Lee, and Helai Huang. Integrating macro- and micro-level safety analyses: a bayesian approach incorporating spatial interaction. Transportmetrica A: Transport Science, 15(2):285–306, 2019.
[85] Quanjun Chen, Xuan Song, Harutoshi Yamada, and Ryosuke Shibasaki. Learning deep representation from big and heterogeneous data for traffic accident inference. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
[86] John Krumm and Eric Horvitz. Risk-aware planning: Methods and case study on safe driving route. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, pages 4708–4714, 2017.
[87] Alameen Najjar, Shun'ichi Kaneko, and Yoshikazu Miyanaga. Combining satellite imagery and open data to map road safety. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[88] Luca Bergamini, Yawei Ye, Oliver Scheel, Long Chen, Chih Hu, Luca Del Pero, Błażej Osiński, Hugo Grimmett, and Peter Ondruska. Simnet: Learning reactive self-driving simulations from real-world observations. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 5119–5125. IEEE, 2021.
[89] Nico Montali, John Lambert, Paul Mougin, Alex Kuefler, Nicholas Rhinehart, Michelle Li, Cole Gulino, Tristan Emrich, Zoey Yang, Shimon Whiteson, et al. The waymo open sim agents challenge. Advances in Neural Information Processing Systems, 36:59151–59171, 2023.
[90] Benjamin Stoler, Ingrid Navarro, Meghdeep Jana, Soonmin Hwang, Jonathan Francis, and Jean Oh. Safeshift: Safety-informed distribution shifts for robust trajectory prediction in autonomous driving. In 2024 IEEE Intelligent Vehicles Symposium (IV), pages 1179–1186. IEEE, 2024.
[91] Haolan Liu, Liangjun Zhang, Siva Kumar Sastry Hari, and Jishen Zhao. Safety-critical scenario generation via reinforcement learning based editing. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 14405–14412. IEEE, 2024.
[92] Luke Rowe, Roger Girgis, Anthony Gosselin, Liam Paull, Christopher Pal, and Felix Heide. Scenario dreamer: Vectorized latent diffusion for generating driving simulation environments. arXiv preprint arXiv:2503.22496, 2025.
[93] Chiyu Max Jiang, Yijing Bai, Andre Cornman, Christopher Davis, Xiukun Huang, Hong Jeon, Sakshum Kulshrestha, John Lambert, Shuangyu Li, Xuanyu Zhou, Carlos Fuertes, Chang Yuan, Mingxing Tan, Yin Zhou, and Dragomir Anguelov. Scenediffuser: Efficient and controllable driving simulation initialization and rollout. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 55729–55760. Curran Associates, Inc., 2024.
[94] Shuhan Tan, Boris Ivanovic, Xinshuo Weng, Marco Pavone, and Philipp Kraehenbuehl. Language conditioned traffic generation. arXiv preprint arXiv:2307.07947, 2023.
[95] Jiawei Zhang, Chejian Xu, and Bo Li. Chatscene: Knowledge-enabled safety-critical scenario generation for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15459–15469, 2024.
[96] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023.
[97] Zipeng Guo, Yuezhao Yu, and Chao Gou. Controllable diffusion models for safety-critical driving scenario generation. In 2023 IEEE 35th International Conference
on Tools with Artificial Intelligence (ICTAI), pages 717–722. IEEE, 2023.
[98] Yunchao Zhang, Yanyan Chen, Xin Gu, NN Sze, and Jianling Huang. A proactive crash risk prediction framework for lane-changing behavior incorporating individual driving styles. Accident Analysis & Prevention, 188:107072, 2023.
[99] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In International Conference on Machine Learning, pages 2391–2400. PMLR, 2017.
[100] Debaditya Roy, Tetsuhiro Ishizaka, C Krishna Mohan, and Atsushi Fukuda. Vehicle trajectory prediction at intersections using interaction based generative adversarial networks. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pages 2318–2323. IEEE, 2019.
[101] Jia Yu Tee, Oliver De Candido, Wolfgang Utschick, and Philipp Geiger. On learning the tail quantiles of driving behavior distributions via quantile regression and flows. In 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), pages 5876–5883. IEEE, 2023.
[102] Zhenhai Gao, Mingxi Bao, Taisong Cui, Fangyuan Shi, Xianqing Chen, Wenhao Wen, Fei Gao, and Rui Zhao. Collision risk assessment for intelligent vehicles considering multi-dimensional uncertainties. IEEE Access, 12:57780–57795, 2024.
[103] Ben Nassi, Yisroel Mirsky, Dudi Nassi, Raz Ben-Netanel, Oleg Drokin, and Yuval Elovici. Phantom of the adas: Securing advanced driver-assistance systems from split-second phantom attacks. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pages 293–308, 2020.
[104] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Rachel Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
[105] Chengyue Wang, Haicheng Liao, Bonan Wang, Yanchen Guan, Bin Rao, Ziyuan Pu, Zhiyong Cui, Cheng-Zhong Xu, and Zhenning Li. Nest: A neuromodulated small-world hypergraph trajectory prediction model for autonomous driving. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 808–816, 2025.
[106] Tiehua Zhang, Yuze Liu, Zhishu Shen, Xingjun Ma, Peng Qi, Zhijun Ding, and Jiong Jin. Learning from heterogeneity: A dynamic learning framework for hypergraphs. IEEE Transactions on Artificial Intelligence, 2025.
[107] Nicolò Ruggeri, Federico Battiston, and Caterina De Bacco. Framework to generate hypergraphs with community structure. Physical Review E, 109(3):034309, 2024.
[108] Malik Khizar Hayat, Shan Xue, Jia Wu, and Jian Yang. Heterogeneous hypergraph embedding for node classification in dynamic networks. IEEE Transactions on Artificial Intelligence, 2024.
[109] Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, Russ Tedrake, and John C Duchi. Scalable end-to-end autonomous vehicle testing via rare-event simulation. Advances in Neural Information Processing Systems, 31, 2018.
[110] Davis Rempe, Jonah Philion, Leonidas J Guibas, Sanja Fidler, and Or Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17305–17315, 2022.
[111] Yuewen Mei, Tong Nie, Jian Sun, and Ye Tian.
Seeking to collide: Online safety-critical scenario generation for autonomous driving with retrieval augmented large language models. arXiv preprint arXiv:2505.00972, 2025.
[112] Junlan Chen, Pei Liu, Zihao Zhang, Hongyi Zhao, Yufei Ji, and Ziyuan Pu. Risk-informed diffusion transformer for long-tail trajectory
prediction in the crash scenario. arXiv preprint arXiv:2501.16349, 2025.
[113] Yuting Xie, Xianda Guo, Cong Wang, Kunhua Liu, and Long Chen. Advdiffuser: Generating adversarial safety-critical driving scenarios via guided diffusion. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9983–9989. IEEE, 2024.
[114] Chejian Xu, Aleksandr Petiushko, Ding Zhao, and Bo Li. Diffscene: Diffusion-based safety-critical scenario generation for autonomous vehicles. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 8797–8805, 2025.
[115] Amar Kulkarni, Shangtong Zhang, and Madhur Behl. Crash: Challenging reinforcement-learning based adversarial scenarios for safety hardening. arXiv preprint arXiv:2411.16996, 2024.
[116] NVIDIA Corporation. Nvidia physx sdk. https://github.com/NVIDIA-Omniverse/PhysX, 2025. Version 5.6.0, released 25 March 2025.
[117] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on Robot Learning, pages 1–16. PMLR, 2017.
[118] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[119] Lei He, Leheng Li, Wenchao Sun, Zeyu Han, Yichen Liu, Sifa Zheng, Jianqiang Wang, and Keqiang Li. Neural radiance field in autonomous driving: A survey. arXiv preprint arXiv:2404.13816, 2024.
[120] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), Article 139, 2023.
[121] Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, and Sida Peng. Street gaussians: Modeling dynamic urban scenes with gaussian splatting. In European Conference on Computer Vision, pages 156–173. Springer, 2024.
[122] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024.
[123] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5354–5363, 2024.
[124] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023.
[125] Hassan Abu Alhaija, Jose Alvarez, Maciej Bala, Tiffany Cai, Tianshi Cao, Liz Cha, Joshua Chen, Mike Chen, Francesco Ferroni, Sanja Fidler, et al. Cosmos-transfer1: Conditional world generation with adaptive multimodal control. arXiv preprint arXiv:2503.14492, 2025.
[126] Lue Fan, Hao Zhang, Qitai Wang, Hongsheng Li, and Zhaoxiang Zhang. Freesim: Toward free-viewpoint camera simulation in driving scenes. arXiv preprint arXiv:2412.03566, 2024.
[127] Haoran Song, Wenchao Ding, Yuxuan Chen, Shaojie Shen, Michael Yu Wang, and Qifeng Chen. Pip: Planning-informed trajectory prediction for autonomous driving. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16, pages 598–614. Springer, 2020.
[128] Xuedong Yan, Mohamed Abdel-Aty, Essam Radwan, Xuesong Wang, and Praveen Chilakapati. Validating a driving simulator using surrogate safety measures.
Accident Analysis & Prevention, 40(1):274–288, 2008.
[129] Thomas Veran, Pierre-Edouard Portier, and François Fouquet. Interpretable hierarchical symbolic
regression for safety-critical systems with an application to highway crash prediction. Engineering Applications of Artificial Intelligence, 117:105534, 2023.
[130] R Timothy Marler and Jasbir S Arora. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26:369–395, 2004.
[131] Leland Gerson Neuberg. Causality: models, reasoning, and inference, by Judea Pearl, Cambridge University Press, 2000. Econometric Theory, 19(4):675–685, 2003.
[132] Elias Bareinboim and Judea Pearl. Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, 113(27):7345–7352, 2016.
[133] Shengyu Zhu, Ignavier Ng, and Zhitang Chen. Causal discovery with reinforcement learning. arXiv preprint arXiv:1906.04477, 2019.
[134] Gaël Gendron, Jože M Rožanec, Michael Witbrock, and Gillian Dobbie. Counterfactual causal inference in natural language with large language models. arXiv preprint arXiv:2410.06392, 2024.
[135] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[136] Nick Pawlowski, Daniel Coelho de Castro, and Ben Glocker. Deep structural causal models for tractable counterfactual inference. Advances in Neural Information Processing Systems, 33:857–869, 2020.
[137] Md Kamruzzaman Sarker, Lu Zhou, Aaron Eberhart, and Pascal Hitzler. Neuro-symbolic artificial intelligence: Current trends. AI Communications, 34(3):197–209, 2022.
[138] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Deepproblog: Neural probabilistic logic programming. Advances in Neural Information Processing Systems, 31, 2018.
[139] Pascal Hitzler, Aaron Eberhart, Monireh Ebrahimi, Md Kamruzzaman Sarker, and Lu Zhou. Neuro-symbolic approaches in artificial intelligence. National Science Review, 9(6):nwac035, 2022.
[140] Artur d'Avila Garcez, Marco Gori, Luis C Lamb, Luciano Serafini, Michael Spranger, and Son N Tran. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. arXiv preprint arXiv:1905.06088, 2019.
[141] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In International Conference on Machine Learning, pages 2817–2826. PMLR, 2017.
[142] Inaam Ilahi, Muhammad Usama, Junaid Qadir, Muhammad Umar Janjua, Ala Al-Fuqaha, Dinh Thai Hoang, and Dusit Niyato. Challenges and countermeasures for adversarial attacks on deep reinforcement learning. IEEE Transactions on Artificial Intelligence, 3(2):90–109, 2021.
[143] Jun Morimoto and Kenji Doya. Robust reinforcement learning. Neural Computation, 17(2):335–359, 2005.
[144] Shubham Pateria, Budhitama Subagdja, Ah-hwee Tan, and Chai Quek. Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5):1–35, 2021.
[145] Ximin Yue, Haotian Shi, Yang Zhou, and Zihao Li. Hybrid car following control for cavs: Integrating linear feedback and deep reinforcement learning to stabilize mixed traffic. Transportation Research Part C: Emerging Technologies, 167:104773, 2024.
[146] Claire Glanois, Paul Weng, Matthieu Zimmer, Dong Li, Tianpei Yang, Jianye Hao, and Wulong Liu. A survey on interpretable reinforcement learning. Machine Learning, 113(8):5847–5890, 2024.
[147] Keke Long, Haotian Shi, Jiaxi Liu, and Xiaopeng Li. Vlm-mpc: Vision language foundation model (vlm)-guided model predictive controller (mpc) for autonomous driving. arXiv preprint arXiv:2408.04821, 2024.
[148] Zihao Sheng, Zilin Huang, Yansong Qu, Yue Leng, Sruthi Bhavanam, and Sikai Chen. Curricuvlm: Towards safe autonomous
driving via personalized safety-critical curriculum learning with vision-language models. arXiv preprint arXiv:2502.15119, 2025.
[149] Zilin Huang, Zihao Sheng, Yansong Qu, Junwei You, and Sikai Chen. Vlm-rl: A unified vision language models and reinforcement learning framework for safe autonomous driving. arXiv preprint arXiv:2412.15544, 2024.
[150] Chengzhengxu Li, Xiaoming Liu, Yichen Wang, Duyi Li, Yu Lan, and Chao Shen. Dialogue for prompting: a policy-gradient-based discrete prompt generation for few-shot learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18481–18489, 2024.
[151] Yiqun Duan, Qiang Zhang, and Renjing Xu. Prompting multi-modal tokens to enhance end-to-end autonomous driving imitation learning with llms. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6798–6805. IEEE, 2024.
[152] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.
[153] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024.
[154] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, and Ury Zhilinsky. π0: A vision-language-action flow model for general robot control, 2024.
[155] Steven M Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Inc., 1993.
[156] Thomas A Dingus, Sheila G Klauer, Vicki Lewis Neale, Andy Petersen, Suzanne E Lee, Jeremy Sudweeks, Miguel A Perez, Jonathan Hankey, David Ramsey, Santosh Gupta, et al. The 100-car naturalistic driving study, phase ii: results of the 100-car field experiment. Technical report, United States Department of Transportation, National Highway Traffic Safety Administration, 2006.
[157] Nicole van Nes, Jonas Bärgman, Michiel Christoph, and Ingrid van Schagen. The potential of naturalistic driving for in-depth understanding of driver behavior: Udrive results and beyond. Safety Science, 119:11–20, 2019.
[158] Michael Arthur Regan, Aa Williamson, Raphael Grzebieta, J Charlton, M Lenne, B Watson, Nc Haworth, Andry Rakotonirainy, Jd Woolley, Rd Anderson, et al. The australian 400-car naturalistic driving study: Innovation in road safety research and policy. In Proc. Australasian Road Safety Res., Policing Educ. Conf., pages 1–13, 2013.
[159] Meixin Zhu, Xuesong Wang, Andrew Tarko, and Shou'en Fang. Modeling car-following behavior on urban expressways in shanghai: A naturalistic driving study. Transportation Research Part C: Emerging Technologies, 93:425–445, 2018.
[160] Yvonne Barnard, Fabian Utesch, Nicole van Nes, Rob Eenink, and Martin Baumann. The study design of udrive: the naturalistic driving study across europe for cars, trucks and scooters. European Transport Research Review, 8:1–10, 2016.
[161] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom.
nuscenes: A multimodal dataset for autonomous driving. In Proceedings
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11621–11631, 2020.
[162] Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R Qi, Yin Zhou, et al. Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9710–9719, 2021.
[163] Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, Jhony Kaesemodel Pontes, et al. Argoverse 2: Next generation datasets for self-driving perception and forecasting. arXiv preprint arXiv:2301.00493, 2023.
[164] R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhamuni, A. Ferreira, M. Yuan, B. Low, A. Jain, P. Ondruska, S. Omari, S. Shah, A. Kulkarni, A. Kazakova, C. Tao, L. Platinsky, W. Jiang, and V. Shet. Lyft level 5 av dataset 2019. https://level5.lyft.com/dataset/, 2019.
[165] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
[166] Haibao Yu, Yizhen Luo, Mao Shu, Yiyi Huo, Zebang Yang, Yifeng Shi, Zhenglong Guo, Hanyu Li, Xing Hu, Jirui Yuan, et al. Dair-v2x: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21361–21370, 2022.
[167] Haibao Yu, Wenxian Yang, Hongzhi Ruan, Zhenwei Yang, Yingjuan Tang, Xu Gao, Xin Hao, Yifeng Shi, Yifeng Pan, Ning Sun, et al. V2x-seq: A large-scale sequential dataset for vehicle-infrastructure cooperative perception and forecasting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5486–5495, 2023.
[168] Hoon Kim, Kangwook Lee, Gyeongjo Hwang, and Changho Suh. Crash to not crash: Learn to identify dangerous vehicles using a simulator. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 978–985, 2019.
[169] Yajun Xu, Huan Hu, Chuwen Huang, Yibing Nan, Yuyao Liu, Kai Wang, Zhaoxiang Liu, and Shiguo Lian. Tad: A large-scale benchmark for traffic accidents detection from video surveillance. IEEE Access, 2024.
[170] Runsheng Xu, Hao Xiang, Xin Xia, Xu Han, Jinlong Li, and Jiaqi Ma. Opv2v: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. In 2022 International Conference on Robotics and Automation (ICRA), pages 2583–2589. IEEE, 2022.
[171] Yiming Li, Dekun Ma, Ziyan An, Zixun Wang, Yiqi Zhong, Siheng Chen, and Chen Feng. V2x-sim: Multi-agent collaborative perception dataset and benchmark for autonomous driving. IEEE Robotics and Automation Letters, 7(4):10914–10921, 2022.
[172] Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. Viena2: A driving anticipation dataset. arXiv e-prints, pages arXiv–1810, 2018.
[173] Tao Sun, Mattia Segu, Janis Postels, Yuxuan Wang, Luc Van Gool, Bernt Schiele, Federico Tombari, and Fisher Yu. Shift: a synthetic driving dataset for continuous multi-task domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21371–21382, 2022.
[174] Daniel Krajzewicz, Georg Hertkorn, Christian Rössel, and Peter Wagner. Sumo (simulation of urban mobility): an open-source traffic simulation. In
Proceedings of the 4th Middle East Symposium on Simulation and Modelling (MESM2002), pages 183–187, 2002.
[175] PTV Planung Transport Verkehr GmbH, Karlsruhe, Germany. PTV Vissim 2025 [Computer software], 2025. Version 25.0. User manual available from PTV Group.
[176] Aimsun SLU, Barcelona, Spain. Aimsun Next 24 [Computer software], 2024. Version 24.0.2. User manual available at docs.aimsun.com.
[177] SYSTRA Ltd., Edinburgh, United Kingdom. Paramics Discovery [Computer software], 2025. Version 27.0. User guide available from SYSTRA Ltd.
[178] Abolhassan Halati, Henry Lieu, and Susan Walker. Corsim: corridor traffic simulation model. In Traffic Congestion and Traffic Safety in the 21st Century: Challenges, Innovations, and Opportunities. Urban Transportation Division, ASCE; Highway Division, ASCE; Federal Highway Administration, USDOT; and National Highway Traffic Safety Administration, USDOT, 1997.
[179] Siemens Digital Industries Software, Plano, TX, USA. Simcenter Prescan 2503 [Computer software], 2025. Version 25.03. User manual and release notes available from Siemens.
[180] IPG Automotive GmbH, Karlsruhe, Germany. CarMaker 14.0 [Computer software], 2024. Version 14.0. Reference manual available from IPG Automotive.
[181] VIRES Simulationstechnologie GmbH, Karlsruhe, Germany. Virtual Test Drive (VTD) [Computer software], 2025. Version 25.0. User manual and datasheet available from Hexagon AB.
[182] Guodong Rong, Byung Hyun Shin, Hadi Tabatabaee, Qiang Lu, Steve Lemke, Mārtiņš Možeiko, Eric Boise, Geehoon Uhm, Mark Gerow, Shalin Mehta, et al. Lgsvl simulator: A high fidelity simulator for autonomous driving. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pages 1–6. IEEE, 2020.
[183] Nathan Koenig and Andrew Howard. Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), volume 3, pages 2149–2154. IEEE, 2004.
[184] Radu Serban, Alessandro Tasora, Dan Negrut, et al. Chrono: An open-source multi-physics simulation package. Presented at the 5th Joint International Conference on Multibody System Dynamics, Lisbon, Portugal, 2018.
[185] Jay Taves, Asher Elmquist, Aaron Young, Radu Serban, and Dan Negrut. Synchrono: A scalable, physics-based simulation platform for testing groups of autonomous vehicles and/or robots. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2251–2256. IEEE, 2020.
[186] Ismet Goksad Erdagi, Slavica Gavric, and Aleksandar Stevanovic. Development of multimodal physical and virtual traffic reality simulation system. Applied Sciences, 15(9):5115, 2025.
[187] Peng Chen, Haoyuan Ni, Liang Wang, Guizhen Yu, and Jian Sun. Safety performance evaluation of freeway merging areas under autonomous vehicles environment using a co-simulation platform. Accident Analysis & Prevention, 199:107530, 2024.
[188] Yongmin Shuai, Yu Zhang, Fuhao Liu, Xiaobin Qiao, Yunfei Xiong, and Yong Zeng. Co-simulation of power grid, information network and transportation network simulation system. In 2022 IEEE 2nd International Conference on Software Engineering and Artificial Intelligence (SEAI), pages 199–203. IEEE, 2022.
[189] Yunsheng Ma, Xu Cao, Wenqian Ye, Can Cui, Kai Mei, and Ziran Wang. Learning autonomous driving tasks via human feedbacks with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4985–4995, 2024.
[190] Shuyang Li, Talha Azfar, and Ruimin Ke. Chatsumo: Large language model for automating traffic scenario generation
in simulation of urban mobility. IEEE Transactions on Intelligent Vehicles, 2024.
[191] NVIDIA Corporation. kit-extension-sample-airoomgenerator. https://github.com/NVIDIA-Omniverse/kit-extension-sample-airoomgenerator, 2025. Git commit 78a4b7c, released 25 Apr 2025.
[192] Bo-Kai Ruan, Hao-Tang Tsui, Yung-Hui Li, and Hong-Han Shuai. Traffic scene generation from natural language description for autonomous vehicles with large language model. arXiv preprint arXiv:2409.09575, 2024.

Appendix

Roadmap. In Section A, we establish the fundamental limits of rare-event crash probability estimation. Section B quantifies the Fisher information loss arising from unobserved stochastic factors. Section C introduces our counterfactual near-miss augmentation framework, demonstrating how it amplifies Fisher information and reduces estimator variance. Section D summarizes existing traffic datasets, including naturalistic driving datasets, autonomous vehicle datasets, and traditional crash records. Section E reviews existing traffic simulation platforms, including traditional traffic simulators, autonomous driving simulators, and co-simulation frameworks. It also highlights emerging platforms enhanced by LLMs.

A Limits of Rare Crash Probability Estimation

Crash occurrence $Y_t$ is a binary outcome at each time step, modeled as $Y_t \sim \mathrm{Bernoulli}(p_t)$, where the instantaneous crash probability $p_t$ lies between $10^{-9}$ and $10^{-6}$ [3, 30, 81]. Let $p = \mathbb{E}[p_t]$ denote the average crash probability over time. For a Bernoulli model with parameter $p$, the Fisher information, which quantifies how much the data informs the estimation of $p$, is
$$I(p) = \frac{1}{p(1-p)} \approx \frac{1}{p}, \quad \text{for } p \ll 1.$$
The Cramér–Rao bound [155] gives a lower bound on the variance of any unbiased estimator $\hat{p}$ of $p$:
$$\mathrm{Var}(\hat{p}) \ge \frac{1}{N\,I(p)} \approx \frac{p}{N},$$
where $N$ is the number of independent samples. The relative standard error (RSE) of $\hat{p}$ is therefore
$$\mathrm{RSE} = \frac{\sqrt{\mathrm{Var}(\hat{p})}}{p} \gtrsim \frac{1}{\sqrt{Np}}.$$
This result highlights a fundamental challenge: when crash probabilities are extremely small, even extremely large datasets yield noisy estimates, making rare-event learning statistically inefficient.
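For intuition, consider a back-of-the-envelope calculation under illustrative values (assumed here, not taken from the cited sources): with $p = 10^{-7}$ per time step and $N = 10^{9}$ observed steps,
$$\mathrm{RSE} \gtrsim \frac{1}{\sqrt{Np}} = \frac{1}{\sqrt{10^{9} \cdot 10^{-7}}} = \frac{1}{\sqrt{100}} = 10\%,$$
so even a billion observed time steps still leave roughly a ten percent relative error on $\hat{p}$.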
B Fisher Information Loss from Unobserved Stochastic Factors

We define the full state of a traffic scene at time $t$ as $Z_t = (X_t, E_t, H_t)$, where $X_t$ represents observable vehicle kinematics, $E_t$ denotes environmental conditions (e.g., weather, road surface), and $H_t$ captures latent human states (e.g., attention, reaction time). While $X_t$ is typically measurable, $E_t$ and $H_t$ evolve stochastically. Because of the randomness in $E_t$ and $H_t$, the crash outcome $Y_t$ remains uncertain even if $X_t$ is fully known. This motivates defining the latent crash probability
$$p_t = \Pr(Y_t = 1 \mid X_t) = \mathbb{E}[Y_t \mid X_t].$$
In practice, estimating the full conditional function $p_t = \Pr(Y_t = 1 \mid X_t)$ is challenging, as environmental factors $E_t$ and human states $H_t$ are often only partially observed or measured with noise. This partial observability introduces additional variability into the relationship between $X_t$ and $Y_t$. As a result, most methods focus on estimating the average crash probability $p = \mathbb{E}[p_t] = \Pr(Y_t = 1)$ using only the observed binary outcomes $\{Y_t\}$. The incomplete information about $E_t$ and $H_t$ weakens the statistical dependence between $X_t$ and $Y_t$ because it masks the underlying causal factors that contribute to crash risk. Mathematically, this loss of information arises from marginalizing over the unobserved factors:
$$\Pr(Y_t = 1 \mid X_t) = \int \Pr(Y_t = 1 \mid X_t, E_t, H_t)\, p(E_t, H_t \mid X_t)\, dE_t\, dH_t.$$
This marginalization smooths out variability in crash risk, reducing the curvature of the log-likelihood and thereby lowering the Fisher information. As a result, even large datasets offer limited precision in estimating $p$, particularly when crashes are rare.
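A small numerical illustration of this effect, under invented latent crash rates: by Jensen's inequality, the Fisher information of the pooled Bernoulli model never exceeds the expected per-stratum information that would be available if the latent factor were observed.

```python
# Illustration of information loss from marginalizing a latent factor H:
# the pooled Bernoulli Fisher information is bounded above by the
# expected per-stratum information. Latent crash rates are illustrative.
import numpy as np

p_given_h = np.array([1e-7, 9e-7])   # crash prob. for H=0 (dry), H=1 (wet)
w = np.array([0.5, 0.5])             # latent-state probabilities

p_bar = float(w @ p_given_h)                      # marginal crash probability
info_pooled = 1.0 / (p_bar * (1.0 - p_bar))       # Fisher info, H unobserved
info_latent = float(w @ (1.0 / (p_given_h * (1.0 - p_given_h))))  # H observed

print(f"pooled I(p) = {info_pooled:.3e}, latent-aware I = {info_latent:.3e}")
# Jensen's inequality guarantees info_latent >= info_pooled.
```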
C Augmentation via Counterfactual Near-Miss Generation

To address the inefficiency of crash-only estimation, we advocate a shift toward counterfactual augmentation. Many traffic samples do not result in crashes but exhibit high-risk behaviors that correspond to an elevated latent crash probability. We therefore augment the dataset with near-miss samples satisfying
$$\Pr(Y_t = 1 \mid Z_{t-\Delta:t}) > \tau,$$
where $Z_{t-\Delta:t}$ denotes traffic scene features over a short horizon. Let
$$\alpha = \Pr\big(\Pr(Y_t = 1 \mid Z_{t-\Delta:t}) > \tau,\; Y_t = 0\big) \gg p,$$
so that the augmented positive rate becomes $p_{\mathrm{aug}} = p + \alpha \gg p$. Referring to Appendix A, the RSE improves from
$$\mathrm{RSE}(\hat{p}) = \frac{1}{\sqrt{Np}} \;\longrightarrow\; \mathrm{RSE}_{\mathrm{aug}}(\hat{p}) \approx \frac{1}{\sqrt{N(p+\alpha)}} \ll \frac{1}{\sqrt{Np}}.$$
For instance, with $p = 10^{-6}$ and $\alpha = 10^{-3}$, the sample size required to reach a given RSE shrinks by roughly three orders of magnitude. Thus, counterfactual near-miss augmentation amplifies Fisher information and sharply reduces estimator variance without waiting for additional observed crashes.
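As an illustrative sketch of the selection rule above, the snippet below filters non-crash windows whose estimated latent risk exceeds $\tau$ and reports the augmented positive rate $p_{\mathrm{aug}} = p + \alpha$. The `risk_model` argument stands in for whatever estimator of $\Pr(Y_t = 1 \mid Z_{t-\Delta:t})$ is available; both it and the threshold are hypothetical placeholders, not components specified in this paper.

```python
import numpy as np

def augment_with_near_misses(windows, labels, risk_model, tau):
    """Select crash-free windows whose estimated latent risk exceeds tau.

    windows    : (num_samples, ...) array of scene features Z_{t-Delta:t}
    labels     : (num_samples,) binary crash outcomes Y_t
    risk_model : callable mapping windows -> estimated Pr(Y_t = 1 | Z_{t-Delta:t})
    tau        : risk threshold (hypothetical; left unspecified above)
    """
    labels = np.asarray(labels)
    risk = np.asarray(risk_model(windows))
    near_miss = (risk > tau) & (labels == 0)  # high-risk but crash-free: counterfactual positives
    p_aug = labels.mean() + near_miss.mean()  # p_aug = p + alpha
    return near_miss, p_aug
```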
D Summary Table of Traffic Safety Datasets

This section reviews the principal data sources underpinning existing traffic safety research, broadly classified into three categories: traditional crash records (Table 1), naturalistic driving studies (NDS, Table 2), and autonomous vehicle (AV) datasets (Table 3). Traditional crash records offer retrospective documentation of crash events but often lack detailed contextual or behavioral information. In contrast, NDS datasets provide continuous, real-world observations of driver behavior through instrumented vehicles equipped with various onboard sensors, enabling the analysis of pre-crash dynamics and near-miss incidents. Note that the NDS datasets in this study focus specifically on safety research. AV datasets represent a more recent advancement, further providing high-resolution, multi-modal sensor data collected from self-driving platforms.

Table 1: Overview of traditional crash record datasets

| Dataset | Agency | Region | Sources^a | Focus^b | Primary use |
|---|---|---|---|---|---|
| CRIS | TxDOT | TX | □ | ⊕ | TX safety, trend analysis, identify high-risk corridors |
| FHSMV | FDOT | FL | □ | ⊕ | FL crash trend analysis, risk mapping |
| NCDOT | NCDOT | NC | □ | ⊕ | NC policy planning |
| SWITRS | CHP | CA | □ | ⊕ | CA crash analysis |
| FARS | NHTSA | US | ▽ | ⊘ | National fatality trends |
| CRSS | NHTSA | US | △ | ⊕ | Risk factor estimation |
| CIREN | NHTSA | US | △ | ⊙ | Crash injury mechanisms, crash biomechanics |
| HSIS | FHWA | States* | ♢ | ⊗ | Design evaluation, SPF, CMF development |

CRIS: Crash Records Information System. FHSMV: Florida Highway Safety and Motor Vehicles. NCDOT: North Carolina Department of Transportation. SWITRS: Statewide Integrated Traffic Records System. FARS: Fatality Analysis Reporting System. CRSS: Crash Report Sampling System. CIREN: Crash Injury Research & Engineering Network. HSIS: Highway Safety Information System.
^a Sources: [□] Law enforcement; [△] Police reports; [▽] State safety agencies; [♢] State DOTs.
^b Focus: [⊕] Crashes resulting in property damage, injury, or death; [⊘] Fatal; [⊙] Severe occupant injuries; [⊗] Comprehensive database covering motor vehicle crashes, roadway inventory, traffic volumes, and more.
States*: CA, IL, ME, MN, NC, WA, OH, and NC (Charlotte).

Table 2: Overview of NDS datasets for safety studies

| Dataset | Vehicle Class | # Drivers | Timeline | VMT^a | Region | Study Location |
|---|---|---|---|---|---|---|
| SHRP 2 [59] | Cars, SUVs, Minivans, Pickups | 3100 | 36 months (2010–2013) | 33 M | US | 6 cities (WA, NY, PA, NC, IN, FL) |
| 100-Car NDS [156] | Cars, SUVs, Minivans | 241 | 12–13 months (2004–2005) | 2 M | US | 2 cities (Washington, Virginia) |
| CNDS [157] | Cars, SUVs, Minivans, Trucks | 149 | 12–18 months (2012–2017) | 1.43 M | Canada | 1 city (Saskatoon) |
| ANDS [158] | Cars, SUVs | 360 | 4 months (2015–2017) | 0.93 M | Australia | 2 cities (Victoria, New South Wales) |
| SH-NDS [159] | Cars, SUVs, Minivans | 60 | 36 months (2012–2015) | 0.1 M | China | 1 city (Shanghai) |
| UDRIVE [160] | Cars, trucks, motorcycles | 192 | 21 months (2012–2017) | 1.43 M | Europe | 6 countries (France, UK, Spain, Poland, Germany, Netherlands) |

SHRP 2: Second Strategic Highway Research Program. 100-Car NDS: 100-Car Naturalistic Driving Study. CNDS: Canadian Naturalistic Driving Study. ANDS: Australian Naturalistic Driving Study. SH-NDS: Shanghai Naturalistic Driving Study. UDRIVE: The European Naturalistic Driving Study.
^a VMT: Vehicle Miles Traveled, the total number of miles traveled by vehicles.

Table 3: Overview of representative open-source AV datasets from real-world and simulated environments

| Dataset | Crash^a | Sensor^b | Tasks^c | Hz^d | V2X^e | HD Map^f | Scope / Notes |
|---|---|---|---|---|---|---|---|
| Real-World Datasets | | | | | | | |
| nuScenes [161] | ✗ | C L R | D T S M | 2 | ✗ | ✗ | 40k urban AV scenes, 1000 segments, 23 classes, 2 cities |
| Waymo [162] | ✗ | C L | D T M | 10 | ✗ | ✓ | 390k scenarios, 1950 segments, 4 classes, 6 cities, 12.6M trajectories |
| Argoverse-2 [163] | ✗ | C L | D T S M | 10 | ✗ | ✓ | 250k scenarios, 113 segments, 11 s duration, 10 classes, 6 cities |
| Lyft Level 5 [164] | ✗ | C L R | M | 10 | ✗ | ✓ | 170k urban AV instances, 25 s duration, 10 classes, 1 city |
| KITTI [165] | ✗ | L | D T M | 10 | ✗ | ✗ | 15k scenes, 10 segments, full-stack AV dataset |
| DAIR-V2X [166] | ✗ | L | D | – | V2I | ✓ | First real-world AV dataset, comprising 71,254 image frames |
| V2X-Seq [167] | ✗ | L | D T M | 5–10 | V2I | ✓ | 15,000 frames, 95 scenarios, 80,000 V2I scenarios, 672 driving hrs |
| YoutubeCrash [168] | ✓ | ✗ | ✗ | – | ✗ | ✗ | Public video crashes |
| TAD [169] | ✓ | ✗ | ✗ | – | ✗ | ✗ | Surveillance crash footage only |
| Simulator-Based Datasets | | | | | | | |
| OPV2V [170] | ✗ | C L | D T M | 10 | V2V | ✗ | 33k multi-agent V2V sim, 18k V2X frames, 8 digital towns in CARLA |
| V2X-Sim [171] | ✗ | C L | D T S M | 5 | V2V, V2I | ✗ | 47k annotated samples, 10k V2X co-sim scenarios |
| VIENA2 [172] | ✓ | ✗ | ✗ | – | ✗ | ✗ | Synthetic rare-event videos, crash types |
| GTACrash [168] | ✓ | ✗ | ✗ | – | ✗ | ✗ | Simulated crash clips |
| DeepAccident [19] | ✓ | C L R | D T S M | 10 | V2V, V2I | ✓ | 285k samples, 57k V2X, proactive safety sim |
| Shift [173] | ✗ | C L | S M | 10 | ✗ | ✗ | 4850 segments, 33 min duration, 2.5M frames, captured in 8 cities |

^a Crash: labeled crash or near-miss events are present.
^b Sensor: high-resolution sensors present (Camera: C, LiDAR: L, Radar: R).
^c Tasks: supports tasks such as Detection: D, Tracking: T, Segmentation: S, Motion Forecasting: M.
^d Hz: approximate sampling frequency (Hz); "–" indicates not reported.
^e V2X: scenario construction via Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) communication; ✗ = none.
^f HD Map: high-definition map provided.

E Summary Table of Traffic Simulation Platforms

In this section, we summarize existing traffic simulators (Table 4), including traditional traffic simulation platforms, autonomous driving simulators, and co-simulation frameworks. Emerging simulation platforms enhanced by LLMs are also included (Table 5). The evaluation focuses on simulator characteristics relevant to traffic safety analysis. Specifically, features such as 3D environment modeling and high-fidelity vehicle dynamics are emphasized, as they are essential for safety-critical scenario analysis. In contrast, features less directly related to safety, such as V2X or V2V communication capabilities, are not considered in this review.

Table 4: Capabilities of major traffic simulation platforms

| Name | CityTF^a | 3DEnv^b | Dyn^c | Sensors^d | VehInt^e | Open^f |
|---|---|---|---|---|---|---|
| Traffic Microscopic / Mesoscopic Simulators | | | | | | |
| SUMO [174] | ✓ | ✗ | ✗ | ✗ | ✓* | ✓ |
| PTV Vissim [175] | ✓ | ✓* | ✗ | ✗ | ✓* | ✗ |
| Aimsun Next [176] | ✓ | ✓* | ✗ | ✗ | ✓* | ✗ |
| Paramics Discovery [177] | ✓ | ✓* | ✗ | ✗ | ✓* | ✗ |
| CORSIM [178] | ✓ | ✗ | ✗ | ✗ | ✓* | ✗ |
| High-Fidelity Vehicle / Driving & AV Simulators | | | | | | |
| CARLA [117] | ✗ | ✓ | ✓* | ✓ | ✓* | ✓ |
| Simcenter PreScan [179] | ✗ | ✓ | ✓* | ✓ | ✓* | ✗ |
| IPG CarMaker [180] | ✗ | ✓ | ✓* | ✓ | ✓* | ✗ |
| VIRES VTD [181] | ✓ | ✓ | ✓* | ✓ | ✓* | ✗ |
| LG SVL Simulator [182] | ✗ | ✓ | ✓* | ✓ | ✓* | ✓ |
| Gazebo [183] | ✗ | ✓ | ✓* | ✓ | ✗ | ✓ |
| Project Chrono [184] | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ |
| NVIDIA PhysX [116] | ✗ | ✗ | ✓* | ✗ | ✗ | ✓ |
| SynChrono [185] | ✗ | ✓ | ✓ | ✓ | ✓* | ✓ |
| Co-Simulation Frameworks | | | | | | |
| CARLA+SUMO+PhysX [4] | ✓ | ✓ | ✓* | ✓ | ✓* | – |
| CARLA+Vissim [186] | ✓ | ✓ | ✓* | ✓ | ✓* | – |
| PreScan+Vissim [187] | ✓ | ✓ | ✓* | ✓ | ✓* | – |
| SUMO+OMNeT++ [188] | ✓ | ✗ | ✗ | ✗ | ✓* | – |

^a CityTF: traffic flow with rule-based, car-following, lane-changing, and signal control.
^b 3DEnv: 3-D road geometry, buildings, weather effects (✓* = presentation-grade, ✓ = sensor/physics-grade).
^c Dyn: high-fidelity vehicle dynamics level (✓* = multi-body, ✓ = high-order multi-body + deformable terrain).
^d Sensors: native virtual sensors (camera, LiDAR, radar, etc.).
^e VehInt: interaction of multiple road-user classes (vehicle, pedestrian, etc.) (✓* = pair-wise interaction, ✓ = group-wise interaction) [18].
^f Open: open source or commercial.

Table 5: Key capabilities of LLM-enhanced simulation frameworks

| Framework | SceneGen^a | PolicyGen^b | Coverage^c | WorldBuild^d | VehInt^e | Multimodal^f |
|---|---|---|---|---|---|---|
| ChatScene+CARLA [95] | ✓ | ✗ | ✓ | ✗ | ✓* | ✗ |
| CodeLLM+CARLA [189] | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| ChatSUMO [190] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ChatGPT+Omniverse [191] | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ |
| ChatGPT+CARLA [192] | ✓ | ✗ | ✓ | ✗ | ✓* | ✗ |

^a SceneGen: text-to-scenario generation (roads, actors, states).
^b PolicyGen: LLM writes or edits driving-policy code.
^c Coverage: generates both ordinary (day-to-day) traffic scenes and safety-critical scenarios.
^d WorldBuild: 3-D asset / map generation or placement (e.g., weather conditions, 3D road geometry).
^e VehInt: interaction of multiple road-user classes (vehicle, pedestrian, etc.) (✓* = pair-wise interaction, ✓ = group-wise interaction) [18].
^f Multimodal: multimodal human drivers.
arXiv:2505.21746v1 [cs.CV] 27 May 2025

Learning to See More: UAS-Guided Super-Resolution of Satellite Imagery for Precision Agriculture

Arif Masrur (Esri, New York, NY), Peder A. Olsen (Microsoft Research, Redmond, WA), Paul R. Adler (USDA Agricultural Research Service, University Park, PA), Carlan Jackson (Dept. of Electrical Engineering and Computer Science, Alabama A&M University, AL), Matthew W. Myers (USDA Agricultural Research Service, University Park, PA), Nathan Sedghi (Dept. of Environmental Science and Technology, Univ. of Maryland, College Park, MD), Ray R. Weil (Dept. of Environmental Science and Technology, Univ. of Maryland, College Park, MD)

ABSTRACT

Unmanned Aircraft Systems (UAS) and satellites are important data sources for precision agriculture, yet each presents trade-offs. Satellite data provide broad temporal, spatial, and spectral coverage but lack the fine resolution needed for many precision farming applications, while UAS can provide high spatial detail but are limited by coverage and cost constraints, especially for hyperspectral data. This study introduces a novel framework that fuses satellite and UAS imagery using neural network-based super-resolution methods. By integrating data across the spatial, spectral, and temporal domains, we can leverage the strengths of both platforms in a cost-effective manner. We used estimation of cover crop biomass and nitrogen (N) as a case study to evaluate our methodology. By spectrally extending the UAS RGB data to the critical vegetation red edge and near-infrared regions, we created high-resolution Sentinel-2 imagery and improved the accuracy of biomass and N estimation by 18% and 31%, respectively. Importantly, our results demonstrate that UAS data need only be collected from a subset of fields and time points. From these limited observations, farmers can then 1) enhance the spectral detail of UAS RGB imagery; 2) increase the spatial resolution by using satellite data; and 3) extend these enhancements spatially and across the growing season at the revisit frequency of the satellite. Our SRCNN-based spectral extension model showed considerable promise for model transferability to other cropping systems in the Upper and Lower Chesapeake Bay regions. Additionally, it remains effective even when cloud-free satellite data are unavailable, relying solely on the UAS RGB input. The spatial extension model produces better biomass and N predictions than models built on raw UAS RGB images. Thus, the farmer can stop flying UAS once a specialized spatial extension model has been trained from targeted UAS RGB data. While several super-resolution innovations are introduced, the core contribution is a practical, scalable system for precision agriculture. The model is lightweight by design to ensure accessibility and affordability for on-farm use.

Keywords: Super-resolution · SRCNN image reconstruction · Spectral range and resolution · Spectral extension · Sentinel-2 · UAS · Winter cover cropping · Precision farming

∗Corresponding author: amasrur.du@gmail.com (or amasrur@esri.com)

1 Introduction

Remote sensing (RS) technologies provide tremendous opportunities to improve precision crop management decisions [Seelan et al., 2003, Weiss et al., 2020]. While the effectiveness of these technologies improves with higher spectral range and spatial resolution [Hunt Jr and Daughtry, 2018], these characteristics vary widely across platforms.
Unmanned Aircraft Systems (UAS) have become indispensable tools in precision agriculture [Radoglou-Grammatikis et al., 2020, Singh et al., 2020
, Zhang and Kovacs, 2012], enabling timely, high-resolution field-level assessments of crop biomass [Wang et al., 2021] and nitrogen (N) status [Argento et al., 2021, Grüner et al., 2021], as well as detection of weeds, diseases, and pest infestations [Dash et al., 2018, Watt et al., 2017, Zhu et al., 2024]. However, despite their spatial precision, UAS-based imaging is constrained by limited spectral range and coverage due to high operational costs [de Oca and Flores, 2021]. In contrast, freely available satellite imagery offers broad spatial and temporal coverage but often lacks the fine resolution required for precision agriculture applications. To bridge this gap, AI-based image fusion techniques [Samadzadegan et al., 2025], typically dominated by pan-sharpening [Li et al., 2022], have been employed to enhance satellite image resolution. A more effective strategy incorporates high-resolution hyperspectral ground-truth data from UAS or airborne platforms, which better support model development. However, these methods are limited when training data consist of imagery from sensors with differing spatial and spectral characteristics. This creates a two-fold challenge: the fusion models must learn to enhance spatial resolution and spectral richness simultaneously. To address this, we can utilize super-resolution methods [Dong et al., 2015] from computer vision, which reconstruct high-resolution images from low-resolution inputs by learning spatial detail with deep learning (DL) methods. Typically, in super-resolution for low-resolution remote sensing platforms, the high-resolution training targets have different spectral characteristics. By simulating a low-resolution RS platform (e.g., Sentinel-2) at a higher resolution based on UAS hyperspectral data, we can provide high-resolution training targets that match the spectral properties of the low-resolution multispectral sensor, thus allowing the DL model to focus on the super-resolution task only and removing the spectral distortion issue entirely.

In this study, we introduce a novel framework based on super-resolution and spectral extension to simultaneously enhance the spatial and spectral fidelity of satellite data. This approach offers a scalable and cost-effective solution for both research and operational precision agriculture across a range of spectral, spatial, and temporal domains.

Figure 1: Application scenarios of super-resolution in cost-effective precision farming. In all these scenarios we present neural networks that improve performance over existing methods using original UAS RGB or Sentinel-2 datasets (see Tables 4 and 7).

In practical agricultural applications, the choice of an appropriate RS platform is influenced by factors such as availability, cost, and suitability for specific tasks. Figure 1 illustrates how our super-resolution framework can be utilized in various scenarios by leveraging existing RS platforms like Sentinel-2 and UAS to improve upon traditional methods. We observe consistent qualitative and quantitative improvements in image resolution and spectral information content across all these scenarios, whether UAS RGB, Sentinel-2, or both are available. In at least one scenario demonstrated in this study, our method enables farmers to eliminate the need for UAS flights altogether, achieving comparable performance using only satellite-based imagery. Our model that fuses UAS and satellite data, through a process we term spectral extension,
produces very high-fidelity image reconstructions at sub-meter spatial resolution. Combining UAS RGB with satellite imagery in this way unlocks access to critical remote sensing indices, such as those based on the vegetation red edge (VRE) and near-infrared (NIR) bands, which are not available from RGB sensors alone. An early version of this spectral extension model is available via the FarmVibes open-source repository (https://github.com/microsoft/farmvibes-ai/blob/main/notebooks/spectral_extension/spectral_extension.ipynb). Additionally, these spectrally enriched high-resolution outputs can be repurposed to train new satellite-only super-resolution models that produce sharpened images over the non-flown areas. We term this process spatial extension; it enables the development of crop- and field-specific supervised learning models without the need for ground-truth data from UAS RGB or costly hyperspectral sensors.

To better contextualize our super-resolution modeling approach and its contributions, we next review the historical development and current best practices in the use of UAS and satellite-based remote sensing for precision agriculture.

1.1 The historical context

Historically, both satellite and UAS platforms have been used to estimate key characteristics of plants and soil, such as above-ground biomass [Schreiber et al., 2022] and nitrogen (N) [Belgiu et al., 2023, Berger et al., 2020], with studies showing that higher spatial resolution and broader spectral range are associated with improved estimation accuracy. This suggests that hyperspectral sensors covering the 400–2,500 nm range could provide the most accurate models, but it may not be feasible or cost-effective to fly large acreage with a UAS platform. This trade-off has led to increasing interest in image fusion techniques that combine the spatial detail of UAS with the spectral breadth and coverage of satellite data. Together, they have supported applications ranging from crop monitoring [Maimaitijiang et al., 2020] and physiological stress detection [Dash et al., 2018, Sagan et al., 2019] to forest resource management [Marx et al., 2017, Puliti et al., 2018] and habitat monitoring [Stark et al., 2018].

This synergy between satellites and UAS is achieved through various image fusion approaches based mainly on pan-sharpening, decomposition, and deep learning [Samadzadegan et al., 2025]. Image fusion occurs at the pixel, feature, and decision levels [Li et al., 2017], with the common objective of improving the spatial resolution of the low-resolution image while preserving broad contextual and spectral information. Many DL-based pan-sharpening methods are applied at the pixel level to fuse multispectral (MS) and panchromatic (Pan) images, hyperspectral (HS) and Pan images, as well as paired HS and MS images, using architectures that include autoencoders, convolutional neural networks (CNN), generative adversarial networks (GAN), and visual transformers (ViT) [Li et al., 2022]. Many methods treat image fusion as an image super-resolution problem that recovers a high-resolution image from a given low-resolution one [Dong et al., 2016], allowing a DL model to learn the relationship between high-resolution and low-resolution image patches. Super-resolution approaches have been shown to be successful for pixel upscaling factors in the range 2–10 for Sentinel-2 [Adigun et al., 2022, Galar et al., 2020, Lanaras et al.,
2018, Salgueiro Romero et al., 2020, Tarasiewicz et al., 2023] and historic Landsat imagery [Kong et al., 2023], and are increasingly being applied in precision agriculture [Jonak et al., 2024, Meng et al., 2024]. Image colorization is another technique that augments a gray-scale or single-channel image [Wu et al., 2021], but it may suffer from intensity mismatches between the input and output images; a simultaneous super-resolution and colorization approach is therefore proposed by [Liu et al., 2018]. Figure 2 gives an overview of the different approaches to super-resolution. It should be noted that pan-sharpening methods are typically unsupervised, while traditional super-resolution methods applied to satellite imagery are supervised in that they use higher-resolution training targets with different spectral characteristics.

Pixel-level fusion of satellite MS and UAS images is generally a challenging task, even for state-of-the-art super-resolution techniques [Toosi et al., 2025], because the pixel resolution in the satellite domain is typically much larger (1–10 m) than in the UAS domain (1–5 cm). One satellite image pixel may easily cover 10,000 or more UAS pixels, requiring an upscaling factor of 8–10 to obtain a high-resolution image from a low-resolution one. As a result, fusion between UAS and spaceborne images has traditionally occurred at a late stage, where each platform contributes an independent decision or higher-level features that are then combined [Alvarez-Vanhard et al., 2021, Ouhami et al., 2021]. Unlike single-modal fusion tasks, such as panchromatic and MS fusion, fusion across different platforms presents additional complexity, particularly in image registration, which aligns the pixels across the images. When source images originate from disparate platforms such as Sentinel-2 and UAS, accurate image-to-image registration becomes critical to fusion quality [Li et al., 2022].

Figure 2: A comparison of common super-resolution modeling frameworks in terms of the input and output targets' structure. The proposed Spectral SRCNN (see Section 1.2) can combine frameworks by simulating the satellite sensor at high resolution using hyperspectral UAS imagery.

1.2 Spatial, spectral and temporal extension

In our work, we directly address these challenges by developing a comprehensive image fusion framework that integrates spectral alignment, pixel alignment (image registration), and super-resolution strategies. Super-resolution has been shown to be effective when the resolution gap between aligned pixels is within a factor of 8–10. To this end, we build spatial extension super-resolution convolutional neural network (SRCNN) models that aggressively refine the resolution from 10 m to 1 m when only low-resolution satellite data are available. This approach is also applied to a temporal extension scenario, where the farmer needs to construct high-resolution data for the same field but for a different temporal period that lacks UAS data. Ideally, a temporal extension would be a scenario in which UAS data are collected for one or more years before UAS data collection is stopped.

When additional high-resolution data are available, we propose an unconventional super-resolution-with-side-information approach. Using UAS RGB imagery co-aligned with Sentinel-2 MS imagery allows us to match our
chosen UAS resolution (12.5 cm in our dataset, derived from data originally collected at 3 cm resolution). We refer to this as a spectral extension model, which adds spectral richness from the satellite bands (i.e., spectral bands in the 700–900 nm range) not available to a UAS RGB sensor. In this context, we explore several deep learning architectures beyond SRCNN (see Section 2.3.1). In scenarios where cloud cover prevents acquisition of high-quality co-aligned satellite image scenes, it is also possible to extend the spectral range from the UAS image alone. Such a companion model uses the spatial context to infer the missing spectral data but is not as accurate as when cloud-free satellite imagery is available.

Training a spectral extension model requires access to high-resolution ground-truth data that match the spectral characteristics of satellite imagery. While pan-sharpening methods [Javan et al., 2021], such as Pan-GAN [Ma et al., 2020], allow training without explicit ground-truth data by preserving high-frequency features from the panchromatic band (see Figure 2), we find that incorporating high-resolution bands (e.g., UAS RGB) as side information and having access to all the satellite bands at high resolution as training targets produces qualitatively better results. UAS-collected hyperspectral imagery is used to generate this high-resolution ground truth. Specifically, we extracted 17 TB of cloud-free UAS hyperspectral data acquired with the Headwall Nano-Hyperspec sensor, which operates in the VNIR (Visible and Near-Infrared; 400–1,000 nm) range and captures 269–270 spectral bands at a spectral resolution of 6 nm (the bands are spectrally separated by 2.2 nm with a bandwidth of around 6 nm; we used 6 nm as a conservative definition of spectral resolution). These data are matched with the corresponding cloud-free Sentinel-2 scenes. To simulate the spectral response of the Sentinel-2 Multispectral Instrument (MSI), we use non-negative linear regression guided by the MSI's published spectral response functions [S2A, 2024]. Importantly, the hyperspectral-derived MSI simulations do not contain the atmospheric noise present in actual spaceborne observations, resulting in cleaner ground truth. After spectrally harmonizing the sensors, it is equally important to spatially align them by transforming the UAS data to the satellite's coordinate reference system and co-locating image corners to the satellite pixel corners. Following that, image registration is used for further satellite sub-pixel alignment, a particularly critical step when using lower-cost, commodity UAS systems.

In spatial and temporal extension scenarios, UAS flights can be strategically scheduled to capture representative data throughout the farm and the growing season, enabling the model to adapt to specific crops, sites, and temporal stages. Once trained on a full season's data, the model generalizes well enough to eliminate the need for continued UAS flights. As we will demonstrate, our spectral extension model generalizes well to other regions and crops. The generated images can serve as high-resolution ground truth for training spatial extension models tailored to new locations and crops. While this approach may incur a modest performance trade-off compared to models trained directly on hyperspectral data, it significantly reduces operational costs. Thus, farmers can avoid investing in
expensive hyperspectral equipment, which can cost as much as $175,000, while still achieving accuracy that meets or surpasses what is possible with UAS RGB imagery alone.

The novelty of this study lies in the development of an end-to-end, spatially and temporally scalable system that integrates spectral simulation of low-resolution MS imagery with super-resolution guided by side information to enhance both spatial and spectral resolution, making the use of scale-appropriate remote sensing tools for precision farming more cost-effective and accurate. Although this work builds upon established deep learning methods, it was far from certain that a spectral extension system would be capable of enhancing Sentinel-2 data from 10 m to 12.5 cm. To our knowledge, both the simulation of Sentinel-2 MS data using hyperspectral data from UAS platforms and the use of super-resolution with side information have not previously been demonstrated in the context of precision agriculture. The most comparable prior study [Brook et al., 2020] used a handheld spectrometer to collect sparse ground truth and utilized a pan-sharpening-inspired modeling approach, lacking the spatio-temporal scalability and spectral enhancements offered by our framework. Another single-platform fusion study by [Lanaras et al., 2018] used a deep CNN-based super-resolution method to upsample Sentinel-2's 20 m and 60 m bands to 10 m resolution (2× and 6×). They accomplished this by building a super-resolution network for 40 to 20 m upsampling (360 to 60 m for the 60 m bands) and applying it to super-resolve 20 m input to 10 m. They show that this data mismatch causes the RMSE to increase by 50%, yet their methodology still surpasses the best pan-sharpening methods. (This result is reasonable, as classical pan-sharpening methods do not have ground-truth training data.) We go one step further in our study and provide ground-truth data at the target resolution by aligning high-resolution hyperspectral data that are used to simulate the ground truth. Moreover, our framework aims to achieve far greater upsampling, from 10 m to 12.5 cm, when high-resolution UAS RGB data are available. Therefore, beyond intra-Sentinel-2 resolution harmonization, our method integrates heterogeneous data sources to enable both spectral and spatio-temporal extensions, supporting generalization across crops, regions, and time periods for precision agriculture.

1.3 Study outline

To demonstrate the utility of our proposed super-resolution framework, we apply it to the improved monitoring of cover crop health, characterized by biomass yield and nitrogen (N) content. Winter cover cropping is an important component of sustainable agriculture in the Upper and Lower Chesapeake Bay regions. Accurately measuring and optimizing biomass yield and N content play a critical role in precision farming, supporting improved nutrient management, soil health, and the sustainability of cropping systems [Govindasamy et al., 2023]. While our primary focus is on cover cropping, we further show that the proposed methods generalize well to other crops (e.g., wheat, corn) and management contexts. Previous remote sensing approaches to estimating cover crop biomass and N have relied on site-specific low-resolution satellite datasets [Deines et al., 2023, Hively et al., 2020, KC et al., 2021
], dominated by the use of Sentinel-2 [do Nascimento Bendini et al., 2024, Fan et al., 2020, Gao et al., 2020, Goffart et al., 2021, Thieme et al., 2020, Xia et al., 2021] or UAS datasets [Holzhauser et al., 2022, Roth and Streit, 2018, Yuan et al., 2019, Yuan et al., 2021], limiting the precision and scalability of these approaches across space and time. In contrast, our neural network-based super-resolution framework integrates low-resolution satellite and high-resolution UAS imagery when available and performs robustly even when only one modality is present. We evaluated this framework on multiple treatment datasets and addressed the following key questions: (1) What type of UAS imagery (RGB, multispectral, or hyperspectral) is most effective for estimating forage biomass yield and N content? (2) How does it compare with Sentinel-2 in terms of prediction accuracy? (3) Does improving the spatial resolution of Sentinel-2 and the spectral range of UAS RGB enhance the prediction of biomass and N? (4) Can the resulting models be extended across space and time, particularly to regions with limited UAS coverage or persistent cloud cover? These questions guide the experimental evaluation in the sections that follow and highlight the complementary role of UAS and satellite data in scalable, cost-effective precision management strategies.

Section 2 introduces the datasets used in this study and details the proposed end-to-end super-resolution workflow. Section 3 presents the results, including both the quality of super-resolved image reconstruction and its applications in precision farming. In Section 4 we interpret the findings and discuss their broader implications. Section 5 highlights a few limitations and suggests directions for future research. Finally, Section 6 concludes the paper by summarizing the main contributions and highlighting the broader relevance of the proposed approach.

2 Materials and methods

2.1 Experimental design and site description

The study seeks to demonstrate the ability of the proposed super-resolution system to analyze cropping practices in the upper (UCB) and lower Chesapeake Bay (LCB) regions of the United States. The Eastern Shore of Maryland dominates row crop production in the LCB region. In the LCB we focused on cover crops within the dominant corn-soybean crop rotation and on the diversity of soils and tillage practices common to the LCB. We included both standard and experimental cover cropping practices in the LCB. The LCB experiments were conducted in collaboration with four commercial grain farmers in Talbot and Kent counties. Farmers used their preferred crop rotation for each field (Figure 3). In each field, we imposed three cover crop management systems [Sedghi et al., 2023]: 1) an extended cover crop growing season, created by aerially interseeding the cover crop several weeks prior to cash crop harvest in fall and terminating it at cash crop planting in spring (Extended); 2) a traditional cover crop growing season, created by drilling seed after cash crop harvest in fall and terminating it 3–4 weeks before cash crop planting in spring (Standard); and 3) a no-cover-crop control (No cover). The cover crop was a three-species mixture, with the farmer selecting a species from
each group (brassica, legume, and cereal). In the UCB a dairy crop rotation is common and includes corn and harvested cover crops such as rye and triticale. We also included Miscanthus, switchgrass, and a mix of plant species used in the Conservation Reserve Program (CRP) [Adler et al., 2024].

Figure 3: (A) Cover cropping sites: two farms near Chestertown, MD and two farms near Easton, MD, studied over three time periods, from December 2018 to March and April 2019; (B) Images 1–4 show the four sites: Fields A, B, D, E. Each field in a site was divided into three cover crop treatments: 1) fall seed drill, 2) aerial seeding, and 3) none; (C) Cover crop characteristics measured: biomass yield and N content within a quadrat (0.5 m × 0.5 m); (D) UAS: DJI Matrice 600 Pro equipped with a Headwall Nano-Hyperspec (VNIR 400–1000 nm) and a Velodyne VLP-16 LiDAR Puck LITE; flight specs: <10 m/s, 40–50 m above ground level; (E) Hyperspectral data collected in one flight path. The locations of two quadrats are marked in red.

Figure 4: Hyperspectral flight locations. The study sites are marked as "Maryland cover crops" in light green, while the corn, wheat, and miscanthus/switchgrass crop images are marked in gold, beige, and dark green, respectively.

2.1.1 Prediction targets: cover crop biomass yield and N content

Details on the experimental design of the cover crop fields, the timing and location of sampling, and the collection and processing of cover crop biomass samples for N measurements are provided in [Sedghi et al., 2023]. We conducted hyperspectral flights over different fields in Maryland (sites A and E near Easton, and sites B, D, and I near Chestertown) in December 2018 and in March and April 2019, using a Headwall Nano-Hyperspec (VNIR 400–1000 nm) (see details below). Each field was divided into three sections, and each section was randomly assigned to the flown, drilled, or no-cover-crop control treatment. The field designations A, B, D, E, and I are the same as used in [Sedghi et al., 2023].

2.2 The end-to-end framework: data processing, super-resolution, and application

Our spatiotemporally scalable, end-to-end image fusion system (see Figure 10) begins with spectral alignment of low-resolution satellite imagery, specifically, simulating high-resolution Sentinel-2 data using hyperspectral UAS control data. This is followed by pixel-level alignment and image-to-image registration between the UAS and satellite imagery. Once aligned, we apply super-resolution techniques to enhance both spatial and spectral fidelity. The resulting high-resolution imagery is then applied to downstream agricultural tasks, such as biomass yield or nitrogen content estimation.

2.2.1 Hyperspectral control data

To measure the generalization performance of the spectral extension models, we used a larger curated set of hyperspectral images collected with the same Headwall Nano-Hyperspec camera, spread over a larger cross-section of Maryland and Pennsylvania. These images are from fields growing corn, miscanthus, switchgrass, CRP mixtures, and wheat, in contrast to the three-species cover crop at the Maryland study sites. The hyperspectral flight locations for the Maryland study sites and the images of
corn, wheat, and miscanthus are shown in Figure 4. These hyperspectral flights were spread over the period between 2018 and 2024. We list the different sites used in this paper, along with approximate locations, in Table 1.

Table 1: List of all locations where hyperspectral data were collected. Sites A, B, D, E, and I were the study sites; sites W:A–W:F were flown at the Beltsville Agricultural Research Center in Beltsville, Maryland.

| Nearest city | Date of flight(s) | Crop | Site |
|---|---|---|---|
| Easton, MD | 5/12/18, 3/20/19, 4/16/19 | cover crop | A, E |
| | 11/21/19 | | E |
| Chestertown, MD | 6/12/18, 3/20/19, 4/16/19 | | B, D, and I |
| | 11/20/19 | | B, H |
| Warriors Mark, PA | 7/29/20, 8/3/22 | corn | C:A |
| Pennsylvania Furnace, PA | 7/8/24 | | C:B |
| Leck Kill, PA | 6/8/20, 9/23/20 | miscanthus, switchgrass and CRP | M:A |
| Beltsville, MD | 4/15/20 | wheat | W:A, W:B, W:C, W:D, W:E |
| | 4/17/20 | | W:F |

2.2.2 Satellite image spectral alignment

To train a super-resolution model for Sentinel-2, we need high-resolution ground-truth data. Sentinel-2 carries a pushbroom multispectral instrument (MSI), which measures the reflected radiance in thirteen spectral bands. We have access neither to the MSI nor to higher-resolution images taken with the MSI. However, the spectral bands are characterized by the measured spectral response function found in [S2A, 2024] and shown in Figure 5. Given data from a hyperspectral sensor, we took a weighted average of its narrow spectral bands to simulate the spectral response of the MSI. Note, however, that the Sentinel-2 MSI is affected by greater atmospheric noise than a UAS sensor.

The Headwall Nano-Hyperspec (VNIR 400–1000 nm) was used to collect the hyperspectral UAS images. It provides 269 bands in the VNIR with a bandwidth given by a FWHM (full width at half maximum) of 6 nm. Furthermore, Headwall kindly provided us with the relative spectral response as a function of wavelength. The much narrower bandwidth of the hyperspectral data means that we can approximate the MSI spectral response function by a weighted average of the hyperspectral bands. Although ordinary linear regression is a possible approach, we noted that the spectral response is non-negative, so we instead solved a non-negative least squares (NNLS) problem to prevent negative values. The NNLS minimized the reconstruction error of a linear combination of hyperspectral bands against the published values of the measured spectral response function (S2-SRF). We used the FORTRAN code published in the book [Lawson and Hanson, 1995], which solves the Karush-Kuhn-Tucker conditions for the non-negative least squares problem; specifically, we access the code through the SciPy Optimize library [Gommers et al., 2022]. The spectral response for the 8 Sentinel-2 bands B2–B8 and B8A in the VNIR range was approximated, and the results can be seen in Figure 6. Among the 269 hyperspectral bands, only 126 were activated with nonzero weights across the 8 simulated Sentinel-2 bands. Note that the remaining Sentinel-2 bands are either in the short-wave infrared (SWIR) range, not covered by the hyperspectral camera, or target atmospheric observations at a resolution of 60 m.
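A minimal sketch of this fitting step is shown below, using scipy.optimize.nnls. It assumes the hyperspectral relative spectral responses and the published S2-SRF have already been resampled onto a common wavelength grid; the array shapes and function names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def fit_band_weights(hs_srf, s2_band_srf):
    """Non-negative weights mapping hyperspectral bands to one Sentinel-2 band.

    hs_srf      : (num_wavelengths, num_hs_bands) hyperspectral relative spectral
                  responses sampled on a common wavelength grid
    s2_band_srf : (num_wavelengths,) published S2-SRF values for one MSI band
    """
    weights, _residual = nnls(hs_srf, s2_band_srf)  # min ||A w - b||_2 subject to w >= 0
    return weights

def simulate_s2_band(hs_cube, weights):
    """Apply the fitted weights to a (rows, cols, num_hs_bands) reflectance cube."""
    return hs_cube @ weights  # weighted average over the band axis
```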
Figure 5: Measured normalized spectral response function (SRF) for Sentinel-2A and Sentinel-2B (S2-SRF) for the VNIR bands used in this study.

Figure 6: Simulated spectral response of the 8 Sentinel-2 VNIR bands using the Headwall Nano-Hyperspec camera and the published S2-SRF. The dashed colored lines correspond to the non-negative linear regression estimate of the spectral response function.

2.2.3 UAS pixel alignment

The spectral alignment process provides a higher-spatial-resolution image, based on the hyperspectral UAS image, corresponding to the Sentinel-2 MSI sensor, but does so in the geometry of the original UAS. For the purpose of training neural network models, we also need to align the pixels of the UAS image with the Sentinel-2 image. We do so in two stages: (1) transform the UAS image to the Sentinel-2 reference coordinate system, and (2) align the upper left corner of the UAS image to the upper left corner of a Sentinel-2 pixel. The second step ensures that no UAS pixel straddles two or more Sentinel-2 pixels and that every Sentinel-2 pixel is fully covered by UAS pixels. In this process, it is important to leave the more vulnerable low-resolution image completely unchanged; therefore, the transformations are only applied to the UAS image. Figure 7A illustrates the process. The figure's illustrations are meant as a schematic and show roughly three by three pixels per Sentinel-2 pixel. In our application, the UAS pixel resolution is 0.125 m versus 10 m for the Sentinel-2 pixels, giving 80×80 UAS pixels for each Sentinel-2 pixel.

2.2.4 Image registration

If the Sentinel-2 and UAS pixels were perfectly aligned with their respective reference coordinate systems, the pixel alignment process would suffice to align the images. However, it is unreasonable to expect the Sentinel-2 pixels to have a location accuracy below the pixel resolution. Moreover, the UAS pixel location accuracy depends on the quality of the equipment used, and we need to provide a working solution for any reasonable UAS. To correct errors from the pixel alignment process, we resort to a classic image registration methodology. However, two obstacles stand in our way. First, the spatial resolutions of the two images are different, with the Sentinel-2 image being much coarser than the UAS image. Second, the images have different bands that correspond to different spectral response curves. The low resolution of the Sentinel-2 pixels precludes the use of key points to co-register the images. Instead, we consider all translations of the UAS image by any amount within ±1 Sentinel-2 pixel and use linear regression to measure the difference between the UAS and Sentinel-2 images. The shift with the lowest regression error determines the final image registration. Figure 7B illustrates the process: the small circles represent some of the UAS pixels, while the large circles represent Sentinel-2 pixels. We see how the process can lead to a better correspondence between the images.
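The shift search described above can be sketched as follows. This is a simplified single-band illustration: it grid-searches integer UAS-pixel translations within ±1 Sentinel-2 pixel, block-averages the shifted UAS image to the Sentinel-2 grid, and scores each candidate with a one-dimensional linear regression. The step parameter and the use of np.roll are simplifications for brevity, not details from the paper.

```python
import numpy as np

def block_mean(img, factor):
    """Average factor x factor blocks of UAS pixels down to the Sentinel-2 grid."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def register_by_shift(uas, s2, factor=80, step=8):
    """Search integer UAS-pixel shifts within +/- one Sentinel-2 pixel."""
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-factor, factor + 1, step):
        for dx in range(-factor, factor + 1, step):
            # np.roll wraps at the border; a production version would crop margins.
            shifted = np.roll(np.roll(uas, dy, axis=0), dx, axis=1)
            coarse = block_mean(shifted, factor)
            h = min(coarse.shape[0], s2.shape[0])
            w = min(coarse.shape[1], s2.shape[1])
            x = coarse[:h, :w].ravel()
            y = s2[:h, :w].ravel()
            slope, intercept = np.polyfit(x, y, 1)        # 1-D linear regression
            err = np.mean((slope * x + intercept - y) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift
```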
Figure 8 shows the amount of image translation for a collection of images, as determined by the image-to-image registration process. Each marker type and color correspond to the same larger Sentinel-2 image (also known as a granule or tile). Given the high accuracy of the location of our hyperspectral images, it is reasonable to assume that images corresponding to the same Sentinel-2 tile should be shifted by the same amount, and indeed some clustering of the calculated shift values agrees with that assumption. Under the optimistic assumption that the unknown true optimal shift corresponds to the median of the clusters, we can estimate the registration shift error to be approximately 0.81 m, compared to the no-registration error of approximately 4.6 m. Figure 9 shows the image-to-image registration for site I. Note how well the colors align and how larger features with distinct colors are reflected in the Sentinel-2 pixels. This example also illustrates how much more detailed the hyperspectral-derived Sentinel-2 simulation is compared to the Sentinel-2 image itself.

Figure 7: (A) Transforming and aligning the UAS image to the Sentinel-2 image. The middle image shows that simply changing the coordinate reference system (reprojecting) may leave the Sentinel-2 pixel corners in the middle of a UAS pixel. A second transformation shifts the origin so that the upper left corners of the images align. (B) An illustration of the image registration process. The small circles represent UAS pixels, and the large circles Sentinel-2 pixels. Linear regression is used to determine the shift for the co-registration of the images, allowing us to move from the image on the left to that on the right.

Figure 8: Optimal shift values found for each image-to-image registration. Sentinel-2 tiles cover more than 100 km × 100 km, and images with the same marker type correspond to the same Sentinel-2 tile.

Figure 9: An example image-to-image registration for site I. The invalid (no-data) hyperspectral pixels are also masked out in the Sentinel-2 image to aid comparison.

2.3 Super-resolution system for fusion and precision farming application

We used super-resolution image fusion to reconstruct high-resolution Sentinel-2 bands in the VNIR range and to extend UAS RGB imagery to include the VRE and NIR spectral regions. Figure 2 shows a comparison of common super-resolution modeling frameworks and the spectral SRCNN introduced in this paper. We then evaluated whether the reconstructed sub-meter and 1 m resolution multispectral (MS) imagery improved biomass and N prediction compared to 10 m Sentinel-2 and sub-meter UAS RGB data. Figure 10 illustrates the overall super-resolution system and the application modeling framework.

As introduced in Section 1.2, we developed three distinct super-resolution scenarios, spectral, spatial, and temporal extensions, considering their real-world applications in cost-effective precision farming. These scenarios account for multispectral data variability caused by plant phenology and field-specific spatial factors such as topography and soil type. For example, in the spectral extension scenario, where only RGB UAS imagery is available, we fuse it with the 10 m/20 m Sentinel-2 VNIR bands to produce sub-meter-resolution 8-band MS data. This extends RGB into the VRE and NIR spectral domains. In the spatial extension case, we generate high-resolution MS imagery for farm areas lacking UAS data by using outputs from the spectral extension model, eliminating the need to
survey the entire field. In the temporal extension scenario, we generate high-resolution imagery for the same field at a different time when no recent UAS data are available. Table 2 details the data splits used for each scenario, capturing spatial and temporal variability across different fields and time periods. To train and evaluate these super-resolution models:

• The spectral extension model used all available original Sentinel-2 VNIR, UAS weighted VNIR, and UAS weighted RGB-only (see Section 2.2.2) data from all fields and temporal periods across MD and PA to capture spatial and temporal variability owing to plant phenology. Site-level cross-validation ensured that test data were excluded during training and validation.
• The spatial extension model used a subset of the available data from different fields for model training and validation. Model inference was conducted on a completely separate set of fields.
• The temporal extension model was trained on field data from one season and tested on the same fields from a different season.

Table 2: Super-resolution modeling data splits by extension type, site, geography, and timeline.

| Extension | Train & Validation: Sites | Geography | Timeline | Test: Sites | Geography | Timeline |
|---|---|---|---|---|---|---|
| Spectral | A, B, I, D, E | MD | Mar–Apr 2019 | A, B, I, D, E | MD | Mar–Apr 2019 |
| | W:A, W:B, W:C, W:D, W:E, W:F, C:A, C:B, M:A | MD & PA | See Table 1 | | | |
| Spatial | All sites except A, E | MD | See Table 1 | A, E | MD | Mar–Apr 2019 |
| Temporal | A, D, E | MD | Mar–Nov 2019 | A, D, E | MD | Dec 2018 |

2.3.1 Super-resolution methods

For the spatial and temporal extension tasks, we utilized techniques from the super-resolution literature. For the spectral extension task, however, we could only use super-resolution when confined to the VNIR spectrum that we could simulate with our hyperspectral camera.

Super-resolution Convolutional Neural Networks (SRCNN): SRCNN [Dong et al., 2014] was one of the pioneering methods that applied deep neural networks to single-image super-resolution (SISR). It demonstrated that end-to-end learning is effective in sharpening blurred photographs. Since SRCNN, many more effective methods have been developed, such as residual dense networks (RDN) [Zhang et al., 2018b], the residual channel attention network (RCAN) [Zhang et al., 2018a], and super-resolution through repeated refinement (SR3) [Saharia et al., 2022]. But here we focus on how super-resolution can be applied to precision farming, not on which super-resolution method is best. Figure 11 shows the specific SRCNN architecture we used for the spectral extension model in the results section. The SRCNN model consists of three components: 1) a feature extractor, corresponding to the first layer, that convolves the input image with a large kernel (7×7 or larger); 2) non-linear block(s) that apply non-linear activations to capture complex relationships in the feature maps; and 3) a reconstruction layer that transforms the feature maps back to the desired image shape. A detail regarding the SRCNN model is that it takes an up-sampled low-resolution image as input. This enables us to simply stack the upsampled Sentinel-2 images with the high-resolution UAS RGB images for the spectral SRCNN model.
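A minimal PyTorch sketch of this stacked-input design is given below. The three-layer structure (large-kernel feature extractor, non-linear mapping, reconstruction layer) follows the description above, but the exact kernel sizes and channel widths are illustrative rather than the paper's configuration; the LeakyReLU activation and absence of batch normalization follow the design choices reported in the next subsection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralSRCNN(nn.Module):
    """SRCNN-style spectral extension sketch: 8 upsampled Sentinel-2 bands stacked
    with 3 UAS RGB bands -> 8 high-resolution bands."""

    def __init__(self, s2_bands=8, rgb_bands=3, features=64):
        super().__init__()
        # 1) feature extractor: first layer with a large kernel
        self.extract = nn.Conv2d(s2_bands + rgb_bands, features, kernel_size=9, padding=4)
        # 2) non-linear mapping block
        self.mapping = nn.Conv2d(features, features // 2, kernel_size=5, padding=2)
        # 3) reconstruction layer back to the 8 target bands
        self.reconstruct = nn.Conv2d(features // 2, s2_bands, kernel_size=5, padding=2)
        # LeakyReLU and no batch normalization, per the reported design choices
        self.act = nn.LeakyReLU(0.1)

    def forward(self, s2_lowres, uas_rgb):
        # Upsample Sentinel-2 to the UAS grid so the inputs can simply be stacked.
        s2_up = F.interpolate(s2_lowres, size=uas_rgb.shape[-2:],
                              mode="bicubic", align_corners=False)
        x = torch.cat([s2_up, uas_rgb], dim=1)
        x = self.act(self.extract(x))
        x = self.act(self.mapping(x))
        return self.reconstruct(x)
```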
For more modern super-resolution models, actual architecture changes are required to handle dual image input. For our spatial and temporal extension models, we also used SRCNN, but with different kernel sizes.

Figure 10: The proposed end-to-end system for data preparation, super-resolution fusion, and the predictive modeling workflow to assess the complementarity of multi-source imagery in precision agriculture.

Figure 11: The SRCNN network architecture.

Other Spectral Extension Super-resolution Networks (VGGExt, ResNetExt, and SRDB): We deviated from the original SRCNN architecture by not using batch normalization and by using the LeakyReLU activation instead of ReLU. In general, we find that batch normalization hurts somewhat in the context of satellite images, despite being widely used in computer vision for photographs. We also found that larger super-resolution models simply failed on our task. This is almost certainly due to the size of the training data. Despite having collected 17 TB of hyperspectral data, the information is greatly compressed in the data preparation process: the overlap of flight strips nearly duplicates the data, the simulation of the Sentinel-2 MSI sensor brings the number of bands from 269 down to 8, and we resample the 3 cm hyperspectral data to 12.5 cm. Together (roughly 2× from overlap, ~34× from band reduction, and ~17× from spatial resampling), this reduces the size of the training data by three orders of magnitude, which means that we cannot train very large networks and expect good performance. We will see later that a state-of-the-art network like RCAN does not perform as well as SRCNN on our task. We are reaching out to other researchers to get access to more hyperspectral data, but ultimately one should expect to have to train models on small datasets, as utilizing petabytes of hyperspectral data is highly impractical. However, having a 17 TB database of hyperspectral images does allow us to generate ground-truth data for other satellites (SuperDove, Landsat, Airbus, etc.) and target resolutions without having to collect new data.

For the spectral extension super-resolution task, in which SRCNN operates directly on images at the target resolution (the Sentinel-2 images are upsampled), we considered other networks such as VGG [Simonyan and Zisserman, 2014], ResNet [He et al., 2016], and DenseNet [Huang et al., 2017]. These networks can be deeper than SRCNN through the use of max-pooling or residual connections. However, max-pooling has the drawback of leading to blurry images, so for the VGG architecture we created the miniature model that we call VGGExt (Figure A.1), where we used only one max-pooling layer followed later by a conjugate convolution for up-sampling. The resulting network is much smaller than even the VGG11 model and has been modified to do spectral extension instead of image classification. Similarly, Figure A.2 shows our ResNet-inspired network, where all batch normalization has been removed, the network has been reduced to essentially one residual block, and up-sampling components have been inserted at strategic locations to avoid loss of resolution in the inferred image. For DenseNet, there is already a super-resolution network, RDN, based on its architecture. However, we found RDN to be too large, and we created a new small network that we call SRDB
that encompasses features from DenseNet and uses kernel sizes inspired by our SRCNN model. The interested reader can see the full model in Figure A.3.

2.3.2 Apply reconstructed imagery to predict cover crop biomass yield and quality

We developed two sets of cross-validated random forest (RF)-based regression models to predict cover crop biomass yield and quality (assessed by N content) across different fields in the upper and lower Chesapeake Bay regions. The first set of RF models explored whether and by how much the spatial and spectral resolution and the spectral range of the original MS (Sentinel-2 and UAS) and UAS hyperspectral imagery impact predictive model performance in precision management practices. Insights from these experiments were then used to develop super-resolution models that reconstruct MS imagery with the desired spatial resolution and spectral range. The second set of RF models used these reconstructed high-resolution, broader-spectral-range data to predict biomass and N for the same fields.

In the spectral extension experiment, the RF model used reconstructed spectral data from all eight datasets (i.e., two from site A, one from site B, two from site D, two from site E, and one from site I flights) generated by an eight-fold SRCNN model; each SRCNN model was trained on samples from seven datasets to infer on the remaining dataset. In that way, the ground-truth images corresponding to the reconstructed data did not participate in SRCNN model training. This careful experimental protocol avoids data leakage between SRCNN training and testing, ensuring the fidelity of the reconstructed bands for the test site. The spatial and temporal extension SRCNN models were trained and tested on different sites, either spatially or temporally, so no cross-validation was needed during the super-resolution process. The Test Data columns of Table 2 for the MD sites show which reconstructed Sentinel-2 imagery was used to extract 0.5 m × 0.5 m quadrat-level 8-band predictors for our RF models. A variable number of quadrats were used as data samples, ranging from 72 to 96, for the spectral, spatial, and temporal extension RF models.

All RF regression models for the spectral, spatial, and temporal extension scenarios were trained with 5-fold cross-validation to maximize the size of the training data. Model testing was performed on a randomly selected fold out of the five folds. The root mean squared error (RMSE) and R-squared statistics were calculated to compare the performance of the RF models.
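A sketch of this evaluation loop, using scikit-learn, is shown below. The quadrat-level feature matrix and the RF hyperparameters are placeholders; the paper does not report the RF settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_validate

def evaluate_rf(band_features, target, seed=0):
    """5-fold cross-validated RF regression of a quadrat-level target.

    band_features : (num_quadrats, num_bands) predictors, e.g. the 8 reconstructed bands
    target        : (num_quadrats,) biomass yield (Mg/ha) or N content (kg/ha)
    """
    model = RandomForestRegressor(n_estimators=500, random_state=seed)
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    scores = cross_validate(model, band_features, target, cv=cv,
                            scoring=("r2", "neg_root_mean_squared_error"))
    r2 = scores["test_r2"].mean()
    rmse = -scores["test_neg_root_mean_squared_error"].mean()
    return r2, rmse
```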
3 Results

3.1 Prediction of biomass yield and quality using original Sentinel-2 and UAS imagery

Table 3 presents cover crop biomass yield and quality estimation results from random forest (RF) models using Sentinel-2 and UAS imagery across varying spatial resolutions, spectral resolutions, and spectral ranges (M1–M22). Overall, estimation accuracy improved with broader spectral coverage for both platforms. Sentinel-2-based RF models showed significant gains when the full VNIR range (8 bands) was used for biomass, and when SWIR bands were added (10 bands total) for N estimation (M1–M4). In contrast, UAS-based models improved up to the RGB range for biomass, and with five bands (RGB plus VRE and NIR) for N (M5–M6). With weighted 8-band data derived from the 269 Headwall hyperspectral bands (based on the Sentinel-2 MSI spectral response function, as discussed in Section 2.2.2), we observed the same pattern of spectral-range effects as with the original Sentinel-2; however, RMSEs were much lower with the hyperspectral-simulated Sentinel-2 8-band data (M7–M9).

To further assess the effect of specific spectral ranges on model performance, hyperspectral 269-band data at varied wavelength ranges were evaluated in models M10–M14. Biomass yield estimation error improved relative to the original Sentinel-2 models but was essentially unaffected by the choice of spectral range (397.9–1002.9 nm). For N estimation, however, broader spectral coverage remained beneficial, with the lowest RMSE (16.09 kg/ha) achieved by model M14, which included spectral information from bands spanning 397.9–1002.9 nm. The higher spectral resolution (∼13 nm to ∼86 nm) of the 269-band hyperspectral data contributed to improved RF model performance for both biomass and N estimation (M15–M17). Model errors increased sharply at coarser spectral resolutions (∼150 nm and ∼600 nm) (M18–M19). In terms of spatial resolution, M20, using simulated 8-band (0.125 m) data, produced the best performance for both targets. However, M21, which used 1 m resolution data, performed comparably well, suggesting that both sub-meter and 1 m resolutions are viable options for cover crop mapping. Notably, although M22 produced the highest RMSEs among these three models, it still outperformed the Sentinel-2 10 m band models (M1–M4) by a considerable margin.

Key insights from these experiments are as follows:
• Spectral range is more important for estimating N content than for biomass yield.
• Higher spectral resolution improves both biomass and N estimation (Figure 12). It appears that by improving the spectral resolution of the RGB and VRE-NIR bands, we can capture important information about vegetation health.
• Reducing the pixel size from meters to decimeters (e.g., from 1 m to 0.125 m) yields substantial gains, potentially influenced by the scale of our experimental design and quadrat size.
• For the RedEdge-MX, expanding from RGB to 5 bands yields smaller gains than for Sentinel-2, but when simulating Sentinel-2 bands using hyperspectral data, the performance improvement is comparable to that of the actual Sentinel-2.

Given the cost and impracticality of flying large areas with UAS and deploying hyperspectral sensors, we next evaluated super-resolution-based reconstructed high-resolution imagery as a cost-effective alternative. These reconstructions outperformed raw Sentinel-2 and UAS RGB data in downstream prediction tasks, offering a practical solution for scalable precision agriculture.

Table 3: Impacts of spatial and spectral resolution and spectral range (25 bands per range) on model performance. The native spatial resolution of Sentinel-2A is 10 m for the RGB+NIR bands and 20 m for the VRE and Narrow NIR bands, while the spatial resolution for the Micasense RedEdge-MX and hyperspectral images is ∼3 cm. R² values are given in percent.

| Tag | Sensor | Biomass R² (%) | Biomass RMSE (Mg/ha) | Nitrogen R² (%) | Nitrogen RMSE (kg/ha) |
|---|---|---|---|---|---|
| M1 | Sentinel-2A RGB | 18.5 | 1.33 | -1.4 | 36.79 |
| M2 | Sentinel-2A 5-band | 51.7 | 1.04 | 37.5 | 28.81 |
| M3 | Sentinel-2A 8-band | 69.1 | 0.87 | 61.1 | 23.88 |
| M4 | Sentinel-2A 8-band & SWIRs | 67.9 | 0.88 | 64.1 | 23.09 |
| M5 | RedEdge-MX RGB | 84.2 | 0.65 | 71.1 | 21.14 |
| M6 | RedEdge-MX 5-band | 83.9 | 0.66 | 76.2 | 19.79 |
| Hyperspectral Sentinel-2 simulations [397.9, 1002.9] (nm) | | | | | |
| M7 | 3 bands: RGB | 80.7 | 0.72 | 59.9 | 25.10 |
| M8 | 5 bands: RGB + VRE + NIR | 84.2 | 0.65 | 77 | 19.19 |
| M9 | 8 bands: RGB + 3x VRE + NIR + Narrow NIR | 85.2 | 0.64 | 89.6 | 14.93 |
| Spectral Range (25 bands) | | | | | |
| M10 | Wavelength: [397.9, 511.8] (nm) | 85.4 | 0.63 | 60.8 | 24.25 |
| M11 | [397.9, 623.9] | 85.4 | 0.63 | 65.6 | 22.70 |
| M12 | [397.9, 736] | 85.7 | 0.63 | 74.6 | 20.23 |
| M13 | [397.9, 848.2] | 85.8 | 0.63 | 86.3 | 16.35 |
| M14 | [397.9, 1002.9] | 85.8 | 0.63 | 87.4 | 16.09 |
| Spectral Resolution [397.9, 1002.9] (nm) | | | | | |
| M15 | resolution ∼13 nm | 85.7 | 0.63 | 86.2 | 16.69 |
| M16 | resolution ∼35 nm | 85.7 | 0.63 | 87.4 | 15.94 |
| M17 | resolution ∼86 nm | 85.7 | 0.64 | 88.4 | 15.89 |
| M18 | resolution ∼150 nm | 85.5 | 0.68 | 87.7 | 17.93 |
| M19 | resolution ∼600 nm | 44.7 | 1.12 | 58.6 | 24.76 |
| Spatial Resolution (simulated Sentinel-2 8-band) | | | | | |
| M20 | 0.125 m | 85.6 | 0.62 | 89.2 | 15.33 |
| M21 | 1 m | 84.8 | 0.64 | 85.8 | 16.44 |
| M22 | 5 m | 72.5 | 0.84 | 59.6 | 24.64 |

3.2 SRCNN reconstructed MS data for spectral, spatial, and temporal extensions

In the spectral extension scenario, the RGB bands from the UAS are used alongside Sentinel-2's bands to generate high-resolution (0.125 m) MS images. Our proposed spectral SRCNN can produce accurate, high-resolution (0.125 m) images containing the 8 Sentinel-2 spectral bands across the VNIR range with exceptional fidelity. This approach can be
useful in scenarios where the farmer has access to an RGB UAS but not to an MS or hyperspectral UAS. In contrast, the spatial extension scenario enables the farmer not to fly the UAS over all fields, and instead to use Sentinel-2 imagery together with a pretrained super-resolution/SRCNN model (trained on data from different fields). Similarly, the same approach can be used in the temporal extension scenario to reconstruct high-resolution Sentinel-2 data for a different time period. In addition to cost savings, the spatial and temporal extension scenarios allow the farmer to balance flying time against other crop management activities, which is particularly useful during the most labor-intensive periods.

The spectral fidelity of the reconstructed high-resolution Sentinel-2 data is assessed using several metrics, including mean squared error (MSE), root-mean-square error (RMSE), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR). PSNR is a good measure of how well the color is retained (spectral fidelity). Table 4 shows that the Spectral-SRCNN resulted in reconstructed images with higher spatial and spectral fidelity than our other extension models. PSNR values below 24 are considered low, values between 24 and 30 are moderate, and values above 30 correspond to only minor differences between the reconstructed image and the ground truth (for comparison, a nearly lossless compression system has a PSNR of around 40). We postulate from this result that the Spectral-SRCNN approach is more likely to translate to other applications in precision management.

Figure 12: RMSEs of biomass (top row) and N estimation (bottom row) as functions of spectral range (M7–M11), spectral resolution (M12–M18), and spatial resolution (M22–M24).

Table 4: Accuracy assessment of SRCNN-based image reconstruction for spectral, spatial, and temporal extensions. The Month column gives the UAS flight dates.

| Extension Scenario | Site | Month | RMSE | MAE | PSNR |
|---|---|---|---|---|---|
| Spectral | A | 3/20/19 | 0.0164 | 0.0077 | 35.69 |
| | | 4/16/19 | | | |
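For reference, PSNR can be computed directly from the MSE, as in the short sketch below (assuming reflectance values scaled to [0, 1]). As a consistency check, the spectral-extension RMSE of 0.0164 in Table 4 gives 20·log10(1/0.0164) ≈ 35.7 dB, matching the reported PSNR.

```python
import numpy as np

def psnr(reconstruction, ground_truth, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two band stacks."""
    mse = np.mean((np.asarray(reconstruction, dtype=float)
                   - np.asarray(ground_truth, dtype=float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```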