| text string | source string |
|---|---|
appears more than n/2 times. Let X denote the number of correct predictions among the n samples. Since each prediction is correct with probability p, X follows a binomial distribution: X ∼ Binomial(n, p). The majority vote is correct if X > n/2, and the corresponding probability of this event (denoted as E) is: P(E) = Σⁿ_{i=⌈... | https://arxiv.org/abs/2505.22453v1 |
explore other fine-grained methods to provide pseudo-reward signals based on our framework, and investigate the scaling laws of unsupervised post-training using synthetic data. Acknowledgement: This project is supported by the National Natural Science Foundation of China (No. 62406192), Opening Project of the State Ke... | https://arxiv.org/abs/2505.22453v1 |
Towards high-quality visual instruction generation. OpenReview, 2024. [16] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024. [17] Aaron Jaech, Adam Kalai, Adam Lere... | https://arxiv.org/abs/2505.22453v1 |
Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024. [33] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference o... | https://arxiv.org/abs/2505.22453v1 |
Huang. Diff-eRank: A novel rank-based metric for evaluating large language models. arXiv preprint arXiv:2401.17139, 2024. [49] Tianhao Wu, Weizhe Yuan, Olga Golovneva, Jing Xu, Yuandong Tian, Jiantao Jiao, Jason Weston, and Sainbayar Sukhbaatar. Meta-rewarding language models: Self-improving alignment with llm-as-a... | https://arxiv.org/abs/2505.22453v1 |
Question-guided knowledge graph re-scoring and injection for knowledge graph question answering. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 8972–8985, Miami, Florida, USA, November 2024. Association for Computational Lingu... | https://arxiv.org/abs/2505.22453v1 |
the model self-judges the responses following Genixer: Here is a question-answer pair. Is {Q: Xq, A: Xa} true for this image? Please answer this question with Yes or No. In addition, Genixer calculates the probability of predicting “Yes” rather than prompting the model to directly output “Yes” or “No” as the filtering lab... | https://arxiv.org/abs/2505.22453v1 |
arXiv:2505.22457v1 [cs.CV] 28 May 2025. Fostering Video Reasoning via Next-Event Prediction. Haonan Wang∗ᴺ, Hongfu Liu∗ᴺ, Xiangyan Liuᴺ, Chao Duˢ, Kenji Kawaguchiᴺ, Ye Wangᴺ, Tianyu Pang†ˢ. ᴺNational University of Singapore, ˢSea AI Lab, Singapore. {haonan.wang,liu.hongfu,liu.xiangyan}@u.nus.edu; {kenji,wangye}@comp.nus.edu.sg; {tiany... | https://arxiv.org/abs/2505.22457v1 |
perception of observed past frames and temporal reasoning with commonsense knowledge. As in the example from the given first part of the video: after a defensive stop, the team may push fast in transition (knowledge), but with under two minutes left in the fourth quarter (visual facts), a coach might call a timeout, or the players ... | https://arxiv.org/abs/2505.22457v1 |
V≤t = [v1, . . . , vt] (past frames) and a future part V>t = [vt+1, . . . , vT] (future frames). The goal is to train an MLLM that [figure residue] Failed to defend / Successfully defended. Comment: In the final frame, the defender is so close that a miss is less likely. Pushed quickly in transition. Comment: With under 2 minutes left in the f... | https://arxiv.org/abs/2505.22457v1 |
potential future scenarios. This process closely mirrors the chain-of-thought and tree-of-thought reasoning employed by LLMs [37, 40], especially in complex problem-solving scenarios such as mathematical reasoning. In these contexts, LLMs explicitly produce intermediate steps, such as calculations or logical inferenc... | https://arxiv.org/abs/2505.22457v1 |
instruction-tuning strategies on the NEP task. Each training strategy leverages specific annotations and structures from the V1-33K data pipeline, from ground-truth next-event descriptions to critique and reasoning traces. We consider an encoder-decoder architecture akin to recent MLLMs such as LLaVA [23], where a vis... | https://arxiv.org/abs/2505.22457v1 |
both strong visual perception and commonsense reasoning. Unlike prior video Q&A benchmarks, which focus on answer extraction from visible frames [6, 39], FutureBench emphasizes temporal-causal reasoning toward achieving unobserved future goals. We formalize the evaluation task in a multiple-choice question-answering fo... | https://arxiv.org/abs/2505.22457v1 |
the nuanced temporal logic embedded in each narrative. To scale the generation of QA pairs, we adopt an LLM-based generation pipeline. Specifically, we construct another distinct video dataset from V1-33K, following the same processing pipeline illustrated in Figure 4. Using this video dataset, we employ GPT-4 (text-onl... | https://arxiv.org/abs/2505.22457v1 |
(Previous Event Prediction): Presented with the final segment of a video, the model reasons backward to hypothesize plausible prior events or hidden causes that explain the observed outcome. 4 Experiment. 4.1 Comparison Across Video Instruction Tuning Tasks. To investigate the effectiveness of NEP as a learning task, we ... | https://arxiv.org/abs/2505.22457v1 |
34.8 66.5 35.7 56.9 48.5. Qwen2.5-VL-7B-Instruct: Instruct 59.8 65.3 55.9 60.3 35.4 73.8 37.1 52.6 49.7; SFT 59.2 66.5 53.4 59.7 39.9 69.9 39.1 61.3 52.6; CFT 58.9 65.3 54.2 59.5 35.2 74.1 39.8 55.8 51.2; Distill 60.6 66.7 56.3 61.2 35.9 75.1 37.0 59.5 51.9; Mix 59.6 66.4 53.7 59.9 38.2 72.9 38.5 63.4 53.3. Qwen2.5-VL-7B-Inst... | https://arxiv.org/abs/2505.22457v1 |
[Figure residue: x-axis “Data Size” (1k, 3k, 5k, 10k, 15k, 25k); y-axis “Performance Score”; curves for SFT, CFT, Distill, Mix on FutureBench.] Figure 7: Performance comparison of different data scales for SFT, CFT, Distill, and Mix tuning on Qwen2.5-VL-7B-Instruct. The top showcases the curves for general benchmarks, and the bottom showcases the curves for ... | https://arxiv.org/abs/2505.22457v1 |
robust internal representations of causal and narrative dynamics. To study NEP and facilitate research in this area, we created V1-33K, a large dataset of approximately 33,000 video instances that cover a wide range of real-world scenarios and temporal complexities. Furthermore, we proposed FutureBench, a comprehensive... | https://arxiv.org/abs/2505.22457v1 |
Conference on Computer Vision, pages 5562–5571, 2019. [13] Usha Goswami. Inductive and deductive reasoning. The Wiley-Blackwell handbook of childhood cognitive development, pages 399–419, 2010. [14] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, ... | https://arxiv.org/abs/2505.22457v1 |
Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. [29] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow... | https://arxiv.org/abs/2505.22457v1 |
Li. Video instruction tuning with synthetic data. arXiv preprint arXiv:2410.02713, 2024. [45] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Associat... | https://arxiv.org/abs/2505.22457v1 |
that current models exhibit stronger reasoning capabilities when working with text, we feed the detailed captions into an LLM. The LLM performs two critical tasks: • Scene Identification: It dissects the caption to extract and delineate distinct scenes. • Causal Analysis: It evaluates the causal relationships between sce... | https://arxiv.org/abs/2505.22457v1 |
Critique Fine-Tuning (CFT). CFT is a strategy where models learn to critique noisy responses instead of simply imitating answers [36]. We leverage critique data generated by an external LLM (e.g., GPT-4) that identifies strengths and errors in model predictions relative to ground-truth continuations. During fine-tuning... | https://arxiv.org/abs/2505.22457v1 |
Scene X should be included in the first part ('caption_part1'), and all scenes from Scene Y onward should be included in the second part ('caption_part2'). The identified events: {json.dumps(event_identification_result, indent=2)} and the optimal split point: {casual_analysis_result["optimal_split_poin... | https://arxiv.org/abs/2505.22457v1 |
- "The caption suggests ..." could become "The video suggests ..." - Make sure the replacement sounds natural but does **not** otherwise change the meaning. Here is the input: {prediction_content} Future Prediction Verification Prompt. This prompt critically evaluates the alignment of predictions with the actu... | https://arxiv.org/abs/2505.22457v1 |
wrong answers could present the wrong order of the predicted future events. - Avoid using scene id in the question and start the question from "Based on the given video, ..." 2. Question Format: - Create one multiple-choice question with four answer options: A, B, C, and D. - Ensure only one correct answer and t... | https://arxiv.org/abs/2505.22457v1 |
: {last} FutureBench 3-Hop Question Construction Prompt. This prompt aims to generate the 3-hop QA pairs of FutureBench. FutureBench 3-Hop Question Construction Prompt: You are an expert in video understanding. Your task is to generate one multiple-choice question to assess the video understanding ability of a te... | https://arxiv.org/abs/2505.22457v1 |
scenes (scene 1 to k), the question should force the test model to predict future events (scene k+1 to scene n) and ask what the intermediate events would be, supposing scene k+i and scene k+j are given (k+i and k+j are potential future events). - For example, "Question": "Based on the given video, predict future... | https://arxiv.org/abs/2505.22457v1 |
8 NVIDIA A100 GPUs. We fine-tuned the Qwen2.5-VL-7B-Instruct model with a maximum prompt length of 4096 tokens and a response length capped at 2048 tokens. Training utilized global batch sizes of 16 samples per rollout, with micro-batches of four samples per GPU during parameter updates and eight per GPU for experience... | https://arxiv.org/abs/2505.22457v1 |
such as the risk of reinforcing biases embedded in training datasets, which is exacerbated by the reliance on automatically generated captions without human oversight. Careful consideration, transparent documentation, and strict ethical oversight will be essential to mitigate these risks and ensure responsible deployme... | https://arxiv.org/abs/2505.22457v1 |
arXiv:2505.22467v1 [cs.MA] 28 May 2025. Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems. Jiaxi Yang1∗, Mengqi Zhang2∗, Yiqiao Jin3∗, Hao Chen4, Qingsong Wen5, Lu Lin1, Yi He2, Weijie Xu6, James Evans7, Jindong Wang2†. 1The Pennsylvania State University, 2William & Mary, 3Georgia I... | https://arxiv.org/abs/2505.22467v1 |
[Figure residue: panels (a) GPT-3.5 and (b) GPT-4o; datasets MMLU, GSM8K, HumanEval; y-axis Accuracy (%); topologies Graph, Debate, Chain, Star.] Figure 1: Comparison of topological structures across datasets. Results show that various tasks may require a specific topological structure. Figure 1 shows that task performance vari... | https://arxiv.org/abs/2505.22467v1 |
flexible nature of interaction topology, more applications will emerge and benefit. The organization of this paper is as follows: we first introduce the basics of LLM-based MASs in Section 2. We then articulate the proposed research framework for topology optimization with open problems and research directions in Secti... | https://arxiv.org/abs/2505.22467v1 |
task, a pool of candidate agents with diverse capabilities is instantiated and made available. The first step is to select a subset of agents Â ⊆ A best suited to collaborate on the task, based on factors such as skill specialization, role diversity, and past performance. This determines “who” participates in the comm... | https://arxiv.org/abs/2505.22467v1 |
structure, evaluation metrics, costs, and the LLM catalog remain unchanged over the deployment horizon, e.g., nightly batch summarization with a locked-in set of SaaS APIs, where the utility of an agent can be seen as stationary, the problem reduces to estimating and selecting an optimal subset based on the interactio... | https://arxiv.org/abs/2505.22467v1 |
node, and each time-varying edge denotes an interaction (e.g., routing a sub-query or forwarding a partial answer) that arrives with a timestamp. TGNNs maintain a latent state for every node that is updated only when events involving that node occur, making them ideal for sparse, asynchronous communication patterns th... | https://arxiv.org/abs/2505.22467v1 |
diverse problems [23]. The appropriate structure is often task-dependent, e.g., a simple question might only require a single agent, whereas a complex query might benefit from a divide-and-conquer approach with multiple specialists. This dependency creates a need for multi-agent structure profiling, i.e., determining t... | https://arxiv.org/abs/2505.22467v1 |
and related work in Appendix D. Open problem 4. How can a society of agents self-edit its topology while a dialogue is still in flight, without creating instability or runaway costs? While we can select a suitable macro-structure, the set of agents involved is actually dynamically changing, i.e., the macro-structure could al... | https://arxiv.org/abs/2505.22467v1 |
specialized roles of agents that should occupy distinct and meaningful positions within the topology. For example, a manager should be in the central position for overall coordination, while a verifier may serve best at the end of the pipeline. Position. We advocate further optimizing the macro-level topol... | https://arxiv.org/abs/2505.22467v1 |
the scenarios in this case. Here, we propose several research directions. Developing Topology-Aware Benchmarks. There have been plenty of efforts in LLM and agent evaluation [31, 32], primarily focusing on general language understanding, coding, and reasoning tasks. However, more should be done to design topology-awa... | https://arxiv.org/abs/2505.22467v1 |
personal assistant might interface with separate systems—weather, music, healthcare, finance—each bound by different data governance rules. In such settings, how can we coordinate multiple agent systems while preserving data locality? Future research could explore privacy-aware topological learning, where only encoded ... | https://arxiv.org/abs/2505.22467v1 |
Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large language model society. NeurIPS, 36:51991–52008, 2023. [3] Bang Liu, Xinfeng Li, Jiayi Zhang, Jinlin Wang, Tanjin He, Sirui Hong, Hongzhang Liu, Shaokun Zhang, Kaitao Song, Kunlun Zhu, et al. Advances and challeng... | https://arxiv.org/abs/2505.22467v1 |
task-oriented agent collaboration. In COLM, 2024. [18] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv:2308.00352, 3(4):6, 2023. [19] Mingchen Zhuge,... | https://arxiv.org/abs/2505.22467v1 |
empirical analysis on deceptive prompts. arXiv:2402.13220, 2024. [36] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie. Dyval: Dynamic evaluation of large language models for reasoning tasks. In ICLR, 2024. [37] Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, and Xing Xie. Dyva... | https://arxiv.org/abs/2505.22467v1 |
systems. In NeurIPS 2024 Workshop on Open-World Agents, 2024. [56] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. NeurIPS, 36:46595–46623, 2023. A Notations. Table 1 sho... | https://arxiv.org/abs/2505.22467v1 |
success rate, latency, recent task metrics such as F1/BLEU scores [43]; and 4) Persona attributes: communication styles, creativity, safety stance. Task Bank. Tasks X are organized in a coarse-to-fine faceted taxonomy: 1) skill: reasoning, code generation, tool use; 2) modality: text, code, image, audio, and video; 3)... | https://arxiv.org/abs/2505.22467v1 |
low-cost reward estimates. Scorer models are architecturally agnostic and can take various forms: a smaller, instruction-tuned specialist model [54, 55], a rule-based evaluator (e.g., exact string match, BLEU/ROUGE, BERTScore, toxicity detectors), or LLM-as-a-Judge [56]. Critically, the scorer does not need to surpa... | https://arxiv.org/abs/2505.22467v1 |
example, an edge eᵢⱼ ∈ E indicates agent aᵢ's output is forwarded as input context to agent aⱼ, namely aᵢ → aⱼ. This graph can encode various topologies (chain, star, fully connected, etc.) and implicit schedules of agent interactions (a directed acyclic graph imposes a partial order of information flow). Given a particula... | https://arxiv.org/abs/2505.22467v1 |
utilized, then decide the directed connections E among them. This could be achieved by outputting additional binary variables qᵢ(x) indicating whether to activate agent aᵢ for task x. The resulting structure policy could be factorized as Π(x) = (V(x), E(x)). For example, Π might select agents with the required skills for x (... | https://arxiv.org/abs/2505.22467v1 |
gratuitous use of fully connected messaging. The outcome of training is a policy Π∗ that profiles the task structure, i.e., given a new problem, Π∗ can swiftly instantiate a near-optimal multi-agent network for solving it. Formally, we hope to approximate G_Π(x) ≈ G∗(x) for all x in the domain of interest, without an exhaust... | https://arxiv.org/abs/2505.22467v1 |
micro-topology G, and G \ {e} or Â \ {aᵢ} respectively represent topologies with the edge or agent removed. Given these contribution scores, we formalize the topology refinement as a constrained selection problem. The goal is to identify a subgraph G′ = (Â′, E′) ⊆ G that retains only components with sufficiently high task-a... | https://arxiv.org/abs/2505.22467v1 |
1 HUMAN-CENTERED HUMAN-AI COLLABORATION (HCHAC). A. Author's name and details. Qi Gao (Zhejiang University, Hangzhou, China, qi.gao@zju.edu.cn, https://orcid.org/0000-0002-4984-877X), Wei Xu (Zhejiang University, Hangzhou, China, weixu6@yahoo.com, https://orcid.org/0000-0001-8913-2672), Hanxi Pan (Zhejiang Univer... | https://arxiv.org/abs/2505.22477v1 |
capabilities rather than replace them, promoting utility and human values throughout the AI lifecycle. This chapter provides a comprehensive examination of HAC from a human-centered perspective, addressing both theoretical foundations and empirical evidence in this emerging field. The levels and forms of human-AI in... | https://arxiv.org/abs/2505.22477v1 |
AI gains at this level compared to the previous one. This AI-capability-driven approach offers a concise perspective for classifying levels of HAC. However, it reflects a technology-centered viewpoint; ongoing HAC research needs a shift toward a human-centered perspective (Xu & Gao, 2024). This shift calls f... | https://arxiv.org/abs/2505.22477v1 |
with broad human-computer interaction (HCI). O'Neill et al. (2022) proposed that possessing the following characteristics is enough to constitute human-AI teaming: (1) The intelligent agent is regarded as an "independent entity" by human teammates in the human-machine team, and the intelligent agent has a consid... | https://arxiv.org/abs/2505.22477v1 |
(LOA 1-4), the machine only provides information about the task, lacking the freedom to make decisions and the ability to independently engage in activities without pre-programmed instructions. As it moves to higher levels (LOA 5-6), the machine begins to recommend and execute actions, albeit with human oversight... | https://arxiv.org/abs/2505.22477v1 |
a sense of belonging, social reciprocity, trust, and equality, which also appear in the interaction between humans and intelligent agents (Lyons et al., 2021). However, HAC has distinct advantages and disadvantages, which determine the need for new research paradigms stepping out of the shadow of human teams. First, HA... | https://arxiv.org/abs/2505.22477v1 |
on performance and the shift towards AI as a collaborative partner. These clusters highlight the diverse aspects of HAC, from human factors and design science to computer science and engineering, which requires multi-discipline collaborations. Table 3: Five clusters identified in Human-AI teaming-related research... | https://arxiv.org/abs/2505.22477v1 |
human-AI interaction while highlighting the emerging focus areas in human factors research that are essential for developing effective, ethical, and human-centered AI systems. Table 4: Research Focus Transition in Human-AI Interaction (Xu et al., 2024). Columns: New Features, New Issues, Research Focus. From “expected” to “p ... | https://arxiv.org/abs/2505.22477v1 |
space interaction. New demands for HCI in virtual-real integrated spaces; new experiences of immersion, interactivity, and parallel seasonal environments in the metaverse; ethics, information presentation, brain-computer integration, etc., in the metaverse interaction space; new issues brought by multimodal continuity... | https://arxiv.org/abs/2505.22477v1 |
and recognize user physiological, cognitive, behavioral, intentional, emotional, and other states through sensing systems, while humans obtain the best SA through multimodal interfaces. (5) Emphasize the complementarity of humans and machines. The bidirectional interaction framework enables humans and intelligent agen... | https://arxiv.org/abs/2505.22477v1 |
memory, and cognition. Engineering psychology: In human-computer operating environments, using work performance measurement (reaction time, error rate, etc.) and subjective evaluation methods to evaluate the relationship between human psychological activities and performance, and to optimize human-computer system desi... | https://arxiv.org/abs/2505.22477v1 |
tasks, thereby establishing models that facilitate system adaptability (Vicente, 1999). During the promotion and implementation phases, AI technologies may be influenced by various social factors. It is essential to incorporate considerations of privacy, ethics, and other socio-moral elements to develop intelligent ... | https://arxiv.org/abs/2505.22477v1 |
often applied to some non-life-critical systems. Because the learning of intelligent agents drives the iteration of HAC systems, longitudinal studies that track team performance over a long period have become one of the best solutions for studying team relationships. 3.2. Scenarios and Platforms. Most contemporary r... | https://arxiv.org/abs/2505.22477v1 |
Berretta et al., 2023; Committee et al., 2022; Vaccaro et al., 2024; O’Neil et al., 2023), and these are synthesized using the IMO model in Figure 2. Figure 2: The Input-Mediator-Outcome model integrating multiple frameworks (Berretta et al., 2023; Committee et al., 2022; Vaccaro et al., 2024; O’Neil et al., 2023).... | https://arxiv.org/abs/2505.22477v1 |
task complexity and interdependence further shape human-AI collaboration. Tasks requiring close human-machine cooperation foster interdependence, which reduces individual workload (Walliser et al., 2017). However, increased task complexity can negatively affect team performance, potentially leading to errors or inef... | https://arxiv.org/abs/2505.22477v1 |
2022). Team viability reflects the long-term stability and adaptability of the team. Factors such as trust calibration, the mitigation of biases, and the synergy between humans and AI contribute to the sustainability of the team’s operations. These aspects ensure that the team remains functional and effective even und... | https://arxiv.org/abs/2505.22477v1 |
et al., 2022). The importance of forming team mental models is heightened by the distinct behavioral patterns exhibited by AI agents, which differ significantly from human behaviors. This divergence introduces additional complexity into human judgment and comprehension of AI agents within HAC, making such interactions ... | https://arxiv.org/abs/2505.22477v1 |
obtain appropriate information from the right sources at the right time, enabling the entire team to have synchronous perception, understanding, and prediction of the task and environmental conditions, ensuring coordinated decision-making (Gorman et al., 2017). In high-level team collaboration, the team needs to und... | https://arxiv.org/abs/2505.22477v1 |
conceptualizing SA as the activated subset of these cognitive structures. The model posits that SA emerges from activated components of the mental model—those with higher informational priority and greater accessibility compared to non-activated information. The ATSA framework employs a principle of heterogeneous homo... | https://arxiv.org/abs/2505.22477v1 |
social competencies, humans need to develop new ways of establishing rapport with AI teammates (Duan et al., 2024). Through deliberately designed appropriate social traits (e.g., appearance, social roles, social competencies) and behaviors (both verbal and non-verbal), AI agents can modulate human social cognition, fos... | https://arxiv.org/abs/2505.22477v1 |
allocation requires the system to maximize human-intelligence complementarity on the premise of being human-centered. Therefore, humans should have as much control as possible in the process of task allocation, while machines can adaptively adjust the LOA according to the advantages and disadvantages of humans in th... | https://arxiv.org/abs/2505.22477v1 |
override, where humans as the initiator transfer system control from intelligent agents to humans, is called human override; although it is a widely used TOC method in many situations, current research on it is relatively scarce. Subsequent research added factors such as time, compulsion, and predictability based on this classi... | https://arxiv.org/abs/2505.22477v1 |
referring to a complete transfer of decision-making and control from humans to AI. The remaining six modes involve cooperative human-AI interactions, which the author further categorized based on information processing and interaction initiation. These six modes are as follows: AI-first, Secondary, AI-guided, AI-... | https://arxiv.org/abs/2505.22477v1 |
These modalities may be employed independently or in combination. Similar to human-human teams, which utilize behavioral cues for implicit communication such as directional actions and gaze signals, communication in HAC must maintain consistency between verbal statements and actions (Banerjee et al., 2018) and dynami... | https://arxiv.org/abs/2505.22477v1 |
interdisciplinary collaboration across algorithm development, psychology, human factors, sociology, and related disciplines. 4.4 Team Relationship: Trust. Relationship fulfillment represents one of three fundamental human psychological needs (Ryan & Deci, 2000), and assumes particular significance when examining intera... | https://arxiv.org/abs/2505.22477v1 |
the box to “prior knowledge” represents that all factors in the box can be converted into the user's prior experience. The purpose of studying trust is to calibrate trust, maintaining an appropriate level of trust in machines by human operators. Not surprisingly, studies on human-machine trust in HAC mainly revolve... | https://arxiv.org/abs/2505.22477v1 |
(Adriasola et al., 2021). Traditional static leadership, vertical leadership, is characterized by a hierarchical structure where the leader holds authority and is responsible for guiding the team. It describes the top-down leadership of external team leaders, ensuring formalized roles and decision-making processe... | https://arxiv.org/abs/2505.22477v1 |
where both human and AI sample from the shared world, forming mental models, which further guide their actions. Figure 5: Human-Centered Human-AI Collaboration (HCHAC) Framework. The framework delineates two foundational pathways: (1) AI empowering human (indicated in yellow) stems from the value alignment of the ... | https://arxiv.org/abs/2505.22477v1 |
leverage AI capabilities (Dellermann et al., 2021). Recent research by Akata et al. (2024) also suggests that orchestrator frameworks, where humans strategically deploy AI systems based on their strengths, outperform both fully automated and traditionally supervised approaches. (2) Principle 2: AI Empowering Humans Tra... | https://arxiv.org/abs/2505.22477v1 |
cognitive readiness (de Melo et al., 2021). These processes support shared leadership where decision authority shifts based on contextual advantages. Vehicles with theory-of-mind capabilities demonstrate superior collaboration in complex scenarios (Rabinowitz et al., 2018). Team decision-making represents a process w... | https://arxiv.org/abs/2505.22477v1 |
autonomous system collaborate on achieving team understanding and team control. Ultimate human control manifests through design choices prioritizing driver agency and control. Effective autonomous systems enhance capabilities rather than replace drivers, supporting them through cognitive augmentation while preservin... | https://arxiv.org/abs/2505.22477v1 |
computational models that encapsulate HAC characteristics. (4) As AI technology develops, HAC not only happens in mechanical tasks but also occurs in complex social contexts. This poses new requirements for HAC, particularly regarding situational awareness and mental models, which must consider more complex social inf... | https://arxiv.org/abs/2505.22477v1 |
–35. Barber, D., Leontyev, S., Sun, B., Davis, L., Nicholson, D., & Chen, J. Y. C. (2008). The mixed-initiative experimental testbed for collaborative human robot interactions. 2008 International Symposium on Collaborative Technologies and Systems, 483–489. Bass, B. M., & Riggio, R. E. (2006). Transformational lead... | https://arxiv.org/abs/2505.22477v1 |
thinking. Nat Hum Behav 8, 1829–1830. https://doi.org/10.1038/s41562-024-01995-5 Cohen, M. C., Demir, M., Chiou, E. K., & Cooke, N. J. (2021). The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), 1–6. Committee on Hum... | https://arxiv.org/abs/2505.22477v1 |
Automated, Connected, and Intelligent Vehicles. Fan, X., & Yen, J. (2011). Modeling Cognitive Loads for Evolving Shared Mental Models in Human–Agent Collaboration. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 41(2), 354–367. Fan, L., Xu, M., Cao, Z., Zhu, Y., & Zhu, S.-C. (2022). Artif... | https://arxiv.org/abs/2505.22477v1 |
V. (2004). Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. SAGE Publications. Huang, Y. (2024). Levels of AI Agents: From Rules to Large Language Models (No. arXiv:2405.06643). arXiv. https://doi.org/10.48550/arXiv.2405.06643 Hussain, S., Naqvi, R. A., Abbas, S., Khan, M. A., Sohail, T., & H... | https://arxiv.org/abs/2505.22477v1 |
& Zhang, J. (2023). The influence of anthropomorphic cues on patients’ perceived anthropomorphism, social presence, trust building, and acceptance of health care conversational agents: within-subject web-based experiment. Journal of Medical Internet Research, 25, e44479. Lu, Z., & de Winter, J. C. F. (2015). A Review... | https://arxiv.org/abs/2505.22477v1 |
Rouse, W. B. (1985). The effects of type of knowledge upon human problem solving in a process control task. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(6), 698–707. Murphy, R. R. (2024). What will robots think of us? Science Robotics, 9(86), eadn6096. https://doi.org/10.1126/scirobotics.adn6096 Neisse... | https://arxiv.org/abs/2505.22477v1 |
https://doi.org/10.1038/s44401- 025-00016 -5 Salas, E., and Fiore, S. M. (2004). Team Cognition: Understanding the Factors That Drive Process and Performance. Washington, D.C: American Psychological Association. Salmon, P. M., & Plant, K. L. (2022). Distributed situation awareness: From awareness in individuals and tea... | https://arxiv.org/abs/2505.22477v1 |
A Knowledge Discovery Based Big Data for Context aware Monitoring Model for Assisted Healthcare. International Journal of Applied Engineering Research , 11(5), 3241 –3246. Walliser, J. C., Mead, P. R., & Shaw, T. H. (2017). The Perception of Teamwork With an Autonomous Agent Enhances Affect and Performance Outcomes. Pr... | https://arxiv.org/abs/2505.22477v1 |
arXiv:2505.22483v1 [cs.LG] 28 May 2025

A Closer Look at Multimodal Representation Collapse

Abhra Chaudhuri, Anjan Dutta, Tu Bui, Serban Georgescu

Abstract

We aim to develop a fundamental understanding of modality collapse, a recently observed empirical phenomenon wherein models trained for multimodal fusion tend to rely
the fusion strategy (Ma et al., 2022), to the best of our knowledge, there have been no prior efforts towards developing a bottom-up understanding of the underlying learning-theoretic phenomena at play. We aim to bridge this gap by developing a mechanistic theory of multimodal feature encoding that is agnostic of
simplicity bias of neural networks (Huh et al., 2023) limiting the rank of the gradient updates received at any given layer. Consequently, through Theorem 2, we arrive at the result that this gradient-rank bottleneck forces SGD to parameterize the fusion head neurons in a polysemantic manner. Interestingly, we observe
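The rank bottleneck on gradient updates can be seen in a few lines of numpy: for a linear layer, the weight gradient over a batch of B samples is a sum of B rank-one outer products, so its rank is at most B regardless of the layer's width. A minimal sketch (the layer sizes and squared-error loss are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 4, 64, 32           # tiny batch, much wider layer (illustrative)

X = rng.standard_normal((B, d_in))   # batch of inputs
W = rng.standard_normal((d_out, d_in))
Y = rng.standard_normal((B, d_out))  # arbitrary regression targets

# Squared-error loss L = 0.5 * ||X W^T - Y||^2.
# dL/dW = delta^T X: a sum of B rank-one terms, one per sample.
delta = X @ W.T - Y                  # (B, d_out) per-sample error signals
grad_W = delta.T @ X                 # (d_out, d_in) weight gradient

# The gradient's rank is capped by the batch size, not by the layer width.
print(np.linalg.matrix_rank(grad_W))
```

Even though `grad_W` is a 32x64 matrix, its rank can never exceed the batch size of 4, which is the kind of rank constraint the text attributes to SGD's updates.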
collapse from the perspective of polysemanticity (Scherlis et al., 2022; Huben et al., 2024; Lecomte et al., 2024) and low-rank simplicity bias (Huh et al., 2023). These theoretical tools have so far been restricted primarily to unimodal settings, and to the best of our knowledge, we are the first to explore them for
features contributing to the reduction of the task loss decreases, resulting in the following limit:
\[
\lim_{p(w_p)\to 1} \sum_{\forall z_y \in \mathcal{X}} \frac{\partial}{\partial w_p} \mathcal{L}\left(\varphi(z_y), y\right) = 0,
\]
where z_y denotes the predictive conjugate features in X. The modality facing the above marginal decrease in contribution to the loss reduction across its feature space is the one that
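The vanishing-gradient limit above can be mimicked with a toy two-modality linear fusion: once one modality fully explains the target, the loss gradient flowing to the other modality's weight decays to zero, and that modality stops receiving useful updates. This is only a hedged analogue of the phenomenon, not the paper's construction; the data-generating process, shapes, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
y = rng.standard_normal(n)
xa = y.copy()                           # modality A: perfectly predictive
xb = 0.1 * y + rng.standard_normal(n)   # modality B: weakly predictive, noisy

wa, wb, lr = 0.0, 0.0, 0.05
for _ in range(300):
    err = wa * xa + wb * xb - y         # residual of a linear fusion
    ga = (err * xa).mean()              # gradient w.r.t. modality A's weight
    gb = (err * xb).mean()              # gradient w.r.t. modality B's weight
    wa -= lr * ga
    wb -= lr * gb

# Once modality A explains the target (wa -> 1), the gradient through
# modality B's weight vanishes (wb -> 0): a toy analogue of collapse.
```

Because modality A alone drives the residual to zero, gradient descent settles at wa close to 1 and wb close to 0, so modality B's feature space ends up contributing nothing, mirroring the collapsing modality described in the text.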
puts to φ are dynamic (for instance, when the unimodal representations are aligned via cross-modal knowledge distillation) under some distance metric d, then at any iteration n of SGD, the norm of the difference between w and the AGOP of W is bounded as follows for all modalities i, j ∈ M and datapoints x ∈ X: lim d(x̃_i, x̃_j)
\[
\nabla_{\psi} \mathcal{L}_{md} \qquad
g_i \leftarrow g_i - \nabla_{g_i} \mathcal{L}_{sem} + \nabla_{g_i} \mathcal{L}_{md} \qquad
h_i^{-1} \leftarrow h_i^{-1} - \nabla_{h_i^{-1}} \mathcal{L}_{sem}
\]
Theoretical Rationale: The maximization of L_md by g_i brings all the modalities within the ϵ-neighborhood under d specified in Theorem 3, implementing an explicit disentanglement of noisy and predictive features. The adversarial updates to ψ and g_i are continued until the
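The encoder updates above can be sketched numerically. As a hedged simplification, the adversarially maximized L_md against ψ is replaced here by a direct feature-distance penalty between the two modalities' encodings, which plays the role of the ϵ-neighborhood alignment of Theorem 3; the scalar encoders, the additive fusion head, and every hyperparameter are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 2000, 0.3
s = rng.standard_normal(n)                # shared latent signal
x1 = s + sigma * rng.standard_normal(n)   # modality-1 view of the latent
x2 = s + sigma * rng.standard_normal(n)   # modality-2 view of the latent

a1, a2, lr, lam = 0.1, 2.0, 0.05, 1.0     # encoder scales start badly misaligned
for _ in range(500):
    z1, z2 = a1 * x1, a2 * x2             # unimodal features g_i(x_i)
    err = 0.5 * (z1 + z2) - s             # residual of the additive fusion head
    gap = z1 - z2                         # stand-in for the modality discrepancy
    # g_i update: descend the task loss L_sem while also descending the
    # feature-gap penalty, pulling the two feature spaces together.
    a1 -= lr * ((err * x1).mean() + 2 * lam * (gap * x1).mean())
    a2 -= lr * ((err * x2).mean() - 2 * lam * (gap * x2).mean())

# After training, neither modality collapses: both encoder scales stay
# non-trivial and their features remain within a small neighborhood.
```

The alignment penalty keeps the two encoders' outputs close while the task loss keeps both contributing, so the initially dominant encoder cannot starve the other of gradient, which is the intuition the rationale above attributes to the adversarial L_md updates.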
2), backpropagated gradients through the fusion head into the multimodal prefix also get rank-constrained, forcing it to allocate fractional capacities to features that would otherwise have been monosemantically represented. This makes the predictive features harder to decode, leading to the observed gap between the t
Table 2, all of which unanimously and unambiguously show the effectiveness of basis reallocation towards preventing modality collapse, confirming the result in Theorem 3. Rank and Similarity with the Multimodal Representation: Figure 5 (a) and (c) provide the most direct evidence that basis reallocation frees up rank b