text string | source string |
|---|---|
on Societies: Understanding Attitude Formation Towards AI, pages 57–70. Springer, 2024. [25] Pujen Shrestha, Dario Krpan, Fatima Koaik, Robin Schnider, Dima Sayess, and May Saad Binbaz. Beyond weird: Can synthetic survey participants substitute for humans in global policy research? Behavioral Science & Policy, page 2... | https://arxiv.org/abs/2505.22125v1 |
Jacky CK Ng, Ben CP Lam, Algae KY Au, Wesley CH Wu, Hilary KY Ng, and Sylvia Xiaohua Chen. Social axioms and psychological toll: A study of emotional, behavioral, and cognitive responses across 35 cultures during the COVID-19 pandemic. Applied Psychology: Health and Well-Being, 16(4):1679–1698, 2024. [42] Pulse Asia R... | https://arxiv.org/abs/2505.22125v1 |
It also encompasses social and political behavior (e.g., Civic Engagement). Beyond the sociodemographic and psychological frameworks, the survey instrument also includes a section assessing general citizen attitudes toward four major economic issues (e.g., inflation, minimum wage, etc.) and four key... | https://arxiv.org/abs/2505.22125v1 |
model is then asked to identify the sentiment that best reflects how someone with its assigned profile would most likely feel in response. Sentiment Simulation Using Generative AI Agents. Figure 5: Prompt Format for Categorical Profile Encoding. Figure 6: Prompt Format for Contextualized Profile Encoding. ... | https://arxiv.org/abs/2505.22125v1 |
arXiv:2505.22126v1 [cs.CV] 28 May 2025. SridBench: Benchmark of Scientific Research Illustration Drawing of Image Generation Model. Yifan Chang (1,2,*), Yukang Feng (2,3,*), Jianwen Sun (2,3,*), Jiaxin Ai (2,4), Chuanhao Li (5), S. Kevin Zhou (1), Kaipeng Zhang (2,5,†). 1 University of Science and Technology of China, 2 Shanghai Innovation Institute, 3 Nankai Univ... | https://arxiv.org/abs/2505.22126v1 |
research on AI-assisted scientific illustration remains in its early stages and is mainly focused on benchmarking the understanding capabilities of multimodal models (e.g., SciFIBench [ 13], ScImage [ 14]). There is a noticeable lack of evaluation frameworks for assessing the ability of models to generate scientific di... | https://arxiv.org/abs/2505.22126v1 |
such as DEsignBench [17], indicating its strong text-to-image alignment ability. The FLUX series of models strikes a balance among image resolution, generation speed, and cost-efficiency, making it particularly suitable for high-resolution image generation tasks. Although diffusion models have achieved remarkable results ... | https://arxiv.org/abs/2505.22126v1 |
order to test the scientific research drawing ability of image generation models, we collect and carefully curate scientific research illustration data, and define the process and standards for scientific research drawing evaluation. This process can be seen in Fig. 2. We collect data in two disciplines: Compute... | https://arxiv.org/abs/2505.22126v1 |
invited human experts to assess the content and quality of the illustrations. Only papers and illustrations judged by human experts to be of high quality and scientific value will be used to construct triples. For natural science papers, we crawl from the Nature website, which ensures the quality and au... | https://arxiv.org/abs/2505.22126v1 |
Structure diagram. 4.2 Overall Evaluation. As we can see in Fig. 3(a), Gemini-2.0-Flash scored less than 2 on each of these measures, meaning that the model had little or no ability to draw professional illustrations for scientific research papers. In the category “diagrammatic structural integrity”, Gemini-2.0-Flash ear... | https://arxiv.org/abs/2505.22126v1 |
As can be seen from Fig. 5, Gemini-2.0-Flash is still judged to lack even a preliminary ability to generate scientific illustrations in computer science, although its ratings improve somewhat compared to the natural science data. For GPT-4o-image, there was a significant decrease in the scores on the measures of ... | https://arxiv.org/abs/2505.22126v1 |
advantage over other models in terms of the quality of generated content. It produces illustrations with well-defined, well-expressed text and a clear structure, and the basic elements of the reference image are reflected in the generated results. It can be said that GPT-4o-image has had preliminar... | https://arxiv.org/abs/2505.22126v1 |
drawing of image generation models. Improving the generation ability of image generation models on tasks requiring strong inference should be a focus for future research. References [1] N. Metzger, “Dsm refinement with deep encoder-decoder networks,” 2020. [Online]. Available: https://arxiv.org/abs/2012.074... | https://arxiv.org/abs/2505.22126v1 |
Liao, A. Lokhmotov, F. Massa, P. Meng, P. Micikevicius, C. Osborne, G. Pekhimenko, A. T. R. Rajan, D. Sequeira, A. Sirasao, F. Sun, H. Tang, M. Thomson, F. Wei, E. Wu, L. Xu, K. Yamada, B. Yu, G. Yuan, A. Zhong, P. Zhang, and Y. Zhou, “MLPerf inference benchmark,” 2020. [Online]. Available: https://arxiv.org/abs/1911.... | https://arxiv.org/abs/2505.22126v1 |
Singh, T. Yu, S. Kim, V. Bursztyn, N. Vlassis, and R. A. Rossi, “Figcaps-hf: A figure-to-caption generative framework and benchmark with human feedback,” 2023. [Online]. Available: https://arxiv.org/abs/2307.10867 [29] Z. Xu, S. Du, Y. Qi, C. Xu, C. Yuan, and J. Guo, “Chartbench: A benchmark for complex visual reason... | https://arxiv.org/abs/2505.22126v1 |
readability (does it allow the reader to understand the content concisely), and aesthetic feeling, i.e., whether a drawing is aesthetically pleasing or has a sense of design, on a scale of 1 to 5 (1: fail, 2: poor, 3: fair, 4: good, 5: excellent). Please return your comments in the following format: ’completeness of textual i... | https://arxiv.org/abs/2505.22126v1 |
arXiv:2505.22128v1 [cs.CV] 28 May 2025. REAL-TIME BLIND DEFOCUS DEBLURRING FOR EARTH OBSERVATION: THE IMAGIN-E MISSION APPROACH. Alejandro D. Mousist, Thales Alenia Space, Tres Cantos, Spain. ABSTRACT This work addresses mechanical defocus in Earth observation images from the IMAGIN-e mission aboard the ISS, proposing a b... | https://arxiv.org/abs/2505.22128v1 |
the blur kernel is unknown. More recently, transformer-based architectures have emerged as promising candidates for image restoration tasks. For instance, DeblurDiNAT[8] presents a compact model that leverages dilated neighborhood attention mechanisms to achieve robust generalization and high perceptual fidelity, even ... | https://arxiv.org/abs/2505.22128v1 |
upon availability, rely on this preprocessing step to enhance data quality and optimize downstream computational tasks. Given the constraints of onboard execution without specialized hardware, the deblurring model must operate efficiently within the platform’s limited computational resources. To meet this challenge, (a... | https://arxiv.org/abs/2505.22128v1 |
and spectral normalization, ensuring effective extraction of features across all resolutions and promoting superior image reconstruction. The overall loss function combined the standard adversarial loss with an L1 loss and an FFT-domain loss—as proposed in the original MIMO-Unet++ framework—as well as a perceptual lo... | https://arxiv.org/abs/2505.22128v1 |
this contributes to an extended processing time, the results highlight the model’s adaptability in constrained environments and underscore the role of efficient memory management in optimizing performance. Occasional ringing artifacts were observed, probably due to scaling operations during patch processing (see Fig. 7... | https://arxiv.org/abs/2505.22128v1 |
Computer Vision and Pattern Recognition, pages 8183–8192, 2018. [7] Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. [8] Hanzhou Liu, Bingha... | https://arxiv.org/abs/2505.22128v1 |
arXiv:2505.22137v1 [cs.CL] 28 May 2025. Limited Generalizability in Argument Mining: State-Of-The-Art Models Learn Datasets, Not Arguments. Marc Feger, Heinrich-Heine-University Düsseldorf, Germany, marc.feger@hhu.de; Katarina Boland, Heinrich-Heine-University Düsseldorf, Germany, katarina.boland@hhu.de; Stefan Dietze, GESIS - Le... | https://arxiv.org/abs/2505.22137v1 |
Generalizability, in this regard, takes high priority, especially at leading NLP conferences such as ACL 2025, as it allows models to make reliable and reasonable predictions on data that does not correspond to their training data. This is especially true for real-world models, which should mimic human-like generaliz... | https://arxiv.org/abs/2505.22137v1 |
answering Q2 - Q3 in Section 5. The results of this paper are then discussed in Section 6 and concluded in Section 7. In order not only to elucidate the process but also to foster discussion that may inspire new approaches for novel datasets and broader generalization of argument mining methods, we contribute: 1. A ... | https://arxiv.org/abs/2505.22137v1 |
searched Google Scholar and Google Dataset Search for the keyword argument mining to find contributions beyond survey papers. Based on our assessment, we found 52 such papers with datasets, mostly from top NLP conferences like ACL, NAACL, LREC, or EMNLP. 2.2 Selection Criteria The dataset selection process for this... | https://arxiv.org/abs/2505.22137v1 |
et al., 2018) provides annotations for the Dr. Inventor dataset (Fisas et al., 2016) for computer graphics publications, totaling 16,102 sentences. CE (Rinott et al., 2015) contains 86,963 sentences from Wikipedia across 58 topics (e.g., one-child policy, physical education). CMV (Hidey et al., 2017) consists of 2,57... | https://arxiv.org/abs/2505.22137v1 |
further clarification is needed, especially concerning their generalization as part of Q2 - Q3. Table 2, with examples from different definitions, illustrates whether these efforts nevertheless converge in the identification of arguments despite different perspectives. Label Dataset Example ARGACQUA We chos... | https://arxiv.org/abs/2505.22137v1 |
similarity, a measure of similarity between two sets based on the ratio of their intersection to their union. (Q1) The sentence structures are strongly corre- lated across all datasets and labels. On average, a sentence contains 21 words, with nearly every second word (48%) being a stop or function word. Sentences are ... | https://arxiv.org/abs/2505.22137v1 |
adaptable to downstream classification. However, our goal is to assess the generalizability of these state-of-the-art argument mining models, not to find the best one. For these, we use the standard hyperparameter grid for GLUE (Wang et al., 2018), as accepted in the BERT and RoBERTa papers, balancing performance and tim... | https://arxiv.org/abs/2505.22137v1 |
no-argument (¬ARG) sentence in the original and manipulated form. 5 Results In this section, we will address and answer questions Q2 - Q3. To this end, we will mainly focus on Figure 1, which compares the pairwise experiments to show which state-of-the-art argument mining model performs best, thus reflecting the ... | https://arxiv.org/abs/2505.22137v1 |
primarily achieved in the benchmark settings, as reflected along the main diagonal. Furthermore, WRAP excels in generalizing to TACO, as seen on the right. (Q2) Strong argument mining baselines do not necessarily imply strong argument generalization. A notable observation in Figure 1 is the contrast between baselines ... | https://arxiv.org/abs/2505.22137v1 |
0.1 WTP 0.59 0.55 0.55 0.54 0.65 0.06 / 0.11 AFS 0.57 0.58 0.59 0.6 0.84 0.24 / 0.27 UKP 0.7 0.67 0.7 0.68 0.79 0.09 / 0.12 AEC 0.52 0.57 0.51 0.56 0.96 0.39 / 0.45 TACO 0.76 0.61 0.65 0.55 0.88 0.12 / 0.33 Table 4: Transformers trained on all but the target benchmark are evaluated against their state-of-the-art base... | https://arxiv.org/abs/2505.22137v1 |
when trained on joined datasets. However, in this merged setting, RoBERTa and BERT also show improved robustness, despite their stronger reliance on shortcuts in the pairwise setup. Furthermore, average differences remain moderate with ¯∆max = 0.12 and ¯∆min = 0.18 while the models learn from heterogeneous data sources. Dif... | https://arxiv.org/abs/2505.22137v1 |
(VACC, CE, TACO, UKP, IAM), where UKP, IAM, and TACO already aim for generalizable annotations. WRAP BERT RoBERTa DistilBERT SOTA ∆max/min ACQUA 0.73 0.77 0.76 0.78 0.84 0.06 / 0.11 WEBIS 0.61 0.66 0.66 0.67 0.74 0.07 / 0.13 ABSTRCT 0.83 0.87 0.84 0.87 0.89 0.02 / 0.06 ARGUMINSCI 0.78 0.79 0.77 0.77 0.84 0.05 / 0.07 CE... | https://arxiv.org/abs/2505.22137v1 |
This was particularly relevant for datasets where .ann files only provided annotated sequence boundaries for larger documents stored in .txt or .json formats. In such cases, we used spaCy for sentence boundary extraction, which may produce boundaries that differ from the original assumptions. Nevertheless, we confirme... | https://arxiv.org/abs/2505.22137v1 |
Computational Linguistics. Jérémie Cabessa, Hugo Hernault, and Umer Mushtaq. 2025. Argument mining with fine-tuned large language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6624–6635, Abu Dhabi, UAE. Association for Computational Linguistics. Elena Cabrio and Se... | https://arxiv.org/abs/2505.22137v1 |
35(6):4758–4766. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665–673. Nancy Green. 2018. Proposed method for annotation of scientific arguments in ter... | https://arxiv.org/abs/2505.22137v1 |
on Argumentation Mining, pages 19–23, Baltimore, Maryland. Association for Computational Linguistics. Xinyu Hua, Mitko Nikolov, Nikhil Badugu, and Lu Wang. 2019. Argument mining for understanding peer reviews. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Ling... | https://arxiv.org/abs/2505.22137v1 |
of the Twelfth Language Resources and Evaluation Conference, pages 4964–4973, Marseille, France. European Language Resources Association. Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computationa... | https://arxiv.org/abs/2505.22137v1 |
evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 440–450, Lisbon, Portugal. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions o... | https://arxiv.org/abs/2505.22137v1 |
Computational Linguistics. Dietrich Trautmann, Johannes Daxenberger, Christian Stab, Hinrich Schütze, and Iryna Gurevych. 2020. Fine-grained argument unit recognition and classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9048–9056. Eva Maria Vecchi, Neele Falk, Iman Jundi, and Gab... | https://arxiv.org/abs/2505.22137v1 |
[Figure 3 heatmap axes: datasets ACC, VG, WD, WTP, ECHR, AFS, UKP, AEC, TACO; color scale: Jaccard similarity 20–100.] Figure 3: The word overlaps, measured by the Jaccard similarity between the vocabularies of two datasets, show that the datasets (as well as the labels) are generally distinct from each other. The overlaps range between 3–36%, with an average of 19%... | https://arxiv.org/abs/2505.22137v1 |
WEBIS (Al-Khatib et al., 2016a) Argumentative Online Debate Yes Yes Yes 10,804 5,543 Yes AAE (Stab and Gurevych, 2014) Claim-based Academic Yes Yes Yes PE No ABSTRCT (Mayer et al., 2020b) Claim-based Academic Yes Yes Yes 1,308 7,323 Yes AMECHR (Teruel et al., 2018) Claim-based Legal Yes Yes No No AMSR (Fromm et al., 20... | https://arxiv.org/abs/2505.22137v1 |
Yes Yes IAC 4,001 1,374 Yes TACO (Feger and Dietze, 2024b) Inference-Information Twitter Debate Yes Yes Yes 864 868 Yes Table 6: Summary of the 52 datasets from the reviewed papers, sorted by their applied definitions. Data collection followed the methodology described in Section 2.1, and selection criteria are detaile... | https://arxiv.org/abs/2505.22137v1 |
FaceEditTalker: Interactive Talking Head Generation with Facial Attribute Editing. Guanwen Feng, School of Computer Science and Technology, Xidian University, Xi’an 710071, China, gwfeng_1@stu.xidian.edu.cn; Zhiyuan Ma, School of Computer Science and Technology, Xidian University, Xi’an 710071, China, zjmazy@stu.xidian.edu.cn; Yun... | https://arxiv.org/abs/2505.22141v1 |
identities. Dynamic and fine-grained attribute control can greatly enhance personalization and user engagement. Although image-level facial attribute editing methods such as GAN-based or text-driven approaches [21, 23] have achieved initial success in static image generation tasks, extending these methods to video ge... | https://arxiv.org/abs/2505.22141v1 |
21,23,43,1] leverage latent space disentanglement to enable controllable editing, while CLIP-guided approaches [ 42,36] introduce semantic alignment between language and image, allowing intuitive text-driven attribute modifications. Diffusion models offer enhanced control and fewer artifacts for editing tasks [ 5,11, 4... | https://arxiv.org/abs/2505.22141v1 |
Space Editing Module To achieve effective facial attribute editing, the Image Feature Space Editing Module leverages the design of DiffAE [ 40] with a dual-layer latent encoding structure. Inspired by the style vector mechanism in StyleGAN [ 21], our model decouples the latent space into two subspaces: semantic code zs... | https://arxiv.org/abs/2505.22141v1 |
as dynamic motion information. This mechanism ensures stable expression of semantic attributes alongside synchronized audio-driven facial movements: z_{t−1} = √α_t · z_t + √(1−α_t) · ϵ_θ(z_t, z_sem, K, t), (6) where α_t represents the noise scheduling parameters, and ϵ_θ is the conditional denoising network that guides the denoising process ... | https://arxiv.org/abs/2505.22141v1 |
38] optimizes direct mappings between audio and lip motion for highly synchronized lip movements while preserving facial textures. SadTalker [ 58] employs explicit facial landmarks and adversarial networks to produce smooth animations. DiffTalk [ 46], EchoMimic [9], and Hallo [ 54] leverage diffusion models to model co... | https://arxiv.org/abs/2505.22141v1 |
conducted quantitative evaluation. Our method edits semantic features to generate videos with 20 attributes, compared against video editing algorithms. Evaluation of identity consistency between frames showed that while our generative model achieved similar overall identity consistency as video editing methods, it sign... | https://arxiv.org/abs/2505.22141v1 |
samples of different attributes exhibit clear separation along the principal component directions, proving that semantic latent variables exhibit a linear... Figure 4: Video generation results with the editing feature enabled. Using three different reference images and the same audio clip, we demonstrate the editing and... | https://arxiv.org/abs/2505.22141v1 |
Framework for Audio-driven Multi-Subject Lip-Sync using 3D Gaussian Splatting // arXiv preprint arXiv:2505.01928. 2025. [3] Image Quality Assessment: From error visibility to structural similarity // IEEE Transactions on Image Processing. 2004. 13, 4. 93. [4] Baevski Alexei, Zhou Yuhao, Mohamed Abdelrahman, Auli Michael... | https://arxiv.org/abs/2505.22141v1 |
[19] Jähne Bernd . Digital image processing. 2005. [20] Jiang Diqiong, Chang Jian, You Lihua, Bian Shaojun, Kosk Robert, Maguire Greg . Audio- Driven Facial Animation with Deep Learning: A Survey // Information. 2024. 15, 11. 675. [21] Karras Tero, Laine Samuli, Aila Timo . A Style-Based Generator Architecture for Gene... | https://arxiv.org/abs/2505.22141v1 |
memory // Proceedings of the AAAI Conference on Artificial Intelligence. 36. 2022. 2062–2070. [36] Patashnik Or, Wu Zongze, Shechtman Eli, Cohen-Or Daniel, Lischinski Dani . StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery // 2021 IEEE/CVF International Conference on Computer Vision (ICCV). 2021. 2065–2074. [37]... | https://arxiv.org/abs/2505.22141v1 |
He Shan, Wu Xiaoyan, Hu Qiming, others. EmotiveTalk: Expressive Talking Head Generation through Audio Information Decoupling and Emotional Video Diffusion // arXiv preprint arXiv:2411.16726. 2024. [51] Wang Suzhen, Li Lincheng, Ding Yu, Fan Changjie, Yu Xin. Audio2head: Audio-driven one-shot talking-head generation... | https://arxiv.org/abs/2505.22141v1 |
high-quality, identity-consistent talking face videos. In the inference phase, the model uses the input audio sequence, reference image, and attribute information to generate facial motion features, which are processed by the diffusion model to produce high-quality, editable talking face videos. A.2 Training Stage Algo... | https://arxiv.org/abs/2505.22141v1 |
arXiv:2505.22146v1 [cs.CV] 28 May 2025. Flexible Tool Selection through Low-dimensional Attribute Alignment of Vision and Language. Guangfu Hao (1,2,†), Haojie Wen (3,4,†), Liangxuna Guo (1,5,†), Yang Chen (1), Yanchao Bi (3,4,*), Shan Yu (1,2,5,*). 1 Laboratory of Brain Atlas and Brain-inspired Intelligence, Institute of Automation, Chinese Aca... | https://arxiv.org/abs/2505.22146v1 |
suitable alternatives when preferred tools are unavailable. Despite these insights from cognitive neuroscience, computational models that effectively capture this attribute-based flexible tool selection mechanism remain underdeveloped [18]. Current approaches to modeling tool selection often rely on either direct map... | https://arxiv.org/abs/2505.22146v1 |
attribute-based approach achieves 74% accuracy in tool selection tasks, substantially outperforming direct tool name matching (20%) and smaller multimodal large language models (LLMs) (21%-58%), while showing competitive performance against much larger multimodal LLMs like GPT-4o [21] (73%) and Gemini-2.0-Pro [22] (72... | https://arxiv.org/abs/2505.22146v1 |
about tool properties and stored manipulation knowledge, with distinct neural substrates supporting each process [36]. More recent computational perspectives propose that humans build internal models of tools that enable mental simulation of potential uses before physical interaction [37]. Despite these advances in und... | https://arxiv.org/abs/2505.22146v1 |
[Figure residue: attribute-space panel (Dim 1, Dim 2) showing 13-dimensional attribute vectors, broom: 6 3 2 3 3 2 6 6 3 5 2 2 1, hammer: 5 3 2 3 2 6 6 6 5 5 5 2 3, with example scenarios:] The spilled flour was efficiently gathered into a heap on the kitchen floor. After the birthday party, the confetti was swept from the living room with ease. The gardener quickl... | https://arxiv.org/abs/2505.22146v1 |
both visual tool representations and linguistic usage descriptions—capabilities essential for flexible tool selection. The key challenge in applying these methods to tool selection lies in defining an appropriate attribute space that captures both physical and functional properties relevant to tool use. By adapting... | https://arxiv.org/abs/2505.22146v1 |
attribute prediction, we developed the Tool Scenario-Attribute Dataset of natural language scenarios describing tool usage contexts, generated using the Gemini-2.0-flash-experimental LLM. The generation process leverages each tool’s attribute ratings and attribute descriptions to create natural language descriptions of too... | https://arxiv.org/abs/2505.22146v1 |
the compatibility between a task description and a candidate tool, we define a similarity function s: A × A → R that measures the correspondence between attribute vectors. We investigate two primary similarity metrics: s_cos(a_d, a_t) = (a_d · a_t) / (||a_d|| · ||a_t||) (3) and s_euc(a_d, a_t) = −||a_d − a_t||_2 (4), where s_cos represents cosine similari... | https://arxiv.org/abs/2505.22146v1 |
task description. The attribute prediction head transforms the language features into the 13-dimensional attribute vector through a multi-layer architecture consisting of fully connected layers with dimensions [256, 128, 64, 13]. This specialized head is trained to extract attribute requirements implied by natural lang... | https://arxiv.org/abs/2505.22146v1 |
metrics. First, attribute-wise accuracy measures the model’s ability to predict individual attribute values accurately on the 7-point scale. Specifically, the predicted values are rounded to the nearest integer, and the prediction is considered correct only if it exactly matches the ground truth value. ResNet50 achie... | https://arxiv.org/abs/2505.22146v1 |
training performance and strong generalization. Additionally, we tested whether explicitly adding a question prompt (“What tool is relevant to this scene?”) at the end of each scenario description would improve attribute extraction. As shown in Fig. 3(b), this modification surprisingly decreased both training accuracy (... | https://arxiv.org/abs/2505.22146v1 |
attribute-based intermediate representation. This approach achieved only 20% accuracy, highlighting the limitations of direct mapping between scenario descriptions and tool names. Second, we tested Qwen-VL-7B, a smaller multimodal model with approximately 7 billion parameters. With straight-to-answer (STA) promptin... | https://arxiv.org/abs/2505.22146v1 |
These ablation studies reveal that our attribute space effectively captures the most salient properties for flexible tool selection, with functional and manipulation-related attributes proving particularly critical across modalities. V. DISCUSSION AND CONCLUSION Our work establishes a cognitively inspired computatio... | https://arxiv.org/abs/2505.22146v1 |
both cognitive science and computational modeling of human-like intelligent systems. Our approach provides a foundation for developing more interpretable, efficient, and neurally-grounded systems that reflect the remarkable flexibility of human tool use. REFERENCES [1] C. Baber, Cognition and tool use: Forms of eng... | https://arxiv.org/abs/2505.22146v1 |
space describes the representation of thousands of object and action categories across the human brain,” Neuron , vol. 76, no. 6, pp. 1210–1224, 2012. [18] F. Osiurak and D. Heinke, “Looking for intoolligence: A unified framework for the cognitive study of human tool use and technology.” American Psychologist , vol. 73... | https://arxiv.org/abs/2505.22146v1 |
Psychological review , vol. 123, no. 5, p. 534, 2016. [34] G. Goldenberg and S. Hagmann, “Tool use and mechanical problem solving in apraxia,” Neuropsychologia , vol. 36, no. 7, pp. 581–589, 1998. [35] F. Osiurak, C. Jarry, P. Allain, G. Aubin, F. Etcharry-Bouyx, I. Richard, I. Bernard, and D. Le Gall, “Unusual use of ... | https://arxiv.org/abs/2505.22146v1 |
preprint arXiv:1904.05538, 2019. [49] A. Myers, C. L. Teo, C. Fermüller, and Y. Aloimonos, “Affordance detection of tool parts from geometric features,” in 2015 IEEE international conference on robotics and automation (ICRA). IEEE, 2015, pp. 1374–1381. [50] Y. Huang, J. Shi, Y. Li, C. Fan, S. Wu, Q. Zhang, Y.... | https://arxiv.org/abs/2505.22146v1 |
Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023. [64] A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan et al., “Deepseek-v3 technical report,” arXi... | https://arxiv.org/abs/2505.22146v1 |
arXiv:2505.22147v1 [cs.AI] 28 May 2025. Lifted Forward Planning in Relational Factored Markov Decision Processes with Concurrent Actions. Florian Andreas Marwitz (a,*), Tanya Braun (b), Ralf Möller (a), and Marcel Gehrke (a). a University of Hamburg, b University of Münster. ORCID (Florian Andreas Marwitz): https://orcid.org/0000-0002-9683-... | https://arxiv.org/abs/2505.22147v1 |
mayor. Within these groups, it does not matter on which exact citizen the mayor imposes a travel ban, only on how many she imposes one. Additionally, many computations over subsets of indistinguishable citizens are redundant. Thus, we propose to drastically reduce the search and action space by using a... | https://arxiv.org/abs/2505.22147v1 |
groups of indistinguishable random variables [26, 20, 9]. There are online decision making approaches adding action and utility nodes to this representation [1, 14, 13]; here, we focus on offline planning. To carry out even more lifted computations, Taghipour [30] extends lifted probabilistic inference by a general... | https://arxiv.org/abs/2505.22147v1 |
The utility of a state s is given by U(s) = R(s) + γ · max_{a∈A} Σ_{s′∈S} P(s′ | s, a) · U(s′). (1) To find the utility of a state algorithmically, we find a value function V satisfying the Bellman equation. The value function induces a policy by selecting the action that yields the maximum expected value. For computing a value funct... | https://arxiv.org/abs/2505.22147v1 |
joint distribution P_G = (1/Z) · Π_{f∈gr(G)} f, with gr(G) referring to the groundings of G w.r.t. given constraints. A grounding is the instantiation of each parfactor with an allowed constant. Let us illustrate the definition of a parfactor model: Example 2. Let W = {Sick, Epidemic}, L = {M}, D(M) = D = {a, b, c, d, e, f, g, h} with Bo... | https://arxiv.org/abs/2505.22147v1 |
set of constants and the set L is a set of logvars over D. The set X is a set of PRVs defined over L. The set of possible interpretations I_X for the groundings of the set X defines the state space. The set A is a set of action PRVs. A parfactor model G over A and X represents the transition function T: I_X × I_A × I_X ↦ R⁺₀, wi... | https://arxiv.org/abs/2505.22147v1 |
functions in our epidemic example: Example 5. The parameterized local reward functions for Example 3 are R1(Sick(M)), evaluating to −1 (1) for each person (not) being sick, and R2(Travel(M)), evaluating to 2 for each person travelling. If five persons are sick, three are not sick and four people are travelling, the to... | https://arxiv.org/abs/2505.22147v1 |
a vertex for each PRV in the current state. Two vertices are connected by an edge if and only if the PRVs associated with these two vertices share a logvar and occur together in a parfactor or a parameterized local reward function. We denote the number of (maximal) cliques by c and the size of the largest clique by w. W... | https://arxiv.org/abs/2505.22147v1 |
the state space for Example 3: Example 8. As the vertices are not connected in the relational cost graph, the state representation is (#[Sick(M)], #[Travel(M)], Epidemic). We prove that our state representation exactly covers S: Theorem 11. The representation in Definition 10 is correct. Proof Sketch. Given groundings... | https://arxiv.org/abs/2505.22147v1 |
can represent actions on sets of objects, they fail to do so efficiently. Therein, the actions for each subset would be represented on their own, resulting in exponentially many actions. In the next section, we show the complexity of Foreplan. 5 Complexity Analysis of Foreplan Having outlined Foreplan, we analyze the c... | https://arxiv.org/abs/2505.22147v1 |
approximation to prevent iterating the whole state space and thus circumventing the exponential influence of c. 6 Foreplan: Faster by Approximation While Foreplan runs in time polynomial in the number of objects, the runtime still depends exponentially on c. In this section, we present an approximation technique inspir... | https://arxiv.org/abs/2505.22147v1 |
objects, polynomial in cand exponential in the induced width of each cost network, when wis bound. Proof. Approximate Foreplan has to solve the linear program in Equation 5. The number of variables and constraints in the linear program is linear in the action space and exponential in the induced width of each cost netw... | https://arxiv.org/abs/2505.22147v1 |
(7) We can use the techniques from (Approximate) Foreplan to calculate Qa(x)efficiently. Then, we know for every action the expected re- ward in state xand keep the ones where the expected reward is at leastt. We further filter the actions by checking the restriction query P(· |x, a)≥pwith a call to Lifted Variable Eli... | https://arxiv.org/abs/2505.22147v1 |
of symbolic value iteration using extended algebraic decision diagrams (XADDs) [18, 32] for the epidemic example introduced in Example 3. We use Python 3.12 and HiGHS for solving the linear programs [17]. We run all implementations on a 13th Gen Intel(R) Core(TM) i5-1345U with 1.60 GHz and 16 GB of RAM. Figure 2 show...
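For intuition on what the ground baseline in this comparison computes, a minimal tabular value-iteration sketch on a toy deterministic MDP (the MDP itself is invented here, not the paper's epidemic model): exact value iteration touches every ground state, which is exactly the blow-up the lifted LP formulation avoids.

```python
# Minimal tabular value iteration; T[s][a] maps successor -> probability,
# R[s][a] is the immediate reward. Iterates Bellman backups to a fixed point.
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a].items())
                   for a in actions)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Two states: reward 1 is collected only in state 1; "switch" flips state.
T = {0: {"stay": {0: 1.0}, "switch": {1: 1.0}},
     1: {"stay": {1: 1.0}, "switch": {0: 1.0}}}
R = {0: {"stay": 0.0, "switch": 0.0},
     1: {"stay": 1.0, "switch": 1.0}}
V = value_iteration([0, 1], ["stay", "switch"], T, R)
print(round(V[1], 3), round(V[0], 3))  # 10.0 9.0
```

The fixed point V(1) = 1/(1 - 0.9) = 10 and V(0) = 0.9 · V(1) = 9 follows directly from the Bellman optimality equations for this chain.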
initial state while the backwards search in Golog computes the exact optimal policy. Furthermore, the techniques from Foreplan can be transferred to first-order partially observable MDPs [34]. Acknowledgements The research for this paper was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foun...
method. Mathematical Programming Computation, 10(1):119–142, 2018. [18] J. Jeong, P. Jaggi, A. Butler, and S. Sanner. An exact symbolic reduction of linear smart Predict+Optimize to mixed integer linear programming. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 10053–10067...
more complex setting: Example 12. Consider the PRVs Sick(M) and RemoteWork(M) and assume we have a parfactor defined over these two PRVs as well as the PRV Sick′(M) for the next state. Then, the relational cost graph consists of two vertices Sick(M) and RemoteWork(M) with an edge between them. Furthermore, we can understa...
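When two PRVs are connected and thus form one clique, the state must track a joint histogram over their combined assignments rather than two independent counts. A small sketch of the resulting state-space size, assuming n indistinguishable persons and the names from Example 12 (this counting argument is an illustration, not the paper's formal construction):

```python
from math import comb

# Joint histograms over the four joint assignments (s, r) in {T,F}^2:
# compositions of n into 4 non-negative parts.
def joint_histograms(n):
    return [(a, b, c, n - a - b - c)
            for a in range(n + 1)
            for b in range(n + 1 - a)
            for c in range(n + 1 - a - b)]

n = 10
hists = joint_histograms(n)
print(len(hists), comb(n + 3, 3))  # both 286 (stars and bars)
```

This is still polynomial in n, but larger than the (n+1) × (n+1) product of two independent histograms that the edge-free graph of Example 8 permits.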
Probabilities In this section, we show how Foreplan calculates the transition probabilities for the constraints in the linear program in Equation 2. For each state and action combination, Foreplan generates one constraint. Within this constraint, a sum is taken over all future states. We show how to calculate the req...
number of persons in our example. We denote by ϕ(travel′, travel, restrict) the probability of a person travelling in the next state given that person is currently (not) travelling and (not) being restricted from travelling. Then, the values $P_{t_i,k_i}$ are calculated by $P_{t_1,a_1} = \binom{a_1}{t_1} \cdot \phi(t,t,t)^{t_1} \cdot \phi(f,t,t)^{a_1-t_1}$ (17) Pt...
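Equation (17) is a binomial distribution: among a1 indistinguishable persons who currently travel and are restricted, exactly t1 keep travelling, each independently with probability ϕ(t, t, t). A minimal sketch with invented toy values for ϕ (the real values come from the parfactors of the epidemic model):

```python
from math import comb

# Sketch of Equation (17): P_{t1,a1} = C(a1, t1) * phi(t,t,t)^t1
#                                      * phi(f,t,t)^(a1 - t1)
def p_t1_a1(t1, a1, phi):
    return comb(a1, t1) * phi["t,t,t"] ** t1 * phi["f,t,t"] ** (a1 - t1)

# phi(travel', travel, restrict): assumed toy numbers with
# phi(t,t,t) + phi(f,t,t) = 1.
phi = {"t,t,t": 0.2, "f,t,t": 0.8}
print(p_t1_a1(1, 2, phi))                         # C(2,1)*0.2*0.8 = 0.32
print(sum(p_t1_a1(k, 5, phi) for k in range(6)))  # ~1.0
```

Because the persons are indistinguishable, a single binomial term covers all groundings at once; this is what lets the constraint sum over lifted states instead of ground states.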
occur together in a parfactor, a parameterized local reward function, or a basis function. In particular, all these vertices and edges are also introduced in the total relational cost graph, where we connect two vertices whenever the corresponding PRVs occur together in a parfactor. We may add more edges than for the relat...
basis functions. Since the grouped basis functions are indistinguishable, the weights used in Approximate Foreplan are evenly distributed in ALP. Next, we have the constraints. Since the backprojections and basis functions in Approximate Foreplan evaluate to the same terms as the grounded functions in ALP, each indiv...
the example. The relational cost graph does not contain any edge, because the two PRVs do not occur together in a parfactor, reward or basis function. Thus, the vertices Sick(M) and Travel(M) each are a clique of size one. Therefore, the state space representation contains two histograms along the value of the prop...
the following, we write the lifted computation in terms of x_i and g_i or h_i. In the remainder of this subsection, we show the complete constraint generation for the linear program for one example action. We omit all other actions, as they offer limited additional insight compared to the one example instantiation and constraint generation...
arXiv:2505.22148v1 [cs.AI] 28 May 2025 What Makes a Good Reasoning Chain? Uncovering Structural Patterns in Long Chain-of-Thought Reasoning Gangwei Jiang1,2*, Yahui Liu3*, Zhaoyi Li1,2, Qi Wang3, Fuzheng Zhang3, Linqi Song2, Ying Wei4, Defu Lian1† 1University of Science and Technology of China, 2City University of Hong Ko...