is inside of ?y. This predicate applies when ?x is a block and ?y is a drawer or a slider.
- (stacked ?x - item ?y - item): ?x is stacked on top of ?y. This predicate applies when ?x and ?y are blocks.
- (unstacked ?x - item ?y - item): ?x is unstacked from ?y. This predicate applies when ?x and ?y are blocks.
**Action...
https://arxiv.org/abs/2505.21981v1
arXiv:2505.21985v1 [cs.MA] 28 May 2025. Reward-Independent Messaging for Decentralized Multi-Agent Reinforcement Learning. Naoto Yoshida1 [0000-0002-9813-0668] and Tadahiro Taniguchi1,2 [0000-0002-5682-2076]. 1 Kyoto University, Kyoto, Japan; 2 Research Organization of Science and Technology, Ritsumeikan University. {yoshida.na...
https://arxiv.org/abs/2505.21985v1
information and coordination mechanisms, limiting their relevance to models of emergent communication. Additionally, reliance on agent homogeneity has been criticized for its inability to support role differentiation based on context [45]. Second, most prior work on communication in MARL assumes cooperative environme...
state distribution is denoted by µ0 ∈ ∆(S), where ∆(·) represents the set of probability distributions over the given set. The transition-observation function is given by P(s′, x | s, a), where s′ ∈ S is the next state, x = (x1, x2, . . . , xN) ∈ X is the joint observation, and a = (a1, a2, . . . , a...
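The transition-observation function described above can be viewed as a sampler: given the current state and joint action, it returns a next state and a joint observation, one per agent. A minimal toy sketch, with a made-up three-state dynamics and per-agent observation noise (the dynamics and noise rate are illustrative, not from the paper):

```python
import random

# Toy sketch of a transition-observation function P(s', x | s, a) for two
# agents. The three-state dynamics and 10% observation noise below are
# hypothetical placeholders, not taken from the paper.
def step(state, joint_action, rng):
    # Next state depends deterministically on (s, a) in this toy rule.
    next_state = (state + sum(joint_action)) % 3
    # Each agent i receives a private observation x_i correlated with s';
    # with small probability the observation is corrupted.
    noise = [rng.random() < 0.1 for _ in joint_action]
    joint_obs = tuple((next_state + 1) % 3 if n else next_state for n in noise)
    return next_state, joint_obs

rng = random.Random(0)
s, x = step(0, (1, 1), rng)  # both agents act; state advances by 2
```

The joint observation is the only channel through which each agent sees the environment, which is why the message variable m that fuses both agents' observations can act as a state estimate.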
(7) Therefore, by maximizing JCPC(ϕi, θi) for each agent, the variational distribution Qθ(m|x) approximates the posterior Pθ(m|x), and the message variable m, which integrates the observations x from both agents, corresponds to a state estimation of the entire environment s ∈ S. Furthermore, Qϕ(m|x) in JCPC(ϕi, θi) can be ...
(6). 9: Update parameters using some gradient ascent: (ηi, θi, ϕi) ← (ηi, θi, ϕi) + α∇ηi,θi,ϕi J(ηi, θi, ϕi). 10: end for 11: end for 12: end for
multi-agent contextual bandit problems. The second, IPPO-CPC, extends the approach to more complex scenarios involving state transitions. During execution, each agent samples a m...
the performance upper bound under full information sharing, we also evaluated a shared condition, where each agent’s policy and value networks receive both observations (x1, x2) as joint input. Implementation details: For all agent architectures, the policy and value networks consisted of multilayer perceptrons (MLPs)...
[Fig. 5: Welfare vs. episode in the Bandit environment.] Results: Figure 5 presents the experimental results. The results are reported as the interquartile mean (IQM) over N runs, accompanied by bootstrapped 95% confidence intervals (CIs) with 2,000 iterations of resampling [1]. First, the shar...
MARL remains under debate [24,30]. To assess message utility in MARL-CPC, we conducted an ablation study to evaluate the impact of disrupting trained agents’ messages. Two conditions were tested: random, where messages were replaced with random values, and no, where message vectors were set to zero when input as on...
Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38(2), 156–172 (2008) 5. Cangelosi, A., Parisi, D.: The emergence of a 'language' in an evolving population of neural networks. Connection Science 10(2), 83–97 (1998) 6. De Witt, C.S., Gupta, T., Makoviichuk, D., Makoviychuk, V., Torr, P.H., Sun, M., Whites...
mechanism in multi-agent reinforcement learning. arXiv preprint arXiv:2307.05004 (2023) 28. Nomura, K., Aoki, T., Taniguchi, T., Horii, T.: Decentralized collective world model for emergent communication and coordination. arXiv preprint arXiv:2504.03353 (2025) 29. Nowak, M.A., Krakauer,...
arXiv:2505.21988v1 [cs.AI] 28 May 2025. Functional Matching of Logic Subgraphs: Beyond Structural Isomorphism. Ziyang Zheng, Kezhi Li, Zhengyuan Shi, Qiang Xu. The Chinese University of Hong Kong. {zyzheng23,kzli24,zyzshi21,qxu}@cse.cuhk.edu.hk. Abstract: Subgraph matching in logic circuits is foundational for numerous Electroni...
https://arxiv.org/abs/2505.21988v1
traditional structural matching techniques, particularly in applications requiring cross-stage queries—for example, identifying subgraphs from an abstract netlist (like an And-Inverter Graph, or AIG) within a synthesized, technology-mapped netlist. Motivated by this critical gap, we introduce an approach explicitly des...
adder; multiplier. Figure 1: We present the query and candidate graphs. Above: 1-bit adder and multiplier. Below: AIG netlist. The query b+c is explicitly contained within the candidate (b+c)×a, making it straightforward to identify the exact subgraph in the candidate. In contrast, the query b+c is implicitly contained w...
the implicit containment relation between graphs. Definition 3 (Functional Subgraph). A graph Q is a functional subgraph of G, denoted Q ≼ G, if there exists a graph G′ such that G′ ≡func G and Q is isomorphic to a subgraph of G′. This definition captures the idea that the query’s function is implicitly contained within the targe...
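Definition 3 separates function from structure: containment is checked up to functional equivalence, not graph isomorphism. For circuits small enough to enumerate, ≡func can be checked by comparing truth tables. A minimal sketch, representing circuits as plain Python callables rather than the paper's graphs (a simplification for illustration only):

```python
from itertools import product

# Sketch of functional equivalence for tiny circuits: two netlists are
# functionally equivalent when they compute the same truth table over their
# inputs. Callables stand in for AIG graphs here.
def truth_table(fn, n_inputs):
    return tuple(fn(*bits) for bits in product((0, 1), repeat=n_inputs))

def func_equiv(f, g, n_inputs):
    return truth_table(f, n_inputs) == truth_table(g, n_inputs)

# b + c written two structurally different ways (plain OR vs. De Morgan form):
f = lambda b, c: b | c
g = lambda b, c: 1 - ((1 - b) & (1 - c))
```

Here `f` and `g` are structurally different "netlists" with identical function, which is exactly the situation where structural isomorphism fails but Definition 3 still finds a match.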
minimal collection is required to functionally contain b+c via transformation. Based on Definition 4, we further define another task as: Task #2: Fuzzy Boundary Identification. Given a query graph Q and a candidate graph G such that Q ≼ G, determine for each node in G whether it belongs to the fuzzy boundary G∗ of Q. 3 Method...
determine whether a node in G_pm lies within the fuzzy boundary of G_sub. Enc_aig and the PM encoder Enc_pm for different modalities as follows: f_sub = Enc_aig(G_sub), f_aig = Enc_aig(G_aig), f_syn = Enc_aig(G_syn), f_pm = Enc_pm(G_pm). Next, we concatenate the embeddings of the query graph and the candidate graphs and feed them into a clas...
ForgeEDA [17]. Each metric in Tables 1 and 2 is reported as the mean ± standard deviation over three independent runs. For data processing, we begin by randomly sampling k-hop subgraphs (with k ranging from 8 to 12) to partition large circuits into smaller circuits. Next, we randomly sample subgraphs from these smaller c...
54.5±0.8 | 31.9±2.6 | 48.2±3.0; Ours: 83.0±1.4 | 90.7±0.9 | 85.2±0.9 | 92.0±0.5 | 83.8±0.8 | 91.2±0.4. In this stage, we treat G_sub as the query and aim to locate its fuzzy boundary within the post-mapping netlist G_pm. Since Gamora and ABGNN are designed for the detection of the input-output boundary, we first apply each to identify th...
and Overlapping Fuzzy Boundaries: Our fuzzy boundary identification method presently assumes a single, minimal enclosing region within the target graph. In practical scenarios, multiple occurrences or overlapping functional subgraphs might exist within a single large circuit, complicating boundary identification task...
perspective. In Proceedings of the 56th Annual Design Automation Conference 2019, pages 1–6, 2019. [6] Luigi P. Cordella, Pasquale Foggia, Carlo Sansone, and Mario Vento. A (sub)graph isomorphism algorithm for matching large graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10):1367–1372, 2004...
circuit representation learning. In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1–9. IEEE, 2023. [23] Zhengyuan Shi, Ziyang Zheng, Sadaf Khan, Jianyuan Zhong, Min Li, and Qiang Xu. Deepgate3: Towards scalable circuit representation learning. arXiv preprint arXiv:2407.11095, 2024. ...
of functional equivalence, we then have G1 ≡func Ḡ2 ≡func G′1. (4) Thus, by replacing Ḡ2 in G′2 with G′1, we obtain a new graph G″2 = (G′2 \ Ḡ2) ∪ G′1, (5) which satisfies G″2 ≡func G2. (6) From the definition of functional equivalence, we know that G″2 ≡func G2 and that G′1 is a subgraph of G″2. Therefore, it follows th...
cut subgraphs for the training and test circuits to generate our small-circuit dataset. For ITC99 and OpenABCD, the split follows the previous work [24]. For ForgeEDA, we randomly select 10% of the circuits in the dataset as test circuits. For small circuits, we apply Algorithm 1 to randomly sample subgraphs. Table 6: Da...
arXiv:2505.21996v1 [cs.CV] 28 May 2025. Learning World Models for Interactive Video Generation. Taiye Chen1*, Xun Hu2*, Zihan Ding3*, Chi Jin3†. 1 School of EECS, Peking University; 2 Department of Engineering Science, University of Oxford; 3 Department of Electrical and Computer Engineering, Princeton University. Abstract: Foundationa...
https://arxiv.org/abs/2505.21996v1
powerful technique for incorporating external knowledge in LLMs, shows limited benefits in our experiments with video models. Neither static retrieval with heuristic sampling nor dynamic retrieval based on similarity search significantly improved world model consistency. These findings suggest that implicitly learning ...
diverse trajectories, even directly in pixel space. The Sora model [ 8] introduced the concept of leveraging video generation models as world simulators. Extending video generation models with interactive capabilities has led to promising applications in diverse domains, including game simulation like Genie [ 9], GameN...
The framework incorporates global state conditioning and memory retrieval mechanisms to ensure spatiotemporal consistency and mitigate error accumulation. During both training and inference, retrieved memory serves as context for joint self-attention in spatiotemporal DiT blocks. The model employs per-frame noise injec...
adapting to both the input actions and the character’s spatial context, thereby improving overall coherence. Video Retrieval Augmented Generation (VRAG) Beyond global state conditioning, we propose memory retrieval augmented generation to enhance the model’s ability to leverage historical context while maintaining temp...
these baselines, we aim to verify the in-context learning capabilities of video diffusion models and assess their effectiveness in handling extended sequences. 5 Long-context Enhancement. We extend the temporal context window using the YaRN [59] modification for RoPE in temporal attention. RoPE encodes relative positions...
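The context-extension idea can be illustrated with a simplified sketch: rotary embeddings rotate each feature pair by a position-dependent angle, and rescaling positions keeps angles for long contexts inside the range seen during training. Note this shows plain position interpolation for clarity; YaRN proper rescales frequency bands non-uniformly, and all numbers are illustrative:

```python
import math

# Simplified RoPE with a context-extension scale. With scale=4, position 400
# is rotated exactly like position 100 at scale=1, so a model trained on
# short contexts sees familiar angles. (YaRN itself is more selective about
# which frequencies it rescales; this is a position-interpolation sketch.)
def rope_angles(pos, dim, base=10000.0, scale=1.0):
    # One rotation angle per feature pair (dim/2 frequencies).
    return [(pos / scale) / (base ** (2 * i / dim)) for i in range(dim // 2)]

def rotate(vec, pos, scale=1.0):
    out = []
    for i, theta in enumerate(rope_angles(pos, len(vec), scale=scale)):
        x, y = vec[2 * i], vec[2 * i + 1]
        out += [x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta)]
    return out

a = rotate([1.0, 0.0, 1.0, 0.0], 400, scale=4.0)  # extended context
b = rotate([1.0, 0.0, 1.0, 0.0], 100, scale=1.0)  # trained-range position
```

Because attention scores depend only on relative rotations, rescaling positions this way trades positional resolution for a longer usable window.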
length. We evaluate the models against ground-truth test sets using several metrics: Structural Similarity Index (SSIM) [ 62] to measure spatial consistency, Peak Signal-to-Noise Ratio (PSNR) for pixel- level reconstruction quality, Learned Perceptual Image Patch Similarity (LPIPS) [ 63] to assess perceptual similarity...
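Of these metrics, PSNR is the simplest to state concretely: it is a log-scaled function of the mean squared error. A minimal sketch, assuming frames are flattened to equal-length lists of values in [0, 1] (SSIM and LPIPS need windowed statistics and a learned network, so only PSNR is shown):

```python
import math

# PSNR between a reference frame and a prediction, both flattened to lists
# of pixel values in [0, 1] (so MAX_I = 1.0). Higher is better; identical
# frames give infinity.
def psnr(ref, pred, max_i=1.0):
    mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_i ** 2 / mse)

score = psnr([0.0, 0.5, 1.0, 1.0], [0.0, 0.5, 1.0, 0.9])
```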
further context extension using YaRN shows no improvement over the vanilla DF model. This suggests that YaRN’s context extension capabilities, while effective in language models, do not transfer effectively to video generation for maintaining world coherence. Similarly, the History Buffer method fails to effectively ut...
and LPIPS increasing by 8.1%. This demonstrates that the memory mechanism is crucial for maintaining spatiotemporal consistency and quality. Second, removing the training component also causes significant degradation, with SSIM dropping by 10.1% and LPIPS increasing by 4.3%, highlighting the weak capabilities of in-c...
Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations , 2021. [8]Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wan...
David J. Fleet. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022. [24] Uriel Singer, Adam Polyak, Eliya Nachmani, Guy Dahan, Eli Shechtman, and Haggai Hacohen. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. [25] Yu Hong, Jing Wei, Xing Liu, Xiaodi...
and Vincent Sitzmann. Diffusion forcing: Next-token prediction meets full-sequence diffusion. Advances in Neural Information Processing Systems, 37:24081–24125, 2024. [42] Zihan Ding, Amy Zhang, Yuandong Tian, and Qinqing Zheng. Diffusion world model: Future modeling beyond step-by-step rollout for offline reinforce...
Xiao, Yushi Lan, Yifan Zhou, Wenqi Ouyang, Shuai Yang, Yanhong Zeng, and Xingang Pan. Worldmem: Long-term consistent world simulation with memory. arXiv preprint arXiv:2504.12369, 2025. [58] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position...
segment Gj, k frames are randomly sampled to form a subset Fj ⊆ Gj (where |Fj| = k). The retrieved memory z_mem is constructed as the concatenation of these sampled frames, z_mem = [F1, F2, . . . , F_NS], totaling NS·k frames. This design with recency bias implies that the sampling density k/Lj is higher for more recent segments, t...
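The recency-biased sampling above can be sketched directly: the history is split into segments whose lengths Lj grow for older frames, and the same number k of frames is drawn from each segment, so density k/Lj is highest for recent history. Segment lengths and k below are illustrative, not the paper's settings:

```python
import random

# Sketch of recency-biased memory retrieval: k frames per segment, with
# older segments covering more frames (larger L_j), so recent history is
# sampled more densely. Frame indices stand in for actual frames.
def sample_memory(frames, seg_lengths, k, rng):
    z_mem, start = [], 0
    for L in seg_lengths:
        segment = frames[start:start + L]
        z_mem += sorted(rng.sample(segment, k))  # k uniform draws per segment
        start += L
    return z_mem  # N_S * k frames in total

rng = random.Random(0)
frames = list(range(28))                 # frame indices, oldest first
z = sample_memory(frames, [16, 8, 4], k=2, rng=rng)
```

Here the oldest 16 frames contribute 2 samples (density 1/8) while the newest 4 frames also contribute 2 (density 1/2), matching the recency bias described in the text.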
from current context frames along the temporal dimension, we incorporate a temporal offset of 100 in the rotary position embeddings of temporal attention for retrieved frames. C Additional Experiments. C.1 Analysis of Compounding Error. Evaluation Metrics: Traditional metrics like SSIM, PSNR, and LPIPS measure pixel-lev...
generalization capability. When evaluating videos generated by new methods or datasets, the discriminator may be deceived into assigning inappropriately high scores. Therefore we do not report the discriminator score in the main paper, and advocate more investigation into faithful evaluation of compounding error in fut...
training) 0.218 11.588 0.712; VRAG (no memory) 0.205 11.367 0.746. Figure 15: World coherence evaluation on all methods for PSNR (left) and LPIPS (right). Figure 16: Compounding error evaluation on all methods for PSNR (left) and LPIPS (right). Figure 17: Ablation study of VRAG components for world coherence (left) an...
arXiv:2505.22003v1 [cs.CL] 28 May 2025. LEGAL ASSIST AI: LEVERAGING TRANSFORMER-BASED MODEL FOR EFFECTIVE LEGAL ASSISTANCE*. Jatin Gupta†, Akhil Sharma†, Saransh Singhania†, Ali Imam Abidi‡. Department of Computer Science and Engineering, Sharda University, Greater Noida, India. †Equal contribution; ‡Corresponding author: a...
https://arxiv.org/abs/2505.22003v1
enable legal departments and law firms to significantly scale their contract review throughput without a proportional increase in costs [6]. This paper proposes "Legal Assist", which in concept and application is a robust legal assistant. The proposed model has been tested on two different unstructured lines of questi...
[10]. Evaluation of law-specific LLMs has to be done using benchmarks that stress legal tasks. Because the legal processes of different countries vary so significantly, it is very difficult to create a single global evaluation benchmark for legal LLMs. LegalBench is a collaboratively built legal reason...
The problem described can be represented as the following pipeline:

D_legal --(LangChain Loader)--> D_chunks --(HuggingFace Embeddings)--> E_vectors --(FAISS Indexing)--> V_store --(Storage)--> V_saved (1)

where:
• D_legal is the collection of legal documents
• D_chunks are the processed document chunks
• E_vectors a...
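The flow in Eq. (1) can be sketched end to end with toy stand-ins: the real system uses LangChain loaders, HuggingFace embeddings, and FAISS, but here a bag-of-words vector and a linear cosine scan play those roles so the sketch stays self-contained. Documents and queries below are invented examples:

```python
from collections import Counter
import math

# Toy stand-in for the Eq. (1) pipeline: documents -> chunks -> embeddings
# -> searchable index. Bag-of-words Counters replace HuggingFace embeddings;
# a linear cosine scan replaces FAISS.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["The contract term is five years.", "Notice must be given in writing."]
chunks = [c for d in docs for c in d.split(". ")]   # D_chunks
index = [(c, embed(c)) for c in chunks]             # V_store

def search(query, index, k=1):
    q = embed(query)
    return [c for c, v in sorted(index, key=lambda cv: -cosine(q, cv[1]))[:k]]

hit = search("how long is the contract term", index)
```

Swapping the stand-ins for real loaders, embedding models, and FAISS preserves this structure; only the `embed` and `search` internals change.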
contextual data from a vector store via semantic search (search(f(X), VectorStore)), merging the retrieved information into a final prompt, and then utilizing the LLM to generate the response Y: Y = LLM(RQAC(f(X), search(f(X), VectorStore))) (2). Equation (2) encapsulates the entire workflow, ensuring efficient an...
will be quite handy for a student, researcher, or AI system intending to understand or provide responses to questions on Indian Polity. 4.1.2 Observations. i. The model was iterated over all 150 questions with the given prompt template. Then each generated answer and actual answer was checked for semantic simila...
AIBE examination, outperforming all other models. Notably, unlike GPT 3.5 Turbo, the proposed model did not hallucinate. The superior performance of the Legal Assist AI Model highlights the importance of high-quality training data in developing effective legal AI solutions. However, due to the lack of data specific to ...
[7] Jinzhe Tan, H. Westermann, and Karim Benyekhlef. Chatgpt as an artificial lawyer? In AI4AJ@ICAIL , 2023. [8]Geoff Currie, Stephanie Robbie, and Peter Tually. Chatgpt and patient information in nuclear medicine: Gpt-3.5 versus gpt-4. Journal of Nuclear Medicine Technology , 51:307–313, 12 2023. [9]Aman Tiwari, Prath...
main argument of the Constitutional Bench judgment in Kesavananda Bharti v. State of Kerala? | In Kesavananda Bharti v. State of Kerala, the Supreme Court ruled that the basic structure of the Constitution is not amendable. Fundamental rights cannot be abrogated by the Parliament. | The Supreme Court ruled by a 7-6 m...
arXiv:2505.22006v1 [cs.AI] 28 May 2025. EFFICIENTLY ENHANCING GENERAL AGENTS WITH HIERARCHICAL-CATEGORICAL MEMORY. Changze Qiao1, Mingming Lu1,*. 1 School of Computer Science and Engineering, Central South University. ABSTRACT: With large language models (LLMs) demonstrating remarkable capabilities, there has been a surge i...
https://arxiv.org/abs/2505.22006v1
as in-context examples at test time. Yet, as memory size grows and the need for semantic structuring increases, these systems encounter memory redundancy and overhead issues. Additionally, diverse task types stored in memory can interfere with the outputs of LLMs, making it essential for agents to focus specifically...
metric to retrieve the top-k most relevant in-context examples (i.e., those with confidence scores greater than θ). If the number of suitable matches is insufficient, the remaining candidates are retrieved from the deep-retrieval pool via database queries. Finally, the most relevant trajectories are integrated into the ...
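The two-stage retrieval just described reduces to a simple rule: take the top-k examples above the confidence threshold θ from the fast pool, and back-fill from the deep-retrieval pool when fewer than k qualify. A minimal sketch with illustrative scores and pool contents (names and values are invented):

```python
# Sketch of threshold-gated retrieval with a deep-pool fallback. scored_fast
# is [(example, similarity)] from the fast in-memory pool; deep_pool stands
# in for the database-backed deep-retrieval pool.
def retrieve(scored_fast, deep_pool, k, theta):
    confident = [ex for ex, s in sorted(scored_fast, key=lambda t: -t[1])
                 if s > theta]
    picked = confident[:k]
    if len(picked) < k:
        # Fallback: back-fill the remaining slots from the deep pool,
        # standing in for the database query in the paper.
        picked += deep_pool[: k - len(picked)]
    return picked

fast = [("traj_A", 0.91), ("traj_B", 0.42), ("traj_C", 0.88)]
deep = ["traj_D", "traj_E"]
examples = retrieve(fast, deep, k=3, theta=0.5)
```

Only two fast-pool entries clear θ = 0.5, so one slot is filled from the deep pool, mirroring the fallback path in the text.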
compositional visual reasoning framework [6]. What distinguishes our approach is that, for each specific task, EHC retrieves insights of the same category from the memory pool and selects the top-k most relevant trajectories within that category as few-shot contextual examples. These, combined with a trajectory templ...
few-shot reasoning from LLMs, TOEL strengthened the model’s adaptability to diverse categories. When combined with HMR, TOEL achieved additional improvements in accuracy on both datasets. This indicates that TOEL can effectively identify and leverage category differences, thus significantly enhancing the accuracy a...
“Visual programming: Compositional visual reasoning without training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 14953–14962. [7] D. Surís, S. Menon, and C. Vondrick, “Vipergpt: Visual inference via python execution for reasoning,” in Pro- ceedings of the IEE...
Technical Report, Qwen. VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforcement Learning. Qiuchen Wang1, Ruixue Ding2, Yu Zeng1, Zehui Chen1, Lin Chen1, Shihang Wang2, Pengjun Xie2, Fei Huang2, Feng Zhao1†. 1 MoE Key Laboratory of Brain-inspired In...
https://arxiv.org/abs/2505.22019v1
[Figure: training pipeline overview. Given instructions and a human query, a rollout module interacts with the external environment (search engine and visual perception functions); the policy model (VLM) produces trajectories H1, H2, ..., HG, scored r1, r2, ..., rG by a RAG-specific reward function; group computation yields advantages Adv1, Adv2, ..., AdvG, and a KL term against the reference model is used to update the policy model.]
Inspired by the current think-then-answer approach and the ReAct paradigm, we model the interaction between the VLMs and the search engine, along with the visual perception action space, as a process of iterative reasoning and tool invocation. Figure 1(a) illustrates our training pipeline, which supports automatic samp...
select regions with high information density or regions relevant to the query for a detailed view, acquiring information from a coarse-to-fine perspective. We integrate search queries, answer summaries, and visually specific actions into a unified action space to align with the model’s pre-train...
propose a multi-expert sampling strategy to scale up the trajectory data, aiming to sample diverse interactions within the same reasoning trajectory for each data point. The core idea is to utilize large-scale models πLM to effectively guide the reasoning process and tool selections within a trajectory...
pattern reward is designed to encourage the model to follow the reasoning patterns during the interaction process: r_Pat ∼ Parse(H), (10) where H is the generated trajectory. Parse(·) employs action tokens <search> and </search> to extract predefined actions in the action space. This is crucial for a reasoning agent with a...
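The Parse(·) step in Eq. (10) amounts to scanning the trajectory for well-formed action tokens and rewarding the expected pattern. A minimal sketch; the tag set, reward values, and example trajectory below are illustrative, not the paper's exact scheme:

```python
import re

# Sketch of Parse(.) in Eq. (10): extract well-formed action spans such as
# <search>...</search> from a generated trajectory H, and award a pattern
# reward when at least one valid action is present.
ACTION_TAGS = ("search", "answer")  # illustrative action space

def parse_actions(trajectory):
    actions = []
    for tag in ACTION_TAGS:
        actions += [(tag, body) for body in
                    re.findall(f"<{tag}>(.*?)</{tag}>", trajectory, re.S)]
    return actions

def pattern_reward(trajectory):
    # Binary reward here; a real scheme could grade partial compliance.
    return 1.0 if parse_actions(trajectory) else 0.0

H = "<think>need the 2023 figure</think><search>revenue 2023 table</search>"
r_pat = pattern_reward(H)
```

Malformed or missing tags yield zero reward, which is the pressure that keeps the agent's outputs machine-parseable during multi-turn interaction.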
Methods | SlideVQA (Single-hop, Multi-hop) | ViDoSeek (Extraction, Logic) | MMLongBench (Text, Table, Chart, Figure, Layout) | Overall
Qwen2.5-VL-3B-Instruct:
Vanilla RAG | 15.1, 12.1 | 8.8, 14.3 | 3.9, 5.1, 1.7, 3.1, 2.5 | 11.2
ReAct | 11.8, 9.9 | 5.3, 7.4 | 6.5, 3.7, 3.9, 5.2, 2.5 | 8.4
Search-R1 | 17.5, 13.8 | 13.3, 20.7 | 3.4, 3.2, 4.5, 4.1, 6.8 | 14.1
Vanilla RAG | 19.4, 12.2, 10....
methods, OCR-based methods exhibit significant limitations on visually intensive benchmarks. On the one hand, visual information inherently contains elements that cannot be represented by text, such as element positions, layout, and color. On the other hand, the perceptual capabilities of OCR models are considerab...
informative context. As shown in Figure 4, our model is more effective at retrieving relevant information compared to traditional prompt-based rewrite methods. Our approach provides the vision model with a better context for generating high-quality answers. [Figure: per-category comparison (Text, Table, Chart, Figure, Layout) of the vanilla baseline and variants.]
an evaluation model to assess the quality and relevance of generated responses in a more flexible manner. This approach not only aligns better with real-world applications but also provides a more stable and effective training signal, as demonstrated in Appendix A. The model-based reward thus enables VRAG-RL to achieve...
approach to enhance these capabilities (Guo et al., 2025; Jaech et al., 2024). Previous work applied RL in the training of LLMs (Meng et al., 2024; Williams, 1992; Rafailov et al., 2023; Schulman et al., 2017; Guo et al., 2025). Additionally, more and more works aim to use RL to enhance the reasoning capabilities of ...
Jaemin Cho, Debanjan Mahata, Ozan Irsoy, Yujie He, and Mohit Bansal. M3docrag: Multi-modal retrieval is what you need for multi-page multi-document understanding. arXiv preprint arXiv:2411.04952, 2024. Yuning Du, Chenxia Li, Ruoyu Guo, Xiaoting Yin, Weiwei Liu, Jun Zhou, Yifan Bai, Zilin Yu, Yehua Yang, Qingqing Dang...
et al. Benchmarking multimodal retrieval augmented generation with dynamic vqa dataset and self-adaptive planning agent. arXiv preprint arXiv:2411.02937 , 2024. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv ...
Choi, and Manling Li. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning, 2025b. URL https://arxiv.org/abs/2504.20073 . Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning , 8:229–256, 1992. Jialong Wu, Wenbi...
(EM) or Recall, used in previous work (Jin et al., 2025; Chen et al., 2025a), our model-based reward provides a more flexible and comprehensive evaluation of the generated response. This leads to higher training efficiency and better generalization to diverse datasets. B The implementation of the search engine. To effec...
designed for visually rich document retrieval-reason-answer tasks. It aims to evaluate the performance of RAG systems on large-scale document collections. Unlike traditional VQA datasets that focus on single images or documents, ViDoSeek contains queries with unique answers across a collection of approximately 6,000 im...
we can observe that the model demonstrated reflective capability, and eventually identified subtle clues in the relevant images. Moreover, as shown in Figure 8, the model engages in visual perception actions only when required, showcasing human-like reasoning instead of simply replicating patterns from its training dat...
GL-PGENet: A Parameterized Generation Framework for Robust Document Image Enhancement. Zhihong Tang, Yang Li*. QQ Browser R&D Team, Tencent CSIG. harryzhtang@tencent.com. Abstract: Document Image Enhancement (DIE) serves as a critical component in Document AI systems, where its performance substantially determines the ef...
https://arxiv.org/abs/2505.22021v1
al. 2021). Contemporary image restoration research predominantly focuses on specialized models targeting individual degradation types - exemplified by BEDSR-Net (Lin, Chen, and Chuang 2020) for shadow removal and DeepDeblur (Mei et al. 2019) for deblurring. Recent efforts to address multi-degradation challenges in colo...
optimization (Annabestani and Saadatmand-Tarzjan 2019). And a spatial segmentation approach proposed by Xiong et al. (Xiong et al. 2018) partitions the document image into non-overlapping w×w sub-regions for localized contrast enhancement, followed by SVM-based threshold selection. However, empirical analyses reveal...
address multi-degradation in real-world scenarios while maintaining critical document information. 2.2 Generative Models. Generative Adversarial Networks (GANs) (Goodfellow et al. 2020) represent a fundamental advancement in generative modeling through implicit density estimation. The GAN framework employs a dual-netw...
is common that multiple degradations often occur together in real-world scenarios. The DocUNet dataset (Ma et al. 2018) is a valuable resource for DIE, featuring a variety of degradations such as shadows, wrinkles, and bleed-through. But this dataset is limited by two key constraints: its small size (130 images) pro...
et al. 2022), enabling efficient [4: https://github.com/sparkfish/augraphy] Figure 3: Qualitative Comparison with State-of-the-Art DIE Methods. (a) Original degraded images; (b) Ground-truth reference images; (c) DocProj (Li et al. 2019); (d) DocRes (Zhang et al. 2024); (e) DocTr (Feng et al. 20...
al. 2018) with dense block. The enhanced output is formulated through Eqs. 2–4, combining these two branches. This architecture provides two principal advantages: (1) enhanced local consistency achieved by constraining linear transformation parameters through Eq. 6, which generates smoother output through parameter ...
reveals fundamental limitations in its direct application to document image enhancement. Document images exhibit richer high-frequency components compared to natural images, necessitating specialized architectural considerations. Our analysis identifies two critical design requirements: (1) enhanced feature r...
both Local + Global cascading and standalone global processing. These systematic evaluations validate our design choices for operation fusion and multi-stage coordination. The final GL-PGENet architecture, illustrated in Fig. 1, integrates these optimal configurations. [Figure 6: NestUNet diagram (RGB, 3 coefficients, 0.5×/2× scaling).]
LRNet. (a) Operation fusion strategies — Fusion Strategy: SSIM/PSNR — Cascading: 0.5781/12.36; Additive: 0.8717/18.55; Concatenation: 0.9111/21.92. (b) Stage integration strategies — Integration Strategy: SSIM/PSNR — Local + Global: 0.9231/23.44; Global + Local: 0.9480/24.10; Global: 0.8823/20.12. ...ing task of DIE. 4.6 Efficient High-resoluti...
contributions to Document AI research: 1) A hierarchical coarse-to-fine enhancement architecture that efficiently balances global consistency with local detail preservation; 2) A lightweight Global Perception Parameter Network that replaces computationally intensive pixel-wise estimation with efficient parametric re...
IEEE. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM 63(11):139–144. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; and Courville, A. C. 2017. Improved training of wasserstein gans...
Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch. Peebles, W., and Xie, S. 2023. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4195–4205. Pratikakis, I.; Zagoris, K.; Barlas, G.; and Gatos, B. 2017. Icdar201...
the 31st ACM International Conference on Multimedia, 2795–2806. Zamora-Martínez, F.; España-Boquera, S.; and Castro-Bleda, M. 2007. Behaviour-based clustering of neural networks applied to document enhancement. In International Work-Conference on Artificial Neural Networks, 144–151. Springer. Zhang, J.; Chen, ...
Improving Respiratory Sound Classification with Architecture-Agnostic Knowledge Distillation from Ensembles. Miika Toikkanen1, June-Woo Kim†1,2. 1 RSC LAB, MODULABS, Republic of Korea; 2 Department of Psychiatry, Wonkwang University Hospital, Republic of Korea. miika.toikkanen.2@gmail.com, kaen2891@gmail.com. Abstract: Respira...
https://arxiv.org/abs/2505.22027v1
architecture-agnostic approach to distill ensemble knowledge into individual models. Our main contributions are summarized as follows: • We demonstrate the strength of soft-label distillation from ensembles on RSC data and set a new SOTA score of 64.39 on the ICBHI dataset. • We explore methods for creating the soft lab...
ensemble creates soft labels by computing the mean of the logits from all predictors, or by sampling the logits of a single predictor from the ensemble. The following sections explain the process in detail. 3.1. BTS Model. The BTS (Bridging Text and Sound) [20] model is a multimodal framework designed to integrate r...
soft labels from the teacher model instead of the hard labels y. In this process, the hard labels are not used at all. The resulting knowledge distillation losses are Lµ for the mean teacher and Lr for the random teacher: L_CE = H(y, ŷ) = −Σ_{i=1}^{n} y_i log(ŷ_i) (3); L_µ = H(p_µ, ŷ) (4); L_r = H(p_r, ŷ) (5). We use the suffix “-d” to denot...
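The mean-teacher soft labels and the distillation loss in Eqs. (3)–(5) can be sketched concretely: member logits are averaged, passed through softmax to give p_µ, and the cross-entropy H(p_µ, ŷ) against the student's prediction replaces the hard-label loss. The logits below are invented for illustration:

```python
import math

# Sketch of mean-teacher soft labels (Eq. 4): average the ensemble members'
# logits, softmax them into p_mu, and take cross-entropy against the
# student's predicted distribution. Hard labels y are never used.
def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mean_teacher_soft_labels(member_logits):
    n = len(member_logits)
    mean_logits = [sum(col) / n for col in zip(*member_logits)]
    return softmax(mean_logits)

def cross_entropy(p_teacher, p_student):
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

ensemble = [[2.0, 0.5, -1.0], [1.5, 1.0, -0.5]]   # logits from 2 predictors
p_mu = mean_teacher_soft_labels(ensemble)
loss_mu = cross_entropy(p_mu, softmax([1.8, 0.7, -0.8]))  # student logits
```

The random-teacher loss Lr follows the same shape, except a single member's logits are sampled instead of averaged before the softmax.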
comparison, we separate the ensemble models (labeled with “++”) from the non-ensemble models, as they multiply the test-time compute cost. Without additional test-time compute cost, the distilled BTS-d outperforms all previous approaches. The highest score was achieved with the random teacher at k = 15, increasing the previous...
teacher at k = 5 and random teacher at k = 15, respectively. Selecting the best models as teachers is also beneficial. We picked the 5 highest-scoring checkpoints as the curated mean-teacher ensemble and further raised the score to 64.61%. Since the soft label is only applied during training, this can be considered a ...
label setting for our experiments because the random teacher requires a larger k value to reach similar performance. We also evaluated the ensembled model BTS++ at each k ∈ [1...30]. Figure 3 compares the ICBHI score and validation loss of the teacher ensemble BTS++, as well as the ICBHI score of the distilled student model BTS-d, plott...
sound analysis,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 9087–9091. [6] Z. Wang and Z. Wang, “A domain transfer based data augmentation method for automated respiratory classification,” in ICASSP 2022-2022 IEEE International Conference ...
, 2024. [Online]. Available: https://openreview.net/forum?id=vXnGXRbOfb [20] J.-W. Kim, M. Toikkanen, Y. Choi, S.-E. Moon, and H.-Y. Jung, “Bts: Bridging text and sound modalities for metadata-aided respiratory sound classification,” in Interspeech 2024, 2024, pp. 1690–1694. [21] B. Rocha, D. Filos, L. Mendes, I. ...
Analysis and Evaluation of Synthetic Data Generation in Speech Dysfluency Detection. Jinming Zhang1, Xuanru Zhou1, Jiachen Lian2, Shuhe Li1, William Li2, Zoe Ezzes3, Rian Bogley3, Lisa Wauters3, Zachary Miller3, Jet Vonk3, Brittany Morin3, Maria Gorno-Tempini3, Gopala Anumanchipalli2. 1 Zhejiang University, China; 2 UC Berke...
https://arxiv.org/abs/2505.22029v1
For normal ASR tasks, the dysfluency labels can just be limited to filler words [25]. The lack of a high-quality, text-diversified, naturalistic corpus with unified labels makes scaling efforts particularly challenging. In this work, we propose leveraging Large Language Models (LLMs) to generate dysfluent text across...
https://arxiv.org/abs/2505.22029v1
set your clocks forward this weekend.
• Repetition: The conference will feature keynote speeches from [REP] leading experts in the field.
• Deletion: The client wants us to [DEL] deliver the product by next month.
• Substitution: The patient needs immediate medical attention [SUB].
• Pause/Block: The team is working hard t...
https://arxiv.org/abs/2505.22029v1
requires reference audios, we extract sample clips from each VCTK speaker. This allows us to generate an equivalent set of 109 variations per LLM-generated utterance, ensuring dataset consistency. Some examples of LLM-generated utterances are shown in Fig. 1. We provide explanations for pause and prolongation imple...
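The labeled examples above boil down to inserting a dysfluency tag such as [REP], [DEL], or [SUB] at a word boundary in otherwise clean text. A minimal sketch of that injection step; the placement rule here (one tag, uniform over interior boundaries) is a simplification of the LLM-driven generation described in the text:

```python
import random

# Sketch of tag-based dysfluency injection: place a single tag like [DEL]
# at a random interior word boundary of a clean sentence. Real generation
# uses an LLM and richer placement rules; this only illustrates the format.
def inject_dysfluency(sentence, tag, rng):
    words = sentence.split()
    pos = rng.randrange(1, len(words))   # never before the first word
    return " ".join(words[:pos] + [tag] + words[pos:])

rng = random.Random(0)
out = inject_dysfluency("deliver the product by next month", "[DEL]", rng)
```

Pairing each tagged sentence with TTS audio rendered from the reference-speaker clips then yields the aligned text-audio training examples.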