w.r.t. the supremum norm. Since the existence of such a close approximation is guaranteed by (b), the sequence {Hℓ} is asymptotically characteristic. A.2 Proof of Thm. 4.2 (Oracle consistency) This result follows trivially from the universal consistency of the RF algorithm itself. Any partition of X that satisfies regular...
https://arxiv.org/abs/2505.21441v1
kernel evaluations via Eq. 2. If two distinct vectors ψ, ψ′ produce identical values when left-multiplied by ΠS, then the corresponding inputs x, x′ cannot be separated by fn. Therefore, with probability tending toward 1 as sample size grows, we conclude that the ILP of Eq. 3 is uniquely solved by the true leaf assignments...
% Categorical indicates the proportion of categorical features. # Classes indicates the total cardinality of all the categorical features.

Dataset         Code      #Samples  #Numerical  #Categorical  #Total  %Categorical  #Classes
Abalone         abalone   4177      8           1             9       0.11          3
Adult           adult     45222     5           9             14      0.64          100
Banknote Auth.  banknote  1372      4           1             5       0....
want to structure the network such that the size of each hidden layer decreases uniformly from dX to dZ at encoding and increases uniformly from dZ to dX at decoding. Our structure then is: Input(dX) → Dense(dX − (dX − dZ)×1/3) → Dense(dX − (dX − dZ)×2/3) → Latent(dZ) → Dense(dX − (dX − dZ)×2/3) → Dense(dX − (dX − dZ)×1/3) → Output(dX). If dX = ...
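Under this interpolation rule the layer sizes are pure arithmetic. A minimal sketch (plain Python; the function name and the rounding choice are ours, not the paper's):

```python
def layer_sizes(d_x: int, d_z: int) -> list[int]:
    """Hidden sizes shrink uniformly from d_x to d_z, then mirror back:
    Input(d_x) -> d_x - (d_x - d_z)*1/3 -> d_x - (d_x - d_z)*2/3
    -> Latent(d_z) -> (mirror) -> Output(d_x)."""
    step = lambda frac: round(d_x - (d_x - d_z) * frac)
    encoder = [d_x, step(1 / 3), step(2 / 3), d_z]
    return encoder + encoder[-2::-1]  # mirror the encoder for the decoder

print(layer_sizes(12, 3))  # → [12, 9, 6, 3, 6, 9, 12]
```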
multip as well as supervised vs. unsupervised RF embeddings. Decoder Comparison We compare the performance of three decoders that we describe in section 4: the kNN, the split relabelling and the LASSO decoder, on a smaller compression / reconstruction benchmark. We follow the same experimental setup as the previous exp...
does not have to be an objectively good choice. Given the forest's hierarchical nature, inaccuracies can compound as one traverses down the tree, until the final forest is completely dissimilar to the original. The small dZ also means more variance is introduced into the process. For the LASSO decoder, several things may h...
challenging for many learning algorithms. RFs excel in these settings, attaining out-of-bag accuracy of 98% on this task. KPC1 clearly separates the two classes in the left panel, while KPC2 appears to isolate a potential outlier within the AML cohort. Using an unsupervised adversarial RF [94], we find far greater ove...
of training samples is a common consistency condition for nonparametric models in general and local averaging estimators in particular [84].

Algorithm 1 GREEDY LEAF ASSIGNMENTS
Input: Fuzzy leaf assignments p̂ ∈ [0, 1]^dΦ
Output: Hard leaf assignments q ∈ {1, . . . , dΦ^(b)}^B
1: Initialize: t ← 0, C(t) ← ∅, S(t) ← X, conver...
in kernel methods such as Gaussian process regression [72]. While the ILP solution is generally intractable, the lasso relaxation requires O(dΦ³) operations [16] to score leaves (although dΦ can in fact be reduced to a_i = ∥k_i∥₀ for each test point i ∈ [m]; see Appx. C). The subsequent greedy leaf assignment algorithm search...
VoxAging: Continuously Tracking Speaker Aging with a Large-Scale Longitudinal Dataset in English and Mandarin Zhiqi Ai1, Meixuan Bao1, Zhiyong Chen1, Zhi Yang1, Xinnuo Li2, Shugong Xu3,∗ 1Shanghai University, China 2New York University, USA 3Xi’an Jiaotong-Liverpool University, China aizhiqi-work@shu.edu.cn, shugong.xu...
https://arxiv.org/abs/2505.21445v1
offers dense sampling, continuous weekly intervals, long time spans, and multi-modal data. A fixed decision threshold can exacerbate classification error rates even with just a few years' age difference [7]. Recent work on advanced SR models, such as ResNet34 and ECAPA-TDNN [8, 9], confirms that aging-related changes d...
VoxCeleb-AE [9] and VoxCeleb-CA [8], derived from the VoxCeleb [17] dataset (originally designed for general speaker recognition), feature imprecise age labels and limited samples per speaker (an average of 123 utterances). In contrast, continuous datasets [15, 14, 16] feature shorter session intervals and higher co...
noise reduction. We utilize multiple expert models to annotate and refine the cleaned data. Specifically, we employ a speech transcription model [23], a multi-modal emotion recognition model [24, 25], and an age estimation model [24] to label the data. The age estimation model is particularly crucial, as it assigns...
the speaker verification system deteriorates, indicating that the speaker recognition accuracy declines over time. Additionally, we use the face recognition model (ArcFace [21]) as the baseline for aging analysis. Compared to the speaker verification model, ArcFace demonstrates greater robustness to facial aging, d...
the speaker similarity score decline. The decay rate of speaker similarity clearly differs between English and Mandarin. For the English average trend, it takes about 500 weeks (∼10 years) for the speaker similarity to fall below the 0.5 threshold, while for the Mandarin average tre...
aging. Additionally, speaker similarity scores decline significantly over time. The impact of age and gender on speaker aging shows that the 40–50 age group and female speakers exhibit more pronounced voice deterioration. 6. References [1] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: A litera...
long-term and short-term time-varying speaker verification," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. [17] A. Nagrani, J. S. Chung, and A. Zisserman, "VoxCeleb: A large-scale speaker identification dataset," in Interspeech 2017, 2017, pp. 2616–2620. [18] L. Li, X. Li, H. Jiang, C. Ch...
arXiv:2505.21457v1 [cs.CV] 27 May 2025 ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO∗ Muzhi Zhu1,2, Hao Zhong1, Canyu Zhao1, Zongze Du1, Zheng Huang1, Mingyu Liu1, Hao Chen1, Cheng Zou2, Jingdong Chen2, Ming Yang2, Chunhua Shen1 1Zhejiang University, China 2Ant Group, China Abs...
https://arxiv.org/abs/2505.21457v1
two potential regions to consider: **Region 1**: This area is near the center of the image, slightly to the left. It includes a traffic light with a red signal illuminated. This is a strong candidate because it's a common place for traffic lights to be located. ... </think> Figure 1: Zero-shot reasoning on the V∗benchm...
visual understanding tasks [7, 19, 20, 21, 9, 22, 23, 24, 10, 25]. While supervised learning and instruction tuning remain the dominant approaches for training MLLMs, several limitations persist, such as aligning model behavior with human preferences and handling complex reasoning tasks. Reinforcement Learning (RL) has been int...
only perceive but also interact purposefully with their environments to accomplish complex goals. Meanwhile, there is a clear trend toward integrating Multimodal Large Language Models (MLLMs) as the central reasoning modules, or "brains," of embodied AI systems [15, 14, 38]. In this context, enabling MLLMs with active per...
on the deterministic mapping g(st) and do not explicitly model observation noise.

Prompt for ACTIVE-O3 Detection
• "Find up to three different regions in the image that likely contain a high number of '{object}'."
• "Even if the '{object}' are not clearly visible, infer where they are most likely to appear."
• "Eac...
image I and instruction I, we first generate a global observation o_init by resizing I. A shared multi-modal large language model (MLLM) is treated as a unified policy π that generates a textual response y, containing both intermediate reasoning and action outputs, conditioned on the visual input and instruction, i.e., π(y|o...
w_n(θ) = π_θ(y_n | o_init, I_O) / π_θold(y_n | o_init, I_O) is the importance ratio between current and behavior policies⁴, A_n is a normalized reward-based advantage for sample n, and π_ref is a frozen reference policy (e.g., the base MLLM) used to regularize the update.

A_n = (r_n − mean({r_1, . . . , r_N})) / std({r_1, . . . , r_N})   (3)

4.3 Dual-Form...
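Eq. (3) is plain group-wise standardization of rewards. A minimal sketch (Python; whether a population or sample standard deviation is used, and how a zero deviation is guarded, are our assumptions, not stated in this excerpt):

```python
import statistics

def grpo_advantages(rewards):
    """Eq. (3): A_n = (r_n - mean({r_1..r_N})) / std({r_1..r_N})."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std; an epsilon guard is typical
    if sigma == 0:
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # → [1.0, -1.0, 1.0, -1.0]
```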
4 In our implementation, we adopt a single-update variant of GRPO where π_θold = π_θ during training. 5 Experiments <think> To identify areas with many people, I look for vehicles and related urban context. Three key regions stand out: 1. Left side: Two motorcycles, one with a rider carrying a box, another with a rider, indi...
the LVIS dataset [41], known for its rich long-tail vocabulary and abundance of small, densely packed objects. For small object grounding, we use instances under 100 pixels; for dense grounding, we select images with over 15 annotated instances. In both cases, Table 1: Comparison of grounding and detection performan...
after interaction. Effect of Zoom-in Budget. Figure 5 compares QWEN2.5-VL-COT and ACTIVE-O3 under different zoom-in budgets. While both start at the same initial mIoU, QWEN2.5-VL-COT suffers performance degradation as the budget increases, dropping to 0.561 at budget 3. This is due to its tendency to zoom into incorrect...
region proposals whose areas fall within a reasonable proportion of the image:

AreaRatio(b_i) = (x_2 − x_1 + 1)(y_2 − y_1 + 1) / (W · H)

R_area({b_i}) = 1 if ∀i, r_min ≤ AreaRatio(b_i) ≤ r_max, and 0 otherwise, with r_min = 0.01, r_max = 0.5.

B.4 Coverage-Based Reward R_coverage This reward evaluates how well the proposed regions align with task-relevant ar...
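The two formulas above can be sketched directly (Python; the function names are ours):

```python
def area_ratio(box, W, H):
    """AreaRatio(b) = (x2 - x1 + 1)(y2 - y1 + 1) / (W * H) for b = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x2 - x1 + 1) * (y2 - y1 + 1) / (W * H)

def r_area(boxes, W, H, r_min=0.01, r_max=0.5):
    """R_area = 1 iff every proposal's area ratio lies in [r_min, r_max]."""
    return 1.0 if all(r_min <= area_ratio(b, W, H) <= r_max for b in boxes) else 0.0

# A 100x100 box in a 1000x1000 image covers exactly 1% of the area.
print(r_area([(0, 0, 99, 99)], 1000, 1000))  # → 1.0
```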
is, under a given number of allowed sensing actions, the objective becomes maximizing task reward. This is the setup we adopt in our experiments. Remark D.3 (2D Setting as a Single-Step Active Perception Problem). A key property of the 2D visual scenario is that the environment state s_env remains static across time (sin...
MLLMs may struggle to accurately identify domain-specific categories (e.g., windmills,

Prompt for ACTIVE-O3 Detection
• "Find up to three different regions in the image that likely contain a high number of '{object}'."
• "Even if the '{object}' are not clearly visible, infer where they are most likely to appear....
format." Figure 9: Prompt for the task model M_A. E.2 Implementation Details We use Qwen2.5-VL-7B-Instruct as the shared policy backbone π_θ. All experiments are conducted using GRPO with KL regularization coefficient β = 0.04, group size 8, and a learning rate of 1e−6 using the AdamW optimizer with weight decay 0.01. Tra...
the model’s generalization ability in these more challenging domains. ThinObjects. We adopt the ThinObjects [ 44] dataset for this task, as it provides both semantic annotations and high-quality, fine-grained segmentation masks, making it suitable for evaluating interactive segmentation under zoom-in conditions. One co...
-O3 generalizes well to previously unseen tasks. We highlight several challenging cases involving OCR (Figures 10, 1) and attribute recognition (Figures 11, 12) where base models struggle. In contrast, ACTIVE-O3 can successfully complete the task by leveraging its ability to reason and zoom in adaptively. G.2 Small O...
To determine the color of the van, we need to look for any vehicles in the image. Since there are no vans visible in the provided image, we can infer that the van's color might be present in the background or on the left side of the image where there are some buildings and structures. However, without a clear view of a...
2. Bottom Right: The mask cuts off part of the harp (under-segmentation). 3. Middle: A visible gap where the mask is missing (under-segmentation). These distinct regions clearly illustrate segmentation errors without significant overlap. Figure 15: Interactive segmentation analysis on ThinObjects. ACTIVE-O3 identifies spec...
arXiv:2502.13130, 2025. 2 [13] Gemini Robotics Team, Saminda Abeyruwan, Joshua Ainslie, Jean-Baptiste Alayrac, Montserrat Gonzalez Arenas, Travis Armstrong, Ashwin Balakrishna, Robert Baruch, Maria Bauza, Michiel Blokzijl, et al. Gemini robotics: Bringing AI into the physical world. arXiv preprint arXiv:2503.20020,...
Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. 3 [27] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepse...
Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159 , 2024. 9 [41] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition ,...
LazyVLM: Neuro-Symbolic Approach to Video Analytics
Xiangru Jian∗ University of Waterloo xiangru.jian@uwaterloo.ca
Wei Pang∗ University of Waterloo w3pang@uwaterloo.ca
Zhengyuan Dong∗ University of Waterloo zhengyuan.dong@uwaterloo.ca
Chao Zhang∗ University of Waterloo chao.zhang@uwaterloo.ca
M. Tamer Özsu University of W...
https://arxiv.org/abs/2505.21459v1
the entire context window, further exacerbating inefficiencies. Therefore, using VLMs out of the box for video query processing leads to low system efficiency. To overcome the above issues, we propose LazyVLM, a neuro-symbolic approach designed for scalable video analysis. LazyVLM introduces a semi-structured text i...
integration enables a powerful and efficient framework for querying open-domain video data. The remainder of this paper provides a detailed overview of LazyVLM's architecture and interaction mechanisms. 2 SYSTEM OVERVIEW Figure 1 illustrates the processing pipeline of LazyVLM. Video data is first preprocessed to gene...
a vehicle in the same frame), while (6) sequencing queries enforce temporal order between events (e.g., detecting a person walking before entering a car). Additionally, (7) window queries constrain events within a defined time duration (e.g., detecting a car stopping within 10 seconds after a pedestrian appearing)....
Generation, Relationship Matching and Refinement, and Temporal Matching. Entity Matching. For each entity defined in the query, a vector similarity search is performed to match the textual description of the entity against the embeddings stored in the Entity Store. The result is a set of candidate entities for each q...
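The entity-matching step can be sketched as a thresholded similarity search over stored embeddings (Python; the embedding source, store layout, and threshold value are our assumptions, not LazyVLM's):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_entities(query_vec, entity_store, threshold=0.8):
    """Return ids of stored entities whose embedding is similar enough to the
    embedding of the query's textual description (threshold is hypothetical)."""
    return [eid for eid, vec in entity_store.items()
            if cosine(query_vec, vec) >= threshold]

store = {"person_1": [1.0, 0.0], "car_7": [0.0, 1.0]}
print(match_entities([0.9, 0.1], store))  # → ['person_1']
```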
matching accuracy. Step ❷: Enter Entities. Users input descriptive text labels for the entities involved in their query via a dedicated input field. In the provided example query, users define entities like "man with backpack," "bicycle," and "man in red". These entities are then listed on the interface and can be rev...
dropdown list. For the example provided, Frame 1 is set to contain the triples "man with backpack is near bicycle" and "man in red is on the left of bicycle," while Frame 2 includes "man with backpack is near bicycle" and "man in red is on the right of bicycle." Users also define temporal constraints, such as specifying Fr...
Analytics using Vision-Language Models. arXiv:2305.03785 [cs.DB] https://arxiv.org/abs/2305.03785 [10] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2024. Improving Text Embeddings with Large Language Models. arXiv:2401.00368 [cs.CL] https://arxiv.org/abs/2401.00368 [11] Renzhi Wu, P...
arXiv:2505.21478v1 [cs.CV] 27 May 2025 Policy Optimized Text-to-Image Pipeline Design Uri Gadot1,2 Rinon Gal2 Yftah Ziser2 Gal Chechik2 Shie Mannor1,2 1Technion 2NVIDIA Research Abstract Text-to-image generation has evolved beyond single monolithic models to complex multi-component pipelines. These combine fine-tuned generat...
https://arxiv.org/abs/2505.21478v1
(RL) has emerged as a powerful paradigm for fine-tuning large language models (LLMs), enabling them to optimize their outputs directly based on reward signals derived from human preferences or other evaluative metrics. Techniques such as Reinforcement Learning from Human Feedback (RLHF) have demonstrated remarkable suc...
a lack of ability to synthesize unseen flows at inference time. Our work aims to address this challenge by leveraging a policy-optimization approach for more effective exploration of the flow parameter space, coupled with a surrogate reward function which avoids the need to generate and rank a large set of images. 2 Fi...
large set of fixed flows, whose parameters were sampled uniformly from a predefined set of options. To overcome this hurdle, we propose a two-phase training strategy. In the first, we pre-train on a large set of un-scored flows. This avoids the need to generate and score images, allowing us to use a much larger set to ...
reflects strong alignment with the encoded workflows' structural patterns. Efficient Flow Representation Scheme While prior work [16] directly predicts ComfyUI JSON representations, we note that these JSONs typically contain thousands of tokens, leading to long generation times and increasing memory requirements. An in...
predict the human-preference score for the image produced by this pair. For data, we use the ComfyGen dataset D_R, which contains triplets of prompt p_i, flow f_i, and score s_i. The surrogate's loss is then:

L_R(ϕ) = Σ_{(p_i, f_i, s_i) ∈ D_R} MSE(R_ϕ(p_i, f_i), s_i).   (1)

Although the construction of the original ComfyGen dataset still requ...
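Eq. (1) is a sum of per-triplet squared errors between predicted and recorded scores. A minimal sketch (Python; the toy reward model below is purely illustrative, not the paper's surrogate):

```python
def surrogate_loss(reward_model, dataset):
    """L_R(phi): sum over (p_i, f_i, s_i) in D_R of the squared error
    between the predicted score R_phi(p_i, f_i) and the true score s_i."""
    return sum((reward_model(p, f) - s) ** 2 for p, f, s in dataset)

# Toy reward model that scores a flow by its name length (purely illustrative).
toy_model = lambda prompt, flow: len(flow) / 10
data = [("a cat", "sdxl", 0.5), ("a dog", "sd15+lora", 0.9)]
print(round(surrogate_loss(toy_model, data), 4))  # → 0.01
```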
to a set of baselines across two main metrics: (1) The GenEval [ 18] benchmark which measures prompt-adherence by using object detection and classification modules to evaluate correct object generation, placement, and attribute binding. (2) Human preference, using the CivitAI prompt-set of ComfyGen [ 16]. For the latte...
FlowRL (Ours) 1.00 0.85 0.44 0.86 0.11 0.38 0.61 - Table 1: GenEval and HPS v2 comparisons. FlowRL is on-par with ComfyGen on GenEval and outperforms all other baseline approaches in overall score. On human preference metrics, FlowRL significantly outperforms prior methods. CIs are calculated as one standard deviation ...
and without our key improvements. We evaluated the following modifications: (1) removing the component-aware reward model, (2) removing the uncertainty ensemble cutoff, (3) varying the number of BERT models in our reward ensemble, (4) dropping the SFT step (stage 1), and (5) dropping the GRPO-tuning step. For (5), we inste...
by streamlining the integration of independently trained, specialized modules. 9 References [1]Google DeepMind AlphaCode Team. Alphacode 2 technical report. https://storage. googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf , 2024. [2]Halil Beglerovic, Michael Stolz, and Martin Horn. Testing of autono...
model overoptimization. InInternational Conference on Machine Learning , pages 10835–10866. PMLR, 2023. [18] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems , 36, 2024. [19] Aaron Gratta...
Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical rea- soning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583 , 2023. [35] Yang Luo, Yiheng Zhang, Zhaofan Qiu, Ting Yao, Zhineng Chen, Yu-Gang Jiang, and Tao Mei. Freeenhance: Tuning-free...
Investigating length correlations in rlhf. arXiv preprint arXiv:2310.03716 , 2023. [51] Dominik Sobania, Martin Briesch, and Franz Rothlauf. Comfygi: Automatic improvement of image generation workflows, 2024. [52] Keiichiro Tashima, Hirohisa Aman, Sousuke Amasaki, Tomoyuki Yokogawa, and Minoru Kawa- hara. Fault-prone j...
Eric Tzeng, Yilun Du, and Dmitry Kislyuk. Large-scale reinforcement learning for diffusion models. arXiv preprint arXiv:2401.12244 , 2024. [67] Wang Zhenyu, Li Aoxue, Li Zhenguo, and Liu Xihui. Genartist: Multimodal llm as an agent for unified image generation and editing. arXiv preprint arXiv:2407.05600 , 2024. [68] M...
wind lines, clouds, above clouds, cliff, wind magic, aurora, ultra wide angle shot, cinematic style, highly detailed, extremely detailed, sharp detail, majestic, shallow depth of field, movie still, soft light, circular polarizer, colorful, wallpaper, professional illustration, anime" 3."pixar style of turtle, as a pix...
Easter bunny character covered in yeast, evil, creepy, in dark forest, fine textures, high quality textures of materials, volumetric textures, natural textures" 18."In a wondrously gleaming futuristic realm composed entirely of ripe peaches, a towering palace made of glistening peach flesh and pitted stone stands as th...
black leather coat, horror atmosphere, side view" 28."mysterious silhouette of woman from the enchanted pond, abstract art, by Minjae Lee, Carne Griffiths, Emily Kell, Geoffroy Thoorens, Aaron Horkey, Jordan Grimmer, Greg Rutkowski, extraordinary depth, masterpiece, surreal, geometric patterns, extremely detailed, boke...
Method We developed a systematic procedure to transform JSON-based workflow representations into a compact, encoded format. This process utilizes schema learning to ensure both accuracy and efficiency in data transformation. Methodology First, we infer a schema from a collection of workflow JSON files by iterating thro...
stage, only the prompt is given to the LLM, and it is tasked with generating one or more candidate flows. This setup encourages the model to learn to produce the most appropriate flow for each prompt: ">>> Prompt: {p_i} >>> Flow:" A.6.2 Reward model training For training the Reward BERT model, we utilized the "answerdo...
Robust Hypothesis Generation: LLM-Automated Language Bias for Inductive Logic Programming Yang Yang∗, Jiemin Wu∗, Yutao Yue† HKUST(GZ) {frankyangy, jieminwu, yutaoyue}@hkust-gz.edu.cn Abstract Automating robust hypothesis generation in open environments is pivotal for AI cognition. We introduce a novel framework integr...
https://arxiv.org/abs/2505.21486v1
reasoning framework, shown in Figure 1. This framework first employs a multi-agent LLM system to automate the generation of a structured language bias, particularly the predicate system, directly from raw text. Subsequently, this LLM-generated bias guides the transformation of large-scale textual data into symbolic fac...
z) form the rule's body. Multiple such Horn clauses can be assembled into a rule set, typically exhibiting an "OR-of-ANDs" structure. In such a set, if all preconditions (the body) of any individual rule are satisfied (based on background knowledge), its conclusion (the head) is considered true. 3 Related Work 3.1 I...
proposals and provides guiding feedback. Through multiple rounds of collaborative interaction between the Actor and Critic, the system automatically generates a predicate system that is highly relevant to the task, structurally sound, and compliant with the requirements of an ILP solver. Actor Agent The Actor’s role is...
fed back to the Actor for the next round of refinement. The predicate system is finalized and used for subsequent symbolic knowledge encoding and ILP learning only when it passes all checks or when a predefined maximum number of iterations (set to five in our experiments) is reached. 4.2 Symbolic Knowledge Encoding Fol...
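The Actor–Critic refinement loop described above (finalize on a full pass, or after at most five rounds) can be sketched as follows; `actor` and `critic` are hypothetical callables standing in for the LLM agents:

```python
def refine_predicates(actor, critic, max_rounds=5):
    """Actor proposes a predicate system; the Critic validates it and returns
    (passed, feedback). Iterate until all checks pass or max_rounds is hit."""
    predicates, feedback = None, None
    for _ in range(max_rounds):
        predicates = actor(feedback)          # propose, or refine from feedback
        passed, feedback = critic(predicates)
        if passed:
            break
    return predicates

# Toy stand-ins: the Actor fixes its proposal once it receives feedback.
actor = lambda fb: "pred_v2" if fb else "pred_v1"
critic = lambda p: (p == "pred_v2", "add argument types")
print(refine_predicates(actor, critic))  # → pred_v2
```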
(see Appendix for details). For each task, we first specify a set of rules and generate corresponding logical facts to construct samples, which are then further converted into natural language form using templates. Baselines: We consider two LLM-based inductive reasoning algorithms as baselines: HypoGeniC and Iterative ...
80% for training and 20% for testing. For each experiment, we perform three independent dataset generation processes and report the average results on the test set across the three runs. 6 Experiments and Results Based on the experimental setup, we evaluate our method by addressing the following research questions: RQ1...
but to varying degrees. Our method demonstrates the greatest stability, maintaining high performance even with increased rule complexity as shown in Figure 2. This resilience stems from the systematic logical decomposition provided by our approach, which effectively handles conjunction and disjunction of multiple rules...
zendo_world(A) :- has_piece(A,B), strange_oriented(B), large(B). 85.0

Table 2: Case study comparing the inputs, outputs, and performance (Acc, %) of different hypothesis generation methods on the Zendo dataset. The example shows how each method processes the same input differently, with IHR employing code-ba...
real-world data (e.g., richer textual content with highly sparse information or more ambiguous semantics) remain to be further explored and validated. Future Plan. Future research will extend this framework to broader real-world scenarios, particularly tasks requiring hypothesis generation from large-scale unstructur...
Alex Gu, Benjamin Lipkin, Cedegao E Zhang, Armando Solar-Lezama, Joshua B Tenenbaum, and Roger Levy. Linc: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. arXiv preprint arXiv:2310.15164 , 2023. 10 [19] OpenAI. Hello gpt-4o. OpenAI Blog , 2024. [20] Linlu Qiu,...
the ability to learn recursive rules. It adopts an iterative "generation–combination–constraint" process, integrating an optimal solver based on MaxSAT to progressively eliminate candidate programs that do not satisfy the optimal MDL criterion. This ensures that the remaining programs are globally optimal or near-optim...
to evaluate a model's ability to perform inductive learning on spatial relationships, object interactions, and attribute compositions. Compared to BUSINESS SHOES, which contains only a single object per sample, each ZENDO sample typically consists of multiple objects (piece), which may have spatial relationships such...
arXiv:2505.21488v1 [cs.CV] 27 May 2025 Be Decisive: Noise-Induced Layouts for Multi-Subject Generation OMER DAHARY, Tel Aviv University, Israel and Snap Research, Israel YEHONATHAN COHEN, Tel Aviv University, Israel OR PATASHNIK, Tel Aviv University, Israel and Snap Research, Israel KFIR ABERMAN, Snap Research, United S...
https://arxiv.org/abs/2505.21488v1
the sampled initial noise, creating tension with the model's prior and potentially leading to inferior results. Specifically, as the image's low frequencies are defined early in the denoising process, the initial noise plays a fundamental role in shaping the final layout of the gene...
limitations in adhering to detailed prompts, particularly those involving multiple subjects. Previous works have addressed challenges in multi-subject generation through two distinct approaches: conditioning the generation on a spatial layout or applying heuristics to attention maps to enforce the generation of eac...
Fig. 3. Our method steers the denoising process by applying iterative guidance (turquoise box) after each denoising step (orange regions). At den...
[Podell et al. 2023]. We steer the denoising process to adhere to a layout that prevents unwanted leakage among the subjects. Our key idea is to progressively define a prompt-aligned spatial layout based on features extracted from the noisy latent images along the denoising process. We then encourage the denoi...
images synthesized by the diffusion model, along with their segmentation maps. First, we randomly generate a set of prompts specifying multiple subject classes and their quantities (see full details in the supplemental). Then, we synthesize images based on these prompts, and segment them by feeding the corresponding su...
While the soft-layout represents the original model's future intent, to successfully generate multiple prompt-aligned subjects it is necessary to uphold clear subject boundaries in accordance with the prompt. To achieve this, we derive a hard-layout from the soft-layout produced by our network. More specifically, gi...
to the previous hard-layout M^t:

L_var = (1 / (k+1)) Σ_{j=0}^{k} (1 / |M^t_j|) Σ_{x_i ∈ M^t_j} sim²(S^{t−1}[x_i], μ^{t−1}_j),   (2)

where μ^{t−1}_j is the mean soft-layout feature vector of cluster j:

μ^{t−1}_j = (1 / |M^t_j|) Σ_{x_i ∈ M^t_j} S^{t−1}[x_i].   (3)

This loss promotes intra-cluster similarity, encouraging each cluster to represent...
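Eqs. (2)–(3) can be sketched with NumPy, assuming clusters are given as pixel-index lists and sim is cosine similarity (the exact similarity function, and whether the quantity is negated for gradient-based minimization, are not visible in this excerpt):

```python
import numpy as np

def cos_sim(u, v):
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def variance_loss(S, clusters):
    """L_var (Eq. 2): average over clusters j of the mean squared similarity
    sim^2(S[x_i], mu_j) between each member feature and the cluster mean
    mu_j = mean of S over the cluster (Eq. 3).
    S: (num_pixels, dim) soft-layout features; clusters: {j: [pixel indices]}."""
    total = 0.0
    for idx in clusters.values():
        mu = S[idx].mean(axis=0)                           # Eq. (3)
        total += np.mean([cos_sim(S[i], mu) ** 2 for i in idx])
    return total / len(clusters)
```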
Ranni struggle due to under-generation. On the other hand, LMD+ is able to construct the correct quantities, but is prone to generating unnatural compositions, where subjects appear disjointed from the background. Multiple Personalized Subjects. Leakage between subjects is particularly noticeable when generating pe...
the same prompt. As reported, our method achieves significantly higher diversity than the baseline, preserving the innate variability of the model’s prior, in contrast to the limited diversity of LLM-based methods. All other metrics were assessed on 200 prompts, sampled from the respective category in CompBench. Color ...
to handle complex multi-subject arrangements. As a result, any approach aiming to improve multi-subject generation, ours included, must contend with these fundamental distributional constraints. Although our method outperforms existing alternatives, there remains a ceiling imposed by the model training data, restrict...
text-to-image generation. In European Conference on Computer Vision . Springer, 432–448. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34 (2021), 8780–8794. Dave Epstein, Allan Jabri, Ben Poole, Alexei Efros, and Aleksander...
from diffusion features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8217–8227. Tuna Han Salih Meral, Enis Simsar, Federico Tombari, and Pinar Yanardag. 2024. Conform: Contrast is all you need for high-fidelity text-to-image diffusion models. In Proceedings of the IEEE/CVF C...
Alignment. arXiv preprint arXiv:2306.08877 (2023). Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. 2024a. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159 (2024). Tianhe Ren, Shilong Liu, Aili...
Zero-shot Learning in Large Foundation Models. Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. 2023. LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 22490–22499. Dewei Zhou,...
Ours    0.704  0.686  0.837  0.723  0.718
SDXL    0.568  0.660  0.746  0.676  -
A&E     0.537  0.659  0.742  0.682  -
LLM+BA  0.685  0.665  0.659  0.603  0.408
RPG     0.604  0.643  0.609  0.635  0.155
Ranni   0.259  0.445  0.729  0.579  0.679
LMD+    0.457  0.614  0.885  0.898  0.408

Table 3. Ablation user study results.
Method   Prompt-Alignment   Accuracy
w/o Ldecisiv...
during guidance, we compute Ldecisive between S^{t−1} and M^{t−1}. As evident in the second-to-right column, this approach also compromises accurate subject generation, yielding redundant subject instances due to clustering inconsistencies between timesteps. B.4 Limitations In Figure 14, we present two limitations of our ...
arXiv:2505.21497v1 [cs.CV] 27 May 2025 Paper2Poster: Towards Multimodal Poster Automation from Scientific Papers 1Wei Pang∗, 2Kevin Qinghong Lin∗, 1Xiangru Jian∗, 1Xi He, 3Philip Torr 1University of Waterloo 2National University of Singapore 3University of Oxford Project Page: https://paper2poster.github.io Abstract Academ...
https://arxiv.org/abs/2505.21497v1
We address two core challenges in scientific poster generation. Left: How to create a poster from a paper: we propose PosterAgent (Sec. 4), a framework that transforms long-context scientific papers (20K+ tokens) into structured visual posters; and Right: How to evaluate poster quality: we introduce the Paper2Poster be...
a major challenge, as generated text at the pixel level appears blurry and hard to read. (ii) Complex Visual Layouts. Tasks like website designing [7, 27, 16, 23] or slide generation [37, 2, 8, 18, 26, 29] involve intricate visual structures and require integrating diverse components. To handle such complexity, mainstream ap...
to reduce the risk of overlap with training data. Diverse Sampling. Based on the initial candidate set, we apply two filtering criteria to curate high-quality data: (1) Length Control: We deliberately include longer papers, including supplementary material, selecting PDFs that exceed 15 pages and extend up to 50 pages....
and engagement. To evaluate visual quality from both global and local perspectives, we employ two metrics: (1) We measure "Visual Similarity" between the generated posters and the author-designed posters (as ground truth) using CLIP image embeddings. This approach is favored over traditional distribution-based metrics (such as...
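A minimal sketch of this metric, assuming the CLIP image embeddings of both posters have already been computed (Python; comparing the embeddings by cosine similarity is our assumption, since this excerpt does not specify the comparison function):

```python
import numpy as np

def visual_similarity(emb_generated, emb_ground_truth):
    """Cosine similarity between the CLIP image embedding of the generated
    poster and that of the author-designed ground truth (both precomputed)."""
    a = np.asarray(emb_generated, dtype=float)
    b = np.asarray(emb_ground_truth, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(visual_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]), 4))  # → 1.0
```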