w.r.t. the supremum norm. Since the existence of such a close approximation is guaranteed by (b), the sequence {Hℓ}is asymptotically characteristic. A.2 Proof of Thm. 4.2 (Oracle consistency) This result follows trivially from the universal consistency of the RF algorithm itself. Any partition ofXthat satisfies regularity conditions (A1)-(A5) converges on the true joint distribution PXas n, dΦ→ ∞ (see [ 63, Thm. 1] and [ 23, Thm. 2]). Resulting leaves have shrinking volume, effectively converging on individual points xwith coverage proportional to the density p(x). Because there are no gaps in these leaves (i.e., no subregions of zero density), a weighted mixture of uniform draws—with weights given by the leaf coverage—is asymptotically equivalent to sampling from PX itself. A leaf assignment oracle would therefore be guaranteed to map each latent vector z∈ Z to the corresponding input point x∈ X, since leaf assignments effectively determine feature values in the large sample limit. A.3 Proof of Thm. 4.3 (Uniqueness) It is immediately obvious that when ˆK0=K∗ 0, the true leaf assignment matrix Ψ∗drives the ILP objective to zero and automatically satisfies the one-hot and overlap constraints. Our task, therefore, is to demonstrate that no other binary matrix ˆΨ̸=Ψ∗can do the same. For simplicity, consider just a single test point ( m= 1). LetF ⊂ { 0,1}dΦdenote the feasible region of leaf assignment vectors, i.e. all and only those that satisfy the one-hot and overlap constraints. A sufficient condition for our desired uniqueness result is that no two feasible leaf assignments produce the same kernel values. More formally, if there exist no ψ,ψ′∈ F such that ΠSψ⊤=ΠSψ′⊤, then the map ψ7→ΠSψ⊤is injective over F, in which case only a single solution can minimize the ILP objective. Note that this injectivity property cannot be shown to hold in full generality. As a minimal coun- terexample, consider a case where the feature space is a pair of binary variables X={0,1}2and our forest contains just two trees, the first placing a single split on X1and the second placing a single split on X2. Now, say our training data comprises just two points, x= [0,0]andx′= [1,1]. We observe the kernel entries k(x0,x) =k(x0,x′) = 1 /2. In this case, we know that x0colocates with each training sample exactly once, and therefore that it shares exactly one coordinate with each of our two training points. However, we do not have enough information to determine where these colocations occur, since both [0,1]and[1,0]are plausible feature vectors for x0. Such counterexamples become increasingly rare as the forest grows more complex. As previously noted, under (A3) and (A4), leaf diameter vanishes in probability [ 65, Lemma 2]. This is why random forests are asymptotically injective, a property we exploited in the proof for part (b) of Thm. 3.4. A 17 corollary of Meinshausen’s lemma is that for all distinct points x,x′∈ X: lim n→∞P fn(x) =fn(x′) = 0. By modus tollens, it follows that if function outputs converge on the same value, then corresponding inputs must be identical. This is important, since with fixed training labels Y, model predictions are fully determined by
kernel evaluations via Eq. 2. If two distinct vectors ψ,ψ′produce identical values when left-multiplied by ΠS, then the corresponding inputs x,x′cannot be separated by fn. Therefore, with probability tending toward 1 as sample size grows, we conclude that the ILP of Eq. 3 is uniquely solved by the true leaf assignments. A.4 Proof of Thm. 4.4 ( k-NN consistency) To secure the result, we must show that (a)˜xp→x: sampling uniformly from the intersection of each sample’s assigned leaf regions asymptotically recovers the original inputs; and (b)ˆxp→˜x:k-NN regression in the spectral embedding space is universally consistent. Item (a), which amounts to a universal consistency guarantee for the eForest method of Feng and Zhou [30], follows immediately from our oracle consistency result (Thm. 4.2), plugging in the identity function for g. Since we know the true leaf assignments for each training sample, under universal consistency conditions for RFs, we can reconstruct the data by taking uniform draws from all leaf intersections. Item (b) follows from the properties of the k-NN algorithm, which is known to be universally consistent as k→ ∞ , k/n→0[84]. B Experiments RFs are trained either using the ranger package [ 95], or the arfpackage [ 94], which also returns a trained forest of class ranger . Truncated eigendecompositions are computed using RSpectra . Memory-efficient methods for sparse matrices are made possible with the Matrix package. We use theRANN package for fast k-NN regression with kd-trees. For standard lasso, we use the glmnet package [ 32], and ExclusiveLasso for the exclusive variant [ 16]. In both cases, we do not tune the penalty parameter λ, but simply leave it fixed at a small value ( 0.0001 in our experiments). This is appropriate because we are not attempting to solve a prediction problem, but rather a ranking problem over coefficients. B.1 Reconstruction benchmark We describe the setup for the experiments presented in the main text, Sect. 5. For the compression reconstruction benchmark, we use 20 datasets sourced from the UCI Machine Learning Repository [ 29], OpenML [ 89], Kaggle, and the R palmerpenguins [52] package. In each dataset, we remove all rows with missing values. We also remove features that are not rele- vant to this task (e.g., customer names), features that duplicate in meaning (e.g., education and education_num inadult ) and date-time features. We report the subsequent in Table 1, describing for each used dataset the number of rows, features, and the proportion of categorical features in the total feature set, as a measure of the ’mixed’-ness of the tabular data. Then, our pipeline proceeds as follows: •For each dataset, we take ten different bootstrap samples, to form ten training sets, and use the remaining out-of-bag data as the testing set. We do this to ensure data is not unnecessarily unused, which could skew results in datasets with a small n. •For each dataset, we define 10 compression ratios, or latent rates, going uniformly from 0.1 (10% of the number of features) to 1 (100% the number of features). For each value, we finddZ=dX×latent_rate 18 Table 1: Summary of datasets used.
% Categorical indicates the proportion of categorical features. # Classes indicates the total cardinality of all the categorical features. Dataset Code #Samples #Numerical #Categorical # Total %Categorical #Classes Abalone abalone 4177 8 1 9 0.11 3 Adult adult 45222 5 9 14 0.64 100 Banknote Auth. banknote 1372 4 1 5 0.20 2 Breast Cancer bc 570 30 1 32 0.03 2 Car car 1728 0 7 7 1.00 25 Bank Cust. Churn churn 10000 6 5 11 0.56 11 German Credit credit 1000 7 14 21 0.67 56 Diabetes diabetes 768 8 1 9 0.11 2 Dry Bean dry_bean 13611 16 1 17 0.06 7 Forest Fires forestfires 517 11 2 13 0.15 19 Heart Disease hd 298 7 7 14 0.50 23 King County Housing king 21613 19 0 19 0.00 0 Bank Marketing marketing 45211 7 10 17 0.59 44 Mushroom mushroom 8124 0 22 22 1.00 119 Obesity Levels obesity 2111 8 8 16 0.50 30 Palmer Penguins plpn 333 5 3 8 0.38 8 Spambase spambase 4601 58 1 59 0.02 2 Student Performance student 649 16 17 33 0.52 43 Telco Churn telco 7032 3 17 20 0.85 43 Wine Quality wq 4898 12 0 12 0.00 0 •For each dataset and bootstrap, we run each method with the specified dZfor the dimension- ality of their latent embeddings. This involves training the method on the bootstrap training data, then passing the testing data through its encoding and decoding stages to produce a reconstruction. •We then compute the distortion of these reconstructions compared to the original test samples, using metrics described in the main text. For each bootstrap and dZon a dataset, we aggregate the results into a mean, and report the standard error as error bars. This standard error represents a combination of any stochastic component of the methods used, as well as the finite sample uncertainty of the used data. Within this specification, we run each method 20×10×10 = 2000 times, and for 5 methods this balloons to 10,000 runs. To meet these computational demands, we run these experiments from a high-performance computing partition, with 12 AMD EPYC 7282 CPUs, 64GB RAM, and an NVIDIA A30 graphics card. These high-performance computing units were used as part of King’s College London’s CREATE HPC [62] Next, we describe in detail each of the methods used in the reconstruction benchmark: •TV AE & TTV AE: For both the TV AE and TTV AE models, we do not make any changes to the underlying model architecture. However, we make minor modifications to not perform any sampling/interpolation at the latent stage, and simply decode the same embeddings that we acquired from the encoder. •Autoencoder: We use an MLP-based autoencoder for this benchmark. This is in contrast to a CNN-based autoencoder, which is more suited to image data, but is not relevant in tabular data tasks, where there is no inherent structure that could be extracted by convolution kernels. We structure our network to have five hidden layers, where the size of these layers are adaptive to dXanddZ. In particular, we
want to structure the network such that the size of each hidden layer reduces uniformly from dXtodZat encoding and increases uniformly from dZtodXat decoding. Our structure then is: Input( dX)→Dense( dX−(dX−dZ)×1/3)→Dense( dX−(dZ−dZ)×2/3)→ Latent( dZdZ)→Dense( dX−(dX−dZ)×2/3)→Dense( dX−(dX−dZ)×1/3)→ Output( dX). IfdX= 8, dZ= 2then with this rule the network will be 8→6→4→2→4→6→8. For hyperparameters, we use common defaults: epochs = 50 ,optimizer = ADAM . We use ReLU activations at hidden layers, a sigmoid for the output, and a random 10% validation set from the training data. 19 Table 2: Mean distortion (with standard deviation) for each method and dataset, across all dZvalues used and all runs. Best result per row is bolded. Dataset RFAE TV AE TTV AE AE V AE abalone 0.167 (0.002) 0.309 (0.005) 0.260 (0.003) 0.230 (0.025) 0.211 (0.006) adult 0.326 (0.007) 0.158 (0.005) 0.195 (0.007) 0.401 (0.003) 0.391 (0.004) banknote 0.100 (0.012) 0.312 (0.013) 0.276 (0.023) 0.724 (0.023) 0.771 (0.013) bc 0.333 (0.003) 0.564 (0.003) 0.359 (0.005) 0.287 (0.008) 0.578 (0.003) car 0.320 (0.011) 0.195 (0.014) 0.107 (0.015) 0.349 (0.012) 0.313 (0.011) churn 0.352 (0.012) 0.603 (0.011) 0.422 (0.014) 0.861 (0.005) 0.731 (0.006) credit 0.315 (0.004) 0.450 (0.005) 0.375 (0.011) 0.450 (0.005) 0.456 (0.004) diabetes 0.479 (0.016) 0.726 (0.007) 0.643 (0.014) 0.799 (0.011) 0.895 (0.004) dry_bean 0.137 (0.002) 0.273 (0.002) 0.303 (0.008) 0.083 (0.014) 0.206 (0.001) forestfires 0.575 (0.008) 0.804 (0.003) 0.705 (0.008) 0.782 (0.007) 0.790 (0.003) hd 0.432 (0.008) 0.582 (0.003) 0.605 (0.006) 0.892 (0.003) 0.916 (0.002) king 0.308 (0.008) 0.352 (0.006) 0.348 (0.008) 0.377 (0.011) 0.518 (0.004) marketing 0.292 (0.009) 0.304 (0.005) 0.259 (0.011) 0.357 (0.007) 0.372 (0.004) mushroom 0.083 (0.001) 0.093 (0.003) 0.011 (0.003) 0.055 (0.004) 0.035 (0.004) obesity 0.227 (0.008) 0.354 (0.004) 0.299 (0.008) 0.306 (0.009) 0.358 (0.003) plpn 0.176 (0.006) 0.282 (0.006) 0.224 (0.011) 0.384 (0.013) 0.410 (0.009) spambase 0.558 (0.005) 0.825 (0.002) 0.807 (0.003) 0.446 (0.010) 0.784 (0.001) student 0.371 (0.002) 0.424 (0.001) 0.426 (0.004) 0.536 (0.003) 0.551 (0.002) telco 0.177 (0.003) 0.155 (0.003) 0.091 (0.007) 0.128 (0.005) 0.130 (0.005) wq 0.240 (0.005) 0.691 (0.008) 0.759 (0.006) 0.467 (0.019) 0.708 (0.004) Average Rank 1.80 3.38 2.45 3.27 4.10 •Variational Autoencoder: Similar to the autoencoder, we use an MLP-based variational autoencoder. For comparison, we mimic the autoencoder’s architecture, activation function, and defaults, only changing epochs = 100 and adding batch_size = 32 . We also impose a βcoefficient inspired from [48]. However, we select a β= 0.1, to avoid pos The RF training approach we use here is the adversarial random forests (ARFs) [ 94]. Using this algorithm allows us to train the forest unsupervised, and preserve all features. Other approaches involving unsupervised random forests can also be used, such as the ones proposed in Shi and Horvath [80] or Feng and Zhou [30]. However, we choose ARF on the basis that our decoders’ complexity scale on dΦ, which is smaller for the ARF than others, because it learns its structure over several iterations of training. We present the results in Table 2 corresponding to the plots in Fig. 4. B.2 Additional results We present the results of additional experiments comparing different decoding methods,
multip as well as supervised vs. unsupervised RF embeddings. Decoder Comparison We compare the performance of three decoders that we describe in section 4: the kNN, the split relabelling and the LASSO decoder, on a smaller compression / reconstruction benchmark. We follow the same experimental setup as the previous experiment, but only use two datasets of small size ( credit &student ), and for each of these, we only use the first 5 bootstrap splits. This is because we only aim to illustrate the performance of these decoders side by side, and also because of the both the relabeling and LASSO’s decoder much higher complexity compared to kNN. We maintain the RFAE setup as previously described in Appx. B.1. To make comparison easier, for each iteration we run the encoding stage once, and apply the three decoders to the same embeddings. We describe the relabeling and the LASSO decoder in more detail: •Relabeling Decoder: For the relabeling decoder, we take the original RF object, and go through each split to find a corresponding split with the data on the latent space with the highest simple matching coefficient (SMC) to replace it. This process is global, so all data points are used in each split. 20 An alternative approach to this is to re-find splits locally, i.e., only use the data located at each split in the original forest. However, in practice, the lack of training points at high depths will cause the relabeled splits to be inaccurate, so we use the global approach. Once the forest is ’relabeled’, we can pass the embeddings for test data through it, and sample by using the leaf bounds of the original forest. •LASSO Decoder: For the LASSO decoder, we follow the derivations in the main text and compute the kernel matrix ˆK0=Z0ΛZ†, and recover the leaf assignments for test samples via the LASSO & greedy leaf search algorithm in Appx. C. Once the leaf assignments are recovered, we once again sample by using the forest’s leaf bounds. To reduce complexity, we impose a sparsity on the number of training samples allowed in the LASSO, sorted by the the values of the estimated kernel matrix. Because in most cases, a test sample will only land in the same leaf with a small number of training samples, even across an entire forest, this allows us to narrow the search space significantly with minimal impact on the model performance. For our datasets with 649 and 1000 samples, we set sparsity = 100 . We present the results in figure 6, where the kNN method is denoted as RFAE. credit student 0.25 0.50 0.75 1.00 0.25 0.50 0.75 1.000.40.50.60.7 0.30.40.50.60.7 Compression FactorDistortion Method RFAE LASSO RELABEL Figure 6: Compression-distortion trade-off on student andcredit . Shading represents standard errors across five bootstraps. From this plot, we can see that the kNN decoder dominates performance. Several reasons can explain the poor performance that the other decoders display. For the relabeling decoder, because our only criteria is to find the best matching split, if all candidates are bad, the best matching split
does not have to be an objectively good choice. Given the forest’s hierarchical nature, inaccuracies can be compounded traversing down the tree, and the final forest is completely dissimilar to the original. The small dZalso means more variance is introduced into the process. For the LASSO decoder, several things may have caused this performance. First, the estimation ofˆKmay not be accurate, which is passed on to the LASSO optimization. The LASSO is also an approximate optimization, and variance here can lead to the greedy leaf algorithm to find the wrong leaf assignments for the test samples. This, in combination with the high complexity, motivate us to use the kNN decoder for RFAE. MNIST reconstruction: hyperparameter analysis Figure 7 complements Figure 3 in Section 5 by showing the effects of different parameters on the reconstruction performance for MNIST test digits: the number of trees B(Figure 7a), the number of samples ntrain_enc used for encoder training (Figure 7b), the diffusion time step t(Figure 7c), and the number kof nearest neighbors used for decoding (Figure 7d). In each subplot, the respective parameter varies in a pre-defined range, while all other ones are kept fixed at B= 1000 ,ntrain_enc = 30 000 ,t= 1, andk= 50 ; the latent dimension is set to dZ= 32 throughout. 21 B= 50 B= 100 B= 200 B= 500 B= 1000 Original (a) Varying number of decision trees Bin random forest. ntrain enc = 1000 ntrain enc = 5000 ntrain enc = 10000 ntrain enc = 30000 ntrain enc = 60000 Original (b) Varying number of training points ntrain_enc used for encoder training. t= 0 t= 1 t= 2 t= 3 t= 10 t= 50 Original (c) Varying number of time steps tfor diffusion map encoding. k= 1 k= 2 k= 3 k= 5 k= 10 k= 20 k= 50 Original (d) Varying number of nearest neighbors kused fork-NN-based decoding. Figure 7: MNIST digit reconstructions produced by RFAE with varying parameter values; original images are displayed in the bottom row. RFAE (supervised) RFAE (completely random) ConvAE Original Figure 8: MNIST digit reconstructions produced by RFAE using supervised and copmletely random forests, and by a convolutional autoencoder; original images are displayed in the bottom row. Figure 8 compares RFAE reconstructions using supervised and (unsupervised) completely random forests with those produced by a convolutional autoencoder with three convolutional layers. All models were trained on full training data and with dZ= 32 . 22 Supervised Unsupervised −1.0 −0.5 0.0 0.5 −0.4 0.0 0.4−0.50.00.5 0.00.51.0 KPC1KPC2 Class ALLAMLFigure 9: Embeddings can represent supervised or unsupervised RFs. Data from the Golub et al. [38] leukemia transcriptomics study.Supervised vs. unsupervised embeddings RF embeddings provide different perspectives on the data depending on whether we use su- pervised or unsupervised learning objectives. We train a standard RF classifier to distinguish acute lymphoblastic (ALL) from acute myeloid (AML) leukemia samples in a well known tran- scriptomic dataset [ 38]. After filtering, the feature matrix includes expression values for dX= 3572 genes recorded in n= 72 patients. These kind of short, wide datasets are common in bioinformatics, but can be
challenging for many learning algorithms. RFs excel in these settings, attaining out-of-bag accuracy of 98% on this task. KPC1 clearly separates the two classes in the left panel, while KPC2 appears to isolate a potential outlier within the AML cohort. Using an unsupervised adversarial RF [ 94], we find far greater overlap among the two classes (as expected), although ALL and AML samples are hardly uniform throughout the latent space. This example demonstrates the flexibility of our method. Whereas standard spectral embedding algorithms with fixed kernels are limited to unsupervised representations, RFAE can take any RF input. C Leaf assignments via lasso and greedy search In this section, we briefly describe the lasso relaxation of the ILP in Eq. 3. First, we split the task into mseparate subproblems, one for each test vector. Let ˆk0denote a row of ˆK0, say for test point x0. Observe that this adjacency vector is generally sparse, since most training points are unlikely to be neighbors of x0.5Leta=∥ˆk0∥0be the number of neighbors for x0, and write ˆk↓ 0∈[0,1]afor the reduction of ˆk0to just its nonzero entries. We also write L(b)↓⊆L(b)for the set of leaves to which x0’s neighbors are routed in tree b, with cardinality d(b)↓ Φ≤d(b) Φ, and d↓ Φ=P bd(b)↓ Φ. (Though the reduction operation is defined only w.r.t. some x0, we suppress the dependency to avoid clutter.) This implies corresponding reductions of Φto the submatrix Φ↓∈ {0,1}k×d↓ Φ, with one row per neighbor of x0and columns for each leaf to which at least one neighbor is routed in f; andSto S↓∈[0,1]d↓ Φ×d↓ Φ, with diagonal entries for each leaf inS bL(b)↓. Following these simplifications, we solve: min ψ∈[0,1]d↓ Φ∥Bˆk↓⊤ 0−Φ↓S↓ψ⊤∥2 2+λX b∈[B]X ℓ∈L(b)↓ψℓ2 , (4) where the penalty factor λpromotes a sparse solution with a similar effect to the one-hot constraint above. Specifically, it encourages competition both within trees (via the L1norm), and between trees (via the L2norm). Eq. 4 is an exclusive lasso problem, which can be efficiently solved via coordinate descent [ 101,16] or dual Newton methods [ 66]. The interval (rather than integer) constraints on our decision variables effectively allow for “fuzzy” leaf membership, in which samples may receive nonzero weight in multiple leaves of the same tree. Exploiting these fuzzy leaf assignments, we propose a greedy method to determine leaf member- ship. The method provisionally assigns a sample to the leaves with maximal entries in ˆψ. If any inconsistencies arise, we replace the “bad” assignments—i.e., those less likely to overlap with other assigned regions—with the next best leaves according to ˆψ. The procedure repeats until assignments are consistent. Though convergence is guaranteed, greedy search may prove inaccurate if coefficients are poorly estimated. Alg. 1 provides an overview of the greedy leaf assignment procedure. This is run for a single sample x∈ X, which has associated fuzzy leaf assignment vector ˆp∈[0,1]dΦ. We select the leaf coordinates associated with tree b∈[B]by writing ˆp(b)∈[0,1]d(b) Φ. We overload notation somewhat by writing R(b) i⊂ X to denote the hyperrectangular region associated with leaf i∈[d(b) Φ]of tree b∈[B]in step 5That neighbors vanish as a proportion
of training samples is a common consistency condition for nonpara- metric models in general and local averaging estimators in particular [84]. 23 Algorithm 1 GREEDY LEAFASSIGNMENTS Input : Fuzzy leaf assignments ˆp∈[0,1]dΦ Output : Hard leaf assignments q∈ {1, . . . , d(b) Φ}B 1: Initialize: t←0,C(t)← ∅,S(t)← X ,converged ←FALSE 2:while not converged do 3: t←t+ 1 4: forall trees b∈[B]do 5: Find leaf with maximum fuzzy value in the feasible region: q(b)(t)←arg max i∈[d(b) Φ]ˆp(b) i s.t.R(b) i∩S(t)̸=∅ 6: Let R(b) q(t)be the region corresponding to leaf q(b)(t) 7: end for 8: Let G(t) =⟨[B],E(t)⟩be a graph with edges: E(t) :={i, j∈[B] :R(i) q(t)∩R(j) q(t)̸=∅} 9: ifG(t)is complete then 10: converged ←TRUE 11: else 12: ifthe maximal clique of G(t)is unique then 13: Let C(t)⊂[B]be the unique maximal clique of G(t) 14: else if the maximal cliques of G(t)have nonempty intersection then 15: Let C(t)⊂[B]be the intersection of all maximal cliques of G(t) 16: else 17: Let C(t)⊂[B]be any maximal clique of G(t) 18: end if 19: Let S(t)←T b∈C(t)R(b) q(t)be the feasible region associated with C(t) 20: end if 21:end while 22:q←q(t) 5; and then R(b) q(t)to denote the region associated with leaf q(b)(t), which maximizes ˆp(b)among all leaves that intersect with the feasible region S(t). If the arg max in step 5 is not unique, then we select among the maximizing arguments at random. Similarly, if there are multiple non-overlapping maximal cliques, then we select one at random in step 14. (This should be exceedingly rare in sufficiently large forests.) The undirected graph G(t)encodes whether the regions associated with assigned leaves overlap. Since consistent leaf assignments require intersecting regions for all trees, the algorithm terminates only when G(t)is complete. The procedure is greedy in the sense that edges can only be added and never removed, leading to larger maximal cliques C(t)and smaller feasible regions S(t)with each passing round. While the algorithm is guaranteed to converge on a consistent set of leaf assignments, this may only be achievable by random sampling of leaves under a consistency constraint. Such uninformative results are likely the product of noisy estimates for ˆK0. Note that while the maximal clique problem is famously NP-complete [ 56], we must solve this combinatorial task at most once (at t= 1). In subsequent rounds t≥2, we simply check whether any new nodes have been added to C(t−1). Maximal clique solvers are highly optimized and have worst-case complexity O(2B/3), but are often far more efficient due to clever heuristics [ 96]. In practice, the method tends to find reasonable leaf assignments in just a few rounds. When computing leaf assignments for multiple test points, we may run Alg. 1 in parallel, as solutions are independent for each sample. D Computational complexity In this section, we study the complexity of different RFAE pipelines. The encoding step requires O(n2)space to store the adjacency matrix. For large datasets, this can be approximated by using a subsample of all training points to compute K, at the cost of losing the 24 exact functional equivalence of Eq. 2. Similar tricks are common
in kernel methods such as Gaussian process regression [72]. While the ILP solution is generally intractable, the lasso relaxation requires O(d3 Φ)operations [ 16] to score leaves (although dΦcan in fact be reduced to ai=∥ki∥0for each test point i∈[m]; see Appx. C). The subsequent greedy leaf assignment algorithm searches for the maximal clique in a graph with Bnodes for each test sample, incurring worst-case complexity O exp(B) . Though the maximal clique problem is famously NP-complete [ 56], modern solvers often execute very quickly due to clever heuristics [ 96]. We can also parallelize over the test samples, as each greedy solution is independent. Note that both the cubic term in the exclusive lasso task and the exponential term in the greedy leaf assignment algorithm are user-controlled parameters that can be reduced by training a forest with fewer and/or shallower trees. Alternatively, we could use just a subset of the RF’s trees to reduce the time complexity of decoding via constrained optimization, much like we could use a subset of training samples to reduce the space complexity of encoding. The split relabeling method is essentially an iterated CART algorithm, and therefore requires O(dZ˜nlog ˜n)time to relabel each split, where ˜nis the number of synthetic samples ˜X. How- ever, since the number of splits is generally exponential in tree depth, this can still be challenging for deep forests. The most efficient decoder is the k-NN method, which proceeds in two steps. First, we find the nearest neighbors in embedding space, which for reasonably small dZcan be completed in O(mlogn)time using kd-trees. Next, we compute the intersection of all leaf regions for selected neighbors, incurring a cost of O(kmd ΦdX). 25
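To make the two-step k-NN decoder concrete, the following is a minimal Python sketch (the packages cited above are R libraries; this is not the released implementation). It assumes that each training sample's leaf-intersection box has already been extracted from the trained forest, and names such as `knn_decode` and `x_train_leaf_bounds` are hypothetical.

```python
# Minimal sketch of the k-NN decoder, assuming x_train_leaf_bounds[i] already
# holds the axis-aligned box obtained by intersecting training sample i's
# assigned leaf regions across all trees. All names are hypothetical.
import numpy as np
from scipy.spatial import cKDTree

def knn_decode(z_test, z_train, x_train_leaf_bounds, k=50, rng=None):
    """Reconstruct test points from spectral embeddings.

    z_test  : (m, d_Z) embeddings of test samples
    z_train : (n, d_Z) embeddings of training samples
    x_train_leaf_bounds : (n, d_X, 2) per-sample [low, high] feature bounds
    """
    rng = np.random.default_rng() if rng is None else rng
    tree = cKDTree(z_train)                    # build once: O(n log n)
    _, nbr_idx = tree.query(z_test, k=k)       # step 1: O(m log n) for small d_Z
    nbr_idx = np.asarray(nbr_idx).reshape(len(z_test), -1)
    recon = np.empty((len(z_test), x_train_leaf_bounds.shape[1]))
    for i, nbrs in enumerate(nbr_idx):
        # step 2: intersect the neighbors' boxes feature-by-feature, O(k d_X)
        lo = x_train_leaf_bounds[nbrs, :, 0].max(axis=0)
        hi = x_train_leaf_bounds[nbrs, :, 1].min(axis=0)
        hi = np.maximum(hi, lo)                # guard against an empty overlap
        recon[i] = rng.uniform(lo, hi)         # uniform draw from the box
    return recon
```

Building the kd-tree once and reusing it for every test query is what keeps the first step at O(m log n) for modest dZ; the second step is a plain coordinate-wise min/max over the selected neighbors.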
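For completeness, the greedy leaf-assignment search of Alg. 1 (Appx. C) can be sketched in the same style, again under simplifying assumptions: each leaf region is available as an axis-aligned box, and an off-the-shelf maximal-clique routine (here from networkx) stands in for an optimized solver. All helper names are hypothetical.

```python
# Simplified sketch of the greedy leaf-assignment search (Alg. 1, Appx. C).
# fuzzy[b][i] is the fuzzy weight of leaf i in tree b from the exclusive-lasso
# relaxation; leaf_boxes[b][i] is that leaf's (d_X, 2) box. Hypothetical names.
import numpy as np
import networkx as nx

def boxes_overlap(a, b):
    """True if two axis-aligned boxes of shape (d_X, 2) share at least one point."""
    return bool(np.all(np.maximum(a[:, 0], b[:, 0]) <= np.minimum(a[:, 1], b[:, 1])))

def greedy_leaf_assignments(fuzzy, leaf_boxes, x_bounds, max_rounds=100):
    B = len(fuzzy)
    feasible = np.array(x_bounds, dtype=float)       # start from the full feature space
    for _ in range(max_rounds):
        # Step 5: per tree, pick the highest-weight leaf intersecting the feasible region.
        q = [next(i for i in np.argsort(fuzzy[b])[::-1]
                  if boxes_overlap(leaf_boxes[b][i], feasible))
             for b in range(B)]
        # Step 8: overlap graph over trees.
        G = nx.Graph()
        G.add_nodes_from(range(B))
        G.add_edges_from((i, j) for i in range(B) for j in range(i + 1, B)
                         if boxes_overlap(leaf_boxes[i][q[i]], leaf_boxes[j][q[j]]))
        if G.number_of_edges() == B * (B - 1) // 2:  # complete graph: consistent assignments
            return q
        # Simplification: shrink the feasible region to the largest maximal clique's
        # common intersection (the full algorithm distinguishes a few clique cases).
        clique = max(nx.find_cliques(G), key=len)
        boxes = [leaf_boxes[b][q[b]] for b in clique]
        feasible = np.stack([np.max([bx[:, 0] for bx in boxes], axis=0),
                             np.min([bx[:, 1] for bx in boxes], axis=0)], axis=1)
    return q                                         # give up after max_rounds (noisy estimates)
```

Unlike the description in Appx. C, this sketch recomputes the overlap graph from scratch every round; checking only whether newly selected leaves extend the previous clique is an easy optimization.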
VoxAging: Continuously Tracking Speaker Aging with a Large-Scale Longitudinal Dataset in English and Mandarin Zhiqi Ai1, Meixuan Bao1, Zhiyong Chen1, Zhi Yang1, Xinnuo Li2, Shugong Xu3,∗ 1Shanghai University, China 2New York University, USA 3Xi’an Jiaotong-Liverpool University, China aizhiqi-work@shu.edu.cn, shugong.xu@xjtlu.edu.cn Abstract The performance of speaker verification systems is ad- versely affected by speaker aging. However, due to challenges in data collection, particularly the lack of sustained and large- scale longitudinal data for individuals, research on speaker ag- ing remains difficult. In this paper, we present V oxAging, a large-scale longitudinal dataset collected from 293 speakers (226 English speakers and 67 Mandarin speakers) over several years, with the longest time span reaching 17 years (approxi- mately 900 weeks). For each speaker, the data were recorded at weekly intervals. We studied the phenomenon of speaker aging and its effects on advanced speaker verification systems, ana- lyzed individual speaker aging processes, and explored the im- pact of factors such as age group and gender on speaker aging research. Index Terms : speaker verification, speaker aging, longitudinal dataset 1. Introduction Speaker recognition (SR) and face recognition (FR) are widely used biometric technologies for identity authentication [1, 2]. However, both face challenges related to aging [3, 4, 5]. As people age, physiological changes in the face and vocal tract lead to gradual alterations in their features, negatively affecting the accuracy of SR and FR systems. In SR systems, the impact of aging is particularly significant [3, 4, 6, 7, 8, 9]. Aging affects the vocal cords and vocal tract, causing voiceprint features to deteriorate, which reduces the reliability of SR systems [10, 11]. Consequently, SR systems require more frequent updates to ID templates to maintain performance, as voiceprint features are highly sensitive to aging-related changes. Early research on speaker aging was limited by scarce data and the capabilities of SR models. These studies primar- ily relied on traditional speech datasets with short time spans [11, 12, 13]. For instance, [12] used SEARP pitch analysis and observed that healthy individuals exhibited less tremor during vowel production, whereas elderly individuals displayed more pronounced tremors. Similarly, [11] and [13] employed models such as GMM-UBM and found that speaker aging negatively impacted SR system performance, suggesting that incorporat- ing age-related factors could enhance accuracy. Recent studies have increasingly focused on the impact of speaker aging using cross-age speaker datasets. Research on the TCDSA dataset [4, 6, 7] shows that verification scores de- cline as the time span increases, with short-term aging effects being relatively minor. Over time, genuine speaker scores de- crease significantly, while impostor scores remain stable [6]. A *Corresponding author N years......Previous Timeline 3months~2years Samples/Speakers (sparse) Time spans Intervals (continous) ...... Timeline(1) Shot-term : (2) Long-term :Intervals: 1 week ~ N months VoxAging Timeline 17 years Intervals: N years Intervals: 1 week...... ...... ...... .... 
Figure 1: Previous short-term datasets have continuous intervals but limited time spans, while long-term datasets have long time spans with discrete intervals, both with sparse sampling. The VoxAging
offers dense sampling, continuous weekly inter- vals, long time spans, and multi-modal data. fixed decision threshold can exacerbate classification error rates even with just a few years’ age difference [7]. Recent work on advanced SR models, such as ResNet34 and ECAPA-TDNN [8, 9], confirms that aging-related changes degrade system per- formance. These effects are more pronounced in female English speakers but have a greater impact on male Finnish speakers [9]. A major challenge in speaker aging research is the scarcity of long-term data. Most existing datasets cover relatively short periods (typically around 3 months to 2 years) with a limited number of speakers, leading to data jitter and outliers. For in- stance, the CSLT-Chronos dataset [14] includes 60 speakers and 84,000 samples collected over two years. Long-term datasets, such as TCDSA, contain recordings from 17 speakers over a span of 28 to 58 years but with fewer than 10 samples per speaker [4]. The LCFSH Finnish dataset [9] only covers two discrete intervals (20 and 40 years), while the V oxCeleb dataset [8, 9], annotated with age-related face models for aging analy- sis, still lacks sufficiently dense audio evaluation data for each speaker. To address the challenges of speaker aging in SR systems, we present V oxAging, a large-scale longitudinal dataset. It in- cludes recordings from 293 speakers (226 English and 67 Man- darin) over a span of 17 years, totaling 7,522 hours, with weekly samples. Our research investigates how aging affects voice fea- tures and the performance of advanced SR models, as well as the impact of age group and gender on speaker aging.arXiv:2505.21445v1 [cs.SD] 27 May 2025 Table 1: Comparison of existing speaker aging datasets. ’-’ indicates unavailable information. ”Discrete” means datasets with long session intervals, where each ID has only a few samples. ”Continuous” means datasets with short session intervals and continuous collection. ”gradient*” indicates that the session intervals gradually increase over time. Dataset # of Spks # of Segments # of Hours # Max Span (years) # Session Intervals Language Modality Discrete TCDSA [4] 17 231 30 58 1 ∼23 years English Speech LCFSH [9] 109 15,474 - 40 20 years Finnish Speech V oxCeleb-AE [9] 670 79,063 <352 10 - English Speech V oxCeleb-CA [8] 971 92,635 <352 20 - English Speech Continous MARP [15] 60 - - 3 2 months English Speech CSLT-Chronos [14] 60 84,000 70 2 gradient* Mandarin Speech SMIIP-TV [16] 373 325,049 305 0.25 4 days Mandarin Speech V oxAging (Ours) 293 2,629,100 7,522 17 1 week English, Mandarin Speech, Video 2. VoxAging Dataset 2.1. Previous speaker aging datasets As shown in Table 1, existing speaker aging datasets can be classified into two types: discrete and continuous, based on ses- sion intervals. Discrete datasets [4, 9, 8] have long session in- tervals and limited samples per speaker, spanning several years to two decades. For instance, TCDSA [4] includes recordings from 17 speakers over a span of 28 to 58 years, but with fewer than 10 samples per speaker. LCFSH [9], a Finnish dataset, has only two time spans: 20 and 40 years. V
oxCeleb-AE [9] and V oxCeleb-CA [8], derived from the V oxCeleb [17] dataset (originally designed for general speaker recognition), feature imprecise age labels and limited samples per speaker (an av- erage of 123 utterances). In contrast, continuous datasets [15, 14, 16] feature shorter session intervals and higher collection frequencies, ranging from a few months to days. MARP [15] covers 60 speakers with a 2-month interval, CSLT-Chronos [14] includes 60 speak- ers over 2 years, with 14 sessions collected at gradient inter- vals, and SMIIP-TV [16], a recently collected dataset, tracks data from 373 individuals continuously over 3 months at a high cost. 2.2. Data description The V oxAging dataset is a large-scale, longitudinal collection compiled from 293 speakers, including 226 English speakers (112 female, 114 male) and 67 Mandarin speakers (23 female, 44 male). The dataset spans up to 17 years (approximately 900 weeks) with weekly recordings, offering dense sampling over an extended period. It contains 2,629,100 segments, amounting to 7,522 hours of audio-visual data. The data was sourced from YouTube1and Bilibili2, with channels manually filtered to en- sure high-quality videos and appropriate time spans. As shown in Table 1, the unique advantage of the V oxAging dataset lies in its continuous weekly intervals over such an extended period, setting it apart from previous speaker aging datasets, which have limited time spans or discrete intervals. Figure 2 illustrates the static distribution of the V oxAging dataset. The time span and data size for English speakers are larger, primarily because Mandarin data collection is more chal- lenging, with recordings often starting later (mostly after 2017). 1https://www.youtube.com 2https://www.bilibili.com Figure 2: VoxAging dataset distribution: (a) timespan distri- bution, (b) duration distribution. In VoxAging, there are 293 speakers: 226 English speakers (112 female and 114 male) and 67 Mandarin speakers (23 female and 44 male). For more detailed statistics, refer to the project page3. 2.3. Collection pipeline Our data cleaning process differs from traditional methods [17, 18] that rely on a single static template, as we place greater emphasis on the impact of individual aging on facial and voice features. To address this, we employ dynamic templates in the cleaning process to account for the aging of facial appearance and voice characteristics, as illustrated in Figure 3. The entire cleaning process is divided into three steps: •Step 1. Video split via multi-modal methods. We segment videos into clips using multi-modal methods, including shot boundary detection4to identify scene transitions, YOLO- world [19] for person detection, and voice activity detection [20] to isolate speech segments. The intersection of visual and audio boundaries is then calculated to define each seg- ment. •Step 2. Longitudinal data cleaning with dynamic templates. We employ dynamic templates for data cleaning and use face 3https://github.com/aizhiqi-work/voxaging 4https://www.scenedetect.com recognition [21] and speaker verification [22] models to ex- tract feature representations for each segment. Then, we apply the DBSCAN clustering algorithm to group similar speaker identities from different periods, removing noisy data and ensuring ID consistency. Finally, these dynamic tem- plates are used to refine the cleaning process for each time segment. •Step 3. Multi-experts labeling &
noise reduction. We uti- lize multiple expert models to annotate and refine the cleaned data. Specifically, we employ a speech transcription model [23], a multi-modal emotion recognition model [24, 25], and an age estimation model [24] to label the data. The age esti- mation model is particularly crucial, as it assigns age groups to each ID. During the initial data collection, we could only determine the timespan of each video, without knowing the user’s actual age. Finally, we apply speech enhancement models [26] to the high-quality data for noise reduction, fur- ther improving the accuracy of age analysis. 3. Experiments 3.1. Data setting As shown in Table 2, the data settings of V oxAging include ”X- Independent” and ”X-Dependent” configurations. • The ”X-Independent” setting consists of two subsets: V oxAging-EN (English speakers) and V oxAging-ZH (Man- darin speakers). This setup investigates the impact of aging on speaker verification systems. V oxAging-EN is divided into 11 time spans (0 to 10 years), while V oxAging-ZH is divided into 5 time spans (0 to 4 years). • The ”X-Dependent” setting, using V oxAging-EN, explores the effects of age group (V oxAging-AgeGroup) and gender (V oxAging-Gender) on speaker aging. The dataset is divided into 5 age groups. It also includes 114 male and 112 female speakers. Both analyses cover 6 time spans: 0, 2, 4, 6, 8, and 10 years. 3.2. Model setting As shown in Table 3, to investigate the impact of speaker ag- ing on state-of-the-art speaker recognition models, we first em- ployed the face recognition model ArcFace [21] as a base- line for aging. Subsequently, we evaluated seven advanced speaker recognition models [27], which demonstrated varying performances on the V oxCeleb dataset [17]. These models in- clude RDINO [28], TDNN [29], SDPN [30], ECAPA-TDNN [22], CAM++ [31], ERes2Net [32], and ERes2Net-large [32]. Among these, the best-performing model was ERes2Net-large5, achieving an EER of 0.57% on V ox-O6. 4. Results 4.1. Impact of speaker aging on advanced speaker verifica- tion systems Table 3 shows the impact of speaker aging on advanced speaker verification systems. These models perform differently on the general test set V ox-O [17], and we use Equal Error Rate (EER) to evaluate the effect of aging on the V oxAging-EN and V oxAging-ZH subsets. As the time span increases, the EER of 5https://github.com/modelscope/3D-Speaker 6https://www.modelscope.cn/models/iic/speech_ eres2net_large_sv_en_voxceleb_16k Step1: Video split via multi-modal methods Step2: Longitudinal data cleaning with dynamic templatesAPIs V: Shot B oundary Detection V: Person Detection A: Voice Activity Detection k k+1 ... k-1 ... Update Update Update Dynamic templatesID consistency ... ... ... ...APIs V&A: ID Clustering V: Face Detection & Recog. A: Speaker Verification Step3: Multi-experts labeling & noise r eduction ... ... V&A: Emotion Recog. A: Speech Transcription A: Speech Enhancement (opt.)APIs V: Age Estimation Transcript Emotion ... Age(model) Age(real)Figure 3: Illustration of the collection pipeline. Table 2: Data setting for VoxAging. Setting # of Spks # of Trails X-Independent V oxAging-EN Cross-Age 226 1.1M V oxAging-ZH Cross-Age 67 0.5M X-Dependent (EN) V oxAging-AgeGroup<30 92 3.0M30∼40 77 40∼50 31 50∼60 16 >60 10 V oxAging-GenderMale 1141.2MFamale 112
the speaker verification system deteriorates, indicating that the speaker recognition accuracy declines over time. Additionally, we use the face recognition model (ArcFace [21]) as the base- line for aging analysis. Compared to the speaker verification model, ArcFace demonstrates greater robustness to facial ag- ing, delivering exceptional performance. However, despite this robustness, recognition accuracy still declines over time, with the EER rising from 0.31% to 1.52%. In V oxAging-EN, RDINO [28] and TDNN [29] show rel- atively poor performance, as reflected by their higher initial EERs and deterioration rates of 2.98% and 1.93%, respectively. In contrast, ECAPA-TDNN [22], ERes2Net [32], CAM++ [31], and ERes2Net-Large [32] exhibit lower initial EERs and slower deterioration rates, suggesting that improving the performance of speaker recognition models can enhance their robustness against speaker aging. In V oxAging-ZH, all models display generally higher initial EERs and greater deterioration rates (significantly higher than in V oxAging-EN), but the overall trend remains consistent with V oxAging-EN. Table 3: Impact of speaker aging on advanced speaker verification systems. V oxAging-EN EER(%) ↓ V oxAging-ZH EER(%) ↓ Model V ox-O0 1 2 3 4 5 6 7 8 9 10 ∆ 0 1 2 3 4 ∆ Face Modality ArcFace [21] - 0.31 0.38 0.55 0.72 0.75 1.00 1.23 1.23 1.42 1.56 1.52 1.21 0.52 0.58 0.62 0.75 0.82 0.30 Speech Modality RDINO [28] 3.16 6.19 6.42 6.82 7.25 7.27 7.61 7.92 8.09 8.51 8.72 9.17 2.98 17.70 19.31 19.68 20.32 20.16 2.46 TDNN [29] 2.22 6.43 6.59 6.85 7.15 7.15 7.37 7.53 7.65 7.81 8.23 8.36 1.93 9.06 9.82 10.63 11.58 11.78 2.72 SDPN [30] 1.88 2.86 2.91 3.00 3.20 3.18 3.20 3.31 3.45 3.55 3.78 3.73 0.87 13.98 15.76 16.50 17.15 16.34 2.36 ECAPA-TDNN [22] 0.86 4.07 4.16 4.47 4.49 4.52 4.53 4.66 4.86 5.04 5.27 5.36 1.29 11.15 12.77 14.02 14.88 14.75 3.60 ERes2Net [32] 0.83 3.02 3.26 3.40 3.61 3.56 3.57 3.67 3.75 3.87 4.08 4.20 1.18 10.30 11.08 12.18 12.57 12.43 2.13 CAM++ [31] 0 .65 3.72 3.94 4.13 4.19 4.31 4.19 4.29 4.46 4.64 4.76 4.80 1.08 12.53 14.37 15.91 16.44 16.37 3.84 ERes2Net-large [32] 0.57 2.89 3.05 3.11 3.24 3.22 3.24 3.37 3.47 3.66 3.80 3.87 0.98 10.52 11.87 12.91 13.74 13.72 3.20 However, there are some special cases. In V oxAging-EN, the initial EER and deterioration rate of SDPN [30] are compa- rable to those of ERes2Net-Large. In V oxAging-ZH, the initial EER of TDNN is relatively low, at only 9.06%. 4.2. Speaker similarity scores over time Figure 4 shows the trend of speaker similarity scores over time in V oxAging, where embeddings were extracted using ECAPA- TDNN [22]. We randomly selected 10 English and 10 Mandarin speakers from the dataset and analyzed speaker similarity using a cubic polynomial fitting method over a weekly time span. The results show that speaker similarity decreases over time from the point of enrollment. This decline is caused by age-related changes in the speakers’ voices, which emphasizes a key factor affecting the performance of speaker verification systems. In Figure 4, the black dashed line represents the average trend of
the speaker similarity score decline. It is clearly evident that there is a difference in the decay rate of speaker similarity between English and Mandarin. For the English average trend, it takes about 500 weeks ( ∼10 years) for the speaker similarity to fall below the 0.5 threshold, while for the Mandarin average trend, it takes about 400 weeks ( ∼8 years) for the speaker sim- ilarity to fall below the 0.5 threshold. 4.3. Impact of age group and gender on speaker aging Table 4 shows the impact of age group and gender on speaker aging, with embeddings extracted using ERes2Net-Large [32]. In all age groups, the performance of the speaker verification system deteriorates with age. In V oxAging-AgeGroup, the ini- tial EER for the young age group ( <30years) is relatively high, reaching 5.24% at the 10-year mark. The initial EER for the 30∼40 and 40 ∼50 age groups is lower than that of the young group, but the aging effect is more pronounced, with deteriora- tion rates of 1.50% and 1.67%, respectively. The initial EER for the 50∼60 age group is similar to that of the 40 ∼50 group, but the deterioration is slower (1.17%). For those over 60 years old, the aging effect is the least pronounced, and the overall EER remains relatively stable, with a deterioration rate of 0.30%. Overall, the experiment shows that age-related voice changes are particularly significant in the 40 ∼50 age group. In V oxAging-Gender, it is clear that male speakers have lower initial EER values than female speakers. Additionally, both genders exhibit similar trends, with EER values increasing over time. The deterioration is more pronounced in the female group, with a deterioration rate of 2.62%, reaching an EER of 6.77% at the 10-year mark, higher than the male group (4.09%). This suggests that age-related voice changes may have a more noticeable impact on female group in V oxAging. 052 200 500 800 TimeSpans (weeks) (a)0.20.30.40.50.60.70.8ScoreV oxAging-EN EN_0 EN_1 EN_2 EN_3 EN_4 EN_5EN_6 EN_7 EN_8 EN_9 Average 052 200 500 TimeSpans (weeks) (b)0.20.30.40.50.60.70.8ScoreV oxAging-ZH ZH_0 ZH_1 ZH_2 ZH_3 ZH_4 ZH_5ZH_6 ZH_7 ZH_8 ZH_9 AverageFigure 4: Speaker similarity scores over time in VoxAging. Dashed black line indicates the average aging trend. Table 4: The impact of age group and gender on speaker aging. SettingEER(%) ↓ 0 2 4 6 8 10 ∆ AgeGroup<30 4.12 4.58 4.64 4.76 5.14 5.24 1.12 30∼40 3.43 3.63 3.84 4.20 4.58 4.93 1.50 40∼50 2.57 2.70 3.10 3.26 3.62 4.24 1.67 50∼60 2.58 2.70 2.90 2.96 3.33 3.75 1.17 >60 2.82 2.91 2.74 2.91 3.22 3.12 0.30 GenderMale 3.54 3.71 3.77 3.75 3.92 4.09 0.55 Female 4.15 4.77 5.05 5.50 6.02 6.77 2.62 5. Conclusions In this paper, we present V oxAging, a large-scale longitudinal dataset. It includes recordings from 293 speakers (226 English and 67 Mandarin) over a span of 17 years, totaling 7,522 hours, with weekly samples. Our analysis of speaker aging reveals that the performance of speaker verification systems deterio- rates with age. Improving the performance of speaker recogni- tion models can enhance their resistance to speaker
aging. Ad- ditionally, speaker similarity scores significantly declines over time. The impact of age and gender on speaker aging shows that 40∼50 age group and female group exhibit more pronounced voice deterioration. 6. References [1] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM computing surveys (CSUR) , vol. 35, no. 4, pp. 399–458, 2003. [2] M. M. Kabir, M. F. Mridha, J. Shin, I. Jahan, and A. Q. Ohi, “A survey of speaker recognition: Fundamental theories, recogni- tion methods and opportunities,” IEEE Access , vol. 9, pp. 79 236– 79 263, 2021. [3] X. Qin, N. Li, S. Duan, and M. Li, “Investigating long-term and short-term time-varying speaker verification,” IEEE/ACM Trans- actions on Audio, Speech, and Language Processing , vol. 32, pp. 3408–3423, 2024. [4] F. Kelly, A. Drygajlo, and N. Harte, “Speaker verification with long-term ageing data,” in 2012 5th IAPR international confer- ence on biometrics (ICB) . IEEE, 2012, pp. 478–483. [5] K. Baruni, N. Mokoena, M. Veeraragoo, and R. Holder, “Age invariant face recognition methods: A review,” in 2021 Interna- tional Conference on Computational Science and Computational Intelligence (CSCI) . IEEE, 2021, pp. 1657–1662. [6] F. Kelly and J. H. L. Hansen, “Evaluation and calibration of short- term aging effects in speaker verification,” in Interspeech 2015 , 2015, pp. 224–228. [7] F. Kelly and J. H. Hansen, “Score-aging calibration for speaker verification,” IEEE/ACM Transactions on Audio, Speech, and Language Processing , vol. 24, no. 12, pp. 2414–2424, 2016. [8] X. Qin, N. Li, W. Chao, D. Su, and M. Li, “Cross-age speaker verification: Learning age-invariant speaker embeddings,” in In- terspeech 2022 , 2022, pp. 1436–1440. [9] V . P. Singh, M. Sahidullah, and T. Kinnunen, “Speaker verification across ages: Investigating deep speaker embedding sensitivity to age mismatch in enrollment and test speech,” in Interspeech 2023 , 2023, pp. 1948–1952. [10] “V ocal aging effects on f0 and the first formant: A longitudinal analysis in adult speakers,” Speech Communication , vol. 52, no. 7, pp. 638–651, 2010. [11] Y . Lei and J. H. Hansen, “The role of age in factor analysis for speaker identification,” in Tenth Annual Conference of the Inter- national Speech Communication Association , 2009. [12] L. A. Ramig and R. L. Ringel, “Effects of physiological aging on selected acoustic characteristics of voice,” Journal of Speech, Language, and Hearing Research , vol. 26, no. 1, pp. 22–30, 1983. [13] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, “Speaker veri- fication using adapted gaussian mixture models,” Digital signal processing , vol. 10, no. 1-3, pp. 19–41, 2000. [14] L. Wang, J. Wang, L. Li, T. F. Zheng, and F. K. Soong, “Improving speaker verification performance against long-term speaker vari- ability,” Speech Communication , vol. 79, pp. 14–29, 2016. [15] A. D. Lawson, A. R. Stauffer, E. J. Cupples, S. J. Wenndt, W. P. Bray, and J. J. Grieco, “The multi-session audio research project (marp) corpus: goals, design and initial findings,” in Interspeech 2009 , 2009, pp. 1811–1814. [16] X. Qin, N. Li, S. Duan, and M. Li, “Investigating
long-term and short-term time-varying speaker verification,” IEEE/ACM Trans- actions on Audio, Speech, and Language Processing , 2024. [17] A. Nagrani, J. S. Chung, and A. Zisserman, “V oxceleb: A large- scale speaker identification dataset,” in Interspeech 2017 , 2017, pp. 2616–2620. [18] L. Li, X. Li, H. Jiang, C. Chen, R. Hou, and D. Wang, “Cn-celeb- av: A multi-genre audio-visual dataset for person recognition,” in Interspeech 2023 , 2023, pp. 2118–2122. [19] T. Cheng, L. Song, Y . Ge, W. Liu, X. Wang, and Y . Shan, “Yolo- world: Real-time open-vocabulary object detection,” in Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition , 2024, pp. 16 901–16 911.[20] Z. Gao, Z. Li, J. Wang, H. Luo, X. Shi, M. Chen, Y . Li, L. Zuo, Z. Du, and S. Zhang, “Funasr: A fundamental end-to-end speech recognition toolkit,” in Interspeech 2023 , 2023, pp. 1593–1597. [21] J. Deng, J. Guo, X. Niannan, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” in CVPR , 2019. [22] B. Desplanques, J. Thienpondt, and K. Demuynck, “Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification,” in Interspeech 2020 , 2020, pp. 3830–3834. [23] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, “Robust speech recognition via large-scale weak supervision,” in International conference on machine learning . PMLR, 2023, pp. 28 492–28 518. [24] S. I. Serengil and A. Ozpinar, “Hyperextended lightface: A facial attribute analysis framework,” in 2021 International Conference on Engineering and Emerging Technologies (ICEET) . IEEE, 2021, pp. 1–4. [25] Z. Ma, M. Chen, H. Zhang, Z. Zheng, W. Chen, X. Li, J. Ye, X. Chen, and T. Hain, “Emobox: Multilingual multi-corpus speech emotion recognition toolkit and benchmark,” in Inter- speech 2024 , 2024, pp. 1580–1584. [26] X. Liu, H. Liu, Q. Kong, X. Mei, J. Zhao, Q. Huang, M. D. Plumbley, and W. Wang, “Separate what you describe: Language-queried audio source separation,” arXiv preprint arXiv:2203.15147 , 2022. [27] S. Zheng, L. Cheng, Y . Chen, H. Wang, and Q. Chen, “3d-speaker: A large-scale multi-device, multi-distance, and multi-dialect cor- pus for speech representation disentanglement,” arXiv preprint arXiv:2306.15354 , 2023. [28] Y . Chen, S. Zheng, H. Wang, L. Cheng, and Q. Chen, “Pushing the limits of self-supervised speaker verification using regular- ized distillation framework,” in ICASSP 2023-2023 IEEE Inter- national Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2023. [29] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudan- pur, “X-vectors: Robust dnn embeddings for speaker recognition,” in2018 IEEE international conference on acoustics, speech and signal processing (ICASSP) . IEEE, 2018, pp. 5329–5333. [30] Y . Chen, S. Zheng, H. Wang, L. Cheng, Q. Chen, S. Zhang, and W. Wang, “Self-distillation prototypes network: Learning ro- bust speaker representations without supervision,” arXiv preprint arXiv:2406.11169 , 2024. [31] H. Wang, S. Zheng, Y . Chen, L. Cheng, and Q. Chen, “Cam++: A fast and efficient network for speaker verification using context- aware masking,” arXiv preprint arXiv:2303.00332 , 2023. [32] Y . Chen, S. Zheng, H. Wang,
arXiv:2505.21457v1 [cs.CV] 27 May 2025 ACTIVE -O3 : Empowering Multimodal Large Language Models with Active Perception via GRPO∗ Muzhi Zhu1,2, Hao Zhong1, Canyu Zhao1, Zongze Du1, Zheng Huang1, Mingyu Liu1, Hao Chen1, Cheng Zou2, Jingdong Chen2, Ming Yang2, Chunhua Shen1 1Zhejiang University, China2Ant Group, China Abstract Active vision, also known as active perception, refers to the process of actively selecting where and how to look in order to gather task-relevant information. It is a critical component of efficient perception and decision-making in humans and advanced embodied agents. Recently, the use of Multimodal Large Language Models (MLLMs) as central planning and decision-making modules in robotic systems has gained extensive attention. However, despite the importance of active perception in embodied intelligence, there is little to no exploration of how MLLMs can be equipped with or learn active perception capabilities. In this paper, we first provide a systematic definition of MLLM-based active perception tasks. We point out that the recently proposed GPT-o3 model’s zoom-in search strategy can be regarded as a special case of active perception; however, it still suffers from low search efficiency and inaccurate region selection. To address these issues, we propose ACTIVE -O3, a purely reinforcement learning-based training framework built on top of GRPO, designed to equip MLLMs with active perception capabilities. We further establish a comprehensive benchmark suite to evaluate ACTIVE -O3across both general open-world tasks—such as small-object and dense- object grounding—and domain-specific scenarios, including small object detection in remote sensing and autonomous driving, as well as fine-grained interactive segmentation. Experimental results demonstrate that ACTIVE -O3significantly enhances active perception capabilities compared to Qwen-VL2.5-CoT. For example, Figure 1 shows an example of zero-shot reasoning on the V∗benchmark, where ACTIVE -- O3successfully identifies the number on the traffic light by zooming in on the relevant region, while Qwen2.5 VL fails to do so. Moreover, across all downstream tasks, ACTIVE -O3consistently improves performance under fixed computational budgets. We hope that our work here can provide a simple codebase and evaluation protocol to facilitate future research on active perception MLLM . Our code is released at: https://github.com/aim-uofa/Active-o3 . 1 Introduction Among the many components of perception, active perception, the process of selective acquisition of sensory information to achieve specific goals, has proven essential for efficient information gathering and decision making in complex environments [ 1,2,3]. For humans, active perception enables tasks such as focusing on relevant details in a cluttered scene or dynamically adjusting viewpoints to better understand ambiguous objects. Similarly, embodied agents, such as autonomous robots, must also make intelligent choices about where to look and how to look to succeed in real-world tasks [ 4,5,6]. ∗C. Shen is the corresponding author. Part of the work was done when M. Zhu was an intern at Ant Group. Preprint. Under review. Tell me the number on the traffic light. The number on the police car is 19.The number on the traffic light is 10. Qwen2.5 VL <think> To answer the question about the number on the traffic light, we need to focus on areas where traffic lights are typically located. In this image, there are
two potential regions to consider: **Region 1**: This area is near the center of the image, slightly to the left. It includes a traffic light with a red signal illuminated. This is a strong candidate because it's a common place for traffic lights to be located. ... </think> Figure 1: Zero-shot reasoning on the V∗benchmark. When asked “Tell me the number on the traffic light”, Qwen2.5 VL incorrectly refers to unrelated text. In contrast, ACTIVE -O3locates and magnifies the precise area on the traffic light, accurately answering 10 through effective spatial localization. With the recent surge in the capabilities of multimodal large language models (MLLMs) [ 7,8,9,10], these models are increasingly being integrated into robotic systems [ 11,12,13,14,15,16] as central modules for planning, reasoning, and decision-making. However, despite their impressive generalization and compositionality, current MLLMs are typically passive consumers of visual inputs, relying on static, fixed views of the environment. This contrasts sharply with the dynamic information-seeking behavior that characterizes active perception. A recent attempt to move towards active perception in MLLMs is the zoom-in search strategy proposed in GPT-o3 . Although this strategy offers a first step, it remains limited by inefficient region proposals and low target localization accuracy(see Figures 16 in Appendix), especially in dense or fine-grained scenarios. Crucially, there is still a lack of systematic frameworks and evaluation protocols to study and develop active perception capabilities within MLLMs. 2 In this paper, we proffer ACTIVE -O3, a novel reinforcement learning-based training framework built on Group Relative Policy Optimization (GRPO) [ 17], specifically designed to equip MLLM with active perception skills. We provide a formal task definition for MLLM-based active perception, and construct a comprehensive benchmark suite to evaluate performance across a wide range of tasks—from open-world grounding of small and dense objects, to domain-specific applications such as remote sensing, autonomous driving, and fine-grained segmentation. Our extensive experiments show that ACTIVE -O3substantially improves search efficiency, accuracy, and downstream task per- formance under fixed computational budgets, compared to strong baselines such as Qwen-VL2.5-CoT. Furthermore, we observe that, despite not being explicitly trained on reasoning or question answering data, ACTIVE -O3demonstrates remarkable zero-shot generalization and reasoning capabilities on challenging fine-grained understanding tasks such as V∗[18] benchmark (see Figure 1). Our primary contributions are summarized as follows: •We propose ACTIVE -O3, the first reinforcement learning framework for active perception with MLLMs, formalized via a unified two-stage policy that separates region proposal (sensing) and task execution. Our method combines structured instruction prompts with a dual-form reward design—integrating both task-aware and heuristic feedback—to guide the model toward producing diverse, interpretable, and task-effective region proposals. •We target two representative yet challenging applications—namely, small/dense object detection and interactive segmentation—and demonstrate that ACTIVE -O3significantly improves perception quality and task performance across both general-purpose and domain-specific visual tasks. 
•We establish a comprehensive benchmark and release all code, prompts, and evaluation protocols to facilitate reproducible research and future exploration in MLLM-based active perception. 2 Related Work 2.1 Reinforcement Learning for Multimodal Large Language Models Large Language Models (LLMs) and their multimodal extensions (MLLMs) have achieved impressive progress in language and
visual understanding tasks [7, 19, 20, 21, 9, 22, 23, 24, 10, 25]. While supervised learning and instruction tuning remain the dominant approaches for training MLLMs, several limitations persist—such as aligning model behavior with human preferences and handling complex reasoning tasks. Reinforcement Learning (RL) has been introduced as a promising approach to address these challenges. An early and influential example is Reinforcement Learning from Human Feedback (RLHF) [26], which was primarily developed to align model behavior with human preferences and played a central role in the success of ChatGPT [7]. A recent advancement in this direction is Group Relative Policy Optimization (GRPO), proposed in DeepSeek-R1 [17] and DeepSeek-Math [27]. GRPO introduces a novel way to estimate the advantage function using the mean and variance of rewards across a group of responses, guided by verifiable reward signals. This approach eliminates the need for a separate critic model and significantly enhances reasoning capabilities on complex problems. Concurrently, several works [28, 29, 30, 31, 32] have explored applying GRPO to MLLMs. However, these efforts mainly focus on text-centric reasoning or simple visual grounding tasks. In contrast, our work investigates how GRPO can empower MLLMs with active perception abilities, targeting visually grounded reasoning tasks that require spatial understanding and goal-directed attention. Moreover, due to the difficulty of collecting high-quality trajectories for active perception scenarios, reinforcement learning becomes even more essential in this context.

2.2 Active Perception

Active perception refers to the paradigm in which an agent intelligently and dynamically controls its own sensors or actions to achieve a specific task or goal. Early foundational work [1, 2, 3]—often termed "active vision" when focusing on visual sensors—demonstrated that by actively controlling parameters such as camera pose or sensor configuration, agents can transform otherwise ill-posed perception problems into well-posed ones. This enables more efficient and effective information gathering for tasks like object recognition, scene understanding, navigation, and manipulation. With the advent of deep reinforcement learning, agents are now able to learn sophisticated sensorimotor policies end-to-end from raw sensory inputs and reward signals [33, 34, 35, 36, 37, 6], without the need for explicit models of environmental uncertainty or information gain.

Figure 2: Overview of the proposed Active-O3 framework. Given a multimodal query (e.g., "find all coins"), traditional task models often miss or misidentify target objects due to limited perceptual coverage. Active-O3 enhances perception by allowing the model to actively propose informative subregions (zoom-in regions) based on a learnable sensing policy.
More recently, the principles of active perception have been widely embraced in the field of embodied AI [ 4,5,6,35], where agents must not
only perceive but also interact purposefully with their environments to accomplish complex goals. Meanwhile, there is a clear trend toward integrating Multimodal Large Language Models (MLLMs) as the central reasoning modules—or “brains”—of embodied AI systems [ 15,14,38]. In this context, enabling MLLMs with active perception capabilities is of critical importance for advancing the autonomy and intelligence of such systems. However, despite 4 rapid progress in MLLM research, active perception remains largely underexplored. Our work aims to bridge this gap, leveraging the strong generalization and reasoning capabilities of MLLMs to tackle challenges in active perception. 3 MLLM-based Active Perception: Definition and Analysis In this section, we provide a formal definition of active perception tasks based on multi-modal large language models (MLLMs) (see Figure 2 for our framework.) Modular View of Active Perception. Consider an embodied agent that receives a human instruction Iand is required to perform a complex physical-world task. At each time step t, the agent state is defined as st= (senv t, scam t), where senv tdescribes the environment (e.g., objects and their properties), andscam tdenotes the sensor’s pose and viewpoint. A deterministic observation function gmaps the current system state to a visual observation: ot=g(st) +ϵt, where ϵtis a stochastic noise term.2The action space is similarly factorized as at= (aenv t, acam t)∈A, where aenv tdenotes the task-oriented interaction action (e.g., grasping, pointing), and acam tcontrols the sensing parameters (e.g., moving or rotating the camera). In order to effectively interact with the environment, the agent must continuously adjust its visual perspective based on current observations to acquire more informative inputs that guide subsequent actions. Active perception can thus be modeled as a coordination between two modules: •Task Model MA: decides how to act on the environment to accomplish external tasks. It takes the current observation otand the task instruction Ias input, and outputs a task-level action: aenv t=MA(ot,I) •Sensing Model MO: decides how to control perception parameters to improve observation quality. It also takes the current observation and task instruction as input, and outputs a perception action: acam t=MO(ot,I) In our formulation, each action component primarily affects a specific part of the system state: acam t updates scam t, and aenv tupdates senv t, formalized as scam t+1=fcam(scam t, acam t), senv t+1=fenv(senv t, aenv t) where fcamandfenvare deterministic transition functions. System Dynamics. At each time step, the system operates in a closed loop as follows: 1) the sensing model selects a perception action acam t=MO(oprev t,I), which updates the sensor state via scam t←fcam(scam t, acam t); 2) the system receives a new observation ot=g(st)+ϵt; 3) based on otand I, the action model selects an interaction action aenv t=MA(ot,I), which updates the environment state as senv t+1=fenv(senv t, aenv t). Objective Function We jointly optimize the action model MAand the sensing model MOto maximize task success while minimizing perceptual cost: max MA,MOE"TX t=1R(st, aenv t)−λ·C(acam t)# where R(st, aenv t)denotes the task-level reward (e.g., success or progress), C(acam t)is the cost of the sensing action (e.g., viewpoint shift or latency), and λis a balancing factor. 2In this paper, we focus
on the deterministic mapping g(st)and do not explicitly model observation noise. 5 Prompt for ACTIVE -O3Detection •“Find up to three different regions in the image that likely contain a high number of ‘ {object} ’.” • “Even if the ‘ {object} ’ are not clearly visible, infer where they are most likely to appear." • "Each region should cover multiple ‘ {object} ’ and include some visual context.” •“The selected regions should be as distinct as possible, with minimal or no overlap between them.” • “Return the coordinates in JSON format as: {“bbox_2d”: [x1, y1, x2, y2], “label”: “ {object} - dense region”}.” •“Explain your reasoning in <think>...</think> and output the final result in <an- swer>...</answer>.” • “Example: <think> thinking process here </think> <answer> JSON format here </answer>” Figure 3: Prompt for A CTIVE -O3-DET. Specialization to 2D Visual Scenes While our general formulation applies to embodied agents in complex physical environments, such settings are often difficult to deploy and evaluate in a reproducible manner. To facilitate more controlled and fair comparisons, we specialize the problem to a simplified yet representative 2D scenario: active perception over static images. In this setting, the environment state senv tis a high-resolution static image I∈RH×W×3. The sensing action acam tspecifies a rectangular region within I, parameterized as a bounding box (x, y, w, h )3. The observation otis obtained by cropping the region defined by acam tfrom Iand resizing it to a fixed resolution : ot=ResizeCrop (I, acam t) The task model MAthen operates on the selected region to perform downstream functions such as classification, detection, or answering visual questions. This setting preserves the core challenge of active perception—selecting informative views—while simplifying execution and enabling systematic evaluation. Objective in 2D Active Perception A key property of the 2D visual scenario is that the environment statesenvremains static across time (since the interaction action aenv tdoes not change the image). In the 2D setting, we assume a fixed task model MAand focus on learning a sensing policy MO that selects Kinformative regions from a static image Ibased on an initial observation oinitand instruction I. Here, oinitrepresents a low-resolution global view of the image (e.g., a thumbnail), which serves as a coarse prior for guiding the selection of detailed regions. The goal is to maximize overall task performance under a fixed sensing budget. Formally, the optimization objective is: max MOEI,I"KX k=1R(MA(ok),I)# ,where{acam k}K k=1=MO(oinit,I) ok=ResizeCrop (I, acam k)(1) We treat active perception in this static 2D setting as a single-step decision problem ( T= 1). As a result, the index krefers to parallel candidate sensing actions rather than time steps. 4 A CTIVE -O3 Building on the formulation in the previous section, we now present ACTIVE -O3, a unified framework for MLLM-driven active perception in vision-language tasks. We target two representative and challenging applications: (1) small object detection/grounding and (2) interactive segmentation. Both tasks require selecting multiple informative regions from an image before performing task-specific action. 3We focus on axis-aligned rectangular regions and omit rotation for simplicity, although it can be incorporated into the action space. 6 Given an
image Iand instruction I, we first generate a global observation oinitby resizing I. A shared multi-modal large language model (MLLM) is treated as a unified policy πthat generates a textual response y—containing both intermediate reasoning and action outputs—conditioned on the visual input and instruction, i.e., π(y|o,I). The MLLM is then guided by two prompts: IOfor proposing regions, and IAfor performing task- specific operations. We extract actionable components from yvia task-specific parsers tailored to each subtask. In this setup: •Sensing module: MO(oinit,IO) := Parse cam(π(y|oinit,IO)) which produces Kcandidate perception actions {acam k}K k=1parsed from the full response. •Task module: MA(ok,IA) := Parse env(π(y|ok,IA)) which operates on the k-th region crop and produces the final task-level output aenv k. In detection-style tasks, aenv kshares the same state as acam k—a bounding box list; the distinction lies in their roles: acamselects candidate regions for further inspection, while aenvexpresses the final localization prediction. We evaluate the alignment between aenv 1:Kand the ground truth boxes GT box={(x1, y1, x2, y2)}using standard detection metrics such as Average Precision (AP) and Average Recall (AR). 4.1 Sensing Policy via MLLM To enable active perception without additional supervised fine-tuning (SFT), we leverage the instruction-following and reasoning capabilities of MLLMs to implement the sensing policy MOvia prompting. This zero-shot setup serves as the necessary starting point for our subsequent reinforce- ment learning (RL) optimization, which assumes the initial model has non-trivial performance. We design a task-specific instruction prompt IO(Figure 3) to guide MOin producing meaningful and diverse region proposals acam 1:K. The prompt serves three key purposes: •Format regularization: The prompt enforces a structured output format and encourages step-by- step reasoning using tags such as <think> and<answer> . •Task guidance: It introduces domain-specific priors, such as encouraging the model to: –infer likely object locations even when objects are not clearly visible, –select spatially diverse and minimally overlapping regions, –prefer regions with sufficient surrounding context to support downstream decisions. These constraints help MOgenerate interpretable and effective sensing actions that form the basis for active region selection. 4.2 Policy Improvement with GRPO While the prompt-based sensing policy MOprovides a strong initialization, it lacks adaptability to task-specific feedback. A central challenge is that the utility of a sensing action acamcannot be evaluated in isolation—it must be judged by its downstream effect on task performance via MA. This indirect supervision makes it difficult to provide ground-truth labels or optimal targets for training MO. Moreover, we desire π(y|oinit,IO)to produce not only candidate regions but also intermediate reasoning traces, which are inherently difficult to supervise through standard imitation learning or SFT. To overcome these challenges, we adopt a reinforcement learning–based approach that enables MO—or more precisely, the underlying language policy π—to improve itself based on task-level reward signals. We apply GRPO , a lightweight method that avoids the need for training a separate critic model. Let πθdenote the current policy and πθoldthe behavior policy used to sample Nresponses {yn}N n=1. Each response contains reasoning and candidate region proposals parsed as acam 1:K=Parse cam(yn). 
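To make the parsing step concrete, the sketch below shows one way such a Parse_cam-style extractor could look, assuming responses follow the <think>/<answer> format with a JSON list of bbox_2d entries as requested by the prompt in Figure 3. The function name, error handling, and region cap are illustrative choices of ours, not the released implementation.

```python
import json
import re

def parse_cam(response: str, max_regions: int = 3):
    """Extract candidate sensing actions (bounding boxes) from an MLLM response.

    The response is expected to contain reasoning inside <think>...</think> and a
    JSON list of {"bbox_2d": [x1, y1, x2, y2], "label": ...} objects inside
    <answer>...</answer>. Returns a list of [x1, y1, x2, y2] boxes (empty on failure).
    """
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL | re.IGNORECASE)
    if match is None:
        return []
    try:
        items = json.loads(match.group(1).strip())
    except json.JSONDecodeError:
        return []
    boxes = []
    for item in items:
        box = item.get("bbox_2d") if isinstance(item, dict) else None
        if isinstance(box, list) and len(box) == 4:
            x1, y1, x2, y2 = map(float, box)
            if x2 > x1 and y2 > y1:          # discard degenerate boxes
                boxes.append([x1, y1, x2, y2])
    return boxes[:max_regions]               # keep at most K candidate regions


# Example: a well-formed response yields the K parsed candidate regions.
demo = ('<think>Two dense clusters stand out.</think>'
        '<answer>[{"bbox_2d": [198, 367, 370, 524], "label": "coin-dense region"},'
        ' {"bbox_2d": [364, 342, 446, 412], "label": "coin-dense region"}]</answer>')
print(parse_cam(demo))   # [[198.0, 367.0, 370.0, 524.0], [364.0, 342.0, 446.0, 412.0]]
```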
The training objective is:

$$ J_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{I,\mathcal{I}}\left[ \frac{1}{N}\sum_{n=1}^{N} \min\Big( w_n(\theta)A_n,\; \mathrm{clip}\big(w_n(\theta),\, 1-\epsilon,\, 1+\epsilon\big)A_n \Big) - \beta\, D_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big) \right] \tag{2} $$

where
wn(θ) =πθ(yn|oinit,IO) πθold(yn|oinit,IO)is the importance ratio between current and behavior policies4,An is a normalized reward-based advantage for sample n, and πrefis a frozen reference policy (e.g., the base MLLM) used to regularize the update. An=rn−mean ({r1, . . . , r N}) std({r1, . . . , r N})(3) 4.3 Dual-Form Reward Design The reward function rnin Eq. 2 is a crucial component of the GRPO objective. It provides feedback on the quality of the selected regions and the reasoning traces generated by the MLLM. To effectively guide learning under different supervision regimes, we design two types of reward functions: a task-aware reward that is coupled with the task model and reflects the success of the final task, and aheuristic reward that is decoupled from the task model and based on intrinsic properties of the proposed regions. Heuristic Reward. This reward evaluates a single MLLM response based on task-independent criteria that promote interpretable and spatially meaningful region proposals. It is composed of four components: •Format Validity. The response must conform to a valid structured format. We reward responses that are parseable as JSON with bounding boxes under the bbox_2d field and that include both reasoning and answer segments marked by <think> and<answer> tags. •Non-overlapping Proposals. To encourage spatial diversity, we reward proposals whose pairwise Intersection-over-Union (IoU) falls below a threshold. Responses with any overlapping regions are penalized. •Area Range Constraint. Each bounding box is required to fall within a reasonable size range relative to the image (e.g., 1% to 50%). This avoids overly small or overly large boxes that may be either noisy or uninformative. •Coverage-Based Reward. When ground truth masks or boxes are available, we assess how well the predicted regions align with task-relevant areas. This can include: (i) the proportion of ground-truth mask pixels covered by a region, (ii) the percentage of ground-truth boxes matched by at least one proposal, or (iii) the Dice/IoU between predicted and reference masks. The final heuristic reward Rheuristic (y)is computed as a weighted sum of the above components. Task-Aware Reward. The task-aware reward evaluates the quality of the selected regions based on their downstream utility as measured by task-specific performance metrics. To compute this reward, we execute the task model MAon each selected region ok, generating outputs aenv k. This requires additional forward passes of MAduring training, for which we implement an efficient batched inference system to support parallel evaluation. The form of the reward depends on the specific task: •Detection: MAreturns a set of predicted bounding boxes {bi}K i=1, which are compared against ground-truth boxes {bj}J j=1using standard metrics such as Average Precision (AP) and Average Recall (AR), based on IoU matching. •Interactive Segmentation: MApredicts interaction points (positive/negative) based on each region, which are fed to a local instance of Segment Anything (SAM) via an internal API. The resulting segmentation mask is compared against ground-truth masks using mean Intersection over Union (mIoU). This reward provides precise task-aligned feedback and is critical for fine-tuning the sensing policy toward optimal end-task performance. Formal definitions and implementation details are provided in Appendix Sections B and C.
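As a concrete illustration of Eqs. 2 and 3, the following sketch computes the group-normalized advantages and the clipped surrogate from a group of per-response rewards and log-probabilities. It is a simplified, single-action-per-response approximation under our own assumptions (e.g., the KL estimate is passed in precomputed), not the actual training code.

```python
import torch

def grpo_loss(logp_new, logp_old, rewards, clip_eps=0.2, beta=0.04, kl=None):
    """Simplified GRPO objective for one group of N sampled responses.

    logp_new / logp_old: log-probabilities of each response under the current and
    behavior policies (shape [N]); rewards: scalar reward per response. Advantages
    are normalized by the group mean and std (Eq. 3), then plugged into the clipped
    surrogate of Eq. 2. `kl` optionally holds a KL(pi_theta || pi_ref) estimate
    penalized with coefficient beta.
    """
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logp_new - logp_old)                    # w_n(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    objective = torch.minimum(unclipped, clipped).mean()      # quantity to maximize
    if kl is not None:
        objective = objective - beta * kl
    return -objective                                         # loss to minimize


# Toy usage with a group of N = 4 responses.
logp_new = torch.tensor([-4.1, -3.8, -5.0, -4.4], requires_grad=True)
logp_old = torch.tensor([-4.0, -4.0, -4.9, -4.5])
rewards = torch.tensor([1.3, 0.2, 0.9, 0.0])   # e.g., heuristic + task-aware rewards
loss = grpo_loss(logp_new, logp_old, rewards)
loss.backward()
```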
4In our implementation, we adopt a single-update variant of GRPO where πθold=πθduring training. 8 5 Experiments <think> To identify areas with many people, I look for vehicles and related urban context. Three key regions stand out: 1. Left side: Two motorcycles—one with a rider carrying a box, another with a rider—indicate multiple people. 2. Right side: A motorcycle with two riders suggests a group. 3. Center background: Several people near a construction barrier form a visible cluster. These regions show distinct groups with minimal overlap. </think> SODA-D Please find all the human in the image. Task Model Please find all the pigeons in the image. <think> To locate areas with many pigeons, I focus on clusters near the bench: 1. Under bench: Several pigeons gathered directly beneath. 2. Left of bench: More pigeons on the ground to the left. 3. Right of bench: Additional pigeons visible on the right side. These regions are distinct, centered around the bench, and each contains multiple pigeons. </think> Task Model LVIS Please find all the windmills in the image. Task Model SODA-A <think> To find areas with many windmills, I look for clusters of small white objects in open rural spaces. Three regions stand out: 1. Top left: Cluster near the top-left edge. 2. Center right: Windmills grouped toward the center-right. 3. Bottom right: Another distinct cluster in the bottom-right. Each region contains multiple windmills and is spatially distinct. </think> Figure 4: Visualization details of our proposed method on three datasets. 5.1 Compared Methods In this section, we introduce three baseline methods and a variant of ACTIVE -O3to conduct a comparison. (see Figure 4 for visualization result and more ablation results can be found in Appendix.) Grounding DINO (GDINO) [ 39].Grounding DINO is one of the strongest open-world object detection and grounding models available, and it has been widely adopted in the research community [40]. We use it as a non-MLLM-based task model MA, which performs grounding directly on images without requiring additional instruction modules such as MO. Despite its simplicity, it can handle a variety of grounding tasks effectively. Qwen2.5-VL-7B [ 10].We adopt Qwen2.5-VL-7B as an MLLM-based task model MA, allowing us to evaluate the performance of a pure MLLM on small object detection and grounding tasks, without any auxiliary guidance from MO. Qwen2.5-VL-CoT. As introduced in Section 4.1, we can formulate a sensing policy MOby prompting an MLLM with a crafted instruction IO. In this baseline, we reuse Qwen2.5-VL-7B both as the policy model MO(to generate action proposals) and as the task model MA(to execute the proposed actions aenv k). This setup tests the effectiveness of using a single MLLM for both sensing and acting. ACTIVE -O3+ GDINO. Although ACTIVE -O3uses a unified MLLM model πto instantiate both MAandMOduring RL training, it allows decoupling at test time. In this variant, we replace the action model MAwith Grounding DINO, while retaining the original MOfrom ACTIVE -O3. This configuration tests whether ACTIVE -O3’s sensing policy can generalize when paired with a stronger, specialized task model. 5.2 Open-World Small/Dense Object Grounding Dataset. We build our benchmark on
the LVIS dataset [ 41], known for its rich long-tail vocabulary and abundance of small, densely packed objects. For small object grounding, we use instances under 100 pixels; for dense grounding, we select images with over 15 annotated instances. In both cases, 9 Table 1: Comparison of grounding and detection performance on LVIS small andLVIS dense. Numbers in parentheses denote improvements over the corresponding baseline. MethodLVIS small LVIS dense APs ARs APs ARs APm ARm APl ARl Qwen2.5-VL 1.2 1.8 1.6 2.0 9.7 11.0 15.0 18.7 GDINO 0.5 1.2 5.7 6.3 20.2 22.5 40.2 44.9 Qwen2.5-VL-CoT 1.2 2.2 2.5 3.5 11.2 14.4 20.3 25.8 ACTIVE -O3 2.2 (+1.0) 4.6 (+2.8) 4.3 (+2.7) 5.5 (+3.5) 14.3 (+4.6) 19.7 (+8.7) 20.9 (+5.9) 33.3 (+14.6) ACTIVE -O3+GDINO 1.2 (+0.7) 2.5 (+1.3) 7.0 (+1.3) 7.9 (+1.6) 25.1 (+4.9) 29.3 (+6.8) 45.1 (+4.9) 55.9 (+11.0) we replace <object> in instruction IOwith the target category. We sample 10,000 training images and 1,200 validation images, ensuring each category appears no more than three times in the test set for balance. Results. This benchmark is challenging due to small, densely packed objects. As shown in Table 1, both GDINO and Qwen2.5-VL struggle in this setting. In contrast, ACTIVE -O3outperforms Qwen2.5-VL and its CoT variant, improving AP s/ARsby +1.0/+2.8 on LVISsmall , and by +2.7/+3.5 onLVISdense . It also improves AR lby +14.6 in large-object retrieval. When paired with GDINO, ACTIVE -O3+GDINO achieves 7.0 AP sand 7.9 AR s, surpassing GDINO by +1.3/+1.6. These results highlight ACTIVE -O3as a strong and generalizable sensing policy MOfor complex, open-world scenarios. 5.3 Domain-Specific Small Object Detection Dataset. To evaluate domain generalization, we use the SODA benchmark [ 42], which includes two large-scale datasets for small object detection: SODA-D (autonomous driving) and SODA-A (aerial imagery). SODA-D has 24,828 traffic images with 278,433 instances in 9 categories, while SODA-A offers 2,513 aerial images with 872,069 instances across 9 classes like vehicles and buildings. These datasets cover diverse and practical small-object detection scenarios. Table 2: Performance comparison on SODA-A andSODA-D for small object detection. Numbers in parentheses denote improvement over Qwen2.5-VL. MethodSODA-A SODA-D APs ARs APs ARs Qwen2.5-VL 0.7 1.5 2.1 4.5 GDINO 0.5 1.2 8.0 8.7 Qwen2.5-VL-CoT 3.2 5.2 7.8 15.2 ACTIVE -O3 9.2 (+8.5) 10.4 (+8.9) 15.1 (+13.0) 22.0 (+17.5)Results. Table 2 shows that ACTI- VE-O3achieves strong performance across both domains, with 9.2/10.4 APs/ARsonSODA-A and 15.1/22.0 onSODA-D . Despite the larger do- main gap in the aerial scenario, ACTI- VE-O3still outperforms Qwen2.5-VL by +8.5 AP son SODA-A, indicating robust generalization. Performance on SODA-D is even higher, suggesting that our learned sensing policy MO effectively transfers across distinct visual domains. 5.4 Fine-Grained Interactive Segmentation Dataset and Setup. We use the ThinObjects dataset for its fine-grained segmentation masks and semantic labels, ideal for evaluating zoom-in interactive segmentation. Due to the lack of a strong public task model MA, we use an oracle version that simulates perfect click-based feedback to isolate the impact of our sensing policy MO. Each sample allows up to 3 zoom-in steps, and performance is measured by mean IoU between predicted and ground-truth masks
after interaction.

Effect of Zoom-in Budget. Figure 5 compares Qwen2.5-VL-CoT and ACTIVE-O3 under different zoom-in budgets. While both start at the same initial mIoU, Qwen2.5-VL-CoT suffers performance degradation as the budget increases, dropping to 0.561 at budget 3. This is due to its tendency to zoom into incorrect regions, compounding errors in subsequent steps. In contrast, ACTIVE-O3 progressively improves to 0.863, demonstrating that our reinforcement learning policy effectively learns to identify and correct errors by selectively zooming in on challenging regions.

Figure 5: Comparison of segmentation performance (mIoU) under different zoom-in budgets.

6 Conclusion

We propose ACTIVE-O3, a reinforcement learning framework that empowers MLLMs with active perception via a two-module policy for sensing and action. Trained with task-aware and exploratory rewards, ACTIVE-O3 enables MLLMs to reason about where to look and how to act more effectively. Experiments across open-world grounding, fine-grained segmentation, and domain-specific small object detection show that ACTIVE-O3 consistently improves accuracy and efficiency under limited computational budgets, while generalizing well across diverse domains. We hope that this work encourages further research on active vision with MLLMs.

Acknowledgement

This work is supported by the Ant Group Research Intern Program.

A Appendix Overview

This appendix provides additional technical details, implementation insights, and extended results to supplement the main paper. It is organized as follows:

• Section B: Heuristic Reward Formulations. Describes the manually designed reward components used to evaluate MLLM outputs, including format validity, spatial overlap, area constraints, and coverage metrics.
• Section C: Task-Aware Reward Formulation. Defines the reward signals computed using downstream task-specific models (e.g., object detection and interactive segmentation).
• Section D: Discussion: Framework Considerations and Insights. Discusses the design choices and considerations behind our MLLM-based active perception framework.
• Section E: Method Details. Discusses implementation details of our active perception system, including MLLM prompt design, reward integration, evaluation metrics, and model configuration.
• Section F: Ablation Studies. Presents ablation experiments on different reward combinations and dataset configurations to understand the contribution of each component.
• Section G: Qualitative Visualization. Visual comparisons of model outputs, including correct cases and failure modes, to highlight model behavior under different conditions.

B Heuristic Reward Formulations

In this section, we detail the heuristic reward functions used to evaluate the quality of region proposals generated by the MLLM. Each reward component is applied to a single MLLM response y, which typically includes multiple bounding boxes {b_i} and optional reasoning traces. The final reward R_heuristic(y) is a weighted combination of the components described below.

B.1 Format Validity Reward R_format

This reward ensures the response adheres to expected syntax and structure. It includes two checks:

• JSON validity: the output must be parseable as a list of objects with bounding box fields bbox_2d.
• Response structure: the output should include the required reasoning and answer format using the tags <think> and <answer>.

$$ R_{\mathrm{format}}(y) = \begin{cases} 1, & \text{if } y \text{ is valid JSON and contains both } \texttt{<think>} \text{ and } \texttt{<answer>} \\ 0, & \text{otherwise} \end{cases} $$
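A minimal sketch of this format check is shown below; the helper name and exact parsing rules are our own illustrative choices and may differ from the released implementation.

```python
import json
import re

def format_reward(response: str) -> float:
    """Return 1.0 if the response carries both <think>/<answer> tags and a parseable
    JSON answer whose entries have a 4-element 'bbox_2d' field, else 0.0."""
    has_think = re.search(r"<think>.*?</think>", response, re.DOTALL | re.IGNORECASE)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL | re.IGNORECASE)
    if not (has_think and answer):
        return 0.0
    try:
        items = json.loads(answer.group(1).strip())
    except json.JSONDecodeError:
        return 0.0
    if not isinstance(items, list) or not items:
        return 0.0
    ok = all(isinstance(it, dict) and isinstance(it.get("bbox_2d"), list)
             and len(it["bbox_2d"]) == 4 for it in items)
    return 1.0 if ok else 0.0
```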
B.2 Non-overlapping Reward R_no-overlap

This reward penalizes overlapping region proposals to promote spatial diversity:

$$ R_{\mathrm{no\text{-}overlap}}(\{b_i\}) = \begin{cases} 1, & \text{if } \mathrm{IoU}(b_i, b_j) \le \tau \ \ \forall\, i \ne j \\ 0, & \text{otherwise} \end{cases} \qquad \text{with } \tau = 0.3 $$
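The overlap check reduces to a pairwise IoU test. The sketch below assumes boxes in [x1, y1, x2, y2] pixel coordinates and uses the threshold τ = 0.3 quoted above; it is illustrative rather than the training code.

```python
def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def no_overlap_reward(boxes, tau=0.3):
    """1.0 if every pair of proposed boxes has IoU <= tau, else 0.0."""
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if box_iou(boxes[i], boxes[j]) > tau:
                return 0.0
    return 1.0
```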
B.3 Area Range Reward R_area

We encourage region proposals whose areas fall within a reasonable proportion of the image:

$$ \mathrm{AreaRatio}(b_i) = \frac{(x_2 - x_1 + 1)(y_2 - y_1 + 1)}{W \cdot H} $$

$$ R_{\mathrm{area}}(\{b_i\}) = \begin{cases} 1, & \text{if } \forall i,\ r_{\min} \le \mathrm{AreaRatio}(b_i) \le r_{\max} \\ 0, & \text{otherwise} \end{cases} \qquad \text{with } r_{\min} = 0.01,\ r_{\max} = 0.5 $$

B.4 Coverage-Based Reward R_coverage

This reward evaluates how well the proposed regions align with task-relevant areas. It is defined in multiple modes:

• Ground-truth mask coverage: for a binary mask $M \in \{0,1\}^{H \times W}$, we compute the average proportion of mask pixels covered:

$$ \mathrm{Coverage}(b_i, M) = \frac{\sum_{(x,y)\in b_i} M(x, y)}{\mathrm{Area}(b_i)}, \qquad R_{\mathrm{mask}}(\{b_i\}) = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\big[\mathrm{Coverage}(b_i, M) \ge \theta\big] $$

• Ground-truth box coverage: we count how many ground-truth boxes have at least one matching predicted box (IoU ≥ δ), producing a coverage ratio:

$$ R_{\mathrm{gt\text{-}box}} = \frac{\#\,\text{matched GT boxes}}{\#\,\text{total GT boxes}} $$

• Mask-to-mask alignment: if both predicted and ground-truth masks are available, we compute Dice or IoU over the merged regions.

The final coverage reward can be defined as a soft combination of the above modes when applicable.

B.5 Overall Heuristic Reward

We define the total heuristic reward as a weighted sum of the components:

$$ R_{\mathrm{heuristic}}(y) = \lambda_1 R_{\mathrm{format}} + \lambda_2 R_{\mathrm{no\text{-}overlap}} + \lambda_3 R_{\mathrm{area}} + \lambda_4 R_{\mathrm{coverage}} $$

where the λ_i are all set to 1.

C Task-Aware Reward Formulation

We provide task-specific definitions of the reward signal computed from the outputs of the task model M_A.

Object Detection. Let $\hat{B} = \{b_i\}_{i=1}^{K}$ be the predicted bounding boxes and $B^* = \{b_j\}_{j=1}^{J}$ be the ground-truth boxes. The reward is computed using standard detection metrics:

$$ R_{\mathrm{detect}} = \mathrm{AP@IoU{=}0.5} + \mathrm{AR@IoU{=}0.5} $$

Interactive Segmentation. Let $\hat{M}$ be the predicted mask returned by the SAM [43] API and $M^*$ be the ground-truth mask. The segmentation reward is defined as:

$$ R_{\mathrm{seg}} = \mathrm{mIoU}(\hat{M}, M^*) = \frac{|\hat{M} \cap M^*|}{|\hat{M} \cup M^*|} $$

We generate the SAM prediction using positive and negative points inferred by M_A.
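To illustrate how these task-aware rewards could be computed, the sketch below implements the segmentation reward as a mask IoU and a simplified detection reward based on greedy one-to-one matching at IoU 0.5. The paper's reported AP/AR values come from standard COCO-style evaluation, so the detection part here is only an expository stand-in under our own assumptions.

```python
import numpy as np

def _iou(a, b):
    """IoU of two boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def seg_reward(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """R_seg: IoU between binary predicted and ground-truth masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 0.0

def detect_reward(pred_boxes, gt_boxes, iou_thr=0.5):
    """Simplified R_detect: precision + recall from greedy matching at IoU >= iou_thr,
    a rough stand-in for AP@IoU=0.5 + AR@IoU=0.5."""
    if not pred_boxes or not gt_boxes:
        return 0.0
    matched, tp = set(), 0
    for p in pred_boxes:
        best_j, best_iou = None, iou_thr
        for j, g in enumerate(gt_boxes):
            if j not in matched and _iou(p, g) >= best_iou:
                best_j, best_iou = j, _iou(p, g)
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    return tp / len(pred_boxes) + tp / len(gt_boxes)
```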
D Discussion: Framework Considerations and Insights

In this section, we provide further insights into the design of our MLLM-based active perception framework, building upon the main formulation introduced in Section 3 of the main paper. The following remarks highlight critical architectural choices and theoretical simplifications made to improve performance, efficiency, and generalization.

Remark D.1 (MLLM-Driven Action and Sensing Modules). Unlike prior approaches that use specialist models for each module, we adopt a single multi-modal large language model (MLLM) to jointly handle both action and sensing. This design offers several advantages. First, MLLMs exhibit strong capabilities in following natural language instructions and generalizing to open-ended semantic goals. Second, they can leverage rich contextual information, including long-term observation history, to make more informed and coherent decisions. Finally, in addition to predicting a^env_t and a^cam_t, MLLMs can also generate intermediate reasoning steps, which not only enhance interpretability but have also been shown to improve task performance in prior work (e.g., chain-of-thought prompting).

Remark D.2 (Optimization Strategy). In principle, the action model M_A and the sensing model M_O can be jointly optimized. However, this requires M_A to already possess sufficient baseline capability. A common alternative is to perform staged or iterative optimization, where one alternately updates M_A and M_O in a bootstrapping manner. In this work, we assume access to a reasonably strong M_A and focus on optimizing M_O accordingly, since our goal is to investigate how to equip MLLMs with effective active perception strategies. Furthermore, to simplify the problem, we reformulate the perceptual cost term as a fixed sensing budget. That
is, under a given number of allowed sensing actions, the objective becomes maximizing task reward. This is the setup we adopt in our experiments. Remark D.3 (2D Setting as a Single-Step Active Perception Problem ).A key property of the 2D visual scenario is that the environment state senvremains static across time (since the interaction action aenv tdoes not change the image). As a result, the task reduces to a single-step decision problem (T= 1), and the agent’s objective becomes repeatedly selecting an initial sensing action acam 0. This 13 What animal is drawn on that red signicade. Low Efficiency Search Inaccurate Region Selection It looks like a stylized bull (or cow) head with horns is painted on that red sidewalk sign. Figure 6: A failure case of GPT-o3 in answering the question: What animal is drawn on that red signicade?. The reasoning trajectory reveals two key limitations: inaccurate region selection (left), and inefficient, near-exhaustive search patterns (right). reframing allows for a significantly more efficient implementation: multiple candidate sensing actions can be evaluated in parallel, enabling broader exploration of the observation space without relying on sequential interaction. In this sense, 2D active perception can be viewed as a parallelized search over viewpoints within a fixed scene. Remark D.4 (GPT-o3 vs. ACTIVE -O3).The zoom-in search strategy used in GPT-o3 can be seen as a special case of the active perception framework defined in this paper. However, it suffers from two major limitations. First, its search process is purely sequential—only one region can be selected and zoomed in at a time—which leads to low efficiency. Second, its region selection is often inaccurate, resulting in unnecessary zooms and missed critical areas. In contrast, ACTIVE -O3 enables parallel selection of multiple candidate regions, improving search coverage and efficiency. Moreover, by leveraging the reasoning capability of MLLMs and optimizing the sensing policy MO through reinforcement learning, ACTIVE -O3is able to identify more informative regions under a fixed sensing budget. We further illustrate the limitations of GPT-o3 with a failure case shown in Figure 6. The task is to answer the question “What animal is drawn on that red signicade?”. The correct answer is tiger, as a stylized tiger face is clearly visible on the red sidewalk sign. However, GPT-o3 fails to accurately locate the relevant region. It initially zooms into irrelevant parts of the image—such as metallic structures and background textures—due to its limited context and short-horizon planning. As shown in the left panel of Figure 6, the chosen regions completely miss the actual sign. Moreover, as highlighted on the right side, GPT-o3’s sensing process becomes inefficient, closely resembling exhaustive grid-based search in some cases. This leads to redundant actions and poor use of the limited zoom-in budget. In contrast, ACTIVE -O3identifies more informative regions early by reasoning over spatial layout and task context, significantly improving efficiency and accuracy. D.1 Limitations and Future Work Despite the promising results, our framework has several limitations that open avenues for future research. (see Figure 16). First, the domain gap remains a challenge, particularly for specialized domains such as remote sensing. Current
MLLMs may struggle to accurately identify domain-specific categories (e.g., windmills, 14 Prompt for ACTIVE -O3Detection • "Find up to three different regions in the image that likely contain a high number of ‘ {object} ’." • "Even if the ‘ {object} ’ are not clearly visible, infer where they are most likely to appear." • "Each region should cover multiple ‘ {object} ’ and include some visual context." • "The selected regions should be as distinct as possible, with minimal or no overlap between them." •"Return the coordinates in JSON format as: {“bbox_2d”: [x1, y1, x2, y2], “label”: “ {object} -dense region”}." • "Explain your reasoning in <think>...</think> and output the final result in <answer>...</answer>." • "Example: <think> thinking process here </think> <answer> JSON format here </answer>" Figure 7: Prompt for A CTIVE -O3-DET. Prompt for ACTIVE -O3Segmentation •"Identify exactly three distinct regions in the image that illustrate segmentation inaccuracies in the translucent green mask for the ‘ {object} ’." • "The selected regions should be as distinct as possible, with minimal or no overlap between them." •"Check whether the mask accurately covers the ‘ {object} ’, meaning it should fully include the object without significant over-segmentation (mask extends into background) or under-segmentation (parts of the object are not covered)." •"Each region should represent a clear segmentation mistake and include enough surrounding context for verification." •"Return the results in JSON format as: {“bbox_2d”: [x1, y1, x2, y2], “label”: “ {object} segmentation error”}." • "Explain your reasoning in <think>...</think> and output the final result in <answer>...</answer>." • "Example: <think> reasoning process here </think> <answer> JSON format here </answer>" Figure 8: Prompt for A CTIVE -O3-Seg. storage tanks), which can lead to inaccurate task-aware reward estimation due to the limited capability of the task model. Second, the current action space is constrained. Our framework only allows zooming into three target regions per step. However, certain applications may require more flexible control, such as selecting a larger number of regions or introducing transformations like rotation—especially relevant for tasks like OCR, though less critical for tasks such as grounding. Third, the input to the sensing model is limited to the current observation. In practice, incorporating a memory mechanism to store past actions and observations could enable more informed decision- making. This extension may support more sophisticated strategies, such as trajectory-level planning, long-term search, and rollback operations. Addressing these limitations could further improve the adaptability, generalization, and decision quality of the proposed sensing policy in more complex or specialized scenarios. E Method Details E.1 Prompt Design In this section, we provide the prompts used to guide the MLLM in both detection (Figure 3) and segmentation (Figure 8) tasks as the sensing policy MO. The prompts are designed to elicit specific behaviors from the model, ensuring that it generates appropriate region proposals and reasoning. For the task model MA, we use an simple instruction to ask the model to perform the task (Figure 9). 15 Prompt for Task Model •"Please find all instances of ‘ {object} ’ in the image and return the bounding box coordinates in JSON
format." Figure 9: Prompt for the task model MA. E.2 Implementation Details We use Qwen2.5-VL-7B-Instruct as the shared policy backbone πθ. All experiments are conducted using GRPO with KL regularization coefficient β= 0.04, group size 8, and a learning rate of 1e−6 using the AdamW optimizer with weight decay 0.01. Training is performed on 8 GPUs with 80–90GB memory each, using bf16 precision, a per-device batch size of 1, gradient accumulation of 1, and gradient checkpointing enabled. Training is performed with DeepSpeed ZeRO-3 for memory efficiency. Each experiment typically completes within 24 hours. For the sensing model MO, we resize the input image such that the shorter side is 1024 pixels, while preserving the original aspect ratio. For the task model MA, all images are resized to a fixed resolution of 840×840. For Grounding DINO, we follow the official preprocessing pipeline provided by the authors. E.3 Datasets Details LVIS. We construct our benchmark for open-world small and dense object grounding based on the LVIS [ 41] dataset, which offers the richest long-tail object vocabulary and the highest prevalence of small and densely packed instances among existing segmentation datasets. To assess small object grounding, we identify all instances with an area less than 100 pixels and retain their corresponding categories as test queries. For dense object grounding, we select images that contain more than 15 annotated instances and treat all instance categories within such images as query targets. In both cases, we replace the placeholder <object> in the original instruction IOwith the chosen category name. We sample 10,000 training images from the LVIS training set using this strategy, and 1,200 images from the validation set for evaluation. During test set construction, we ensure that each category appears at most three times to promote category balance. We adopt standard COCO evaluation metrics using the official COCO API. Specifically, we report average precision (AP) across IoU thresholds from 0.5 to 0.95 (in 0.05 increments), as well as AP for small (AP s), medium (AP m), and large (AP l) object sizes. SODA. To further evaluate the generalization of our framework in specialized visual domains, we adopt the SODA [ 42] benchmark, which includes two large-scale datasets designed for small object detection: SODA-D (autonomous driving) and SODA-A (aerial imagery). SODA-D contains 24,828 traffic images with 278,433 annotated instances across nine traffic-related categories. SODA-A includes 2,513 high-resolution aerial images with 872,069 object instances across nine categories such as vehicles and buildings. These datasets present a wide range of realistic and challenging small-object detection scenarios. During training, we randomly select 1,000 images from each dataset as the training set. For SODA-A, whose annotations are originally provided as polygons, we convert them into bounding boxes to serve as ground truth for training and evaluation. Due to the significant domain shift compared to LVIS, direct use of standard evaluation settings (e.g., COCO-style AP at IoU 0.5–0.95) leads to very low scores and poor comparability. To better capture performance under such domain-specific conditions, we lower the IoU threshold to 0.1 when computing detection metrics. This adjustment allows a fairer evaluation of
the model's generalization ability in these more challenging domains.

ThinObjects. We adopt the ThinObjects [44] dataset for this task, as it provides both semantic annotations and high-quality, fine-grained segmentation masks, making it suitable for evaluating interactive segmentation under zoom-in conditions. One core challenge is the lack of a robust existing task model M_A for click-based interactive segmentation. To focus on evaluating the effectiveness of our method as a sensing policy M_O, we construct an oracle variant of M_A as a proxy. This oracle simulates perfect feedback during interaction. We set a maximum budget of 3 zoom-in steps per sample. The final performance is evaluated using the mean Intersection over Union (mIoU) between the predicted and ground-truth masks after the interaction sequence.

Table 3: Impact of training data combinations on small object detection performance. We report AP_s / AR_s on SODA-A and SODA-D.

Training Set          SODA-A (AP_s / AR_s)   SODA-D (AP_s / AR_s)
SODA-A                3.7 / 7.5              –
SODA-D                –                      11.4 / 18.9
LVIS + SODA-A         6.4 / 8.8              14.0 / 17.9
LVIS + SODA-A + D     9.2 / 10.4             15.1 / 22.0

Table 4: Ablation study on reward design. Comparison of task reward, heuristic reward, and their combination across different object sizes (small, medium, large). Metrics are AP and AR.

Reward Type        AP_s   AR_s   AP_m   AR_m   AP_l   AR_l
Task Reward        3.6    5.0    12.1   15.7   16.4   25.2
Heuristic Reward   3.0    4.2    9.7    13.8   13.2   21.7
Combined Reward    4.4    5.8    15.4   20.2   19.1   27.4

F Ablation Studies

Training Data Combination. Table 3 presents the effect of different training data combinations on small object detection performance, evaluated on SODA-A and SODA-D. When incorporating LVIS into the training set, the performance improves significantly across both domains. For example, adding LVIS to SODA-A yields a +2.7 AP_s and +1.3 AR_s gain on SODA-A, and also enables reasonable generalization to SODA-D. Finally, using the full combination of LVIS, SODA-A, and SODA-D leads to the best overall performance, achieving 9.2/10.4 on SODA-A and 15.1/22.0 on SODA-D. These results demonstrate that ACTIVE-O3 serves as a general and flexible framework capable of leveraging heterogeneous domain-specific datasets to learn a unified sensing policy M_O. By incorporating diverse training sources such as LVIS, SODA-A, and SODA-D, ACTIVE-O3 is able to generalize effectively across multiple domains, highlighting its scalability and adaptability in open-world scenarios.

Reward Design. As mentioned in Section 4, we adopt a dual-form reward design that combines heuristic and task-aware rewards. To evaluate the impact of each component, we conduct an ablation study on the reward design. As shown in Table 4, the combined reward achieves the best performance across all object sizes, especially for small objects (AP_s: 4.4, AR_s: 5.8). Compared to using only task or heuristic rewards, the combination leads to consistent improvements, indicating that it effectively balances exploration (via heuristics) and task-driven optimization. This validates the effectiveness of our dual-form reward design in guiding better policy learning.

G Qualitative Visualization

G.1 Zero-shot Transfer on V∗ Benchmark

We demonstrate that ACTIVE-O3 is capable of zero-shot transfer to fine-grained VQA tasks, such as those in the V∗ [18] benchmark. By learning effective reasoning and search strategies through reinforcement learning on small object detection tasks, ACTIVE
-O3generalizes well to previously unseen tasks. We highlight several challenging cases involving OCR (Figures 10, 1) and attribute recognition (Figures 11, 12) where base models struggle. In contrast, ACTIVE -O3can successfully complete the task by leveraging its ability to reason and zoom in adaptively. 17 G.2 Small Object Detection on SODA-A and SODA-D Figure 13 presents qualitative results of ACTIVE -O3on the SODA-A and SODA-D datasets. Com- pared with several baselines, ACTIVE -O3consistently selects more relevant regions to zoom into, leading to improved detection performance on small objects. These results demonstrate that our sensing model can effectively identify task-critical regions and enhance performance in both aerial and driving scenarios. G.3 Small Object Detection on LVIS We further evaluate ACTIVE -O3on the LVIS dataset and visualize its performance in Figure 14. Com- pared with alternative methods, ACTIVE -O3demonstrates superior ability in selecting semantically meaningful regions for zoom-in, resulting in improved detection of small and rare object instances. These examples validate the general applicability of our approach to long-tail and fine-grained detection benchmarks. G.4 Interactive Segmentation on ThinObjects We show in Figure 15 the performance of ACTIVE -O3on the ThinObjects dataset for interactive segmentation. Our sensing model effectively identifies and focuses on regions with poor initial segmentation quality, enabling more precise refinement. These results highlight the utility of ACTI- VE-O3beyond detection, extending to segmentation tasks that require spatial reasoning and adaptive focus. 18 Tell me the number on the police car. The number on the police car is 102.The image does not clearly show the number on the police car. The police car is visible in the foreground, but the details of the number plate are not discernible due to the angle and distance from which the photo was taken. Qwen2.5 VL <think> To find the number on the police car, I need to look for any part of the image where a police car is visible. Given the context, it's likely that the police car is located near the center of the image, possibly in front of the large red building. The number on the police car would typically be displayed on its side or rear, so I'll focus on the area around the police car for potential answers. </think> Figure 10: Zero-shot reasoning on the V∗benchmark (Example 2). Given the question “Tell me the number on the police car”, the baseline model (Qwen2.5 VL) fails to locate the relevant visual evidence due to limited resolution and reasoning capability. In contrast, our method ( ACTIVE -O3) identifies the appropriate region through contextual reasoning and zoom-in selection. It successfully locates the number 102 on the police car, demonstrating strong spatial inference and fine-grained visual understanding. 19 What is the color of the van? The van in the image is red.The image does not show a van. Instead, it features a river with several boats, including a large white and red ferry in the foreground. The background includes a cityscape with a prominent cathedral and other buildings. If you have another image or context related to a van, please provide that information! Qwen2.5 VL <think>
To determine the color of the van, we need to look for any vehicles in the image. Since there are no vans visible in the provided image, we can infer that the van's color might be present in the background or on the left side of the image where there are some buildings and structures. However, without a clear view of a van, we can only make educated guesses based on the context. Here are three regions that might contain relevant visual cues. </think> Figure 11: Zero-shot reasoning on the V∗benchmark (Example 3). For the question “What is the color of the van?”, the baseline model (Qwen2.5 VL) fails to detect the presence of the van and incorrectly claims that no such object is visible. In contrast, ACTIVE -O3accurately identifies the small red van in the background and correctly answers red, demonstrating its ability to localize and reason over subtle visual cues that are easily overlooked. 20 What is the color of the watchband? The watchband in the picture appears to be purple.The watchband in the image appears to be black. Qwen2.5 VL <think> To determine the color of the watchband, we need to look for a person who might be wearing a watch on their wrist. In this image, there are two people visible, but only one appears to have a watchband that could be seen. The woman in the foreground seems to be wearing a watch on her left wrist, which is partially visible. Therefore, focusing on the woman in the foreground will provide the necessary information. </think> Figure 12: Zero-shot reasoning on the V∗benchmark (Example 4). Given the question “What is the color of the watchband?”, baseline predictions are inconsistent. ACTIVE -O3focuses on the wrist of the foreground figure, providing the accurate answer (purple) by effectively zooming in on the fine-grained detail. 21 Ground Truth Active-O3 Qwen2.5VL 3-Bounding Boxes GDino Figure 13: Visualization of Small Object Detection results on SODA-A and SODA-D datasets. Each row shows a different example from either SODA-A (top two rows) or SODA-D (remaining rows). The second column illustrates the candidate regions selected by our sensing model. Zoom in for better visibility of fine details and small objects. 22 Figure 14: Visualization of object detection results on various scenes from the LVIS dataset. The left column shows the candidate regions selected by our sensing model. To identify segmentation inaccuracies, I analyzed areas where the mask misaligns with the helicopter’s boundaries: 1. Region 1: At the front, around the nose and cockpit, the mask extends beyond the helicopter, indicating over-segmentation. 2. Region 2: Near the tail rotor and vertical stabilizer, the mask again spills into the background, another case of over-segmentation. 3. Region 3: On the left side near the main rotor blades, the mask misses parts of the helicopter, showing under-segmentation. These distinct regions highlight both over- and under-segmentation issues in the mask.To identify segmentation inaccuracies in the translucent green mask for the harp, I focused on areas where the mask misaligns with the object: 1. Top Left: The mask slightly exceeds the harp’s edge—over-segmentation.
2. Bottom Right: The mask cuts off part of the harp—under-segmentation. 3. Middle: A visible gap where the mask is missing—under-segmentation. These distinct regions clearly illustrate segmentation errors without significant overlap. Figure 15: Interactive segmentation analysis on ThinObjects. ACTIVE -O3identifies specific regions with segmentation inaccuracies by reasoning over visual cues. The left example (helicopter) reveals both over-segmentation (e.g., mask spilling beyond the nose and tail) and under-segmentation (e.g., missing rotor parts). The right example (harp) similarly highlights areas where the mask exceeds or misses the object boundary. These results demonstrate ACTIVE -O3’s capability to localize fine- grained segmentation errors, facilitating efficient and targeted mask refinement. 23 LVISSODA-A Figure 16: Failure cases. Left (LVIS) : When objects are densely packed, the model fails to distinguish between them, resulting in inaccurate segmentation. Right (SODA-A) : For small objects in aerial images, domain gap issues lead to poor localization—even if the object is roughly boxed, the model can fail to identify it correctly. 24 References [1]John Aloimonos, Isaac Weiss, and Amit Bandyopadhyay. Active vision. International journal of computer vision , 1:333–356, 1988. 1, 3 [2] Dana H Ballard. Animate vision. Artificial intelligence , 48(1):57–86, 1991. 1, 3 [3]Peter Whaite and Frank P Ferrie. Autonomous exploration: Driven by uncertainty. IEEE Transactions on Pattern Analysis and Machine Intelligence , 19(3):193–205, 1997. 1, 3 [4]Ermano Arruda, Jeremy Wyatt, and Marek Kopicki. Active vision for dexterous grasping of novel objects. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages 2881–2888. IEEE, 2016. 1, 4 [5]Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Neural modular control for embodied question answering. In Conference on robot learning , pages 53–62. PMLR, 2018. 1, 4 [6]Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, and Ruslan Salakhut- dinov. Learning to explore using active neural slam. arXiv preprint arXiv:2004.05155 , 2020. 1, 3, 4 [7]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. 2, 3 [8]Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. 2 [9]Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437 , 2024. 2, 3 [10] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5-vl technical report. arXiv preprint arXiv:2502.13923 , 2025. 2, 3, 9 [11] Zekun Qi, Wenyao Zhang, Yufei Ding, Runpei Dong, Xinqiang Yu, Jingwen Li, Lingyun Xu, Baoyu Li, Xialin He, Guofan Fan, et al. Sofar: Language-grounded orientation bridges spatial reasoning and object manipulation. arXiv preprint arXiv:2502.13143 , 2025. 2 [12] Jianwei Yang, Reuben Tan, Qianhui Wu, Ruijie Zheng, Baolin Peng, Yongyuan Liang, Yu Gu, Mu Cai, Seonghyeon Ye, Joel Jang, et al. Magma: A foundation model for multimodal ai agents. arXiv preprint
https://arxiv.org/abs/2505.21457v1
LazyVLM: Neuro-Symbolic Approach to Video Analytics

Xiangru Jian∗ (University of Waterloo, xiangru.jian@uwaterloo.ca), Wei Pang∗ (University of Waterloo, w3pang@uwaterloo.ca), Zhengyuan Dong∗ (University of Waterloo, zhengyuan.dong@uwaterloo.ca), Chao Zhang∗ (University of Waterloo, chao.zhang@uwaterloo.ca), M. Tamer Özsu (University of Waterloo, tamer.ozsu@uwaterloo.ca)
∗Equal contribution to this research.

ABSTRACT
Current video analytics approaches face a fundamental trade-off between flexibility and efficiency. End-to-end Vision Language Models (VLMs) often struggle with long-context processing and incur high computational costs, while neuro-symbolic methods depend heavily on manual labeling and rigid rule design. In this paper, we introduce LazyVLM, a neuro-symbolic video analytics system that provides a user-friendly query interface similar to VLMs, while addressing their scalability limitation. LazyVLM enables users to effortlessly drop in video data and specify complex multi-frame video queries using a semi-structured text interface for video analytics. To address the scalability limitations of VLMs, LazyVLM decomposes multi-frame video queries into fine-grained operations and offloads the bulk of the processing to efficient relational query execution and vector similarity search. We demonstrate that LazyVLM provides a robust, efficient, and user-friendly solution for querying open-domain video data at scale.

1 INTRODUCTION
With the massive volume of video data in real-world applications, analyzing video content has become increasingly crucial across various domains [2, 7–9, 11, 13, 14]. A common task in video analytics involves identifying a specific short video segment within a longer video. For instance, consider a surveillance video capturing road traffic: retrieving a moment where a motorcycle is positioned to the right of a bus and then moves to the left of the bus within a few seconds can be instrumental in detecting anomalous behavior in road transportation. We refer to this type of query that retrieves multi-frame events as a video moment retrieval query (VMRQ).

One category of existing systems for VMRQ processing relies on manual-intensive configurations and interactions. In such systems, users must define task-specific machine learning models for video content processing, which requires prior domain knowledge. These systems typically employ either a SQL-like query language [7, 8] or a query-by-example interface [2, 11, 13]. The former struggles to express complex video moments due to the need for multiple join expressions or recursive joins to define video moments, while the latter demands extensive user interaction with the video content to perform the search manually (labeling intermediate query results). These designs result in low human efficiency.

Alternatively, vision language models (VLMs) can be used for video query processing [1]. VLMs offer an intuitive, end-to-end video processing interface: users can simply load a video and input natural language queries into the context window, providing high human efficiency. However, VLMs fall short when processing video queries at scale due to several limitations. First, VLM inference is highly time-consuming due to the underlying self-attention mechanism, which requires O(n²) time complexity, where n is the total number of tokens in the context window—this overhead is primarily dominated by video length.
Second, the autoregressive nature of decoder-only VLMs enforces a serial computation mechanism, significantly limiting opportunities for parallel processing. Finally, executing ad-hoc exploratory queries or video updates, such as adding new videos, necessitates repeated processing of
the entire context window, further exacerbating inefficiencies. Therefore, using VLMs out of the box for video query processing leads to low system efficiency.

To overcome the above issues, we propose LazyVLM, a neuro-symbolic approach designed for scalable video analysis. LazyVLM introduces a semi-structured text interface for defining video events, allowing users to describe visual content in frames using subject-predicate-object (SPO) triples, where each element (S, P, or O) is specified in text (see Example 2.1). Video can be loaded into LazyVLM without defining task-specific models, i.e., by simply dropping in the video with no additional effort required. By offering an interface similar to that of VLMs, LazyVLM improves human efficiency compared to traditional manual-intensive systems.

LazyVLM significantly reduces the computation cost compared to out-of-the-box VLMs, thereby improving system efficiency for video analytics. This is achieved by generating structured views of video data—specifically, scene graphs for frames—and embedding nodes (representing entities) within these graphs. LazyVLM then decomposes video queries into semantic search and symbolic search. Semantic search is based on embedding vectors and is dedicated to searching for an entity (either a subject or an object in the SPO triples). Symbolic search is based on relational queries and focuses on verifying the existence of relationships (predicates between subjects and objects in SPO triples). Additionally, LazyVLM leverages VLMs to refine the results of relational queries. However, since most computation is offloaded to semantic search and vector search, VLMs are used in a lightweight manner. Instead of processing an entire video, LazyVLM selectively applies VLMs to individual frames that have been verified through relational queries—dramatically reducing computational overhead. Thanks to its fine-grained query decomposition, LazyVLM enables parallel computing, allowing multiple relational queries, vector searches, and VLM refinements to be executed simultaneously. Moreover, structured views and embedding vectors are precomputed once and stored. This makes LazyVLM update-friendly, supporting incremental updates by inserting new structured views and vectors, eliminating the need to reprocess the entire video from scratch.

Figure 1: Overview of query processing in LazyVLM. The diagram illustrates the processing of a semi-structured text query, which includes entity descriptions (e.g., "man in red") and relationship terms (e.g., "near"). The query is processed through a sequence of stages: entity matching via vector similarity search, SQL query processing to retrieve candidate relationships, relationship verification using a VLM to refine results, and temporal matching to identify the final set of video segments.

LazyVLM enhances both human efficiency and system efficiency by combining the strengths of symbolic search, semantic search, and VLMs. This
integration enables a powerful and efficient framework for querying open-domain video data. The remainder of this paper provides a detailed overview of LazyVLM's architecture and interaction mechanisms.

2 SYSTEM OVERVIEW
Figure 1 illustrates the processing pipeline of LazyVLM. Video data is first preprocessed to generate structured views, which are stored in the Entity Store and Relationship Store (see Section 2.2 for details). LazyVLM provides a semi-structured text interface that allows users to query the loaded video data. At query time, LazyVLM employs a series of components to retrieve relevant information from the two stores, followed by the use of a VLM to refine the pruned set of candidate results. The remainder of this section describes the key components of LazyVLM in detail.

2.1 Query Interface and Functionality
The query interface of LazyVLM is based on SPO triples that describe the visual content in video frames. We use the following example to illustrate how users can describe multi-frame events in a natural and intuitive manner.

Example 2.1. Consider the following query that defines a complex event: a man with a backpack is near a bicycle, and another man in red clothes moves from the left of the bicycle to the right of the bicycle after more than 2 seconds. This query is formally specified in the following steps in LazyVLM. (1) Entity Description: E = {e1, e2, e3}, with e1.text = "man with backpack", e2.text = "bicycle", and e3.text = "man in red". These defined entities can serve as either a subject or an object in SPO triples. (2) Relationship Description: R = {r1, r2, r3}, where r1.text = "is near", r2.text = "leftOf", and r3.text = "rightOf". These relationships function as predicates in SPO triples. (3) Frame Description: F = (f0, f1), where f0 = {(e1, r1, e2), (e3, r2, e2)} represents "man with backpack is near bicycle; man in red is on the left of bicycle", and f1 = {(e1, r1, e2), (e3, r3, e2)} represents "man with backpack is near bicycle; man in red is on the right of bicycle". (4) Temporal Constraint: f1 − f0 > 4, ensuring that the second frame occurs more than 2 seconds after the first, assuming a frame rate of 2 frames per second.

In LazyVLM, input video is automatically divided into non-overlapping video clips, i.e., short segments lasting a few seconds or minutes, with the length defined by the user. LazyVLM then processes an input query, e.g., the one in Example 2.1, by searching for events specified in the query, retrieving, and returning the clips that contain the detected event.

Functionality Supported. LazyVLM supports a comprehensive range of core video analytics functionalities. At the object level, LazyVLM supports (1) object detection, identifying and localizing entities such as cars, people, and bicycles within video frames, and (2) object tracking, which follows these objects across multiple frames, facilitating trajectory-based analysis. In addition, LazyVLM includes (3) attribute recognition, allowing queries based on object properties like color or size. At the relationship level, LazyVLM includes (4) relationship detection, which captures spatial and interaction-based relationships (e.g., car near pedestrian or person holding an object). Beyond basic object and relationship analytics, LazyVLM supports advanced query operations: (5) conjunction queries allow users to specify multiple conditions (e.g., detecting a person and a vehicle in the same frame), while (6) sequencing queries enforce temporal order between events (e.g., detecting a person walking before entering a car). Additionally, (7) window queries constrain events within a defined time duration (e.g., detecting a car stopping within 10 seconds after a pedestrian appearing).
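To make Example 2.1 concrete, the sketch below shows one way the four-part specification could be written down as plain Python data before being submitted to LazyVLM. The field names and container layout are illustrative assumptions; the paper does not prescribe an internal representation.

    # Hypothetical encoding of the query in Example 2.1 (names are illustrative).
    entities = {
        "e1": "man with backpack",
        "e2": "bicycle",
        "e3": "man in red",
    }

    relationships = {
        "r1": "is near",
        "r2": "leftOf",
        "r3": "rightOf",
    }

    # Each frame is a set of SPO triples over the entities and relationships above.
    frames = [
        {("e1", "r1", "e2"), ("e3", "r2", "e2")},  # f0: man in red left of bicycle
        {("e1", "r1", "e2"), ("e3", "r3", "e2")},  # f1: man in red right of bicycle
    ]

    # f1 must occur more than 4 frames after f0 (i.e., more than 2 s at 2 fps).
    temporal_constraints = [{"earlier": 0, "later": 1, "min_frame_gap": 4}]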
User Interaction Flow. User interaction is straightforward with LazyVLM. (1) Upload Video Dataset: Users begin by uploading their videos. LazyVLM automatically segments the videos, extracts visual features, and builds the corresponding structured views. (2) Compose a Query: Through the intuitive text interface, users describe the event of interest. For example, to retrieve clips where "a man with a backpack is near a bicycle, and a man in red moves from being on the left of the bicycle to the right of the bicycle after more than 2 seconds", users enter the corresponding entity descriptions, relationship descriptions, frame information, and temporal constraints, as shown in Example 2.1. The query is then submitted to the system, and users can view the results after the query execution in the interface. Detailed steps are presented in Section 3.

2.2 Video Preprocessing
Video preprocessing converts raw video content into a structured, searchable format through the following integrated stages.

Video Segmentation. When a video dataset is uploaded, each video V is automatically partitioned into a sequence of non-overlapping segments, V = (v1, v2, ..., vn). Each segment vi comprises a fixed number of frames, vi = (f1, f2, ..., fm), facilitating parallel processing and management of long videos.

Content Extraction. Each video segment is processed to extract detailed visual information. To represent the visual content of each frame, we construct a scene graph that encodes the frame as a set of subject–predicate–object (SPO) triples in the form (subject, predicate, object). For example, a frame may include triples such as ("man with backpack", "near", "bicycle") and ("bicycle", "on", "sidewalk"). We employ the IETrans [12] model to generate these scene graphs. For extracted triples, each subject or object—internally referred to as an entity—is represented as a tuple (vid, eid, ete, eie), and all entities are stored in the Entity Store. vid denotes the identifier of the video segment containing the entity; eid is a unique entity identifier within the segment, obtained through entity tracking using YOLOv8 [6]; ete is a text embedding derived from the entity's description using the e5-mistral-7b [10] model; and eie is an image embedding that captures the entity's visual appearance, generated by the VLM2Vec [5] model. In addition to entity representations, inter-object relationships are captured as tuples (vid, fid, sid, rl, oid), which are stored in the Relationship Store. vid and fid refer to the video segment and frame identifiers, respectively; sid and oid are the unique entity identifiers of the subject and object involved in the relationship; and rl denotes the relationship label, e.g., "near".
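The two structured views can be pictured as flat relational tables. The sketch below writes one row of each as a Python dataclass; the field names follow the tuples above, while the concrete types (integer identifiers, float vectors) are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class EntityRecord:
        """One row of the Entity Store: (vid, eid, ete, eie)."""
        vid: int           # identifier of the video segment containing the entity
        eid: int           # entity identifier within the segment (from tracking)
        ete: List[float]   # text embedding of the entity description
        eie: List[float]   # image embedding of the entity's visual appearance

    @dataclass
    class RelationshipRecord:
        """One row of the Relationship Store: (vid, fid, sid, rl, oid)."""
        vid: int           # video segment identifier
        fid: int           # frame identifier within the segment
        sid: int           # entity identifier of the subject
        rl: str            # relationship label, e.g. "near"
        oid: int           # entity identifier of the object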
2.3 Query Processing
Users express an event query as a four-part specification, including entity descriptions, relationship descriptions, frame-level specifications, and temporal constraints, as illustrated in Example 2.1. The query engine of LazyVLM processes the query through a series of stages: Entity Matching, SQL Query Generation, Relationship Matching and Refinement, and Temporal Matching.

Entity Matching. For each entity defined in the query, a vector similarity search is performed to match the textual description of the entity against the embeddings stored in the Entity Store. The result is a set of candidate entities for each query entity, represented as (vid, eid) pairs, i.e., matched entities in specific video segments.

SQL Query Generation. Based on the entity matching results, the query engine automatically generates SQL queries to retrieve candidate frames from the Relationship Store. These queries filter rows by matching entity identifiers, i.e., (vid, eid) pairs, and they return frames that potentially include the specified entities. The output is a set of rows from the Relationship Store corresponding to each query entity, denoted as candidate frames.

Relationship Matching and Refinement. For each SPO triple in the query, the query engine performs a join between the candidate frames for the subject and object, based on shared vid and fid values. This identifies potential relationships between subject and object candidates. Following this coarse-grained match, a refinement step is applied to further verify relationships. Specifically, a lightweight local VLM (e.g., Qwen-2.5-VL 7B [1]) is used for the verification. Optionally, users may apply cost-efficient closed-source VLMs (e.g., GPT-4o-mini) for deployment-friendly setups. This refinement ensures that the relationships specified in the query are visually and semantically grounded in the candidate frames. Finally, the engine joins the refined relationship results to identify frames where all specified SPO triples in a query frame co-occur, producing a set of candidate video frames for each query frame.

Temporal Matching. In the final stage, the engine checks whether the candidate video frames satisfy the temporal constraints defined in the query. This involves a join over frame identifiers to ensure compliance with temporal logic, e.g., f1 − f0 > 4. Ultimately, video identifiers including candidate video frames that satisfy the temporal conditions are returned as the final query results.

A major advantage of the proposed query processing pipeline is that each step is inherently parallelizable. For instance, entity matching tasks can be executed in parallel, as they are independent of one another. In addition, the query engine applies VLMs only for fine-grained relationship verification on a pruned set of candidate frames, rather than processing all video frames, making it significantly more lightweight compared to end-to-end VLM-based approaches for video query processing.
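As an illustration of the SQL-generation and join stages above, the sketch below builds the kind of query that could retrieve, for one SPO triple, the frames in which a matched subject and a matched object co-occur. The table and column names mirror the Relationship Store tuple (vid, fid, sid, rl, oid); the exact SQL emitted by LazyVLM is not specified in the text, so this is an assumption.

    def candidate_frames_sql(subject_matches, object_matches):
        """Join subject-candidate and object-candidate rows of the Relationship
        Store on shared (vid, fid) to find frames where both appear.
        subject_matches / object_matches are (vid, eid) pairs from entity matching."""
        subj = ", ".join(f"({vid}, {eid})" for vid, eid in subject_matches)
        obj = ", ".join(f"({vid}, {eid})" for vid, eid in object_matches)
        return (
            "SELECT s.vid, s.fid, s.sid, o.oid, s.rl\n"
            "FROM relationship_store AS s\n"
            "JOIN relationship_store AS o ON s.vid = o.vid AND s.fid = o.fid\n"
            f"WHERE (s.vid, s.sid) IN ({subj})\n"
            f"  AND (o.vid, o.oid) IN ({obj});"
        )

    # Example: one candidate for "man in red" and one for "bicycle".
    print(candidate_frames_sql([(1, 25)], [(1, 7)]))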
3 DEMONSTRATION
We demonstrate LazyVLM using the MOT20 [4] and TAO [3] datasets. Specifically, we use Example 2.1 to perform a query on the MOT20-02 dataset. We guide the users through the following detailed steps, illustrating the interactive and intuitive interface of LazyVLM.

Step ❶: Load Dataset and Enter Hyperparameters. Users select the MOT20-02 dataset path from a dropdown menu. The interface shows dataset metadata, including the total number of video segments (e.g., 11 segments for MOT20-02), as well as paths for preprocessed and raw data. Users then configure several hyperparameters for query execution: the top-k parameter (e.g., top 3 results), the temperature for controlling search strictness, and the thresholds for vision embedding and textual embedding similarity searches, which affect entity and relationship matching accuracy.

Step ❷: Enter Entities. Users input descriptive text labels for the entities involved in their query via a dedicated input field. In the provided example query, users define entities like "man with backpack," "bicycle," and "man in red". These entities are then listed on the interface and can be reviewed or removed if necessary.

Step ❸: Enter Relationships. Users specify relationships that describe interactions or spatial positions between entities, such as "is near," "is on the left of," or "is talking to". Each entered relationship appears on the interface for adjustment before proceeding.

Step ❹: Enter Triples. Users construct SPO triples to precisely define the interactions between entities. The interface supports triple formation by allowing users to select from previously entered entities and relationships. For instance, users define triples like "man with backpack is near bicycle," "man in red is on the left of bicycle," and "man in red is on the right of bicycle". Each formed triple is displayed for user verification.

Figure 2: Pipeline of user interactions in LazyVLM for specifying and executing a video query: Step ❶: Load Dataset and Enter Hyperparameters; Step ❷: Enter Entities; Step ❸: Enter Relationships; Step ❹: Enter Triples; Step ❺: Enter Frames and Temporal Constraints; and Step ❻: Query Execution and Presentation of Results.

Step ❺: Enter Frames and Temporal Constraints. Users organize the defined triples into specific frames according to their query's temporal constraint. For each frame, users explicitly select which triples it contains from a
dropdown list. For the example provided, Frame 1 is set to contain triples "man with backpack is near bicycle" and "man in red is on the left of bicycle," while Frame 2 includes "man with backpack is near bicycle" and "man in red is on the right of bicycle." Users also define temporal constraints, such as specifying Frame 1 to occur at least 4 frames before Frame 2, since we have 2 frames per second in the video.

Step ❻: Query Execution and Presentation of Results. Upon completing the query setup, users initiate query execution by clicking the "Query" button. LazyVLM processes the query and displays matching results. The results detail the precise video segments and exact frame identifiers corresponding to each user-defined frame.

4 CONCLUSION
We present LazyVLM, a neuro-symbolic video analytics system that efficiently supports complex, multi-frame queries through intuitive, text-based interaction. Our demonstration highlights LazyVLM's scalability, accuracy, and usability in practical, open-domain video analytics scenarios.

REFERENCES
[1] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. 2025. Qwen2.5-VL Technical Report. arXiv:2502.13923 [cs.CV]. https://arxiv.org/abs/2502.13923
[2] Maureen Daum, Enhao Zhang, Dong He, Stephen Mussmann, Brandon Haynes, Ranjay Krishna, and Magdalena Balazinska. 2023. VOCALExplore: Pay-as-You-Go Video Data Exploration and Model Building. Proc. VLDB Endow. 16, 13 (Sept. 2023), 4188–4201. https://doi.org/10.14778/3625054.3625057
[3] Achal Dave, Tarasha Khurana, Pavel Tokmakov, Cordelia Schmid, and Deva Ramanan. 2020. TAO: A Large-Scale Benchmark for Tracking Any Object. In European Conference on Computer Vision. https://arxiv.org/abs/2005.10356
[4] P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth, K. Schindler, and L. Leal-Taixé. 2020. MOT20: A benchmark for multi object tracking in crowded scenes. arXiv:2003.09003 [cs] (March 2020). http://arxiv.org/abs/1906.04567. arXiv: 2003.09003.
[5] Ziyan Jiang, Rui Meng, Xinyi Yang, Semih Yavuz, Yingbo Zhou, and Wenhu Chen. 2025. VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks. In The Thirteenth International Conference on Learning Representations. https://openreview.net/forum?id=TE0KOzWYAF
[6] Glenn Jocher et al. 2023. Ultralytics YOLOv8. https://github.com/ultralytics/ultralytics
[7] Gaurav Tarlok Kakkar, Jiashen Cao, Pramod Chunduri, Zhuangdi Xu, Suryatej Reddy Vyalla, Prashanth Dintyala, Anirudh Prabakaran, Jaeho Bang, Aubhro Sengupta, Kaushik Ravichandran, Ishwarya Sivakumar, Aryan Rajoria, Ashmita Raju, Tushar Aggarwal, Abdullah Shah, Sanjana Garg, Shashank Suman, Myna Prasanna Kalluraya, Subrata Mitra, Ali Payani, Yao Lu, Umakishore Ramachandran, and Joy Arulraj. 2023. EVA: An End-to-End Exploratory Video Analytics System. In Proceedings of the Seventh Workshop on Data Management for End-to-End Machine Learning (Seattle, WA, USA) (DEEM '23). Association for Computing Machinery, New York, NY, USA, Article 8, 5 pages. https://doi.org/10.1145/3595360.3595858
[8] Daniel Kang, Peter Bailis, and Matei Zaharia. 2019. BlazeIt: optimizing declarative aggregation and limit queries for neural network-based video analytics. Proc. VLDB Endow. 13, 4 (Dec. 2019), 533–546. https://doi.org/10.14778/3372716.3372725
[9] Francisco Romero, Caleb Winston, Johann Hauswald, Matei Zaharia, and Christos Kozyrakis. 2023. Zelda: Video Analytics using Vision-Language Models. arXiv:2305.03785 [cs.DB]. https://arxiv.org/abs/2305.03785
[10] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2024. Improving Text Embeddings with Large Language Models. arXiv:2401.00368 [cs.CL]. https://arxiv.org/abs/2401.00368
[11] Renzhi Wu, Pramod Chunduri, Ali Payani, Xu Chu, Joy Arulraj, and Kexin Rong. 2024. SketchQL: Video Moment Querying with a Visual Query Interface. Proc. ACM Manag. Data 2, 4, Article 204 (Sept. 2024), 27 pages. https://doi.org/10.1145/3677140
[12] Ao Zhang, Yuan Yao, Qianyu Chen, Wei Ji, Zhiyuan Liu, Maosong Sun, and Tat-Seng Chua. 2022. Fine-Grained Scene Graph Generation with Data Transfer. In Computer Vision – ECCV 2022, Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner (Eds.). Springer Nature Switzerland, Cham, 409–424.
[13] Enhao Zhang, Maureen Daum, Dong He, Manasi Ganti, Brandon Haynes, Ranjay Krishna, and Magdalena Balazinska. 2023. EQUI-VOCAL Demonstration: Synthesizing Video Queries from User Interactions. Proc. VLDB Endow. 16, 12 (Aug. 2023), 3978–3981. https://doi.org/10.14778/3611540.3611600
[14] Enhao Zhang, Maureen Daum, Dong He, Brandon Haynes, Ranjay Krishna, and Magdalena Balazinska. 2023. EQUI-VOCAL: Synthesizing Queries for Compositional Video Events from Limited User Interactions. Proc. VLDB Endow. 16, 11 (Jul. 2023), 2714–2727. https://doi.org/10.14778/3611479.3611482
https://arxiv.org/abs/2505.21459v1
arXiv:2505.21478v1 [cs.CV] 27 May 2025

Policy Optimized Text-to-Image Pipeline Design

Uri Gadot¹,² Rinon Gal² Yftah Ziser² Gal Chechik² Shie Mannor¹,²
¹Technion ²NVIDIA Research

Abstract
Text-to-image generation has evolved beyond single monolithic models to complex multi-component pipelines. These combine fine-tuned generators, adapters, upscaling blocks and even editing steps, leading to significant improvements in image quality. However, their effective design requires substantial expertise. Recent approaches have shown promise in automating this process through large language models (LLMs), but they suffer from two critical limitations: extensive computational requirements from generating images with hundreds of predefined pipelines, and poor generalization beyond memorized training examples. We introduce a novel reinforcement learning-based framework that addresses these inefficiencies. Our approach first trains an ensemble of reward models capable of predicting image quality scores directly from prompt-workflow combinations, eliminating the need for costly image generation during training. We then implement a two-phase training strategy: initial workflow vocabulary training followed by GRPO-based optimization that guides the model toward higher-performing regions of the workflow space. Additionally, we incorporate a classifier-free guidance based enhancement technique that extrapolates along the path between the initial and GRPO-tuned models, further improving output quality. We validate our approach through a set of comparisons, showing that it can successfully create new flows with greater diversity and lead to superior image quality compared to existing baselines.

1 Introduction
Recent advancements in generative AI have significantly improved the quality and diversity of text-to-image generation. Early models relied on monolithic architectures, where a single neural network directly translated textual prompts into visual outputs. However, as the field matured, it became clear that combining multiple specialized components—such as fine-tuned diffusion models, super-resolution modules, or specialized embeddings, into more sophisticated workflows leads to superior image quality and greater creative control [5, 40, 63]. This shift from monolithic models to modular workflows has been supported by user-friendly platforms such as ComfyUI (https://www.comfy.org/), a popular open-source tool that allows users to visually construct complex generative pipelines through interconnected nodes represented in JSON format. ComfyUI has rapidly gained popularity due to its intuitive node-based interface, enabling users to assemble diverse generative models (e.g., Stable Diffusion, ControlNet, LoRAs) into flexible workflows tailored to specific image-generation tasks. Despite its accessibility, designing effective workflows remains challenging due to the vast space of possible component combinations and their prompt-dependent effectiveness. Consequently, crafting high-quality workflows typically requires considerable expertise and manual experimentation.

To address this challenge, recent work introduced ComfyGen [16], which uses large language models (LLMs) to automate the construction of prompt-adaptive workflows within ComfyUI. However, a key limitation of ComfyGen was its inability to generate genuinely novel workflow structures. At its core, their approach required synthesizing images using an extensive collection of pre-defined workflows
and prompts, an expensive process limiting their training set's size. Constrained by this small set, their approach essentially learned a classifier over existing flows rather than synthesizing original graph topologies or selecting novel model combinations. This limitation significantly constrains the potential creativity and adaptability of automated workflow generation systems and, as we later show, may also limit their downstream performance. In parallel, reinforcement learning
https://arxiv.org/abs/2505.21478v1
(RL) has emerged as a powerful paradigm for fine-tuning large language models (LLMs), enabling them to optimize their outputs directly based on reward signals derived from human preferences or other evaluative metrics. Techniques such as Reinforcement Learning from Human Feedback (RLHF) have demonstrated remarkable success in aligning model behaviors with human expectations by iteratively refining model parameters based on explicit reward feedback. Furthermore, recent developments like Group Relative Policy Optimization (GRPO) introduced memory-efficient RL algorithms capable of optimizing policies without separate value functions, making them particularly suitable for complex sequential decision-making tasks.

Building on these advancements, we propose FlowRL, a novel extension that integrates reinforcement learning into the workflow prediction framework to overcome its originality limitations. Specifically, we formulate workflow generation as an RL problem where an LLM-based policy sequentially constructs workflow graphs by selecting nodes and connections conditioned on textual prompts. To efficiently guide this process without incurring prohibitive computational costs associated with direct image generation for each candidate workflow during training, we introduce a surrogate reward model trained to predict image quality scores directly from prompts and workflow structures. Finally, we adopt GRPO combined with per-token reward attribution mechanisms to provide granular feedback during policy updates. This affords our RL agent greater precision in identifying decisions within a generated workflow that contribute positively or negatively toward overall image quality.

In summary, our contributions are as follows:
• We introduce FlowRL, the first RL-based approach for generating genuinely novel ComfyUI workflows tailored to align with human preference feedback.
• We propose a surrogate human-preference reward model enabling efficient RL training without computationally expensive image generations.
• We integrate GRPO with per-token reward attribution for stable and memory-efficient policy optimization.

Through these innovations, FlowRL significantly advances automated workflow generation capabilities, enabling richer creativity and greater adaptability in text-to-image synthesis pipelines.

2 Related Work

Workflow Generation. A recent line of research explores the use of compound systems, where multiple models or modules are chained together, often yielding superior performance compared to isolated models. These multi-component systems have been applied across fields ranging from programming challenges [1] and olympiad-level mathematics [53] to medical diagnostics [38] and video generation [64]. However, building compound systems presents significant challenges. Models must be chosen not only for their individual strengths, but also for their ability to complement each other. Moreover, the parameters of the different components should be selected with the entire system in mind. To address these difficulties, recent work has explored meta-optimization frameworks, where the structure and parameters of entire pipelines are automatically tuned for downstream performance [28]. Others have adopted graph-based architectures allowing dynamic reconfiguration of component interactions [68].
In the realm of text-to-image generation, recent work explores constructing pipelines with agentic systems [67, 61, 23], with genetic algorithms [51], or by fine-tuning LLMs on large flow datasets tagged with human preference scores [16]. Although the human preference-based framework has shown promising results, it relies on creating and ranking images using large sets of flows. This, in turn, leads to challenges in effectively scaling the dataset and to
a lack of ability to synthesize unseen flows at inference time. Our work aims to address this challenge by leveraging a policy-optimization approach for more effective exploration of the flow parameter space, coupled with a surrogate reward function which avoids the need to generate and rank a large set of images.

Fine-Tuning LLMs with RL: Reinforcement learning (RL) has become increasingly central to the development of large language models (LLMs), playing a key role in aligning model outputs with user preferences and enhancing task-specific capabilities. A prominent example is Reinforcement Learning from Human Feedback (RLHF) [39], which fine-tunes models using reward signals derived from human preferences to better align with communicative goals and social norms [8, 24]. Beyond alignment, RL has shown promise in improving LLMs' performance on domains requiring precise reasoning, such as mathematics [54, 56, 34] and code generation [30, 33]. Recently, [48] proposed Group Relative Policy Optimization (GRPO) as a scalable alternative to Proximal Policy Optimization (PPO). GRPO removes the need for a critic model by optimizing a contrastive objective based on intra-group ranking, yielding better sample efficiency, improved stability, and reduced computational complexity [36, 46]. GRPO-trained LLMs demonstrated state-of-the-art performance in mathematical problem solving and code generation, highlighting its effectiveness on tasks requiring structured reasoning and adherence to correctness [48].

Improving Text-to-Image Generation Quality: The rapid adoption of text-to-image models [45, 37, 44, 13, 41] has led to many research efforts focused on improving their image quality and better matching human preferences. Some works focus on inference-time modifications, either optimizing noise seeds towards better behaving regions of the diffusion space [14, 43] or applying self-guidance and frequency-based modulations [21, 49, 35] to the generated features. More commonly, models are tuned to provide better quality outputs. This is often done through carefully selected high-quality datasets or better captioning methods [9, 3, 47]. Another approach uses reward models [29, 59, 60, 31] to guide the generation process. These reward models can be used with reinforcement learning [4, 11, 15, 66], or through direct optimization [6, 42, 55]. Finally, recent methods explore the use of LLMs to improve text-to-image generation [62], commonly by using them to construct workflows featuring multiple models or chained editing tools [67, 51, 16]. Our work similarly uses LLMs to construct workflows, but better aligns them to human preferences through the use of reward models coupled with a reinforcement-learning feedback mechanism.

Figure 1: Pipeline overview. Step 1: Finetune LLM for general flow generation (SFT, 500K prompt-flow pairs). Step 2.1: Train reward model (100K prompt-flow-score triplets). Step 2.2: Optimize for quality using GRPO. 🔥 = learning, ❄ = frozen.

3 Methodology
Our goal is to enable efficient training of a human-preference based, prompt-to-workflow prediction system. Ideally, this system should be able to innovate and produce novel, unseen flows.
Prior work struggled with this aspect, primarily due to their reliance on scoring images generated with a
large set of fixed flows, whose parameters were sampled uniformly from a predefined set of options. To overcome this hurdle, we propose a two-phase training strategy. In the first, we pre-train on a large set of un-scored flows. This avoids the need to generate and score images, allowing us to use a much larger set to teach the LLM the structure of flows and the available components. Then, we perform a second tuning stage, where we leverage human-preference predictor models jointly with recent reinforcement-learning ideas (GRPO [48]) to drive the model towards better-performing subsets of the flow space. As training progresses, more samples are drawn from these regions, and hence, less computation is wasted on inefficient exploration. However, generating and scoring images during LLM training is itself a costly process, which requires on the order of a minute for every training step. Hence, we draw on ideas from the autonomous driving literature, where costly simulations are often replaced by faster predictors trained to replicate simulation outputs [2, 25, 26]. Here, we apply this idea by learning surrogate reward models that predict the final image score directly from the prompt and workflow pair. Notably, prior work has observed that such surrogates are susceptible to reward-hacking solutions [17, 50, 58]. Motivated by findings that ensembles can mitigate reward hacking [7, 65], we train an ensemble of such models and use their variance as a measure of uncertainty, allowing us to filter out samples that optimize for any individual surrogate reward model. Below we present these core components in greater detail and provide an overview of additional design choices or components that allow us to increase efficiency further or refine our results. An overview of our training pipeline is shown in Figure 1.

3.1 Training Data
To train our model we use the flow and prompt dataset of ComfyGen [16]. This set contains 33 human-created flows that define an overall graph structure, further augmented by randomly sampling novel parameter choices for existing blocks, such as different base models, different LoRAs, diffusion samplers, or even the number of steps and guidance scale. Since we do not need to score images for our first stage, we can apply more extensive augmentations and create 2,000 variants from each baseline flow structure (compared with ComfyGen's 100). The set also contains 10,000 prompts taken from the generation sharing website CivitAI.com. We keep the 500 prompts used to test ComfyGen as a holdout, and train using the rest.

3.2 Stage 1: Supervised Fine-Tuning on Flow Dataset
The first stage involves supervised fine-tuning (SFT) of an LLM on a dataset of prompt-flow pairs without explicit score labels. At this stage, our goal is to teach the LLM the appropriate vocabulary and flow structure while maintaining output diversity. Our flow dataset D_SFT consists of pairs (p_i, f_i), where p_i represents a randomly sampled prompt and f_i represents a randomly sampled flow. We tune the model to take the sampled prompt p_i and return its matching flow f_i. The full LLM query is shown in the supplementary. After fine-tuning, we evaluate the model's perplexity on D_SFT, achieving a score of 1.9, which reflects strong alignment with the encoded workflows' structural patterns.
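For concreteness, a minimal sketch of how such (prompt, flow) pairs could be packed into SFT examples is given below. The instruction wording, field names, and placeholder flows are assumptions; the authors' actual query template appears in their supplementary material and is not reproduced here.

    # Hypothetical construction of Stage-1 SFT examples (prompt -> flow).
    # Prompts and flows are paired at random, as described in Sec. 3.2.
    import random

    prompts = ["a watercolor fox in a snowy forest",
               "a neon cyberpunk street at night"]
    flows = ["<flow> CheckpointLoader ... KSampler ... SaveImage </flow>",
             "<flow> CheckpointLoader ... UpscaleModel ... SaveImage </flow>"]

    def make_sft_example(prompt: str, flow: str) -> dict:
        # Placeholder instruction; the real template is in the paper's supplementary.
        instruction = ("Given the following text-to-image prompt, output a ComfyUI "
                       f"workflow that renders it well.\nPrompt: {prompt}")
        return {"input": instruction, "target": flow}

    dataset_sft = [make_sft_example(random.choice(prompts), random.choice(flows))
                   for _ in range(4)]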
Efficient Flow Representation Scheme. While prior work [16] directly predicts ComfyUI JSON representations, we note that these JSONs typically contain thousands of tokens, leading to long generation times and increasing memory requirements. An inspection of the tokenized JSONs shows that many tokens are wasted on maintaining the JSON format (e.g., on brackets or quotation marks) or on breaking down model or component names. Hence, to improve training efficiency and reduce token usage, we propose to modify the encoding scheme, using a novel structured representation that captures essential components while reducing token count. Additionally, we introduce specialized tokens to represent key elements of the flow (e.g., tokens for ComfyUI node names or for model choices). An example of the difference between the two tokenization methods is outlined in Figure 2.

Figure 2: An example of a single ComfyUI node tokenized. (a) displays the original JSON input as tokenized by the standard Llama tokenizer. (b) shows our custom encoding, which introduces additional tokens to explicitly represent relevant components within the workflow. Each colored segment corresponds to a different token. (c) A histogram of flow lengths (in tokens) over the full training set.

This new encoding scheme yields significant practical advantages, resulting in substantial improvements in both computational efficiency and memory utilization. Quantitatively, the 86.7% reduction in the average token length (1500 → 200 tokens per workflow) enabled a 16× batch size increase (2 → 32 samples/batch) during the first-stage training, ultimately reaching a 3× improvement in training time over the original tokenization.
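A minimal sketch of the special-token idea follows, using the Hugging Face transformers API. The base checkpoint and the token spellings are assumptions made for illustration; the paper does not list its exact vocabulary.

    # Sketch: register flow-specific special tokens so that node and model names
    # become single tokens instead of being split into many sub-word pieces.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    base = "meta-llama/Llama-3.1-8B"  # assumed base model, for illustration only
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    flow_tokens = [
        "<node:CheckpointLoaderSimple>",  # ComfyUI node names (hypothetical spellings)
        "<node:KSampler>",
        "<model:juggernautXL_v9>",        # checkpoint / LoRA choices
        "<sampler:dpmpp_2m>",
    ]
    num_added = tokenizer.add_tokens(flow_tokens)
    model.resize_token_embeddings(len(tokenizer))  # grow embedding matrix for new ids
    print(f"added {num_added} flow tokens")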
3.3 Stage 2: Reward-based policy-optimization
In the second stage, our goal is to tune the workflow-prediction LLM to better align it with flows that produce high quality outputs for a given prompt. To do so, we propose to leverage the recently introduced Group Relative Policy Optimization (GRPO) approach, which estimates advantages by comparing responses within groups of similar prompts, rather than relying on a separate value function. Using GRPO has two main benefits: (1) it eliminates the need to learn a separate value function, enabling better memory utilization during training, and (2) its group-based reward normalization encourages greater exploration and diversity in generated workflows. However, the use of this approach requires us to score and rank the different candidate flows generated for each input prompt at training time. Naively, we could simply generate images with each such flow and score them using the human-preference predictors used by ComfyGen [16]. However, for complex flows, creating the images might take on the order of a minute, greatly limiting the speed of training. Hence, we propose to avoid this lengthy generation step and instead train a surrogate reward model that will directly estimate the final reward from a pair of prompt and flow inputs.

Surrogate Reward Model Training. We implement the surrogate reward model R_φ on top of a ModernBERT [57] backbone, with a novel output head trained to map the CLS token into a score. To tune the model, we feed it with strings containing a prompt and flow pair, and task it to predict the human-preference score for the image produced by this pair. For data, we use the ComfyGen dataset D_R, which contains triplets of prompt p_i, flow f_i, and score s_i. The surrogate's loss is then:

L_R(φ) = Σ_{(p_i, f_i, s_i) ∈ D_R} MSE(R_φ(p_i, f_i), s_i).    (1)

Although the construction of the original ComfyGen dataset still required generating images and scoring them, we find that the surrogate reward is much more sample efficient, performing well with just the 330 post-augmentation flows of ComfyGen (compared with our own 80k unscored flows).

3.3.1 Component-Aware Hybrid Reward Formulation
Since downstream flow performance can be heavily influenced by relatively few tokens (model choices, existence of specific blocks), we propose to further refine our surrogate model with a prefix-prediction score that is better able to assign credit to specific components. Specifically, we tune an additional reward model R^pre_φ to predict the generated image score even when presented only with randomly sampled prefixes of the flow:

L_Rpre(φ) = Σ_{(p_i, f_i[1:j], s_i) ∈ D_R} MSE(R^pre_φ(p_i, f_i[1:j]), s_i).    (2)

Our final reward design combines these two complementary signals to assign a different reward to each token t, depending on both the expected performance of the full flow, as well as a prefix ending with its component:

R(t) = R_φ(p, f) + Σ_{j=1}^{J} 1[t ∈ T_j] · R^pre_φ(p, f_{1:j}),    (3)

where T_j denotes the tokens comprising the same flow component as t, and we sum over the contribution of the entire component.

3.3.2 Uncertainty-Aware Reinforcement Learning
Finally, prior work [17, 50, 58] observed that the use of surrogate reward models can lead to reward hacking. To avoid this pitfall, we train an ensemble of N surrogate models {R_φ1, R_φ2, ..., R_φN}, each using a different split of our training data. The ensemble provides us with both a more robust mean prediction, as well as with an uncertainty estimate:

µ(p, f) = (1/N) Σ_{i=1}^{N} R_φi(p, f);    σ(p, f) = sqrt( (1/N) Σ_{i=1}^{N} (R_φi(p, f) − µ(p, f))² ).    (4)

We can then define an uncertainty-aware reward function:

R(p, f) = µ(p, f) if σ(p, f) ≤ τ, and 0 if σ(p, f) > τ,

where τ is a threshold parameter. This pessimistic approach assigns zero reward to prompt-flow pairs with high uncertainty, preventing the model from optimizing specific subsets of the reward ensemble, or from drifting to regions where the surrogate's predictions are unreliable.

3.4 Dual model guidance
As an additional step, we propose that results may be further improved through the use of a novel inference mechanism inspired by classifier-free guidance (CFG, [20]). Specifically, we draw on recent work on image generation [27], which demonstrates that diffusion models can be guided by extrapolating the predicted scores along the direction from an under-trained version of the model to the fully trained one. We propose to apply a similar idea here, where we consider both our policy-optimized model (M_GRPO, stage 2) and its "undertrained" SFT version (M_SFT, stage 1). At inference time, generations are sampled by interpolating the logits of both models:

log p_CFG(f_j | f_{<j}, p) = log p_SFT(f_j | f_{<j}, p) + γ (log p_GRPO(f_j | f_{<j}, p) − log p_SFT(f_j | f_{<j}, p)),    (5)

where γ ≥ 0 controls the guidance strength. Unless otherwise noted, we use γ = 1.5.
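The guidance rule in Eq. (5) is easy to apply at a single decoding step. The sketch below assumes PyTorch and two vectors of per-token log-probabilities from the SFT and GRPO-tuned models for the same prefix; the renormalization and sampling details are our own assumptions, since the paper only specifies the extrapolation formula.

    import torch

    def guided_logprobs(logp_sft: torch.Tensor, logp_grpo: torch.Tensor,
                        gamma: float = 1.5) -> torch.Tensor:
        """Eq. (5): extrapolate from the SFT model toward the GRPO-tuned model.
        gamma = 0 recovers the SFT model, gamma = 1 the GRPO model, and
        gamma > 1 pushes further along the SFT -> GRPO direction."""
        guided = logp_sft + gamma * (logp_grpo - logp_sft)
        # Renormalize so the guided scores form a valid distribution (assumption).
        return torch.log_softmax(guided, dim=-1)

    # Toy example with a 5-token vocabulary.
    logp_sft = torch.log_softmax(torch.randn(5), dim=-1)
    logp_grpo = torch.log_softmax(torch.randn(5), dim=-1)
    next_token = torch.distributions.Categorical(
        logits=guided_logprobs(logp_sft, logp_grpo)).sample()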
4 Experiments

4.1 Comparisons
We follow [16] and compare our approach to a set of baselines across two main metrics: (1) the GenEval [18] benchmark, which measures prompt adherence by using object detection and classification modules to evaluate correct object generation, placement, and attribute binding; and (2) human preference, using the CivitAI prompt set of ComfyGen [16]. For the latter, we evaluate our approach using both an automated preference metric (HPS v2 [59]) as well as a user study.

We compare our approach against the following types of baselines: (1) Fixed, monolithic models, including SDXL, popular fine-tuned versions thereof, and SDXL-DPO, which was directly fine-tuned with human preference data. (2) Fixed, popular workflows, where we use the same workflow to generate all images regardless of the prompt. (3) Prior pipeline construction approaches, including agentic workflows that select and use off-the-shelf editing tools to correct generated content (GenArtist [67]) and reward-based fine-tuned LLMs (ComfyGen [16]).

Figure 3: Example of generations with FlowRL.

Prompt adherence: As summarized in Table 1, FlowRL demonstrates strong performance on the GenEval benchmark despite not being explicitly trained for prompt adherence. It achieves an overall score of 0.61, matching the best-performing baseline, ComfyGen. Notably, our approach outperforms other methods in the "two objects" (0.85 vs. 0.82) and "binding" (0.38 vs. 0.29) categories, indicating improved capability in handling complex compositional prompts. A representative qualitative example illustrating prompt adherence is provided in Figure 4.

Visual Quality: To automatically evaluate the visual quality of FlowRL's outputs, we follow [55, 43, 16] and use a pair-wise comparison of the HPS v2 [59] score between FlowRL and each baseline and report the average win rate. These comparisons use the full CivitAI test set of [16]. The win-rate of each baseline over FlowRL is reported in Table 1. Additionally, we conducted a user study where we show users 35 randomly sampled prompts and the images generated for each, using FlowRL and one of the baselines. Here, we focus on the best performing baseline from each category, as well as ComfyGen [16]. We then ask them to select the image that they prefer, taking both prompt adherence and visual quality into account. We report the aggregated win percentage in Figure 5, and add more details in the supplementary. This experiment demonstrates FlowRL's capability to create more performant ComfyUI workflows for the given input prompts. Representative qualitative comparisons highlighting these improvements are provided in Figure 4, where our outputs consistently exhibit better prompt alignment and structural coherence compared to baseline generations.
Model | Single object | Two objects | Counting | Colors | Position | Attribute binding | Overall | HPSv2 win rate vs. FlowRL
SDXL | 0.98 | 0.74 | 0.39 | 0.85 | 0.15 | 0.23 | 0.55 | 2% ±0.6%
JuggernautXL | 1.00 | 0.73 | 0.48 | 0.89 | 0.11 | 0.19 | 0.57 | 5% ±1%
DreamShaperXL | 0.99 | 0.78 | 0.45 | 0.81 | 0.17 | 0.24 | 0.57 | 3% ±0.6%
DPO-SDXL | 1.00 | 0.81 | 0.44 | 0.90 | 0.15 | 0.23 | 0.59 | 5% ±1%
Most Popular Flow | 0.95 | 0.38 | 0.26 | 0.77 | 0.06 | 0.12 | 0.42 | 13% ±1%
2nd Most Popular Flow | 1.00 | 0.65 | 0.56 | 0.86 | 0.13 | 0.34 | 0.59 | 14% ±1%
GenArtist | 0.94 | 0.41 | 0.40 | 0.72 | 0.24 | 0.07 | 0.47 | 5% ±1%
RPG-DiffusionMaster | 1.00 | 0.64 | 0.21 | 0.89 | 0.20 | 0.35 | 0.55 | 3% ±0.8%
ComfyGen | 0.99 | 0.82 | 0.50 | 0.90 | 0.13 | 0.29 | 0.61 | 40% ±2%
FlowRL (Ours) | 1.00 | 0.85 | 0.44 | 0.86 | 0.11 | 0.38 | 0.61 | -

Table 1: GenEval and HPS v2 comparisons. FlowRL is on par with ComfyGen on GenEval and outperforms all other baseline approaches in overall score. On human preference metrics, FlowRL significantly outperforms prior methods. CIs are calculated as one standard deviation from the mean.

Novelty of generated flows: A key advantage of our approach lies in its capacity to generate workflows that are not merely copies of those seen during training. To quantify this novelty, we generate 500 flows using the CivitAI test set, and calculate the normalized Levenshtein distance (NLD) [52, 32] between each generated workflow and its nearest training sample. We further normalize these values by the NLD between training samples, giving us a measure of what fraction of the variance in the training data we manage to preserve. Additionally, we report how many generated flows exist "as-is" in the training data, and how many unique flows were created in the 500-flow output set. The results are reported in Table 2. Our experiments confirm the findings of [16], which report that their approach learned to copy flows from the training data. FlowRL meanwhile achieves significantly higher novelty, demonstrating the ability to generalize to new parameter combinations. These results highlight the effectiveness of our reinforcement learning framework in encouraging the LLM to explore and produce a broader range of complex workflows.

Figure 4: Qualitative results on CivitAI and GenEval prompts.

Figure 5: Human study win rate of FlowRL vs. other relevant baselines.

Effects of Dual model guidance: Next, we investigate the impact of our dual-model guidance approach. Specifically, prior work [12] highlighted the ability of guidance-based methods to trade diversity for performance (or recall for precision). We show that similar behavior can be observed here. As shown in Table 2, while increasing the guidance strength (γ) improves the HPS v2 win rate vs. ComfyGen, it significantly impacts the structural diversity of the generated workflows. At γ = 1.5, our method maintains the uniqueness of generated flows. However, as γ increases, we observe a dramatic reduction in the uniqueness ratio to just 8%. Notably, all our FlowRL variants maintain near-zero overlap with the training data (0-1% "exists in data" vs. ComfyGen's 94%), and the NLD ratio actually improves with guidance (from 0.6 without CFG to 0.75 at γ = 2). This pattern suggests that stronger guidance pushes the model to consistently generate a smaller subset of high-performing workflows, effectively concentrating probability mass on patterns that maximize reward while reducing exploration of the solution space. Conceptually, this mirrors observations in image generation with CFG, where higher guidance strengths produce higher-quality but less diverse output.

Method | unique ratio (%) | exists in data (%) | NLD ratio | HPSv2 win rate vs. ComfyGen
ComfyGen | 7% | 94% | 0 | -
FlowRL (w/o CFG) | 41% | 1% | 0.6 | 59%
FlowRL + CFG (γ = 1.5) | 41% | 0% | 0.74 | 60%
FlowRL + CFG (γ = 2) | 8% | 0% | 0.75 | 63%

Table 2: Comparison of originality of flow generation models.
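For concreteness, a minimal sketch of the NLD-based novelty measure reported in Table 2 is shown below. The exact normalization used by the authors is not fully specified, so the per-string normalization (by the longer string's length) and the aggregation over nearest neighbours are assumptions; the function names are ours.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance [32], computed with two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def nld(a: str, b: str) -> float:
    """Levenshtein distance normalized by the longer string's length (assumed)."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

def novelty_ratio(generated, training):
    """Mean nearest-neighbour NLD of generated flows, normalized by the mean
    pairwise NLD among training flows (assumed normalization).
    Requires at least two training flows."""
    gen_to_train = sum(min(nld(g, t) for t in training) for g in generated) / len(generated)
    train_pairs = [nld(x, y) for i, x in enumerate(training) for y in training[i + 1:]]
    return gen_to_train / (sum(train_pairs) / len(train_pairs))
```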
4.2 Ablation study

To quantify the impact of individual components in FlowRL, we conducted an ablation study comparing variants with and without our key improvements. We evaluated the following modifications: (1) removing the component-aware reward model, (2) removing the uncertainty ensemble cutoff, (3) varying the number of BERT models in our reward ensemble, (4) dropping the SFT step (stage 1), and (5) dropping the GRPO-tuning step. For (5), we instead use the stage-1 model to sample five flows per prompt, and use our reward ensemble to score them in relation to the prompt. Then, we generate an image with the highest-scoring flow. Finally, to ensure that our benefits are not grounded in the novel encoding scheme, we also evaluate a baseline ComfyGen [16] model trained on this new representation. We compare all scenarios against both the original ComfyGen and against our full model, using HPSv2 scores on the CivitAI prompt set. The errors reported are the 1σ Wald interval.

Variant | w/o prefix reward | w/o reward cutoff | Ensemble of 1 BERT | Ensemble of 3 BERTs | Ensemble of 5 BERTs | ComfyGen (+ encoded) | SFT only | w/o SFT stage
win ratio vs ComfyGen (%) | 55 ±2.22 | 57 ±2.22 | 55 ±2.22 | 56 ±2.21 | 56 ±2.22 | 37 ±2.16 | 29 ±2.02 | 0 (-)
win ratio vs ours (%) | 42 ±2.21 | 45 ±2.21 | 33 ±2.1 | 34 ±2.12 | 36 ±2.15 | 26 ±1.96 | 19 ±1.75 | 0 (-)

Table 3: The win ratio on the HPSv2 score for each component of our method compared to (1) the ComfyGen baseline and (2) the full ComfyGenRL model, using head-to-head comparisons.

The results are presented in Table 3. These demonstrate the vital contribution of each component to overall performance. The full model consistently outperforms all ablations, with particularly significant drops observed when removing the SFT stage entirely (0% win rate against ComfyGen and our full model). This emphasizes the critical nature of proper initialization before applying reinforcement learning methods. Looking at specific components, the "prefix reward" proves the most beneficial, showing the importance of assigning more granular rewards. The "ComfyGen (+encoded)" variant, which uses our encoding scheme but lacks reinforcement learning, achieves only a 37% win rate against the original ComfyGen, highlighting that our encoding improvements work synergistically with the GRPO training approach.

5 Discussion

This paper presents a novel approach for fine-tuning LLMs using a combination of supervised learning on flow data, surrogate reward modeling, and uncertainty-aware reinforcement learning. Our method addresses several key challenges in LLM fine-tuning, including reward hacking, distribution shifts, and training efficiency. The results demonstrate that our approach outperforms existing baselines across multiple metrics. Importantly, compared to prior workflow generation work, our approach demonstrates greater output diversity and successfully generalizes to novel flows that did not exist in the training data.

Although it improves on the current state of the art in multiple aspects, our approach still shares many of the limitations of prior approaches. First, it remains focused on text-to-image workflows, with no support for editing tasks or video modules. Second, introducing new workflow components to the LLM would require retraining our entire stack. In the future, we hope to explore more efficient ways of adapting to novel models or blocks.

By enabling reliable and diverse automated workflow generation, our work advances generative AI systems that adapt to human preferences. We hope it will help foster more collaborative innovation
by streamlining the integration of independently trained, specialized modules. 9 References [1]Google DeepMind AlphaCode Team. Alphacode 2 technical report. https://storage. googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf , 2024. [2]Halil Beglerovic, Michael Stolz, and Martin Horn. Testing of autonomous vehicles using surrogate models and stochastic optimization. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC) , pages 1–6. IEEE, 2017. [3]James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn. openai. com/papers/dall-e-3. pdf , 2(3):8, 2023. [4]Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. In The Twelfth International Conference on Learning Representations , 2024. [5]Yujie Cao, Azhan Abdul Aziz, Wan Nur Rukiah Mohd Arshard, Chang Xu, Muhamad Abdul Aziz Bin Ab Gani, and Issarezal Bin Ismail. Ai 2d-3d generation in architectural design: A divine hand or a pandora’s box. In 2024 IEEE 22nd Student Conference on Research and Development (SCOReD) , pages 203–208. IEEE, 2024. [6]Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. In The Twelfth International Conference on Learning Representations , 2024. [7]Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. Reward model ensembles help mitigate overoptimization. arXiv preprint arXiv:2310.02743 , 2023. [8]Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. Safe rlhf: Safe reinforcement learning from human feedback. arXiv preprint arXiv:2310.12773 , 2023. [9]Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, et al. Emu: Enhancing image generation models using photogenic needles in a haystack. arXiv preprint arXiv:2309.15807 , 2023. [10] Daniel, Michael, and the Unsloth Community. Unsloth: Fast, memory-efficient llm fine- tuning library. https://github.com/unslothai/unsloth , 2024. Version 2.0. Accessed: 2025-05-20. [11] Fei Deng, Qifei Wang, Wei Wei, Tingbo Hou, and Matthias Grundmann. Prdp: Proximal reward difference prediction for large-scale reward finetuning of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 7423–7433, 2024. [12] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems , 34:8780–8794, 2021. [13] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow trans- formers for high-resolution image synthesis. In Forty-first international conference on machine learning . [14] Luca Eyring, Shyamgopal Karthik, Karsten Roth, Alexey Dosovitskiy, and Zeynep Akata. Reno: Enhancing one-step text-to-image models through reward-based noise optimization. arXiv preprint arxiv:2406.04312 , 2024. [15] Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems , 36, 2024. 
10 [16] Rinon Gal, Adi Haviv, Yuval Alaluf, Amit H Bermano, Daniel Cohen-Or, and Gal Chechik. Comfygen: Prompt-adaptive workflows for text-to-image generation. arXiv preprint arXiv:2410.01731 , 2024. [17] Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward
model overoptimization. InInternational Conference on Machine Learning , pages 10835–10866. PMLR, 2023. [18] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems , 36, 2024. [19] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. [20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications , 2021. [21] Susung Hong, Gyuseong Lee, Wooseok Jang, and Seungryong Kim. Improving sample quality of diffusion models using self-attention guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 7462–7471, 2023. [22] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. ArXiv , abs/2106.09685, 2021. [23] Oucheng Huang, Yuhang Ma, Zeng Zhao, Mingrui Wu, Jiayi Ji, Rongsheng Zhang, Zhipeng Hu, Xiaoshuai Sun, and Rongrong Ji. Comfygpt: A self-optimizing multi-agent system for comprehensive comfyui workflow generation, 2025. [24] Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. Ai alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852 , 2023. [25] Keyur Joshi, Chiao Hsieh, Sayan Mitra, and Sasa Misailovic. Gas: Generating fast and accurate surrogate models for autonomous vehicle systems. arXiv preprint arXiv:2208.02232 , 2022. [26] Maria Kalweit, Gabriel Kalweit, Moritz Werling, and Joschka Boedecker. Deep surrogate q- learning for autonomous driving. In 2022 International Conference on Robotics and Automation (ICRA) , pages 1578–1584. IEEE, 2022. [27] Tero Karras, Miika Aittala, Tuomas Kynkäänniemi, Jaakko Lehtinen, Timo Aila, and Samuli Laine. Guiding a diffusion model with a bad version of itself. Advances in Neural Information Processing Systems , 37:52996–53021, 2024. [28] Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T Joshi, Hanna Moazam, et al. Dspy: Compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714 , 2023. [29] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Thirty-seventh Conference on Neural Information Processing Systems , 2023. [30] Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems , 35:21314–21328, 2022. [31] Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192 , 2023. [32] Vladimir I Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady , 10(8):707–710, 1966. 11 [33] Zeyuan Li, Yangfan He, Lewei He, Jianhui Wang, Tianyu Shi, Bin Lei, Yuchen Li, and Qiuwu Chen. 
Falcon: Feedback-driven adaptive long/short-term memory reinforced coding optimization system. arXiv preprint arXiv:2410.21349 , 2024. [34] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao,
Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical rea- soning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583 , 2023. [35] Yang Luo, Yiheng Zhang, Zhaofan Qiu, Ting Yao, Zhineng Chen, Yu-Gang Jiang, and Tao Mei. Freeenhance: Tuning-free image enhancement via content-consistent noising-and-denoising process. arXiv preprint arXiv:2409.07451 , 2024. [36] Youssef Mroueh. Reinforcement learning with verifiable rewards: Grpo’s effective loss, dynam- ics, and success amplification. arXiv preprint arXiv:2503.06639 , 2025. [37] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 , 2021. [38] Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, et al. Can generalist foundation models outcompete special-purpose tuning? case study in medicine. arXiv preprint arXiv:2311.16452 , 2023. [39] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems , 35:27730–27744, 2022. [40] Iván J Pérez-Colado, Manuel Freire-Morán, Antonio Calvo-Morata, Víctor M Pérez-Colado, and Baltasar Fernández-Manjón. Ai asyet another tool in undergraduate student projects: Preliminary results. In 2024 IEEE Global Engineering Education Conference (EDUCON) , pages 1–7. IEEE, 2024. [41] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations , 2024. [42] Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text- to-image diffusion models with reward backpropagation. arXiv preprint arXiv:2310.03739 , 2023. [43] Zipeng Qi, Lichen Bai, Haoyi Xiong, et al. Not all noises are created equally: Diffusion noise selection and optimization. arXiv preprint arXiv:2407.14041 , 2024. [44] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 , 2022. [45] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High- resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022 , pages 10674–10685. IEEE, 2022. [46] S Sane. Hybrid group relative policy optimization: A multi-sample approach to enhancing policy optimization. arxiv, 2025. [47] Eyal Segalis, Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. A picture is worth a thousand words: Principled recaptioning improves image generation. arXiv preprint arXiv:2310.16656 , 2023. [48] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. 12 [49] Chenyang Si, Ziqi Huang, Yuming Jiang, and Ziwei Liu. Freeu: Free lunch in diffusion u-net. 
InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 4733–4743, 2024. [50] Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. A long way to go:
Investigating length correlations in rlhf. arXiv preprint arXiv:2310.03716 , 2023. [51] Dominik Sobania, Martin Briesch, and Franz Rothlauf. Comfygi: Automatic improvement of image generation workflows, 2024. [52] Keiichiro Tashima, Hirohisa Aman, Sousuke Amasaki, Tomoyuki Yokogawa, and Minoru Kawa- hara. Fault-prone java method analysis focusing on pair of local variables with confusing names. In2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) , pages 154–158. IEEE, 2018. [53] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature , 2024. [54] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275 , 2022. [55] Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 8228–8238, 2024. [56] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935 , 2023. [57] Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663 , 2024. [58] Jiaxin Wen, Ruiqi Zhong, Akbir Khan, Ethan Perez, Jacob Steinhardt, Minlie Huang, Samuel R Bowman, He He, and Shi Feng. Language models learn to mislead humans via rlhf. arXiv preprint arXiv:2409.12822 , 2024. [59] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341 , 2023. [60] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems , 36, 2024. [61] Xiangyuan Xue, Zeyu Lu, Di Huang, Zidong Wang, Wanli Ouyang, and Lei Bai. Comfy- bench: Benchmarking llm-based agents in comfyui for autonomously designing collaborative ai systems, 2024. [62] Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, and Bin Cui. Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal llms. In International Conference on Machine Learning , 2024. [63] Honglin Yao. Improving performance of face generation for specific individuals based on lora. In 2024 International Conference on Electronics and Devices, Computational Science (ICEDCS) , pages 77–80. IEEE, 2024. [64] Zhengqing Yuan, Ruoxi Chen, Zhaoxu Li, Haolong Jia, Lifang He, Chi Wang, and Lichao Sun. Mora: Enabling generalist video generation via a multi-agent framework. arXiv preprint arXiv:2403.13248 , 2024. 13 [65] Yuanzhao Zhai, Han Zhang, Yu Lei, Yue Yu, Kele Xu, Dawei Feng, Bo Ding, and Huaimin Wang. Uncertainty-penalized reinforcement learning from human feedback with diverse reward lora ensembles. 
arXiv preprint arXiv:2401.00243 , 2023. [66] Yinan Zhang,
Eric Tzeng, Yilun Du, and Dmitry Kislyuk. Large-scale reinforcement learning for diffusion models. arXiv preprint arXiv:2401.12244 , 2024. [67] Wang Zhenyu, Li Aoxue, Li Zhenguo, and Liu Xihui. Genartist: Multimodal llm as an agent for unified image generation and editing. arXiv preprint arXiv:2407.05600 , 2024. [68] Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, and Jürgen Schmidhuber. Gptswarm: Language agents as optimizable graphs. In Forty-first International Conference on Machine Learning , 2024. 14 A Appendix A.1 Broader impact statement Our work offers a new path to improve text-to-image generation, but this improvement is not without possible social impacts. Text-to-image models can be used to create harmful or misleading content, and improving their output can increase this risk. Moreover, our work relies on the abundance of community-created, fine-tuned specialized models and adapters. These are rarely developed with safety in mind, and do not typically undergo red team assessments. Hence, they may increase the risk of the user generating biased or unsafe content. However, this can be mitigated by carefully curating the generative models seen during training, or by black-listing specific models in the output flow strings. Future work may be able to further refine the reward model used at training to also align it with safety, for example by reducing the score for content deemed unsafe by a detector. A.2 ComfyUI Overview ComfyUI is a popular ( 77,300stars on GitHub at the time of this writing), open-source workflow engine designed for flexible and extensible automation of generative AI tasks. Its node-based interface allows users to visually construct and execute complex processing pipelines, with users across the community often implementing and sharing new nodes to accommodate the changing landscape of generative tools. The pipelines constructed in ComfyUI can be exported to a JSON format, which we then map to a more compact representation and use for both our training data and our LLM output representation. To run our generated workflows, we convert them back to the JSON format and run them through the ComfyUI API. A dedicated user could also load these workflows through the UI and further manually refine them. A.3 Additional Qualitative results Here, we give more qualitative examples of FlowRL generations. Figures 6, 7 and 8 provide additional generations of CivitAI prompts using FlowRL. We give a detailed list of the relevant prompt (ordered by appearance order, from top-left to bottom right) We provide additional qualitative comparisons between FlowRL and the baselines in Figure 9 for both CivitAI prompts and GenEval prompts. In addition in figure 10 we give a qualitative comparison between FlowRL with and w/o the usage of the dual model guidance mechanism (CFG). List of prompts for example generations 1."Amazing detailed photography of a cute adorable samurai kitten holding Katana with 2 paws, Cherry Blossom Tree petals floating in air, high resolution, piercing eyes, lifelike fur, Anti-Aliasing, FXAA, De-Noise, Post-Production, SFX, insanely detailed & intricate, hypermaximalist, elegant, ornate, hyper realistic, super detailed, noir coloration, serene, 16k resolution, full body" 2."masterpiece, best quality, high quality, intricate, absurdres, very aesthetic, no humans, landscape, outdoors, mountain tops, wind, windy,
wind lines, clouds, above clouds, cliff, wind magic, aurora, ultra wide angle shot, cinematic style, highly detailed, extremely detailed, sharp detail, majestic, shallow depth of field, movie still, soft light, circular polarizer, colorful, wallpaper, professional illustration, anime" 3."pixar style of turtle, as a pixar character, tinny cute, luminous, wearing hawaiian hat, at the sea shore, tropical beach, smile, high detailed, photorealistic, 8k" 4. "Medieval German castle, surrounded by mountains, high fantasy, epic, digital art." 5."style of Edvard Munch, Piercing, sagacious eyes, mirage-like, the Sandswept dreamdweller, a trickster of dunes, clad in a wind-whispered turban, eternally smirking, sandswaggling over a dune-freckled miragepath in an ancient zephyr-twisted cactidle wilderness of towering 15 dustfrond phantasmagorias, paying no heed to the sun-scorched skyripples above, Arid, Sand-whirled, Mirage, Cacti, Mystical Desert, oasis illusions. Edvard Munch style" 6."full body, Fat cats at Elrond’s council from the movie Lord of the Rings, fluffy paws, background action-packed" 7."detailed, vector art, thick lines, oil painting, vibrant, colorful, candy pink, scarlet red, orange, smooth coloring, nature, landscape, stone pillars, long wild trees, moody streaks sky, natural lighting, river, reflections, best composition, background" 8.a woman with red hair and a white shirt is shown in this painting style photo with a pink background, Charlie Bowater, stanley artgerm lau, a painting, fantasy art masterpiece, best quality, depth of field, backlighting, intricate details" 9. "cinematic shot of stone giant walking in lush forest, dappled sunlight, high resolution" 10."Majestic jagged rocky mountains, red mesas, wind eroded colorful rock formations, twigh- light, starry night, petrified forest national park, arizona, astrophotography" 11."Cubist inspiration, A landscape represented with planes and flat colors. The landscape could show a field, forest or city, and flat planes and colors could be used to create a sense of depth and perspective, surrealism, aesthetic, bold gorgeous colours, high definition, super clear resolution, iridescent watercolor ink, acid influence, fantastic view, crisp quality, complex background, medium: old film grain, tetradic colors, golden hour, rust style, vantablack aura, golden ratio, rule of thirds, cinematic lighting Dark realism and magical. 
Complementary poisonous colors with deep zoom Memphis style abstract bokeh background with deep zoom" 12."FrostedStyle Highly detailed Dynamic shot of a transparent frosted ruby reindeer, glowing with rage from within extremely detailed" 13."vertical symmetry, vntblk, movie poster art, blood moon, red moon, darkest night, stone- henge, low angle:famous artwork by caspar david friedrich and stephan martiniere, perfectly round scifi portal, ominous dark surreal and unique landscape with towering obelisks piercing the sky, glowing ornate lovecraftian artifact, jagged rock formations, night sky, mysterious, ethereal, deserted, dark corners, burgundy, anthrazit grey, crimson, sunset orange, yellow, teal:16, ultra detailed" 14."by Peter Holme III and Roger Dean and Vitaly Golovatyuk and Mark Lovett, cinematic, shallow depth of field" 15."grainy, extremely detailed, intricate detail, dynamic lighting, photorealistic, filmg, natural lighting, low light, cat, slime, red glowing eyes, :P, fluffy, hairy, fluff, glowing stripes, raining, wet, dark theme, open mouth, lot of teeth, abyss, lurking in shadow" 16."The art of Origami, Paper folding, Swan on a lake, Amazing colours, Intricate details, Painstaking Attention to Details, UHD" 17."amateur analog photo, The creature monster brown fur
Easter bunny character covered in yeast, evil, creepy, in dark forest, fine textures, high quality textures of materials, volumetric textures, natural textures" 18."In a wondrously gleaming futuristic realm composed entirely of ripe peaches, a towering palace made of glistening peach flesh and pitted stone stands as the focal point of the image. The palace’s walls are adorned with intricate carvings of peach vines and blossoms, while peach juice flows like streams through the city streets. This vivid and surreal painting cap- tures the ethereal beauty of a world where nature and architecture are seamlessly intertwined, every detail rendered with unparalleled precision and depth, making viewers feel as if they could reach out and touch the succulent fruit structures." 19."high-contrast palette, cinematic quality, fashion photography, chimp wearing a black suit with a black shirt with a black vest with a black necktie with black Rayban style sunglasses, natural skin texture, realistic skin texture, skin pores, skin oils" 20. "faistyle, retro artstyle, painting medium, lake, mountain, forest" 21."close up Portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal rust, elegant, sharp focus, photo by greg rutkowski, soft lighting, vibrant colors, masterpiece, streets, detailed face" 16 22."detailed ink, pen and ink, mail art, best quality, detailed epic ice transparent ethereal otherworldly ghost castle in the blue sky, clouds, smoke, fog, detailed landscape, ghost figures, lake, boat, green forest, detailed flying dragon at the sky, detailed scales, warm lights, glittering, Craola, Dan Mumford, Andy Kehoe, 2d, flat, art on a cracked paper, patchwork, stained glass, cute, adorable, fairytale, storybook detailed illustration, cinematic, ultra highly detailed, tiny details, beautiful details, mystical, luminism, vibrant colors, complex background" 23. "crystal scorpion" 24."the image portrays a tranquil scene of a boat floating gently on the water, surrounded by an expansive landscape. the moon, full and glowing with a warm, reddish orange hue, casts a mystical ambiance over the entire scene. its reflection shimmers off the surface of the water, adding to the serene atmosphere. in the distance, mountains loom under the moon’s soft glow, their peaks partially obscured by the low hanging clouds. they appear majestic yet gentle, as if watching over the peaceful night below. trees line the shore in the foreground, their silhouettes faintly visible against the darkening sky. this picturesque setting evokes a sense of calm and tranquility, inviting viewers to take a moment and appreciate the beauty of nature. it is a symphony of colors and shapes, each element working harmoniously together to create a visually captivating and emotionally soothing composition." 25."hyper detailed, elusive, exotic, angelic, luminescent, by James Gilleard and by Alice Pasquini, point-of-view shot, fisheyes view, sunbeams lighting" 26."A colossal majestic tiger, looms over a cavern’s silhouette, gazing intently at a small human figure with a stance of curiosity, the figure a silhouette against a backdrop bathed in the warm oranges and yellows of a sun, faces the tiger unafraid, with floating embers dancing around them both, scene of serene confrontation amidst the enveloping dusk" 27."illustration, solo, animal skull head, screaming, long split tongue, fangs, head closeup,
black leather coat, horror atmosphere, side view" 28."mysterious silhouette of woman from the enchanted pond, abstract art, by Minjae Lee, Carne Griffiths, Emily Kell, Geoffroy Thoorens, Aaron Horkey, Jordan Grimmer, Greg Rutkowski, extraordinary depth, masterpiece, surreal, geometric patterns, extremely detailed, bokeh, perfect balance, deep and thin edges, artistic photorealism, smoothness, excellent masterpiece by the head of rapid engineering, white background: 1.2" 29.""Struggling to breathe, like being held under water" beautiful inner light, deep shadows, extraordinary detail" 30. "a fantasy landscape at dawn covered in magical flowers" 31."vintage, shabby, morning, dawn, cozy world, Kruskamp, Monge, Kincaid, Potter, Dali, Bur- ton, oil, coal, provence, house by the sea, cozy and beautiful landscape, double composition, drama, tragedy, the core of magic" 32."fine art, oil painting, best quality, dark tales, illustration, each color adds depth, and the entire piece comes together to create a breathtaking spectacle of motion and tranquility., while the ball is adorned with an array of stripes in various hues. the figurine, while her right hand delicately holds a small, epic splash cover art in the van gogh style, starry sky, dan mumford, andy kehoe, 2d, flat, delightful, vintage, art on a cracked paper, patchwork, stained glass, fairytale, storybook detailed illustration, cinematic, ultra highly detailed, tiny details, beautiful details, mystical, luminism, vibrant colors, complex background" 33."Envision a breathtaking waterfall cascading into a crystal-clear pool surrounded by lush greenery. The pool is home to magical water creatures, including playful water sprites and elegant swans with feathers that shimmer in shades of silver and gold. Mist rises from the waterfall, creating rainbows in the sunlight. Curious frogs with iridescent skin leap from rock to rock, while dragonflies with jeweled wings flit above the water. The sanctuary is a hidden paradise, inviting all who enter to experience the tranquility and magic of this enchanting world" 34."futuristic building, surface from a alien planet, mountains in the background, sci fi, fantasy, space art, galaxy background, shotting stars, dynamic angle, intricate " 35."Image is a digital artwork featuring a futuristic samurai robot. The robot has a sleek, metallic body with intricate mechanical details and a predominantly black and silver color 17 scheme. It wears a large, red, conical hat and a matching red cape that flows behind it. The robot’s face is obscured by a mask, giving it a mysterious appearance. It holds a red and black katana in its right hand, ready for combat. The background is a gradient of dark grey, with a circular, smoky effect behind the robot, adding to the dramatic and intense atmosphere of the scene." 36."masterpiece, ASCII, 8k.absurdes, intricate, maximum resolution, hyper detailed, Mirage, DonMn1ghtm4reXL, glow, fog, obsidian armor with red ruby, details, hellgate london themed, demoniac armor, force huge demoniac wide wings, glowing wings, energy wings" 18 Figure 6: More qualitative generations using FlowRL 19 Figure 7: More qualitative generations using FlowRL 20 Figure 8: More qualitative generations using FlowRL 21 Figure 9: Additional qualitative comparisons on CivitAI prompts (top 3) and GenEval prompts (bottom 3) 22 Figure 10: Qualitative example on influence of CFG on the output image A.4 Tokenization Encoding
Method: We developed a systematic procedure to transform JSON-based workflow representations into a compact, encoded format. This process utilizes schema learning to ensure both accuracy and efficiency in data transformation.

Methodology: First, we infer a schema from a collection of workflow JSON files by iterating through each file and extracting the class types and field information for all utilized nodes. The resulting schema is stored for future encoding tasks. In the encoded representation, nodes are sorted and formatted to include their class types and input values, with explicit references to connected nodes. Each line in the encoded output corresponds to a node from the original JSON structure, providing a clear and organized mapping.

Incorporating Workflow-Specific Tokens: To more effectively capture the structure of ComfyUI workflows, we enhanced the base tokenizer by introducing custom tokens that represent key workflow elements such as node types, connections, and parameters. This enriched tokenization scheme helps the model better understand relationships between workflow components. Below, we provide examples of some of the custom tokens added to the tokenizer:
• ng Everywhere3
• AspectSize
• Automatic CFG
• BNK_AddCLIPSDXLRParams
• BNK_CLIPTextEncodeAdvanced
• BasicPipeToDetailerPipe
• Image Levels Adjustment
• Image Remove Background (rembg)
• CLIP Positive-Negative XL w/Text (WLSH)
• CLIP=
• CLIPLoader
• CLIPMergeSimple
• CLIPSetLastLayer
• CLIPTextEncode
• CLIPTextEncodeSDXL
• CLIPTextEncodeSDXLRefiner
• CLIP_NEGATIVE
• CONDITIONING=
• CR Apply LoRA Stack
• CR Apply Model Merge
• SDXL 1.0/animagineXLV31_v30.safetensors
• SDXL 1.0/crystalClearXL_ccxl.safetensors
• SDXL 1.0/dreamshaperXL_turboDpmppSDEKarras.safetensors
• SDXL 1.0/envyhyperdrivexl_v10.safetensors
• SDXL 1.0/faces_v1.safetensors
• SDXL 1.0/jibMixRealisticXL_v90BetterBodies.safetensors
• SDXL 1.0/juggernautXL_v9Rdphoto2Lightning.safetensors
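As an illustration of the encoding procedure described above, the sketch below flattens a ComfyUI API-style workflow JSON (nodes keyed by id, each with a class_type and an inputs dict, where a link to another node is a [node_id, output_index] pair) into one line per node. The line format and helper names are ours; the paper's actual encoding may differ in detail.

```python
import json

def encode_workflow(workflow_json: str) -> str:
    """Flatten a ComfyUI API-format workflow into one line per node.

    The exact line format used by the paper is not specified, so this
    layout is illustrative only.
    """
    nodes = json.loads(workflow_json)
    lines = []
    for node_id in sorted(nodes, key=int):            # deterministic node order
        node = nodes[node_id]
        parts = []
        for name, value in node.get("inputs", {}).items():
            if isinstance(value, list):               # link to another node's output
                parts.append(f"{name}=@{value[0]}:{value[1]}")
            else:                                     # literal parameter value
                parts.append(f"{name}={value}")
        lines.append(f"{node_id} {node['class_type']} " + " ".join(parts))
    return "\n".join(lines)

toy_flow = json.dumps({
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "SDXL 1.0/juggernautXL_v9Rdphoto2Lightning.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a castle at dawn", "clip": ["1", 1]}},
})
print(encode_workflow(toy_flow))
```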
A.5 User study

To evaluate our method against baselines, we conducted a user study using a structured survey. For the study, we randomly sampled 50 prompts and generated corresponding images with each baseline. From these, we filtered out results which contained unsafe content (e.g., nudity, violence), resulting in 7–11 comparison questions per baseline. These comparisons were aggregated into a survey where participants were shown a prompt and the outputs from FlowRL and one baseline, and asked to select their preferred image. We collected approximately 200 responses per baseline. Figure 11 provides an example of a question from our survey.

Figure 11: An example question from the user study.

A.6 Implementation details

A.6.1 SFT stage

We implement our model based on a pre-trained Meta Llama 3.1-8B [19]. We used the unsloth [10] library to fine-tune the model using LoRA [22]. The SFT stage was trained on a single NVIDIA H100 80GB HBM3 GPU for 10 hours.

LoRA Configuration: To enable parameter-efficient fine-tuning, we applied LoRA (Low-Rank Adaptation) to the model's attention and feed-forward layers. The LoRA rank was set to r = 16, with an alpha value of α = 16, and a dropout rate of 0.0. Target modules included "q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj", as well as "lm_head" and "embed_tokens", since we added new tokens to our vocabulary.

Prompt structure: During the supervised fine-tuning (SFT) stage, the LLM is provided with both the prompt and one of the encoded flows:

">>> Prompt: {p_i} >>> Flow: {f_i}"

In contrast, during the reinforcement learning (RL) fine-tuning
stage, only the prompt is given to the LLM, and it is tasked with generating one or more candidate flows. This setup encourages the model to learn to produce the most appropriate flow for each prompt:

">>> Prompt: {p_i} >>> Flow:"

A.6.2 Reward model training

For training the reward BERT model, we utilized "answerdotai/ModernBERT-base" [57] as the foundational architecture. Beyond its improved classification performance, we selected ModernBERT because it was trained on sequence lengths that match our expected prompt and encoded-flow format. We used the Adam optimizer with the default parameters and a learning rate of 8e-5. The maximum sequence length was set to 4096 tokens, with a batch size of 128 over 10 epochs. Fine-tuning was done on a single NVIDIA A100-SXM4-80GB for approximately 4 hours.

Dataset: Each data point consisted of the triplet (f_i, p_i, s_i): flow, prompt, and human-preference normalized score, and was presented to the model in this format: "[PROMPT] {p_i} [FLOW] {f_i}". The model was tasked with predicting the output score s_i for each pair, using an MSE loss.

A.6.3 GRPO Fine-Tuning Hyperparameters

Below, we detail the key hyperparameters and configurations used in the GRPO (Group Relative Policy Optimization) fine-tuning stage:

LoRA Configuration: To enable parameter-efficient fine-tuning, we applied LoRA (Low-Rank Adaptation) to the model's attention and feed-forward layers. The LoRA rank was set to r = 16, with an alpha value of α = 16, and a dropout rate of 0.0. Target modules included "q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj", following best practices for large language model adaptation. Note that this step does not optimize the "lm_head" or "embed_tokens" layers, as it aims to further tune the SFT model, which already knows the flow vocabulary.

Optimization Settings: We used the Adam optimizer with a learning rate of 5e-6, β1 = 0.9, β2 = 0.99, and a weight decay of 0.1. Training was performed with a batch size of 16 per device.

GRPO-Specific Parameters: We used a group size of 4 (number of generations per prompt for group-based reward calculation), a clipping coefficient of 0.2, a max grad norm of 0.5, and a KL-regularization coefficient of 0.2. We also used a generation temperature of 0.9 and a maximum of 500 output tokens.

Training Procedure: Fine-tuning was conducted for 2 epochs over the CivitAI prompt training set. We trained on a single NVIDIA A100-SXM4-80GB node (8 GPUs) for approximately 10 hours. We used an ensemble of 7 BERT reward models and used their mean as the surrogate reward. We set the uncertainty threshold to 0.08 and set the "uncertain reward value" to 0. For the prefix-reward mechanism, we used 5 different BERT models.
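For readers unfamiliar with the group-based reward calculation mentioned above, the following sketch shows the group-relative advantage computation at the core of GRPO [48], using a group size of 4 as in the configuration above. It is a simplified illustration, not the training code; the epsilon guard and the toy reward values are assumptions.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages as used in GRPO-style training.

    rewards: array of shape (num_prompts, group_size) holding the surrogate
             reward of each sampled flow; here group_size = 4 generations
             per prompt.
    Each reward is normalized by the mean and std of its own group, so a flow
    is only credited for being better than its siblings for the same prompt.
    """
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)   # eps guards degenerate groups

# Hypothetical ensemble rewards for 2 prompts x 4 sampled flows each.
r = np.array([[0.55, 0.61, 0.48, 0.59],
              [0.30, 0.30, 0.42, 0.35]])
print(group_relative_advantages(r))
```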
Robust Hypothesis Generation: LLM-Automated Language Bias for Inductive Logic Programming Yang Yang∗, Jiemin Wu∗, Yutao Yue† HKUST(GZ) {frankyangy, jieminwu, yutaoyue}@hkust-gz.edu.cn Abstract Automating robust hypothesis generation in open environments is pivotal for AI cognition. We introduce a novel framework integrating a multi-agent system, pow- ered by Large Language Models (LLMs), with Inductive Logic Programming (ILP). Our system’s LLM agents autonomously define a structured symbolic vocabulary (predicates) and relational templates , i.e., language bias directly from raw textual data. This automated symbolic grounding (the construction of the language bias), traditionally an expert-driven bottleneck for ILP, then guides the transformation of text into facts for an ILP solver, which inductively learns interpretable rules. This approach overcomes traditional ILP’s reliance on predefined symbolic struc- tures and the noise-sensitivity of pure LLM methods. Extensive experiments in diverse, challenging scenarios validate superior performance, paving a new path for automated, explainable, and verifiable hypothesis generation. 1 Introduction Hypothesis generation—the process of forming systematic explanations from fragmented observations and iteratively validating them—plays a central role in advancing artificial intelligence [ 10]. This "generate-select" paradigm is fundamental not only in scientific discovery but also in practical AI tasks, such as diagnosing the root cause of software defects from error logs [ 27]. The capacity to continuously produce and verify hypotheses is thus crucial for building robust AI systems, especially when deployed in open environments or high-stakes domains like medical diagnosis and financial decision-making [12; 22]. Inductive Logic Programming (ILP), a traditional method for hypothesis generation, discovers knowl- edge by searching rule sets within expert-defined predicate spaces [ 3]. This reliance on expert-crafted predicates, however, poses significant challenges in complex domains. For instance, in areas like protein interaction research, specialists must meticulously predefine predicates capturing crucial domain-specific features (e.g., atomic distances or amino acid properties) and manually encode exper- imental data into symbolic facts to enable the derivation of reliable binding site identification rules [1].Meanwhile, current ILP research has focused primarily on optimizing rule search algorithms[4], rarely exploring the automatic construction of predicate spaces, which severely limits its scalability in open-domain tasks. The emergence of Large Language Models (LLMs) offers a new pathway for hypothesis generation. Their end-to-end learning paradigm can directly produce candidate hypotheses from unstructured texts (e.g., equipment failure logs or experimental reports) in various forms, including natural language [28], logical expressions [14], or even code [20]. This openness bypasses the traditional reliance on expert-crafted predicates. However, LLMs still face critical challenges: firstly, significant sensitivity ∗Equal contribution. †Corresponding author. Preprint. Under review.arXiv:2505.21486v1 [cs.AI] 27 May 2025 to noise—for instance, hypothesis accuracy can plummet from 71.2% to 50.9% with a 12.5% increase in input data noise [ 20]. 
Secondly, while an LLM might generate numerous plausible individual hypotheses [ 28], its typically heuristic generation process makes it difficult to assemble these into a compact, internally consistent, and collectively optimal set of rules required to comprehensively describe complex phenomena—a stark contrast to ILP systems that often learn such interdependent rule programs [17; 15]. To harness the semantic strengths of LLMs while retaining the rigorous, verifiable outputs of ILP, we introduce a collaborative
reasoning framework, shown in Figure 1. This framework first employs a multi-agent LLM system to automate the generation of a structured language bias, particularly the predicate system, directly from raw text. Subsequently, this LLM-generated bias guides the transformation of large-scale textual data into symbolic facts. This structured knowledge then empowers an ILP engine to conduct a robust, constrained search, yielding a globally coherent and optimal set of rules. Unlike prior approaches that often evaluate under idealized data conditions (e.g., assuming zero label noise or drawing conclusions such as ’8 examples are generally sufficient to describe the pattern’[ 20]), we provide a thorough analysis across more challenging data scenarios, substantiating our method’s superior efficacy and robustness. While related explorations combine LLMs with symbolic reasoning, notably for formal verification [18;9;8], they often employ LLMs as mere translators to predefined logical forms. Our work, in contrast, pioneers LLM-driven automated symbolic template generation . This process dynamically creates the entire logical scaffolding that constitutes the Inductive Logic Programming (ILP) language bias—including the core predicate system defining concepts and relations. Traditionally, defining this bias to guide ILP’s search and ensure its effectiveness requires extensive expert input. Automating the creation of this guiding structure thus unlocks ILP’s potential to dynamically adapt to novel problem domains and discover verifiable hypotheses where manual bias definition was previously a prohibitive barrier. The main contributions of this study are as follows: 1)We introduce a novel multi-agent framework using LLMs to automate ILP language bias (predicate system) construction. This pioneers an end-to-end pipeline from unstructured text to verifiable hypotheses, advancing explainable hybrid AI. 2)Unlike prior work limited to idealized data, we systematically evaluate LLM-based induction across challenging data dimensions (e.g., noise, imbalance, complexity), enabling a more thorough and realistic capability assessment. 3)Extensive experiments demonstrate our framework’s superior accuracy, robustness against data perturbations, and generalization across LLMs, significantly outperforming existing baselines. 2 Preliminaries In First-Order Logic (FOL), predicates are used to describe objects or the relationships between ob- jects and can be classified according to the number of arguments, such as unary or binary. For example, the unary predicate isRed(x) denotes "x is red," and the binary predicate parent(x, y) denotes "x is a parent of y." Instantiating the arguments of a predicate to constants (e.g., parent(Alice, Bob) ) yields an atom ; if this atom is considered to be true, it is called a fact. Typically, the known in- formation in a domain consists of several facts, which are regarded as directly usable background knowledge . Building on this foundation, more general inference rules can be expressed using a Horn clause , which is typically written in the form H←B1∧B2∧. . .∧Bk, where His the rule’s "head" and each Biconstitutes the rule’s "body". Its semantics is that if all atoms Biin the body are true, then the head Hmust also be true. For example, to express "if xis a parent of yandyis a parent of z, thenxis an ancestor of z", one can write ancestor(x, z) ←parent(x, y) ∧parent(y, z) . Here, ancestor(x, z) serves as the rule’s head, while parent(x, y) andparent(y,
z) form the rule's body. Multiple such Horn clauses can be assembled into a rule set, typically exhibiting an "OR-of-ANDs" structure. In such a set, if all preconditions (the body) of any individual rule are satisfied (based on background knowledge), its conclusion (the head) is considered true.

3 Related Work

3.1 Inductive Logic Programming

Inductive Logic Programming (ILP) automatically learns interpretable logic programs from positive and negative examples of a target predicate, along with background knowledge, by searching for rule sets in the First-Order Logic space. Broadly, these approaches can be categorized into heuristic methods (e.g., FOIL [21], Progol [16], Aleph [24]), constraint-solving techniques (e.g., ILASP [11], Popper [4], MAXSYNTH [7]), and differentiable approaches (e.g., [5; 6; 23]). However, all these approaches are essentially search algorithms that depend on an expert-defined language bias to define the search space. This language bias consists of the set of permissible predicates, the structural forms that rules can take, and other constraints that collectively restrict the universe of possible hypotheses the system can consider.

Unlike prior research focused on refining search within fixed language biases, our work pioneers LLM-driven automation of the language bias itself. This encompasses formulating the predicate system, key structural constraints, and other declarative elements, all traditionally demanding extensive expert input. Our primary objective is to eliminate manual language bias engineering, thus enabling more adaptive integration and broader applicability of established ILP algorithms, particularly in open-domain tasks.

3.2 Hypothesis Generation Based on LLMs

Large Language Models (LLMs) have recently garnered significant attention for hypothesis generation. For instance, ChatRule [14] employs LLMs to directly derive logical rules from knowledge graphs for explainable reasoning, while Moose-Chem [26] utilizes multi-turn inspiration selection and evolutionary algorithms to propose novel molecular hypotheses. Other notable approaches include HypoGeniC [28], which uses a multi-armed bandit-like mechanism for iterative rule generation and filtering; Iterative Hypothesis Refinement [20], guiding LLMs through a "propose-select-refine" process for concept-level rule abstraction; and HtT [29], which compiles LLM-generated candidate rules into an executable library for inference.

While generalizable, LLMs' noise sensitivity and heuristic hypothesis generation often yield suboptimal or incoherent rule sets. Our approach counters this: LLMs first auto-construct a predicate system from unstructured data, which then allows Inductive Logic Programming (ILP) methods using precise constrained solving to produce a globally coherent and optimal rule set. This hybrid methodology yields flexible, robust, and interpretable hypotheses, particularly effective in complex, noisy, and open-domain settings.

4 Methodology

Our approach contains three core stages: Predicate System Construction, Symbolic Knowledge Encoding, and ILP Learning. The overall framework is shown in Figure 1.

4.1 Predicate System Construction

The construction of the predicate system is a cornerstone of our framework, directly influencing the quality and efficiency of subsequent symbolic encoding and Inductive Logic Programming (ILP) learning. This process is driven by a multi-agent subsystem, primarily comprising an Actor agent and a Critic agent.
The Actor is responsible for initially designing and iteratively refining the predicate system based on raw text samples, while the Critic meticulously evaluates the Actor’s
proposals and provides guiding feedback. Through multiple rounds of collaborative interaction between the Actor and Critic, the system automatically generates a predicate system that is highly relevant to the task, structurally sound, and compliant with the requirements of an ILP solver.

Figure 1: A simple illustration of our method and the difference between our method and other LLM-based hypothesis generation methods.

Actor Agent: The Actor's role is to design and optimize the predicate system, either from scratch or based on feedback from the previous iteration. It receives a small subset of training samples and is guided by few-shot examples of predicate abstraction from other general tasks, alongside predefined predicate design principles and constraints. Based on these inputs, the Actor generates a complete definition of the predicate system, encompassing: Core Predicates: Typically includes one target Head Predicate, representing the core concept to be learned and predicted (e.g., suitable_for_business/1), and multiple Body Predicates describing sample attributes and relationships (e.g., formal_shoes/1, leather/1, etc.). The number of body predicates is flexible and determined by task requirements. Predicate Formalism: For each predicate, its arity (number of arguments) and the type of each argument are explicitly defined (e.g., type(formal_shoes, (shoes,))). Declarations and Constraints: Other meta-information required by the ILP solver is also defined, such as the input/output modes for predicate arguments (e.g., direction(formal_shoes, (in,))) and global constraints like the maximum number of variables (max_vars) or body length (max_body) in a clause. The Actor outputs the generated predicate system in a textual format (typically Prolog-compatible) for evaluation by the Critic.

Critic Agent: The Critic is responsible for comprehensively evaluating the predicate system generated by the Actor, primarily from semantic and syntactic perspectives. Its evaluation includes: Semantic Evaluation: The Critic leverages the understanding capabilities of Large Language Models (LLMs) to analyze the semantic appropriateness of the predicate system. This includes checking for completeness (i.e., whether crucial concepts are missed), redundancy (i.e., whether semantically similar predicates exist), and relevance to the target task. Syntactic and Constraint Validation: The Critic also validates the predicate system against predefined syntactic rules and constraints using programmatic logic, ensuring its structural compliance with ILP solver specifications. For instance, it verifies that each argument type in the head predicate is covered by at least one body predicate and checks the correctness of arity and type declarations. Based on these checks, the Critic generates an evaluation summary and determines whether the current predicate system is satisfactory. If not, the identified issues in the summary are
fed back to the Actor for the next round of refinement. The predicate system is finalized and used for subsequent symbolic knowledge encoding and ILP learning only when it passes all checks or when a predefined maximum number of iterations (set to five in our experiments) is reached.

4.2 Symbolic Knowledge Encoding

Following predicate system finalization (Section 4.1), our Translator agent transforms natural language samples into Prolog facts. It parses each sample, mapping textual features to the established predicates. This translation proceeds in batches, circumventing LLM context limitations and the need for simultaneous full-dataset access common in other approaches. To bolster stability, translation failures trigger a retry (at most two attempts). This systematic, batch-oriented encoding bridges natural language semantics with formal logic, enabling scalable conversion to a symbolic representation for ILP.

For example, consider the SHOES dataset. The LLM-derived predicate system might define the head predicate as type(suitable_for_business,(shoes,)). and various body predicates such as type(formal_shoes,(shoes,))., type(black,(shoes,))., type(leather,(shoes,))., type(expensive,(shoes,))., etc. Given a textual sample like "Shoe_001 is a black formal shoe made of leather, expensive in price and very comfortable to wear. This shoe is suitable for business.", the symbolic knowledge encoding module would process this to generate Prolog facts. The background knowledge facts might be: black(shoe_001)., formal_shoes(shoe_001)., leather(shoe_001)., expensive(shoe_001)., very_comfortable(shoe_001). The example fact indicating the target classification would be: pos(suitable_for_business(shoe_001)). This structured conversion ensures that all relevant information from the text is translated into a consistent logical format, forming the empirical basis for the subsequent ILP learning stage.

4.3 ILP Learning

Upon completion of the symbolic knowledge encoding (Section 4.2), the comprehensive set of structured Prolog facts, along with the LLM-generated predicate system (Section 4.1), is provided as input to an ILP solver. We employ MAXSYNTH [7], an advanced solver that applies the Minimum Description Length (MDL) principle to balance rule complexity with noise coverage. This enables it to find globally optimal or near-optimal rule sets even with imperfect data, achieving both search efficiency and robustness, particularly in the presence of label noise. To enhance stability, if ILP solving fails, the algorithm restarts from predicate system design, with up to two attempts.

When successful, the ILP solver outputs a set of Horn clauses as the final learned hypothesis. These are directly interpretable logical formulas explaining the target Head Predicate. For instance, on the SHOES dataset, a learned rule set might include: Rule 1: suitable_for_business(A) ← expensive(A) ∧ formal_shoes(A); and Rule 2: suitable_for_business(A) ← synthetic_leather(A) ∧ very_comfortable(A). This integration of an advanced ILP solver enhances the applicability and robustness of rule-based hypothesis generation in open-domain tasks.
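To make the overall control flow concrete, the following minimal Python sketch strings the three stages together. Here actor_propose, critic_review, translate_batch, and run_maxsynth are hypothetical wrappers around the LLM prompts and the MAXSYNTH invocation (they are not part of any published API); the retry limits mirror the values stated above (five Actor-Critic iterations, two translation attempts per batch, two solver attempts).

def build_predicate_system(samples, max_iters=5):
    # Actor-Critic loop: refine the predicate system until the Critic accepts it
    # or the iteration budget is exhausted.
    predicate_system, feedback = None, None
    for _ in range(max_iters):
        predicate_system = actor_propose(samples, feedback)    # hypothetical LLM call
        report = critic_review(predicate_system)               # hypothetical LLM + syntax checks
        if report["satisfactory"]:
            break
        feedback = report["issues"]
    return predicate_system

def encode_samples(samples, predicate_system, batch_size=10, max_attempts=2):
    # Translator: convert natural-language samples into Prolog facts, batch by batch,
    # retrying a failed batch at most once.
    facts = []
    for start in range(0, len(samples), batch_size):
        batch = samples[start:start + batch_size]
        for attempt in range(max_attempts):
            try:
                facts.extend(translate_batch(batch, predicate_system))  # hypothetical LLM call
                break
            except Exception:
                if attempt == max_attempts - 1:
                    raise
    return facts

def induce_hypothesis(samples, max_restarts=2):
    # Full pipeline: predicate design, symbolic encoding, then ILP learning;
    # a solver failure restarts from predicate system design.
    for _ in range(max_restarts):
        predicate_system = build_predicate_system(samples)
        facts = encode_samples(samples, predicate_system)
        rules = run_maxsynth(predicate_system, facts)           # hypothetical solver wrapper
        if rules is not None:
            return rules
    return None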
5 Experiment Setup

5.1 Datasets and Baselines

Datasets: We consider two synthetic binary classification tasks: SHOES and ZENDO. SHOES is constructed by us to evaluate models' ability to determine the suitability of shoes for business occasions, with all attributes expressible as unary predicates (e.g., black(X), leather(X)). ZENDO is adapted from classic cognitive psychology experiments and is more challenging: it also involves binary predicates (e.g., contact(X, Y), has_piece(X, Y)), requiring models to reason about more complex logical relations
(see Appendix for details). For each task, we first specify a set of rules and generate corresponding logical facts to construct samples, which are then further converted into natural language form using templates.

Baselines: We consider two LLM-based inductive reasoning algorithms as baselines: HypoGeniC and Iterative Hypothesis Refinement (IHR). HypoGeniC generates hypotheses in natural language form and iteratively improves them using a bank of counterexamples. Iterative Hypothesis Refinement further enhances this process by generating, selecting, and refining hypotheses, and is also capable of producing executable code as candidate rules (see Appendix for details).

5.2 Experimental Variables

To systematically evaluate LLMs' hypothesis generation capabilities under diverse data conditions, we consider the following dataset-level variables:

Rule Num refers to the number of underlying logical rules guiding the generation of facts for each sample. For example, a Zendo dataset with two rules may require "a red object on the left" (rule 1) or "a green object adjacent to a blue object" (rule 2), with positive samples satisfying either rule. Our experiments include rule sets containing 1, 2, or 3 rules.

Template Num refers to the number of natural language templates used to describe the same logical fact. Each sample is randomly expressed using one of several candidate templates, potentially increasing difficulty for LLMs to abstract consistent semantics across diverse expressions. We evaluate conditions with 1, 2, or 3 candidate templates.

Sample Size refers to the total number of samples in the dataset. With smaller sample sizes, LLMs may struggle to capture underlying patterns, limiting generalization capabilities. We compare performance with 50, 100, and 200 samples.

Positive Ratio refers to the proportion of positive samples in the dataset. Lower positive ratios may cause models to overlook minority classes, resulting in poor coverage of positive cases. Our experiments include positive ratios of 20%, 30%, and 50%.

Noise Ratio refers to the probability of randomly flipped labels in the training set (test labels remain noise-free). Higher noise levels can mislead models, decreasing accuracy and stability. We evaluate noise levels of 0%, 10%, and 20%.

Table 1: Comparison of hypothesis generation performance on the Shoes and Zendo datasets. Accuracy (Acc, %) and F1 score (F1, %) are reported for each dataset and their average.

Method                          | Model             | Shoes Acc | Shoes F1 | Zendo Acc | Zendo F1 | Avg Acc | Avg F1
Iterative Hypothesis Refinement | GPT-4o            | 96.7      | 96.7     | 50.0      | 34.1     | 73.4    | 65.4
Iterative Hypothesis Refinement | Claude-3.7-sonnet | 98.3      | 98.3     | 60.0      | 45.1     | 79.2    | 71.7
Iterative Hypothesis Refinement | DeepSeek-V3       | 95.0      | 95.0     | 46.7      | 30.4     | 70.9    | 62.7
Iterative Hypothesis Refinement | Qwen3-32B         | -         | -        | -         | -        | -       | -
HypoGeniC                       | GPT-4o            | 51.7      | 49.9     | 73.3      | 71.8     | 62.5    | 60.85
HypoGeniC                       | Claude-3.7-sonnet | 75.0      | 74.3     | 68.3      | 66.6     | 71.65   | 70.45
HypoGeniC                       | DeepSeek-V3       | 70.0      | 67.9     | 70.0      | 69.1     | 70.0    | 68.5
HypoGeniC                       | Qwen3-32B         | 83.3      | 82.0     | 46.7      | 37.2     | 65.0    | 59.6
Ours                            | GPT-4o            | 87.9      | 87.9     | 76.7      | 75.4     | 82.3    | 81.7
Ours                            | Claude-3.7-sonnet | 88.3      | 88.1     | 81.3      | 81.4     | 84.8    | 84.8
Ours                            | DeepSeek-V3       | 88.3      | 88.1     | 81.3      | 81.4     | 84.8    | 84.8
Ours                            | Qwen3-32B         | 87.9      | 87.9     | 80.0      | 80.7     | 84.0    | 84.3

5.3 Implementation Details

To further evaluate the generality of each method, we consider the following language models in our experiments: GPT-4o [19], Claude-3.7-sonnet [2], DeepSeek-V3 [13], and Qwen3-32b [25]. The temperature parameter is set to 0 to reduce generation randomness. The dataset is split into
80% for training and 20% for testing. For each experiment, we perform three independent dataset generation processes and report the average results on the test set across the three runs.

6 Experiments and Results

Based on the experimental setup, we evaluate our method by addressing the following research questions:
RQ1: How sensitive is our method to the choice of LLMs, and does it show generality?
RQ2: Can our method maintain stable performance under various data scenarios?
RQ3: How does the overall performance of our method compare to existing advanced baselines?

6.1 Main Experiments

In the main experiments, we fix five data generation variables for fair comparison: rule number (2), noise ratio (10%), template number (2), sample size (100), and positive ratio (50%). Each method is evaluated using four mainstream LLMs (GPT-4o, Claude-3.7-sonnet, DeepSeek-V3, and Qwen3-32b) to systematically assess performance differences across models.

6.1.1 Comparative Analysis of Methods

Table 1 presents results across datasets and models. Our method demonstrates superior performance, particularly on the complex ZENDO task. HypoGeniC shows adaptability but lacks consistency, with performance heavily dependent on the underlying LLM capabilities. IHR excels on simple tasks but struggles with complex reasoning challenges and fails completely with Qwen3-32B. Our method's key advantage is its model-agnostic design, maintaining consistent performance by delegating logical reasoning to an ILP solver while using LLMs for language understanding.

6.1.2 Dataset Complexity Analysis

The results across datasets reveal important differences in method capabilities. IHR achieves near-perfect performance on the simpler SHOES task (96-98% accuracy) but experiences a dramatic drop of almost 50% on the more complex ZENDO task, indicating difficulties with relational reasoning involving binary predicates. Our method maintains more consistent performance across both datasets, demonstrating robust reasoning capabilities regardless of task complexity.

6.1.3 Model Dependency Analysis

The results show significant differences in model dependency across methods. HypoGeniC exhibits high variability, with performance differences exceeding 30% between models on certain tasks. IHR shows considerable model dependency, especially for complex reasoning. In contrast, our method demonstrates remarkable consistency across all four LLMs, with performance variations typically below 5%. Claude-3.7-sonnet and DeepSeek-V3 achieve identical performance with our approach, while GPT-4o and Qwen3-32B show only minor differences, highlighting our method's practical advantage for deployment across different environments.

6.1.4 Visualization of Results

Table 2 illustrates how the three methods approach hypothesis generation differently on the same Zendo example. IHR generates code to verify hypotheses but achieves only 60% accuracy, as it struggles with complex relational reasoning despite its structured JSON input processing. HypoGeniC processes natural language descriptions directly but incorrectly infers a rule about "green pieces" (55% accuracy), demonstrating how pure LLM approaches can form plausible but incorrect generalizations. Our method, using the same natural language input as HypoGeniC, combines LLM capabilities with symbolic ILP solving to correctly identify the position-based pattern, achieving significantly higher accuracy.
This example highlights how our hybrid approach effectively overcomes the limitations of both pure neural and symbolic reasoning techniques.

6.2 Ablation Studies on Data-level Variables

Impact of rule complexity. As the number of underlying rules increases from 1 to 3, all methods experience performance degradation,
but to varying degrees. Our method demonstrates the greatest stability, maintaining high performance even with increased rule complexity, as shown in Figure 2. This resilience stems from the systematic logical decomposition provided by our approach, which effectively handles conjunction and disjunction of multiple rules, while baseline methods show more substantial performance drops.

Effect of template diversity. Increasing template diversity has the most pronounced impact on HypoGeniC, which relies entirely on LLMs' natural language understanding capabilities. Our method and IHR show greater robustness to template variations, though through different mechanisms: IHR benefits from partially converting reasoning into code, while our approach gains stability by delegating logical structure identification to the ILP solver.

Sensitivity to sample size. Sample size experiments reveal that our method achieves optimal performance even with relatively small datasets, benefiting from the ILP solver's ability to identify the most consistent hypothesis with limited examples. With just 50 samples, our method already achieves performance comparable to what baselines reach with twice as many examples, as demonstrated in Figure 2.

Robustness to class imbalance. Varying the positive ratio from 20% to 50% demonstrates another key advantage of our approach. While all methods perform better with balanced datasets, our method maintains consistently strong performance even with significant class imbalance. The performance gap between our method and the baselines widens as the positive ratio decreases.

Resilience to label noise. As noise increases from 0% to 20%, as shown in Figure 2, both our method and IHR show performance degradation, which is expected since rule-based approaches cannot perfectly represent noisy samples even with ground truth rules. HypoGeniC's performance remains relatively stable across noise levels, reflecting its dependence on the LLM's inherent capabilities rather than strict rule adherence. Nevertheless, despite this degradation, our method and IHR still outperform HypoGeniC in high-noise scenarios, demonstrating the fundamental advantage of structured approaches when dealing with complex reasoning tasks, even under challenging conditions.

Table 2: Case study comparing the inputs, outputs, and performance (Acc, %) of different hypothesis generation methods on the Zendo dataset. The example shows how each method processes the same input differently, with IHR employing code-based verification, HypoGeniC using natural language processing, and our method combining LLM capabilities with symbolic reasoning to achieve higher accuracy.

IHR (Acc: 60.0)
  Input: structured example (JSON):
    {"pieces": {"p36_0": {"orientation": "lhs", "size": "small", "color": "green", "position": [4, 5]},
                "p36_1": {"orientation": "strange", "size": "small", "color": "red", "position": [2, 4]},
                "p36_2": {"orientation": "lhs", "size": "large", "color": "blue", "position": [4, 6]},
                "p36_3": {"orientation": "strange", "size": "medium", "color": "blue", "position": [0, 3]}},
     "contacts": []}
  Output: code example:
    def fn(x):
        pieces = x["pieces"]
        # Check if any pieces are in contact
        if x["contacts"]:
            return "No"
        # Get y-coordinates of all pieces
        y_coords = [piece["position"][1] for piece_id, piece in pieces.items()]
        # Check if all y-coordinates are even or all are odd
        all_even = all(y % 2 == 0 for y in y_coords)
        all_odd = all(y % 2 == 1 for y in y_coords)
        if all_even or all_odd:
            return "Yes"
        else:
            return "No"

HypoGeniC (Acc: 55.0)
  Input: text example: "World 36: piece p36_0 is lhs-oriented, small and green, located at (4,5); piece p36_1 is strange-oriented, small and red, located at (2,4); piece p36_2 is lhs-oriented, large and blue, located at (4,6); piece p36_3 is strange-oriented, medium and blue, located at (0,3). World 36 is Zendo"
  Output: text example: "The world satisfies the Zendo rule if it contains at least two green pieces."

Ours (Acc: 85.0)
  Input: same as HypoGeniC.
  Output: logic program:
    zendo_world(A) :- has_piece(A,C), contacts(C,B), blue(B).
    zendo_world(A) :- has_piece(A,B), strange_oriented(B), large(B).

[Figure 2: The five subplots present the F1 scores of different methods on the BUSINESS SHOES and ZENDO datasets, each examining one key experimental variable: rule number, template number, sample size, positive ratio, and noise ratio.]

6.3 Overall Analysis

Our comprehensive results demonstrate that our method consistently outperforms existing approaches across all evaluated dimensions. The key strength lies in our principled task decomposition: leveraging LLMs for natural language understanding and information extraction while delegating logical consistency reasoning to symbolic ILP solvers. This complementary design creates a synergistic system that maintains robust performance across varied tasks, models, and challenging data conditions, proving more effective than approaches relying predominantly on either LLMs or symbolic methods alone.

7 Conclusion

Contribution. This study proposes an innovative multi-agent collaborative reasoning framework for complex knowledge discovery tasks. By integrating LLMs with an ILP solver, our framework enables automatic hypothesis generation and verification through multi-agent collaboration. Our approach effectively overcomes the reliance of traditional ILP on expert-defined language bias, achieving an end-to-end automated pipeline from unstructured textual data to verifiable logical hypotheses. Extensive experiments on diverse datasets and challenging scenarios demonstrate that our framework consistently outperforms existing baselines under various data conditions, exhibiting superior performance and robustness.
This work not only extends the application of ILP to unstructured text domains, but also provides a new paradigm for building interpretable hybrid AI reasoning systems, laying a solid foundation for automated knowledge discovery.

Limitation. Although our method demonstrates its effectiveness in the current experimental settings—including the synthetic SHOES dataset and the Zendo cognitive reasoning task—its performance and applicability to more complex and diverse
real-world data (e.g., richer textual content with highly sparse information or more ambiguous semantics) remain to be further explored and validated.

Future Plan. Future research will extend this framework to broader real-world scenarios, particularly tasks requiring hypothesis generation from large-scale unstructured texts. We plan to explore automatic identification of valuable questions and explanatory hypotheses across domains—analyzing human behavioral patterns in social sciences or generating scientific hypotheses about drug interactions and catalytic pathways in natural sciences.

References

[1] Jose C A Santos, Houssam Nassif, David Page, Stephen H Muggleton, and Michael J E Sternberg. Automated identification of protein-ligand interaction features using inductive logic programming: a hexose binding case study. BMC Bioinformatics, 13:1-11, 2012.
[2] Anthropic. Claude 3.7 Sonnet and Claude Code. https://www.anthropic.com/news/claude-3-7-sonnet, 2025.
[3] Andrew Cropper and Sebastijan Dumančić. Inductive logic programming at 30: a new introduction. Journal of Artificial Intelligence Research, 74:765-850, 2022.
[4] Andrew Cropper and Rolf Morel. Learning programs by learning from failures. Machine Learning, 110(4):801-856, 2021.
[5] Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1-64, 2018.
[6] Claire Glanois, Zhaohui Jiang, Xuening Feng, Paul Weng, Matthieu Zimmer, Dong Li, Wulong Liu, and Jianye Hao. Neuro-symbolic hierarchical rule induction. In International Conference on Machine Learning, pages 7583-7615. PMLR, 2022.
[7] Céline Hocquette, Andreas Niskanen, Matti Järvisalo, and Andrew Cropper. Learning MDL logic programs from noisy data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 10553-10561, 2024.
[8] Dongwei Jiang, Marcio Fonseca, and Shay B Cohen. LeanReasoner: Boosting complex logical reasoning with Lean. arXiv preprint arXiv:2403.13312, 2024.
[9] Aditya Kalyanpur, Kailash Karthik Saravanakumar, Victor Barres, Jennifer Chu-Carroll, David Melville, and David Ferrucci. LLM-ARC: Enhancing LLMs with an automated reasoning critic. arXiv preprint arXiv:2406.17663, 2024.
[10] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
[11] Mark Law, Alessandra Russo, and Krysia Broda. Inductive learning of answer set programs. In Logics in Artificial Intelligence: 14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, September 24-26, 2014. Proceedings 14, pages 311-325. Springer, 2014.
[12] Benjamin Letham, Cynthia Rudin, Tyler H McCormick, and David Madigan. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. The Annals of Applied Statistics, 9(3):1350-1371, 2015.
[13] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[14] Linhao Luo, Jiaxin Ju, Bo Xiong, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan. ChatRule: Mining logical rules with large language models for knowledge graph reasoning. arXiv preprint arXiv:2309.01538, 2023.
[15] Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Neural probabilistic logic programming in DeepProbLog. Artificial Intelligence, 298:103504, 2021.
[16] Stephen Muggleton. Inverse entailment and Progol. New Generation Computing, 13:245-286, 1995.
[17] Matthias Nickles and Alessandra Mileo. A hybrid approach to inference in probabilistic non-monotonic logic programming. In PLP@ICLP, pages 57-68, 2015.
[18] Theo X Olausson,
Alex Gu, Benjamin Lipkin, Cedegao E Zhang, Armando Solar-Lezama, Joshua B Tenenbaum, and Roger Levy. LINC: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. arXiv preprint arXiv:2310.15164, 2023.
[19] OpenAI. Hello GPT-4o. OpenAI Blog, 2024.
[20] Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, et al. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. arXiv preprint arXiv:2310.08559, 2023.
[21] J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5:239-266, 1990.
[22] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019.
[23] Prithviraj Sen, Breno WSR de Carvalho, Ryan Riegel, and Alexander Gray. Neuro-symbolic inductive logic programming with logical neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8212-8219, 2022.
[24] Ashwin Srinivasan. The Aleph manual. Technical report, University of Oxford, 2001. Version 4.4.
[25] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[26] Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, and Dongzhan Zhou. MOOSE-Chem: Large language models for rediscovering unseen chemistry scientific hypotheses. arXiv preprint arXiv:2410.07076, 2024.
[27] Andreas Zeller. Why Programs Fail: A Guide to Systematic Debugging. Morgan Kaufmann, 2009.
[28] Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, and Chenhao Tan. Hypothesis generation with large language models. arXiv preprint arXiv:2404.04326, 2024.
[29] Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou, Jian Tang, Dale Schuurmans, and Hanjun Dai. Large language models can learn rules. arXiv preprint arXiv:2310.07064, 2023.

A MAXSYNTH

The core idea of the MAXSYNTH algorithm is to introduce the Minimum Description Length (MDL) principle into the search and optimization process of Inductive Logic Programming (ILP). Specifically, MDL advocates selecting the model with the lowest "description cost" when balancing hypotheses and data: on one hand, it seeks to use more compact rule expressions (reducing the complexity of the hypothesis), while on the other hand, it aims to minimize misclassification of observed data (reducing the data residual). In MAXSYNTH, this principle is quantified by computing the total cost of "program size + number of false positives + number of false negatives" for candidate logic programs. The program with the lowest total cost is selected as the output hypothesis, ensuring a balance between model simplicity and good data coverage, even in high-noise environments.

Unlike previous ILP algorithms, MAXSYNTH does not strictly pursue "full coverage of positive examples and zero coverage of negative examples." Instead, it allows a certain degree of error in order to ensure that a suitable explanation can still be found in complex or high-noise data scenarios. Additionally, compared to traditional greedy rule learning or closed searches based on manually defined predicate libraries, MAXSYNTH features a more flexible predicate invention mechanism and the ability to learn recursive rules. It adopts an iterative "generation-combination-constraint" process, integrating an optimal solver based on MaxSAT to progressively eliminate candidate programs that do not satisfy the optimal MDL criterion. This ensures that the remaining programs are globally optimal or near-optimal in terms of both size and misclassification cost. The process is theoretically safeguarded by a "noise-tolerant" constraint that prevents the elimination of any hypothesis that could potentially be an MDL-optimal solution, thereby guaranteeing the correctness and completeness of the algorithm.
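As a toy illustration of this cost, the sketch below scores already-evaluated candidate programs by the MDL-style total of program size plus false positives plus false negatives and keeps the cheapest one; this is only a hypothetical rendering of the selection criterion, not MAXSYNTH's actual MaxSAT-based search.

def mdl_cost(program_size, false_positives, false_negatives):
    # Total description cost: rule complexity plus misclassified examples.
    return program_size + false_positives + false_negatives

def select_mdl_optimal(candidates):
    # candidates: iterable of (program, size, fp, fn) tuples for candidate logic programs.
    best, best_cost = None, float("inf")
    for program, size, fp, fn in candidates:
        cost = mdl_cost(size, fp, fn)
        if cost < best_cost:
            best, best_cost = program, cost
    return best, best_cost

# A 3-literal rule that misclassifies two examples (cost 5) is preferred over a
# 7-literal rule that fits the data perfectly (cost 7).
print(select_mdl_optimal([("rule_a", 3, 1, 1), ("rule_b", 7, 0, 0)]))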
B Datasets Details

B.1 BUSINESS SHOES

During the construction of the BUSINESS SHOES dataset, we implemented flexible design choices in three key aspects to ensure that our algorithm could be tested for robustness and generalization across various scenarios.

(1) Feature System. Each shoe is characterized by five major attributes: color ({red, blue, black, white, gray}), material ({leather, canvas, mesh, synthetic leather}), style ({sneakers, casual shoes, formal shoes, skateboard shoes}), price ({cheap, moderate, expensive}), and comfort ({very comfortable, fairly comfortable, moderately comfortable}). This multi-dimensional feature combination provides a rich attribute space for subsequent logical induction.

(2) Variable Natural Language Templates. To generate diverse textual descriptions, we provide three different natural language templates for constructing shoe descriptions, as follows:
• This is a {$color} {$style} made of {$material}, {$price} in price and {$comfort} to wear. This shoe is {$conclusion}.
• This {$style} is made of {$material}, comes in {$color}, positioned at a {$price} price point, and is {$comfort}. It is {$conclusion}.
• A {$color} {$material} {$style}, priced {$price}, and {$comfort} when worn. The shoe is {$conclusion}.
In our experiments, the number of templates used is controlled by a hyperparameter N, which can take values from {1, 2, 3}. When N > 1, each sample randomly selects one of the N templates for generation, simulating diverse expression styles.

(3) Variable Decision Rules. The BUSINESS SHOES dataset defines three default rules for determining whether a shoe is considered "suitable for business occasions" (i.e., a positive example):
• Formal Business Occasions: The shoe must satisfy material = leather, color = black, style = formal shoes, and price = expensive.
• Business Casual Occasions: The shoe must satisfy material = synthetic leather, style = casual shoes, and comfort = very comfortable.
• Modern Formal Business Occasions: The shoe must satisfy material = leather, style = formal shoes, with color chosen from white or blue, price = moderate, as well as comfort = very comfortable.
A sample is labeled as positive if it satisfies any one of the three rules; otherwise, it is labeled as negative. In practical experiments, we may choose to use only the first M rules (e.g., using only one, two, or all three rules) to evaluate the model's performance under different rule complexities. Similar to variable templates, this flexible rule design enables more systematic and comprehensive testing of the algorithm.

In summary, the BUSINESS SHOES dataset serves as a flexible and diverse experimental platform for open-domain hypothesis generation and inductive logic programming tasks by incorporating multi-dimensional features, variable natural language templates, and adjustable decision rules.
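As an illustration of how these pieces fit together, the following sketch draws one BUSINESS SHOES sample, labels it with the first decision rule only (M = 1), and renders it with the first template. The attribute values, the rule, and the template come from the description above, while the function and variable names are our own and only illustrative.

import random

ATTRIBUTES = {
    "color": ["red", "blue", "black", "white", "gray"],
    "material": ["leather", "canvas", "mesh", "synthetic leather"],
    "style": ["sneakers", "casual shoes", "formal shoes", "skateboard shoes"],
    "price": ["cheap", "moderate", "expensive"],
    "comfort": ["very comfortable", "fairly comfortable", "moderately comfortable"],
}

TEMPLATE = ("This is a {color} {style} made of {material}, {price} in price "
            "and {comfort} to wear. This shoe is {conclusion}.")

def rule_formal_business(shoe):
    # Rule 1 (Formal Business Occasions): leather, black, formal shoes, expensive.
    return (shoe["material"] == "leather" and shoe["color"] == "black"
            and shoe["style"] == "formal shoes" and shoe["price"] == "expensive")

def generate_sample(rng=random):
    shoe = {attr: rng.choice(values) for attr, values in ATTRIBUTES.items()}
    positive = rule_formal_business(shoe)  # with M = 1; add the other rules for M = 2 or 3
    conclusion = ("suitable for business occasions" if positive
                  else "not suitable for business occasions")
    return TEMPLATE.format(conclusion=conclusion, **shoe), positive

print(generate_sample())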
B.2 ZENDO

The ZENDO dataset originates from a multi-object logical reasoning scenario, primarily designed
to evaluate a model's ability to perform inductive learning on spatial relationships, object interactions, and attribute compositions. Compared to BUSINESS SHOES, which contains only a single object per sample, each ZENDO sample typically consists of multiple objects (piece), which may have spatial relationships such as contact (contact) or shared coordinates, making the reasoning task more challenging.

(1) Feature System. In each world, multiple objects are randomly generated, each possessing the following core attributes:
• Position coordinates (x, y): Defines the object's location in a 2D plane, which determines spatial distribution and potential interactions between objects.
• Size (size): Initially represented as a numerical value, later mapped to abstract categories such as {small, medium, large}.
• Color (color): Includes options such as red, blue, and green. Some logical rules may specify particular color combinations.
• Orientation (orientation): Possible values include lhs, rhs, and upright, indicating the object's spatial orientation in the world.
Additionally, if two objects are sufficiently close or have adjacent coordinates, they are marked as being in contact during dataset generation. In the implementation, the i-th sample's j-th object is uniquely identified using the format p_i_j.

(2) Variable Natural Language Templates. To generate the natural language description of each ZENDO sample, we randomly select one of the following templates for each object:
• piece {$id} is a {$size} {$color} piece at ({$x},{$y}) oriented {$orientation}
• piece {$id} is {$orientation}-oriented, {$size} and {$color}, located at ({$x},{$y})
• ...
These templates highlight different attributes and relationships (e.g., position, orientation, and color), enriching the variety and realism of the textual descriptions. The descriptions of all objects in a world are concatenated, and if any objects are in contact (contact), this relationship is explicitly stated as "piece {$id1} contacts piece {$id2}". Finally, a summary statement such as "World {$world_id} is/is not Zendo" is appended.

(3) Variable Decision Rules. The dataset defines three configurations, Zendo1, Zendo2, and Zendo3, each corresponding to a scenario with 1, 2, or 3 logical rules:
• Zendo1:
zendo1(A) :- piece(A,C), size(C,B), blue(C), small(B), contact(C,D), red(D).
This rule states that a world A satisfies zendo1 if there exists an object C that is blue, small, and in contact with a red object D.
• Zendo2: This setting includes two rules, often involving more complex constraints on colors and coordinates:
zendo2(A) :- piece(A,B), piece(A,C), piece(A,D), green(D), red(B), blue(C).
zendo2(A) :- piece(A,B), coord1(B,C), green(D), lhs(B), coord1(D,C).
• Zendo3: This setting extends to three rules, potentially involving conditions on colors, sizes, orientations, and contact relationships:
zendo3(A) :- piece(A,D), blue(D), coord1(D,B), piece(A,C), coord1(C,B), red(C).
zendo3(A) :- piece(A,D), contact(D,C), rhs(D), size(C,B), large(B).
zendo3(A) :- piece(A,B), upright(B), contact(B,D), blue(D), size(D,C), large(C).
A world is classified as "Zendo" if it satisfies any of the defined logical rules; otherwise, it is classified as "Not Zendo." The dataset thus spans multiple levels of complexity, from small-scale (zendo1) to medium (zendo2) and high complexity (zendo3), posing increasing challenges for evaluating a model's inductive reasoning capabilities and robustness.
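To illustrate how a world receives its label, the sketch below checks the Zendo1 rule against a world represented as a list of piece dictionaries plus a list of contact pairs; this representation and the function name are our own simplification of the generator described above.

def satisfies_zendo1(pieces, contacts):
    # Zendo1: the world contains a blue, small piece in contact with a red piece.
    by_id = {p["id"]: p for p in pieces}
    for a, b in contacts:                      # contact pairs of piece identifiers
        for c, d in ((a, b), (b, a)):          # contact is symmetric
            if (by_id[c]["color"] == "blue" and by_id[c]["size"] == "small"
                    and by_id[d]["color"] == "red"):
                return True
    return False

world = [
    {"id": "p0_0", "color": "blue", "size": "small", "orientation": "lhs", "position": (4, 5)},
    {"id": "p0_1", "color": "red", "size": "large", "orientation": "upright", "position": (4, 6)},
]
print(satisfies_zendo1(world, contacts=[("p0_0", "p0_1")]))  # True -> labeled "Zendo"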
arXiv:2505.21488v1 [cs.CV] 27 May 2025

Be Decisive: Noise-Induced Layouts for Multi-Subject Generation

OMER DAHARY, Tel Aviv University, Israel and Snap Research, Israel
YEHONATHAN COHEN, Tel Aviv University, Israel
OR PATASHNIK, Tel Aviv University, Israel and Snap Research, Israel
KFIR ABERMAN, Snap Research, United States of America
DANIEL COHEN-OR, Tel Aviv University, Israel and Snap Research, Israel

[Figure 1 ("... a sea turtle, a jellyfish, three starfish, and an octopus ...", "... three ginger kittens, two gray kittens and a brown dog ..."): Our method generates multi-subject images by leveraging the layout encoded in the initial noise. Having layout control allows us to accurately generate each subject. We predict the layout based on the initial noise, and refine it throughout the denoising process, aligning it with the prompt and making it more fine-grained. Using the layout encoded in the initial noise, we preserve the prior of the original model and generate diverse compositions. Below each of the generated images, we show the layout predicted at three timesteps along the generation process.]

Generating multiple distinct subjects remains a challenge for existing text-to-image diffusion models. Complex prompts often lead to subject leakage, causing inaccuracies in quantities, attributes, and visual features. Preventing leakage among subjects necessitates knowledge of each subject's spatial location. Recent methods provide these spatial locations via an external layout control. However, enforcing such a prescribed layout often conflicts with the innate layout dictated by the sampled initial noise, leading to misalignment with the model's prior. In this work, we introduce a new approach that predicts a spatial layout aligned with the prompt, derived from the initial noise, and refines it throughout the denoising process. By relying on this noise-induced layout, we avoid conflicts with externally imposed layouts and better preserve the model's prior. Our method employs a small neural network to predict and refine the evolving noise-induced layout at each denoising step, ensuring clear boundaries between subjects while maintaining consistency. Experimental results show that this noise-aligned strategy achieves improved text-image alignment and more stable multi-subject generation compared to existing layout-guided techniques, while preserving the rich diversity of the model's original distribution.

1 INTRODUCTION

Diffusion models have revolutionized the field of image synthesis, enabling the creation of high-quality and diverse images from intuitive conditions such as textual prompts. However, despite their significant success, these models still struggle to accurately align to complex prompts [Chefer et al. 2023]. Specifically, generating multiple subjects remains surprisingly challenging, often resulting in inaccurate quantities, attributes, and visual features [Binyamin et al. 2024; Rassin et al. 2023; Yang et al. 2024].

Recent works have identified harmful leakage between subjects as a primary source of text-image misalignment. To address this issue, previous methods manipulate the denoising process by limiting inter-attention among distinct subjects [Dahary et al. 2025]. This approach requires knowing each subject's spatial location, which is not explicitly represented within the model, and hence it relies on a prescribed layout control. However, an externally imposed layout [Feng et al. 2024b,a; Qu et al. 2023; Yang et al. 2024; Zheng et al. 2023] can conflict with the layout implied by
the sampled initial noise, creating tension with the model's prior and potentially leading to inferior results or deviations from the model's prior. Specifically, as the image's low frequencies are defined early in the denoising process, the initial noise plays a fundamental role in shaping the final layout of the generated image [Ban et al. 2024; Guo et al. 2024; Patashnik et al. 2023]. Therefore, steering the denoising trajectory toward a specific layout requires actively countering the model's intrinsic prior, which naturally encodes a layout intent within the initial noise. This often pushes the generated image away from the image manifold, resulting in semantic misalignment and degradation of image quality.

[Figure 2 ("... science fiction movie poster with two astronauts, a robot, and a spaceship"): Our method generates images with multiple subjects without requiring external layout inputs. By following the innate noise-induced layout encoded in the sampled initial noise, we preserve the model's prior and achieve diverse compositions. The second row shows the initial noise-induced layout of the corresponding output images above. As can be seen, the initial layouts reflect the final composition of the generated images.]

In this work, we introduce a method that derives a prompt-aligned spatial layout from the initial noise and iteratively refines it throughout the denoising process, as illustrated in Figure 1. By anchoring the layout around the initial noise, this approach stays consistent with the model's prior, avoiding the conflicts introduced by externally imposed layouts. We argue that this approach promotes more natural and diverse compositions by minimizing resistance to the input noise, and hence succeeds in generating images that better adhere to the prompt.

To produce the layout, we train a small neural network that predicts the layout induced by the latent noisy image using features extracted from the denoising model. This network is applied throughout the denoising process, gradually refining the layout at each timestep to guide the generation toward layouts that remain both prompt-aligned and consistent across timesteps.

Our work embraces the motto "Be Decisive". At each denoising step, we guide the process toward a well-defined layout, ensuring clear boundaries between subjects. In this approach, each subject is assigned to a distinct image region, preventing leakage and enhancing text-image alignment. Meanwhile, only minimal adjustments are made to the layout between steps, maintaining consistency with the noise-induced layout throughout the process.

Through extensive experiments, we demonstrate our method's power in adhering to complex multi-subject prompts, and compare it with previous methods. Specifically, we demonstrate that our method generates combinations of classes, adjectives, and quantities while maintaining diverse layouts that are natural, as they remain consistent with the model's prior layouts. Figure 2 highlights this diversity, showcasing compositions obtained by sampling different initial noises.

2 RELATED WORK

Diffusion models [Dhariwal and Nichol 2021; Podell et al. 2023; Ramesh et al. 2022; Rombach et al. 2022; Saharia et al. 2022] have achieved remarkable success in modeling the complex distribution of natural images. However, despite their advantages, these models still face
limitations in adhering to detailed prompts, particularly those involving multiple subjects. Previous works have addressed challenges in multi-subject generation through two distinct approaches: conditioning the generation on a spatial layout or applying heuristics to attention maps to enforce the generation of each subject mentioned in the prompt.

Layout-Based Multi-Subject Generation. Layout-based methods have demonstrated greater consistency in multi-subject generation compared to text-to-image models. Early efforts incorporated layout information through techniques such as multiple diffusion compositions [Bar-Tal et al. 2023; Ge et al. 2023], guidance from model features [Kim et al. 2022; Luo et al. 2024; Voynov et al. 2023], specifically attention features [Chen et al. 2023a; Couairon et al. 2023; Kim et al. 2023; Liu et al. 2023; Phung et al. 2024; Xie et al. 2023], or fine-tuning [Avrahami et al. 2023; Li et al. 2023b; Nie et al. 2024; Yang et al. 2023; Zhang et al. 2023b].

Recent studies highlight the architectural tendency of attention layers to leak visual features between subjects – a phenomenon that complicates multi-subject generation [Dahary et al. 2025]. To address this, prior methods [Dahary et al. 2025; Wang et al. 2024a,b; Zhou et al. 2024] introduce techniques that mitigate such leakage by modifying the operation of attention layers within the model. However, these approaches rely on a predefined spatial layout to identify the subjects among which leakage should be prevented. In our work, we propose a method to dynamically define the spatial locations of subjects during image generation by extracting the layout throughout the process. This extracted layout is then used to prevent leakage, enabling the generation of accurate multi-subject images.

To simplify the image generation process for users, a common practice is to automatically generate a layout prior to image generation. Several works leverage large language models (LLMs) for this task, employing in-context learning or chain-of-thought reasoning [Chen et al. 2023b; Feng et al. 2024b; Lian et al. 2023; Qu et al. 2023; Yang et al. 2024]. While these methods excel in producing plausible layouts, the separation between the prompt-to-layout and layout-to-image models often leads to inaccuracies or unnatural results in multi-subject images. Notably, Ranni [Feng et al. 2024a] proposes overcoming this limitation by jointly fine-tuning the LLM and diffusion model on a shared dataset. However, their approach demands significant resources, with results obtained using a large proprietary model trained on millions of examples.

Layout-Free Multi-Subject Generation. Numerous approaches have sought to address specific aspects of subject misalignment during inference without relying on a predefined spatial layout. Some manipulate text embeddings [Feng et al. 2022; Tunanyan et al. 2023], while others guide the model to disentangle the attention distributions of distinct subjects and attributes [Agarwal et al. 2023; Chefer et al. 2023; Li et al. 2023a; Meral et al. 2024; Rassin et al. 2023]. While these methods show some success, their effectiveness often hinges on the initial noise, resulting in unstable outcomes. To enhance robustness, other techniques [Bao et al. 2024; Wang et al. 2023] employ fine-tuning based on similar heuristics in a self-supervised manner.
[Figure 3: Our method steers the denoising process by applying iterative guidance (turquoise box) after each denoising step (orange regions). At denoising step t (left orange box), we predict a soft-layout S_t based on the diffusion model's features, and cluster it to form a hard-layout M_t (purple box). This hard-layout is then used to control the layout of the next denoising step (right orange box). In the guidance stage, we optimize the latent image, with the objective to align its associated updated soft-layout with the hard-layout M_t.]

Nevertheless, due to the model's limitations in interpreting quantities and distinguishing between numerous subjects, these approaches often struggle to generate more than two or three distinct subjects and fail to support multiple instances of the same class effectively.

Other works specifically tailor solutions for accurate subject quantities [Binyamin et al. 2024; Kang et al. 2023; Zhang et al. 2023a]. While these methods perform well for single-class scenarios, they lack the generality needed for complex compositions involving multi-class subjects and attributes. In contrast, our approach provides comprehensive control over multi-class subjects, quantities, and attributes, addressing the limitations of existing layout-free methods.

3 PRELIMINARY: BOUNDED ATTENTION

Text-to-image diffusion models struggle to generate accurate multi-subject images due to visual leakage between subjects. Prior work [Dahary et al. 2025] identified the model's attention layers as the primary source of this leakage — where features of semantically similar subjects are indiscriminately blended — and proposed Bounded Attention as a training-free solution to mitigate it.

Given an input layout, Bounded Attention modifies the attention layers during the denoising process by masking the attention between queries and keys of different subjects. In cross-attention layers, it constrains each subject's attention to its corresponding textual tokens. In self-attention layers, it restricts attention to pixels within the subject's own region and the background, explicitly excluding other subjects. This masking scheme reduces the influence of irrelevant visual and textual tokens on each pixel, maintaining the distinct visual features of each subject.

During generation, Bounded Attention alternates between denoising steps and guidance steps, both of which adopt the masking scheme. In guidance mode, the latent representation is optimized to adhere to the input layout:

z_t^{\mathrm{opt}} = z_t - \beta \nabla_{z_t}\left(\mathcal{L}_{\mathrm{cross}} + \mathcal{L}_{\mathrm{self}}\right),

where L_cross and L_self are loss terms that encourage the respective cross- and self-attention maps to focus within each subject's designated mask. By isolating attention for each subject, the masking scheme avoids guidance artifacts caused by forcing similar queries to diverge, maintaining a trajectory that is better aligned with the data manifold.

In our work, we adopt Bounded Attention's masking scheme to reduce leakage, but instead of relying on a prescribed layout, we extract the noise-induced layout and refine it between denoising steps. We further modify the guidance procedure to promote decisiveness — that is, enforcing strict subject boundaries throughout the layout refinement process.

4 METHOD

Our method aims to facilitate the generation of multiple distinct subjects using an existing text-to-image model
[Podell et al. 2023]. We steer the denoising process to adhere to a layout that allows preventing unwanted leakage among the subjects. Our key idea is to progressively define a prompt-aligned spatial layout based on features extracted from the noisy latent images along the denoising process. We then encourage the denoising process to follow these layouts, upholding this initial "decision".

Figure 3 illustrates the overall structure of our inference pipeline. Our method is built on a denoising process to which we apply Bounded Attention [Dahary et al. 2025] (marked in orange boxes) controlled by layout masks M_t. We add two components to the denoising process. First, a component that predicts a prompt-aligned layout M_t from a noisy latent image z_t based on features extracted from the diffusion model (purple box). Second, a guidance mechanism that optimizes a noisy latent image so that its induced layout aligns with the previous layout (turquoise box). This mechanism encourages a "decisive" generation process, where each subject mentioned in the prompt is consistently assigned to its own distinct image region across timesteps.

Both components rely on a soft-layout S_t. The soft-layout is a timestep-dependent feature map that reflects the likelihood that two pixels will be associated with a common subject. In the following, we elaborate on the soft-layout and its use.

4.1 Soft-Layout

We begin by explaining the motivation behind our soft-layouts. Extracting fine-grained layouts directly from the initial noise is inherently challenging since the image is formed in a gradual manner. Moreover, predicted layouts might not perfectly correspond to the subjects specified in the prompt. To address these challenges, we introduce the notion of a soft-layout, a feature map that represents each pixel as a descriptor encapsulating its potential to associate with other pixels in composing a single subject. In the first timesteps, due to high uncertainty, the soft-layout encodes a coarse layout. At later timesteps, the soft-layout is more granular and precise.

Our use of the soft-layout is two-fold. First, it is used to predict the masks M_t, termed the hard-layout, which bound the attention in the denoising steps. Second, we optimize the noisy latent image to produce a soft-layout that agrees with M_t.

At the top of Figure 4, we display the progressive layouts produced by our full pipeline. In the middle, we show the corresponding layouts without guidance. As illustrated, guidance is crucial for maintaining consistent hard-layouts across timesteps, thereby facilitating convergence to a prompt-aligned layout by the end of the denoising process.

We now turn to formally define the soft-layout and elaborate on the network we train to predict it. A soft-layout S_t ∈ R^{n×d} is a feature map, encoding n pixels as d-dimensional vectors, where the similarity of two feature vectors S_t[x_1], S_t[x_2] indicates correspondence to the same subject in the generated image. To produce the soft-layout, we train a network that takes as input a set of features extracted from various layers of the diffusion model.

Dataset. To train our network, we automatically construct a small dataset of ~1500
images synthesized by the diffusion model, along with their segmentation maps. First, we randomly generate a set of prompts specifying multiple subject classes and their quantities (see full details in the supplemental). Then, we synthesize images based on these prompts, and segment them by feeding the corresponding subject names to GroundedSAM [Ren et al. 2024b]. We filter out ambiguous examples, where two segmentation masks share a large overlap, and select a single label for each segment based on the segmentation model's confidence score.

Notably, we do not apply any filtering based on prompt alignment. This allows the network to predict soft-layouts that match the diffusion model's intent, even if it does not adhere to the prompt. In turn, this enables our guidance mechanism to detect misalignments early in the denoising process and apply corrective updates to the latent.

[Figure 4 ("A parrot and two doves sitting on a branch in a lush forest at daylight"; rows: our full method, our method w/o guidance, vanilla SDXL; columns: t = 50, 45, 40, 35): The figure illustrates the progression of the soft- and hard-layouts in three cases. The top row shows results from our full method. The middle row presents our method without guidance. The bottom row shows vanilla SDXL, where only the soft-layout extracted from the noisy latents is displayed. Below each image, we show the hard-layout obtained at the final timestep.]

Architecture. Following Readout Guidance [Luo et al. 2024], we design our model as a collection of lightweight convolutional heads, each processing different features from the denoising model along with the current time embedding. The outputs of these heads are then averaged using learnable weights, and fed into a convolutional bottleneck head, which outputs a 64×64×10 feature map, representing the soft-layout. We attach our heads to the attention layers, which are known to be highly indicative of the image structure and subject boundaries [Hertz et al. 2022; Patashnik et al. 2023; Tumanyan et al. 2023]. Specifically, we use the cross-attention queries and the self-attention keys at the decoder layers. See the supplemental for full architectural details.

Training. We train the soft-layout network with a triplet loss [Schroff et al. 2015], encouraging feature similarity between pixels of the same segment, and dissimilarity between different segments. Formally, given a random timestep t and an image with k subject segments {M_j}_{j=1}^{k} and a background M_0, we sample triplets of pixel coordinates x_i^a, x_i^p ∈ M_{j_p} and x_i^n ∈ M_{j_n}, where M_{j_p} ≠ M_{j_n}. Then, we compute the following loss

\sum_i \Big[ \mathrm{sim}\big(S_t[x_i^a], S_t[x_i^n]\big) - \mathrm{sim}\big(S_t[x_i^a], S_t[x_i^p]\big) + \alpha \Big]_+ ,   (1)

where sim is the cosine similarity, α is the similarity margin between positive and negative samples, and [·]_+ is the ReLU operation.
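As a concrete reading of Eq. 1, the following PyTorch-style sketch computes the triplet loss over already-gathered anchor, positive, and negative soft-layout features; the tensor names and the default margin value are our own assumptions, not the authors' implementation.

import torch.nn.functional as F

def soft_layout_triplet_loss(anchor, positive, negative, alpha=0.5):
    # anchor, positive, negative: (num_triplets, d) rows of S_t at the sampled pixel
    # coordinates x_a, x_p, x_n. Implements Eq. 1 with cosine similarity and a ReLU
    # hinge of margin alpha (the margin value here is a placeholder).
    sim_an = F.cosine_similarity(anchor, negative, dim=-1)  # sim(S_t[x_a], S_t[x_n])
    sim_ap = F.cosine_similarity(anchor, positive, dim=-1)  # sim(S_t[x_a], S_t[x_p])
    return F.relu(sim_an - sim_ap + alpha).sum()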
4.2 From Soft to Hard Layouts

While the soft-layout represents the original model's future intent, to successfully generate multiple prompt-aligned subjects, it is necessary to uphold clear subject boundaries in accordance with the prompt. To achieve this, we derive a hard-layout from the soft-layout produced by our network.

More specifically, given k subjects mentioned in the prompt, we apply K-Means to cluster the soft-layout into k+1 segments: k for the subjects, and one for the background. We set the background M_0 as the cluster that has the biggest overlap with the image's border, and recursively cluster each of the other segments into two sub-clusters, continuing the process with the bigger sub-cluster, until the variance is smaller than σ²_cluster. Any sub-cluster dropped during this process is added to M_0.

Finally, we must tag each subject cluster with an appropriate label representing a specific subject instance. After the first denoising step, at t = T, we compute the average cross-attention map of each subject noun [Epstein et al. 2023; Hertz et al. 2022; Patashnik et al. 2023] and use the Hungarian algorithm to assign instances to clusters such that the corresponding cross-attention response in each cluster is maximized.

To avoid leakage, our initial decision regarding each subject's location must be respected throughout the generation process. Thus, for t < T, we stack the soft-layout S_t with the previous soft-layouts S_{t+1}, ..., S_{min(t+w, T)} from w earlier timesteps, before performing hard-clustering. Since clusters may shift over time, we reassign their labels at each timestep using the Hungarian algorithm, matching each cluster in M_t to a cluster in M_{t+1} such that their intersection-over-union (IoU) is maximized.
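A simplified sketch of this step is given below: K-Means over the (optionally stacked) soft-layout features into k+1 clusters, background selection by border overlap, and IoU-based Hungarian matching to the previous hard-layout. The recursive sub-clustering with the σ²_cluster threshold and the cross-attention-based instance tagging are omitted for brevity, and the function and variable names are our own.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def hard_layout(soft_layout, k, prev_labels=None, res=64):
    # soft_layout: (res*res, d) feature map S_t (possibly stacked with earlier timesteps);
    # returns a (res, res) map of cluster labels in {0, ..., k}, with 0 as the background.
    labels = KMeans(n_clusters=k + 1, n_init=10).fit_predict(soft_layout).reshape(res, res)

    # Background = cluster with the largest overlap with the image border; swap it to label 0.
    border = np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]])
    bg = np.bincount(border, minlength=k + 1).argmax()
    perm = np.arange(k + 1)
    perm[[0, bg]] = perm[[bg, 0]]
    labels = perm[labels]

    if prev_labels is not None:
        # Re-assign labels so each cluster matches the previous hard-layout with maximal IoU.
        iou = np.zeros((k + 1, k + 1))
        for i in range(k + 1):
            for j in range(k + 1):
                inter = np.logical_and(labels == i, prev_labels == j).sum()
                union = np.logical_or(labels == i, prev_labels == j).sum()
                iou[i, j] = inter / union if union else 0.0
        rows, cols = linear_sum_assignment(-iou)   # Hungarian algorithm, maximizing IoU
        mapping = np.empty(k + 1, dtype=labels.dtype)
        mapping[rows] = cols
        labels = mapping[labels]
    return labels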
4.3 Decisive Guidance

To encourage decisiveness — in the sense of maintaining consistent subject boundaries throughout generation — we perform guidance steps after each denoising step. These steps optimize the intermediate latent z_{t−1} to align the predicted soft-layout S_{t−1} with the previous hard-layout M_t (see turquoise box in Figure 3).

First, to integrate each subject's semantics to its designated segment in M_t, we apply the cross-attention loss L_cross from Bounded Attention (see Section 3). Notably, Bounded Attention also introduces a self-attention loss L_self to mitigate subject neglect in fixed, externally provided layouts by discouraging background attention. However, we omit this term, as it is unnecessary with our noise-induced layouts and its removal significantly reduces runtime.

Nonetheless, our evolving layouts give rise to two distinct failure modes. These are illustrated in Figure 5, which presents zoomed-in views of the soft- and hard-layouts generated without guidance (originally shown in Figure 4). On the left (Figure 5a), the soft-layout contains three spatially separated foreground regions (colored green, purple, and dark red) within a single hard-cluster, each corresponding to a distinct dove. On the right (Figure 5b), the middle dove is generated at the intersection of three hard-clusters and does not maintain consistent membership in any single cluster during denoising. As a result, its lower body is initially assigned to the parrot cluster, leading to a hybrid generation in which the dove inherits a parrot-like tail.

[Figure 5: Without guidance, we observe two types of layout failures: (a) intra-cluster over-generation, where multiple subjects are assigned to a single cluster due to high variance in the soft-layout; and (b) inconsistent cluster borders across timesteps, leading to subject over-generation and leakage caused by oscillating boundaries.]

To address the first issue (Figure 5a), we introduce a variance loss L_var that encourages low cluster variance in S_{t−1} with respect to the previous hard-layout M_t:

\mathcal{L}_{\mathrm{var}} = \frac{1}{k+1} \sum_{j=0}^{k} \frac{1}{|M_t^j|} \sum_{x_i \in M_t^j} \mathrm{sim}^2\!\left(S_{t-1}[x_i], \mu_{t-1}^j\right),   (2)

where μ_{t−1}^j is the mean soft-layout feature vector of cluster j:

\mu_{t-1}^j = \frac{1}{|M_t^j|} \sum_{x_i \in M_t^j} S_{t-1}[x_i].   (3)

This loss promotes intra-cluster similarity, encouraging each cluster to represent a coherent subject instance.

To avoid cluster boundaries from oscillating between timesteps (Figure 5b), we compute the Dice segmentation loss L_dice [Milletari et al. 2016] between the hard-layout M_t and a probabilistic layout P_{t−1} ∈ R^{n×(k+1)}, where each element P_{t−1}[x_i, j] represents the probability that pixel x_i belongs to cluster j:

P_{t-1}[x_i, \cdot] = \mathrm{softmax}\left\{ \mathrm{sim}\!\left(S_{t-1}[x_i], \mu_{t-1}^j\right) / \tau \right\}_{j=0}^{k} \in \mathbb{R}^{k+1},   (4)

where τ is a temperature hyperparameter. This term penalizes ambiguous pixel-cluster associations, promoting sharper and more consistent cluster boundaries.

Together, these three terms address complementary aspects of the layout refinement process: L_cross promotes the proper semantic alignment in each cluster, L_var reduces intra-cluster ambiguity, and L_dice encourages temporal consistency and boundary sharpness. The final decisiveness loss is defined as:

\mathcal{L}_{\mathrm{decisive}} = \alpha_{\mathrm{cross}} \mathcal{L}_{\mathrm{cross}} + \alpha_{\mathrm{var}} \mathcal{L}_{\mathrm{var}} + \alpha_{\mathrm{dice}} \mathcal{L}_{\mathrm{dice}},   (5)

where α_cross, α_var, α_dice are the respective weighting coefficients. Ablation studies evaluating the contribution of each component are provided in the supplementary material.
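The sketch below gives one possible PyTorch-style implementation of the probabilistic layout (Eqs. 3-4) and the Dice term; the temperature value is a placeholder, the variance term and the weighted sum of Eq. 5 follow analogously, and this is our own simplified reading of the equations rather than the authors' code.

import torch.nn.functional as F

def probabilistic_layout(soft_layout, hard_layout, k, tau=0.1):
    # soft_layout: (n, d) features S_{t-1}; hard_layout: (n,) long tensor of cluster ids from M_t.
    # Eq. 3: per-cluster mean feature vectors; Eq. 4: softmax over cosine similarities.
    one_hot = F.one_hot(hard_layout, k + 1).float()                 # (n, k+1)
    counts = one_hot.sum(dim=0).clamp(min=1.0)                      # |M_t^j|
    mu = one_hot.t() @ soft_layout / counts[:, None]                # (k+1, d), Eq. 3
    sims = F.normalize(soft_layout, dim=-1) @ F.normalize(mu, dim=-1).t()  # cosine sims
    return F.softmax(sims / tau, dim=-1)                            # (n, k+1), Eq. 4

def dice_loss(p, hard_layout, k, eps=1e-6):
    # Soft Dice between the probabilistic layout P_{t-1} and the one-hot hard-layout M_t.
    target = F.one_hot(hard_layout, k + 1).float()
    inter = (p * target).sum(dim=0)
    denom = (p ** 2).sum(dim=0) + (target ** 2).sum(dim=0)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()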
5 EXPERIMENTS

In this section, we present both qualitative and quantitative experiments to evaluate the effectiveness of our method. We compare our approach against four training-free baseline methods: Make-It-Count (MIC) [Binyamin et al. 2024], RPG [Yang et al. 2024], Attend-and-Excite (A&E) [Chefer et al. 2023], and Bounded Attention (BA) [Dahary et al. 2025]. Since BA operates on layouts, we use an LLM to automatically provide it with layouts constructed from given prompts (denoted as LLM+BA). Furthermore, we include comparisons with LMD+ [Lian et al. 2023] and Ranni [Feng et al. 2024a], which require training.

Fig. 6. Generated images across different seeds. Our method follows the noise-induced layouts to generate prompt-aligned images with diverse compositions.

5.1 Qualitative Results

Layout diversity. We begin our experiments by showing the effectiveness of our method in generating diverse and natural layouts that adhere to the prompt. Each row of Figures 6 and 10 depicts images generated from a single prompt using different random seeds. As can be seen, our results match the subject descriptions exactly, displaying proper combinations of classes, attributes, and quantities, while still demonstrating unique and believable compositions.

Non-curated results. We conduct a non-curated comparison with our baselines in Figure 11 by sampling each method seven times, using a single prompt and the seeds 0 to 6. We also display the results obtained by Flux. While LLM+BA is able to generate correct images four out of seven times, our method correctly adheres to the prompt in every image without requiring an input layout. Notably, none of the other methods, including Flux, are able to generate even one sample that matches the prompt, often depicting subject amalgamations due to severe leakage. Specifically, SDXL, LLM+BA, and A&E suffer from over-generation of subjects, while Flux, RPG, and Ranni struggle due to under-generation. On the other hand, LMD+ is able to construct the correct quantities, but is prone to generating unnatural compositions, where subjects appear disjointed from the background.

Multiple personalized subjects. Leakage between subjects is particularly noticeable when generating personalized subjects. Here, we show that our method can be seamlessly integrated with an existing personalization method to facilitate the generation of multiple personalized subjects. Specifically, we utilize a method trained to generate specific individuals by injecting personalized features through the cross-attention layers of a text-to-image model [Patashnik et al. 2025]. This method does not inherently support the generation of two individuals, as demonstrated in Figure 7. However, combining it with ours enables the accurate generation of diverse images with multiple individuals.

Fig. 7. Results of integrating our method with an existing personalization method, enabling the generation of multiple personalized individuals within the same image.

Comparisons with baselines. We present a qualitative comparison in Figure 8. All other methods struggle to generate multi-subject prompts due to leakage. In the first row, none of the competing methods are able to generate the distinct characteristics of each of the bears. In the second row, they either generate the wrong number of subjects, or leak the colors of the carpet or the cars into the teddy bears. Specifically, the current LLM-based methods exhibit either subpar control over subject quantities (LLM+BA, RPG, Ranni) or unnatural grid-like subject arrangements (LLM+BA, LMD+). In comparison, our method successfully generates prompt-aligned images with natural-looking compositions.

Fig. 8. Qualitative comparison of our method with baseline methods. We provide more examples in the supplement.

Fig. 9. Quantitative comparison of our method against layout-based and layout-free methods. Results demonstrate that while other methods face trade-offs between metrics, our method consistently achieves high scores across all metrics.

5.2 Quantitative Results

Dataset evaluation. We perform quantitative evaluation on the T2I-CompBench dataset [Huang et al. 2023], assessing our method's performance across the following key aspects: multi-class compositions, attribute binding, and numeracy. We further measure layout diversity, which quantifies the variability of generated compositions across different seeds. We summarize the results in Figure 9 and refer to the supplement for the full table.

To measure layout diversity, we sample 20 random prompts from CompBench's single-class dataset and generate five images per prompt using five random seeds. We align all five layouts by maximizing the IoU between them using the Hungarian algorithm. Diversity is quantified as the average 1−IoU between all layout pairs of the same prompt.
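A minimal sketch of this diversity metric, under the assumption that each layout is an (H, W) integer label map with clusters 0..k and that cluster-wise IoU values are averaged after Hungarian alignment, could look as follows (this is not the authors' evaluation code):

```python
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment


def pairwise_iou(a, b, k):
    """IoU between every cluster of layout `a` and every cluster of layout `b`."""
    iou = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        for j in range(k + 1):
            inter = np.sum((a == i) & (b == j))
            union = np.sum((a == i) | (b == j))
            iou[i, j] = inter / union if union > 0 else 0.0
    return iou


def aligned_iou(a, b, k):
    """Match clusters across the two layouts to maximize total IoU, then average."""
    iou = pairwise_iou(a, b, k)
    rows, cols = linear_sum_assignment(-iou)   # maximization via negated costs
    return iou[rows, cols].mean()


def layout_diversity(layouts, k):
    """Average 1 - IoU over all pairs of layouts generated for the same prompt."""
    scores = [1.0 - aligned_iou(a, b, k) for a, b in itertools.combinations(layouts, 2)]
    return float(np.mean(scores))
```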
As reported, our method achieves significantly higher diversity than the baselines, preserving the innate variability of the model's prior, in contrast to the limited diversity of LLM-based methods.

All other metrics were assessed on 200 prompts, sampled from the respective category in CompBench. Color and texture binding was evaluated using BLIP-VQA [Huang et al. 2023], while single-class and multi-class compositions were evaluated using the F1 scores between ground-truth subject quantities and the quantities computed by GroundedSAM [Ren et al. 2024a] on the generated images. While other methods are tailored towards enhancing specific metrics, our approach consistently achieves high performance across all measurements, surpassing competitors in most cases.

Table 1. User study results. Each entry is the conditional probability that a selected image was generated by our method rather than the listed competitor.
              SDXL   LLM+BA   RPG    Ranni   LMD+   MIC
Our score vs. 0.74   0.87     0.96   0.89    0.9    0.53

Lastly, since MIC is limited to single-class prompts, we only measure its performance on this specific metric. Our method achieves a score of 0.837, compared to MIC's score of 0.772.

User study. The automatic metrics in Figure 9 fail to detect semantic leakage, as they rely on models trained on real images, where such issues do not arise. To address this limitation, we conduct a user study. We utilize ChatGPT to generate 25 prompts enlisting three to four visually similar, but distinct, animals, with an appropriate background. For each prompt, participants were shown 10 images: five generated by our method and five generated by a competing method. Users were then tasked with selecting images with realistic compositions that accurately reflect the prompt. For evaluation against MIC, we additionally generate five prompts with single-class quantities. We collected 192 responses from 32 participants. Table 1 reports the conditional probability of a selected image being generated by our method versus competitors. The results showcase our method's superiority in handling complex multi-subject prompts, with our scores substantially improving over layout-based methods. Notably, even though MIC is specifically designed to tackle single-class quantities, our method receives comparable scores, while still being versatile enough to support more complex prompts.

6 CONCLUSIONS

We have addressed the notorious difficulty of generating multiple distinct subjects from complex prompts in text-to-image diffusion models. Recognizing that inter-subject leakage is the primary issue, and that bounding mutual attention offers a viable solution, we designed a mechanism to define a layout for controlling inter-subject attention. Our key contribution lies in using the natural latent layout defined by the initial noise of the model, rather than imposing an external layout. By making only small adjustments on-the-fly, our approach remains rooted in the original distribution of the model, benefiting from denoising a signal already close to that distribution. Empirical evaluations confirm that this strategy provides a stronger balance between text-image alignment and visual diversity compared to layout-driven alternatives.

It is important to recognize that the multi-subject generation problem is intrinsically tied to the pretrained model's prior. When the underlying network has not been sufficiently exposed to images featuring multiple distinct subjects, its learned distribution may be ill-equipped
to handle complex multi-subject arrangements. As a result, any approach aiming to improve multi-subject generation, ours included, must contend with these fundamental distributional constraints. Although our method outperforms existing alternatives, there remains a ceiling imposed by the model's training data, restricting how effectively multi-subject prompts can be addressed in practice.

The main limitation of our method lies in the computational cost of the iterative guidance and its tendency to push the optimized latent away from the prior distribution. In the future, we aim to explore regularization techniques to keep the latent closer to its original distribution, or to replace the optimization process with feature injection from a control map representing the target clusters.

ACKNOWLEDGMENTS

We thank Elad Richardson and Narek Tumanyan for their early feedback and helpful suggestions. We also thank the anonymous reviewers for their meticulous comments, which have helped improve our work. This work was partially supported by ISF (grant 3441/21).

REFERENCES

Aishwarya Agarwal, Srikrishna Karanam, KJ Joseph, Apoorv Saxena, Koustava Goswami, and Balaji Vasan Srinivasan. 2023. A-star: Test-time attention segregation and retention for text-to-image synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2283–2293.
Omri Avrahami, Thomas Hayes, Oran Gafni, Sonal Gupta, Yaniv Taigman, Devi Parikh, Dani Lischinski, Ohad Fried, and Xi Yin. 2023. Spatext: Spatio-textual representation for controllable image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 18370–18380.
Yuanhao Ban, Ruochen Wang, Tianyi Zhou, Boqing Gong, Cho-Jui Hsieh, and Minhao Cheng. 2024. The Crystal Ball Hypothesis in diffusion models: Anticipating object positions from initial noise. arXiv preprint arXiv:2406.01970 (2024).
Zhipeng Bao, Yijun Li, Krishna Kumar Singh, Yu-Xiong Wang, and Martial Hebert. 2024. Separate-and-Enhance: Compositional Finetuning for Text-to-Image Diffusion Models. In ACM SIGGRAPH 2024 Conference Papers. 1–10.
Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. 2023. Multidiffusion: Fusing diffusion paths for controlled image generation. (2023).
Lital Binyamin, Yoad Tewel, Hilit Segev, Eran Hirsch, Royi Rassin, and Gal Chechik. 2024. Make It Count: Text-to-Image Generation with an Accurate Number of Objects. arXiv preprint arXiv:2406.10210 (2024).
Agneet Chatterjee, Gabriela Ben Melech Stan, Estelle Aflalo, Sayak Paul, Dhruba Ghosh, Tejas Gokhale, Ludwig Schmidt, Hannaneh Hajishirzi, Vasudev Lal, Chitta Baral, et al. 2024. Getting it right: Improving spatial consistency in text-to-image models. In European Conference on Computer Vision. Springer, 204–222.
Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. 2023. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG) 42, 4 (2023), 1–10.
Minghao Chen, Iro Laina, and Andrea Vedaldi. 2023a. Training-free layout control with cross-attention guidance. arXiv preprint arXiv:2304.03373 (2023).
Xiaohui Chen, Yongfei Liu, Yingxiang Yang, Jianbo Yuan, Quanzeng You, Li-Ping Liu, and Hongxia Yang. 2023b. Reason out your layout: Evoking the layout master from large language models for text-to-image synthesis. arXiv preprint arXiv:2311.17126 (2023).
Guillaume Couairon, Marlene Careil, Matthieu Cord, Stéphane Lathuiliere, and Jakob Verbeek. 2023.
Zero-shot spatial layout conditioning for text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2174–2183.
Omer Dahary, Or Patashnik, Kfir Aberman, and Daniel Cohen-Or. 2025. Be yourself: Bounded attention for multi-subject
text-to-image generation. In European Conference on Computer Vision . Springer, 432–448. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34 (2021), 8780–8794. Dave Epstein, Allan Jabri, Ben Poole, Alexei Efros, and Aleksander Holynski. 2023. Diffusion self-guidance for controllable image generation. Advances in Neural Information Processing Systems 36 (2023), 16222–16239. Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2022. Training-free struc- tured diffusion guidance for compositional text-to-image synthesis. arXiv preprint arXiv:2212.05032 (2022). Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2024b. Layoutgpt: Compositional visual planning and generation with large language models. Advances in Neural Information Processing Systems 36 (2024). Yutong Feng, Biao Gong, Di Chen, Yujun Shen, Yu Liu, and Jingren Zhou. 2024a. Ranni: Taming text-to-image diffusion for accurate instruction following. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 4744–4753. Songwei Ge, Taesung Park, Jun-Yan Zhu, and Jia-Bin Huang. 2023. Expressive text- to-image generation with rich text. In Proceedings of the IEEE/CVF International Conference on Computer Vision . 7545–7556. Xiefan Guo, Jinlin Liu, Miaomiao Cui, Jiankai Li, Hongyu Yang, and Di Huang. 2024. Initno: Boosting text-to-image diffusion models via initial noise optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 9380–9389. Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022). Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. 2023. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. Advances in Neural Information Processing Systems 36 (2023), 78723–78747. Wonjun Kang, Kevin Galim, and Hyung Il Koo. 2023. Counting guidance for high fidelity text-to-image synthesis. arXiv preprint arXiv:2306.17567 (2023). Gyeongnyeon Kim, Wooseok Jang, Gyuseong Lee, Susung Hong, Junyoung Seo, and Seungryong Kim. 2022. Dag: Depth-aware guidance with denoising diffusion proba- bilistic models. arXiv preprint arXiv:2212.08861 (2022). Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu. 2023. Dense Text-to-Image Generation with Attention Modulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision . 7701–7711. Yumeng Li, Margret Keuper, Dan Zhang, and Anna Khoreva. 2023a. Divide & bind your attention for improved generative semantic nursing. arXiv preprint arXiv:2307.10864 (2023). Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. 2023b. Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 22511–22521. Long Lian, Boyi Li, Adam Yala, and Trevor Darrell. 2023. Llm-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models. arXiv preprint arXiv:2305.13655 (2023). Zhiheng Liu, Yifei Zhang, Yujun Shen, Kecheng Zheng, Kai Zhu, Ruili Feng, Yu Liu, Deli Zhao, Jingren Zhou, and Yang Cao. 2023. 
Customizable image synthesis with multiple subjects. Advances in neural information processing systems 36 (2023), 57500–57519. Grace Luo, Trevor Darrell, Oliver Wang, Dan B Goldman, and Aleksander Holynski. 2024. Readout guidance: Learning control
from diffusion features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8217–8227.
Tuna Han Salih Meral, Enis Simsar, Federico Tombari, and Pinar Yanardag. 2024. Conform: Contrast is all you need for high-fidelity text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9005–9014.
Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 565–571.
Weili Nie, Sifei Liu, Morteza Mardani, Chao Liu, Benjamin Eckart, and Arash Vahdat. 2024. Compositional Text-to-Image Generation with Dense Blob Representations. arXiv preprint arXiv:2405.08246 (2024).

Fig. 10. Generated images across different seeds. Our method follows noise-induced layouts to generate diverse compositions, while still faithfully depicting subject characteristics such as class features, attributes, and quantities. Note the rich layout diversity of the results.

Fig. 11. Comparison of non-curated images generated from seeds 0 to 6 for the prompt "A hyper-realistic photo of a ferret, a squirrel, and a crow in a beautiful garden".

Or Patashnik, Rinon Gal, Daniil Ostashev, Sergey Tulyakov, Kfir Aberman, and Daniel Cohen-Or. 2025. Nested Attention: Semantic-aware Attention Values for Concept Personalization. arXiv:2501.01407 [cs.CV]
Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, and Daniel Cohen-Or. 2023. Localizing object-level shape variations with text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 23051–23061.
Quynh Phung, Songwei Ge, and Jia-Bin Huang. 2024. Grounded text-to-image synthesis with attention refocusing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7932–7942.
Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. 2023. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023).
Leigang Qu, Shengqiong Wu, Hao Fei, Liqiang Nie, and Tat-Seng Chua. 2023. LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation. arXiv preprint arXiv:2308.05095 (2023).
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 1, 2 (2022), 3.
Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, and Gal Chechik. 2023. Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map
Alignment. arXiv preprint arXiv:2306.08877 (2023). Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al .2024a. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159 (2024). Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, Zhaoyang Zeng, Hao Zhang, Feng Li, Jie Yang, Hongyang Li, Qing Jiang, and Lei Zhang. 2024b. Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks. ArXiv abs/2401.14159 (2024). https: //api.semanticscholar.org/CorpusID:267212047 Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition . 10684–10695. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.2022. Photorealistic text-to-image diffusion models with deep language under- standing. Advances in neural information processing systems 35 (2022), 36479–36494. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition . 815–823. Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. 2023. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 1921–1930. Hazarapet Tunanyan, Dejia Xu, Shant Navasardyan, Zhangyang Wang, and Humphrey Shi. 2023. Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else. arXiv preprint arXiv:2310.07419 (2023). Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. 2023. Sketch-guided text-to-image diffusion models. In ACM SIGGRAPH 2023 Conference Proceedings . 1–11. Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. 2024a. Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 6232–6242. Xierui Wang, Siming Fu, Qihan Huang, Wanggui He, and Hao Jiang. 2024b. Ms-diffusion: Multi-subject zero-shot image personalization with layout guidance. arXiv preprint arXiv:2406.07209 (2024). Zirui Wang, Zhizhou Sha, Zheng Ding, Yilin Wang, and Zhuowen Tu. 2023. To- kencompose: Grounding diffusion with token-level supervision. arXiv preprint arXiv:2312.03626 (2023). Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang, Yefeng Zheng, and Mike Zheng Shou. 2023. BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision . 7452–7461. Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, and CUI Bin. 2024. Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal llms. In Forty-first International Conference on Machine Learning . Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, et al .2023. Reco: Region-controlled text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 14246–14255. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023b. 
Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision . 3836–3847. Ruisu Zhang, Yicong Chen, and Kangwook Lee. 2023a. Zero-shot Improvement of Object Counting with CLIP. In R0-FoMo: Robustness of Few-shot and
Zero-shot Learning in Large Foundation Models.
Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. 2023. LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 22490–22499.
Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. 2024. Migc: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6818–6828.

APPENDIX

A TECHNICAL DETAILS

A.1 Architecture and Hyperparameters

In our experiments, we used SDXL [Podell et al. 2023] as the backbone model. Our soft-layout network follows the Readout Guidance architecture with spatially aligned heads [Luo et al. 2024], with minor modifications:
•We use the self-attention keys and cross-attention queries from the decoder layers as inputs to the network, adjusting the input channels accordingly.
•We modify the final convolutional layer to output 10 channels instead of 3.
The network was trained for 5,000 steps using a learning rate of 10−4. To compute the triplet loss, we sample 50 pixel triplets per image, selecting subject pixels with a probability of 0.75 and background pixels with a probability of 0.25. A similarity margin of 𝛼=0.5 is applied.
During image generation, we use 50 denoising steps, a DDPM scheduler, and a guidance scale of 7.5. Guidance is applied during the first 15 denoising steps, with 5 gradient descent iterations per step. The decisive loss is computed using the weights 𝛼cross=0.3, 𝛼var=0.21, and 𝛼dice=0.49. For Ldice, a temperature value of 𝜏=15 is used. For hard-clustering, we set a sliding window size of 𝑤=30 and a variance threshold of 𝜎²_cluster=0.025 (a minimal sketch of this guidance schedule is given after Sec. A.3 below).

A.2 Dataset Generation

Our dataset consists of approximately 1,500 generated images and their corresponding segmentation maps. To construct training prompts, we use the same 20 MSCOCO classes as in MIC [Binyamin et al. 2024]. Each prompt randomly includes 1–3 classes. For each selected class, we assign a quantity between 1 and 10, with a probability of 0.9. We optionally prepend a prefix (with probability 0.8) and append a postfix (with probability 0.6), both sampled from fixed lists:
•Prefixes: "a photo of", "an image of", "a picture of", "a painting of".
•Postfixes: "on the grass", "on the road", "on the ground", "in a yard".
We observe that our soft-layout network generalizes well to unseen object classes, backgrounds, and prompt structures, owing to its lightweight design and the use of expressive attention features from the pre-trained diffusion model.

A.3 Computational Resource Usage

All experiments were conducted on an NVIDIA A100 GPU, with all computations — including clustering — performed on the GPU. Similar to Readout Guidance, our sampling process takes approximately 77 seconds and utilizes 36 GB of VRAM, compared to 7 seconds and 8 GB for vanilla SDXL. Our implementation builds upon the Readout Guidance and Bounded Attention codebases, which were not optimized for resource efficiency. As such, further code optimization is likely to reduce both runtime and memory usage.
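The sketch below consolidates the generation-time hyperparameters listed in A.1 into a guidance loop. It is an illustrative skeleton, not the authors' code: `denoise_step`, `predict_soft_layout`, `decisive_loss_fn`, and `hard_layout_fn` are placeholder callables for the pipeline's own components, `z` is a latent tensor (e.g., a PyTorch tensor), the learning rate is an assumed value, and the sliding-window stacking and Hungarian re-matching are omitted for brevity.

```python
NUM_STEPS, GUIDED_STEPS, ITERS_PER_STEP = 50, 15, 5     # Appendix A.1
WEIGHTS = dict(a_cross=0.3, a_var=0.21, a_dice=0.49)    # decisive-loss weights
VAR_THRESH, WINDOW = 0.025, 30                          # hard-clustering settings


def sample_with_guidance(z, denoise_step, predict_soft_layout, decisive_loss_fn,
                         hard_layout_fn, k, lr=0.01):
    M_prev = None
    for i in range(NUM_STEPS):
        z = denoise_step(z, step=i)                     # z_{t-1}
        if M_prev is not None and i < GUIDED_STEPS:
            for _ in range(ITERS_PER_STEP):             # align S_{t-1} with M_t
                z = z.detach().requires_grad_(True)
                loss = decisive_loss_fn(predict_soft_layout(z, step=i), M_prev)
                loss.backward()
                z = (z - lr * z.grad).detach()
        # The full method stacks soft-layouts from up to WINDOW earlier timesteps
        # and re-matches cluster labels with the Hungarian algorithm; omitted here.
        M_prev = hard_layout_fn(predict_soft_layout(z, step=i), k)
    return z
```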
Table 2. Quantitative evaluation.
Method   Color  Texture  Single-Class  Multi-Class  Layout Diversity
Ours     0.704  0.686    0.837         0.723        0.718
SDXL     0.568  0.660    0.746         0.676        -
A&E      0.537  0.659    0.742         0.682        -
LLM+BA   0.685  0.665    0.659         0.603        0.408
RPG      0.604  0.643    0.609         0.635        0.155
Ranni    0.259  0.445    0.729         0.579        0.679
LMD+     0.457  0.614    0.885         0.898        0.408

Table 3. Ablation user study results.
Method                        Prompt-Alignment Accuracy
w/o Ldecisive                 0.016
w/o Lcross                    0.442
w/o Lvar                      0.447
w/o Ldice                     0.105
Ldecisive(𝑆𝑡−1, 𝑀𝑡−1)         0.289
Full method                   0.832

B ADDITIONAL RESULTS

We use the same baselines as in the main paper: LLM+BA [Dahary et al. 2025], RPG [Yang et al. 2024], Ranni [Feng et al. 2024a], LMD+ [Lian et al. 2023], and A&E [Chefer et al. 2023].

B.1 Quantitative Results

Table 2 presents the quantitative results comparing our method with the baselines. Unlike other approaches that balance trade-offs between metrics, our method consistently delivers high performance across all metrics.

B.2 Qualitative Results

Figure 12 showcases additional qualitative comparison results. Unlike competing methods, which fail to accurately generate all subjects from the prompt, our method consistently preserves the intended semantics of each subject. For instance, in the first row, none of the methods successfully generate all the fruits specified in the prompt. Similarly, in the last row, none of the methods accurately capture all the animals in the prompt, with most suffering from attribute leakage.

Fig. 12. Qualitative comparison of our method with baseline methods.

B.3 Ablation Studies

Quantitative Evaluation. To quantitatively assess the contribution of each component, we conducted a user study, following the same format as our benchmark user study and composed of a subset of 10 random prompts from the full user study. We report the percentage of user-selected images for each method, i.e., prompt-alignment accuracy, as recorded by 19 participants, in Table 3.

Qualitative Evaluation. We display qualitative ablation studies in Figure 13, where we systematically vary our method's configuration to assess each component's importance. As can be seen in the leftmost four columns, neglecting our decisive guidance, or any of its terms, promotes subject over-generation due to the instability of the hard-layouts during generation. Omitting Lcross or Lvar causes clusters to fragment internally, leading them to span multiple, disconnected subject instances. Additionally, parts of a subject may be absorbed into the background, resulting in shrunken or incomplete subject regions. Omitting Ldice also promotes over-generation, as oscillating cluster boundaries across timesteps lead to the emergence of redundant subjects with mixed appearances at cluster edges.

Finally, we ablate the choice of computing our Ldecisive loss between the intermediate soft-layout 𝑆𝑡−1 and the previous hard-layout 𝑀𝑡. Instead, we apply an additional denoising step on 𝑧𝑡−1 before optimizing it using guidance. That denoising step is employed to extract the denoising model's features and compute an updated hard-layout 𝑀𝑡−1. Then,
during guidance, we compute Ldecisive between 𝑆𝑡−1 and 𝑀𝑡−1. As evident in the second-to-right column, this approach also compromises accurate subject generation, yielding redundant subject instances due to clustering inconsistencies between timesteps.

Fig. 13. Qualitative ablation. We ablate our method by skipping the guidance steps (w/o Ldecisive), dropping a loss term when optimizing (w/o Lcross, w/o Lvar, w/o Ldice), and performing an alternative guidance step (Ldecisive(𝑆𝑡−1, 𝑀𝑡−1)), where the loss is computed between the soft- and hard-layouts of the same timestep (instead of Ldecisive(𝑆𝑡−1, 𝑀𝑡)). All images in each row are generated using the same seed.

B.4 Limitations

In Figure 14, we present two limitations of our method. First, in cluttered scenes, subjects may appear with irregular sizes or exhibit poor interaction with the background (left image). We observe that this issue also occurs with vanilla SDXL and can often be mitigated by increasing the number of denoising steps. Second, since the layouts are derived from the model's prior — which lacks a robust understanding of spatial relationships [Chatterjee et al. 2024] — subjects may sometimes fail to respect spatial constraints specified in the prompt (right image).

Fig. 14. Limitations.
Paper2Poster: Towards Multimodal Poster Automation from Scientific Papers

1Wei Pang∗, 2Kevin Qinghong Lin∗, 1Xiangru Jian∗, 1Xi He, 3Philip Torr
1University of Waterloo  2National University of Singapore  3University of Oxford
Project Page: https://paper2poster.github.io

Abstract

Academic poster generation is a crucial yet challenging task in scientific communication, requiring the compression of long-context interleaved documents into a single, visually coherent page. To address this challenge, we introduce the first benchmark and metric suite for poster generation, which pairs recent conference papers with author-designed posters and evaluates outputs on (i) Visual Quality—semantic alignment with human posters, (ii) Textual Coherence—language fluency, (iii) Holistic Assessment—six fine-grained aesthetic and informational criteria scored by a VLM-as-judge, and notably (iv) PaperQuiz—the poster's ability to convey core paper content as measured by VLMs answering generated quizzes. Building on this benchmark, we propose PosterAgent, a top-down, visual-in-the-loop multi-agent pipeline: the (a) Parser distills the paper into a structured asset library; the (b) Planner aligns text–visual pairs into a binary-tree layout that preserves reading order and spatial balance; and the (c) Painter–Commenter loop refines each panel by executing rendering code and using VLM feedback to eliminate overflow and ensure alignment. In our comprehensive evaluation, we find that GPT-4o outputs—though visually appealing at first glance—often exhibit noisy text and poor PaperQuiz scores, and we find that reader engagement is the primary aesthetic bottleneck, as human-designed posters rely largely on visual semantics to convey meaning. Our fully open-source variants (e.g., based on the Qwen-2.5 series) outperform existing 4o-driven multi-agent systems across nearly all metrics, while using 87% fewer tokens. It transforms a 22-page paper into a finalized yet editable '.pptx' poster — all for just $0.005. These findings chart clear directions for the next generation of fully automated poster-generation models. The code and datasets are available at https://github.com/Paper2Poster/Paper2Poster.

1 Introduction

Academic posters play a pivotal role in scientific communication, enabling rapid dissemination of key findings at conferences where attendees have only minutes to grasp core insights from the full papers. Despite significant progress in automated slide generation – with systems such as PPTAgent [37] and D2S [29] pioneering text-to-slide pipelines – poster creation [33, 30, 3] remains an underexplored and substantially more challenging task. Unlike slide decks, which distribute content across multiple, single-message slides, academic posters must condense an entire paper into a single, visually coherent page. This requires (i) handling a much longer multi-modal context [24], (ii) tightly interleaving text and graphics to convey complex ideas at a glance [33, 3], and (iii) respecting stringent spatial constraints to avoid text overflow or layout collapse [10, 30]. These factors make VLM- or LLM-only approaches insufficient: without explicit visual feedback, as humans have, it is difficult to reason about spatial layouts, maintain logical flow within a confined canvas, and ensure legibility and aesthetics. To systematically evaluate poster generation, we propose the Paper2Poster Benchmark, the first benchmark and metric suite for this novel task.
∗Equal contribution. Corresponding to: xihe@uwaterloo.ca, kevin.qh.lin@gmail.com

Figure 1: Overview of this work.
We address two core challenges in scientific poster generation: Left: How to create a poster from a paper — we propose PosterAgent (Sec. 4), a framework that transforms long-context scientific papers (20K+ tokens) into structured visual posters; and Right: How to evaluate poster quality — we introduce the Paper2Poster benchmark (Sec. 3), which enables systematic comparison between agent-generated and author-designed posters.

Our benchmark comprises recent conference papers paired with author-designed posters, along with a human-and-model evaluation protocol that measures (i) Visual Quality — how well the generated poster aligns visually with the human-designed version; (ii) Textual Coherence — the clarity and fluency of the poster's language; (iii) Holistic Assessment — the overall aesthetic and informational quality, rated across six fine-grained dimensions by a VLM-as-judge; and notably (iv) PaperQuiz — motivated by the poster's role as a bridge between authors and readers, this metric evaluates how effectively the poster alone conveys core paper content by simulating diverse reader comprehension, using VLMs to answer questions derived from the paper.

To tackle multimodal context compression in Paper2Poster, we introduce PosterAgent, a multi-agent framework that first globally organizes document content and then performs panel-level refinements, while weaving visual feedback into every stage. Starting with the Parser, we ingest the full paper PDF and transform it into an asset library of section-level text summaries and extracted figures and tables. Next, the Planner semantically matches each synopsis to its corresponding visual asset and generates a binary-tree layout, allocating panels by estimated content length while preserving reading order and spatial balance. Finally, the Painter–Commenter loop refines each panel: the Painter distills section–figure pairs into concise bullet points and renders draft panels via python-pptx code, and the Commenter — a VLM with zoom-in reference prompts — provides targeted feedback to correct text overflow and spatial alignment. This top-down, visual-in-the-loop design produces concise, coherent posters without manual tuning.

Using Paper2Poster, we comprehensively evaluate human-designed (oracle) posters, state-of-the-art generative models (e.g., GPT-4o), and multi-agent solutions, revealing several key insights: (i) GPT-4o's outputs, though visually appealing at first glance, suffer from noisy or incoherent text, yielding high perplexity and poor PaperQuiz performance; (ii) VLM-based judging shows the primary aesthetic bottleneck is Engagement rather than informational content, since human posters convey meaning predominantly through visual semantics; (iii) PaperQuiz proves a reliable metric — VLM reader scores correlate closely with human evaluations, and more capable VLMs achieve higher scores on well-designed posters; and (iv) our Paper2Poster pipeline, built on a fully open-source toolbox (e.g., Qwen-2.5-VL-7B), surpasses existing GPT-4o–based multi-agent approaches on nearly all metrics while consuming 87% fewer tokens. Our findings illuminate pathways for the next generation of models and agent systems aimed at fully automated poster generation.
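As a concrete illustration of the Painter–Commenter loop sketched above, the snippet below renders one panel with python-pptx and drops bullets until a crude size heuristic stops flagging overflow. This is only an illustrative sketch: in the actual pipeline the overflow check is performed by a VLM inspecting a zoomed-in rendering, and the panel text, sizes, and heuristic here are assumptions.

```python
from pptx import Presentation
from pptx.util import Inches, Pt


def render_panel(slide, title, bullets, left_in, top_in, width_in, height_in):
    # Place a text box for one poster panel and fill it with the title and bullets.
    box = slide.shapes.add_textbox(Inches(left_in), Inches(top_in),
                                   Inches(width_in), Inches(height_in))
    tf = box.text_frame
    tf.word_wrap = True
    tf.text = title
    for b in bullets:
        p = tf.add_paragraph()
        p.text = "• " + b
        p.font.size = Pt(18)
    return box


def likely_overflows(bullets, height_in, chars_per_line=60, lines_per_inch=3):
    # Crude stand-in for the Commenter's overflow feedback (the real one is a VLM).
    lines = sum(max(1, len(b) // chars_per_line + 1) for b in bullets)
    return lines > height_in * lines_per_inch


poster = Presentation()
slide = poster.slides.add_slide(poster.slide_layouts[6])   # blank layout
bullets = ["Benchmark of 100 paper-poster pairs",
           "Four evaluation axes, including PaperQuiz"]
while likely_overflows(bullets, height_in=3.0):
    bullets = bullets[:-1]                                  # "reduce content" feedback
render_panel(slide, "Paper2Poster", bullets, 0.5, 0.5, 4.0, 3.0)
poster.save("poster_panel_demo.pptx")
```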
2 Related Work

2.1 Visual Design Automation

Recent advances in multi-modal learning have driven significant progress in automating visual design tasks. These tasks commonly fall into two broad categories: (i) Text-rich Image Generation. Tasks such as poster generation [3, 17, 11, 33] have greatly benefited from diffusion-based approaches [11, 10, 31], which enable the synthesis of detailed visuals conditioned on natural language descriptions. However, ensuring the quality and fidelity of embedded textual content via an end-to-end pixel generative model remains
a major challenge, as generated text at the pixel level appears blurry and hard to read. (ii) Complex Visual Layouts. Tasks like website designing [7, 27, 16, 23] or slide generation [37, 2, 8, 18, 26, 29] involve intricate visual structures and require integrating diverse components. To handle such complexity, mainstream approaches [37, 5] often employ agentic workflows that rely heavily on code generation and tool usage to assemble complete visual outputs. In contrast, our Paper2Poster addresses a more demanding yet highly practical setting: scientific visual design based on academic papers. This involves long-context, interleaved multi-modal inputs and outputs, posing substantial challenges in both effectiveness and computational efficiency.

2.2 Vision-Language Agents

Recent progress has revealed the promising potential of LLMs beyond pure language understanding. Techniques such as ReAct [36, 35] have demonstrated that LLMs can serve as autonomous agents, capable of solving complex tasks through step-by-step reasoning and dynamic interaction via coding [32, 34], API function calling [25, 15], or UI interface interaction [13, 22, 19]. Despite these advances, general-purpose agents still struggle with professional tasks [12], as they require serious, accurate interaction and domain-specific knowledge. One closely related application is slide automation [5, 37], where agents translate brief textual queries into executable Python code (e.g., via python-pptx) to render presentation slides. However, our Paper2Poster setting is significantly more challenging: instead of a text prompt, we take full-length academic papers as inputs and generate compact, well-structured posters as output. This novel task requires careful design of both evaluation metrics and an effective, practical automation workflow.

3 Paper2Poster Benchmark

3.1 Task Definition

Given a scientific paper composed of interleaved text, figures, and tables, the goal is to automatically generate a single-page academic poster that faithfully conveys the paper's core content in a visually coherent and spatially efficient format. This task presents several unique challenges:
a. Long-Context Long-Horizon Task: Scientific papers span multiple pages and thousands of words. Summarizing key insights while preserving coherence demands hierarchical understanding and selective abstraction. The complexity further necessitates long-horizon reasoning and multiple iterative interactions, making the task especially challenging.
b. Interleaved Multimodal Inputs: Papers integrate numerous figures, tables, and charts, each semantically linked to the surrounding text. Successful poster generation demands the ability to extract, interpret, and align these multimodal elements in a contextually appropriate manner.
c. Layout-Aware Multimodal Outputs: Unlike tasks focused solely on text (e.g., blogs) or vision, poster generation requires producing interleaved text–image outputs within a constrained spatial layout. This necessitates joint reasoning over language, visual content, and layout to prevent overflow, imbalance, and logical misalignment.

3.2 Data Curation

Data Source. We focus exclusively on AI papers for three key reasons: (1) they are relatively recent and undergo rigorous peer review, ensuring high scientific quality; (2) they offer diverse content across subfields—such as image-rich computer vision, text-centric NLP, and theory papers with numerous equations—providing a broad range of input modalities.
To support this, we adopt the POSTERSUM dataset [24], which contains a large collection of paper–poster pairs from recent AI conferences including ICML, NeurIPS, and ICLR (2022–2024). We specifically use the test split
to reduce the risk of overlap with training data.

Diverse Sampling. Based on the initial candidate set, we apply two filtering criteria to curate high-quality data: (1) Length Control: We deliberately include longer papers, including supplementary material, selecting PDFs that exceed 15 pages and extend up to 50 pages. (2) Latest Version: We manually retrieve the most recent PDF version for each paper to ensure the dataset reflects final camera-ready submissions. From the filtered set, we construct the final Paper2Poster dataset consisting of 100 paper–poster pairs, stratified by publication year to ensure temporal balance: 33 pairs from 2022, 33 from 2023, and 34 from 2024. To further enhance diversity, we also stratify by source venue—selecting 35 papers from NeurIPS, 37 from ICML, and 28 from ICLR, ensuring broad coverage across these leading conferences.

Data Statistics. Overall, Paper2Poster comprises 100 paper–poster pairs spanning 280 distinct topics across domains such as Computer Vision (19%), Natural Language Processing (17%), and Reinforcement Learning (10%), ensuring comprehensive coverage across subfields. As illustrated in Fig. 2 (a-b), the input papers contain an average of 12155.7 words across 22.6 pages, amounting to approximately 20370.3 tokens, with an average of 22.59 figures per paper. In Fig. 2 (c-d), the corresponding author-designed posters include an average of 774.1 words (1416.2 tokens) and 8.7 figures. This reflects a textual compression ratio of approximately 14.4× and a figure reduction ratio of about 2.6× from paper to poster.

Figure 2: Data Statistics of Paper2Poster. (a) Word cloud illustrating the diversity of research topics. (b) Textual token statistics and figure count statistics for input papers vs. posters provided by authors.

Overall, these statistics highlight that Paper2Poster is a multimodal context compression task, requiring effective abstraction of both textual and visual content.

3.3 Evaluation Metrics

To systematically measure the quality of generated posters, we establish a comprehensive evaluation framework that covers four essential dimensions, as shown in Fig. 3 (left): (i) visual quality, (ii) textual coherence, (iii) quality assessment via VLM (i.e., VLM-as-judge), and notably our proposed (iv) PaperQuiz, which measures how effectively the poster conveys the paper's core knowledge.

Figure 3: Left: Overview of the evaluation framework in Paper2Poster. Middle: We automatically generate multiple-choice questions from each paper using an LLM (o3), forming our PaperQuiz evaluation. Right: In PaperQuiz, we simulate multiple readers by allowing VLMs—representing different expertise levels (e.g., student, professor)—to read each generated poster and answer the quiz.
The poster that achieves the highest average score is considered the most effective in conveying the paper’s content. (i) Visual Quality. The visual presentation of a poster directly impacts reader comprehension
and engagement. To evaluate visual quality from both global and local perspectives, we employ two metrics: (1) We measure "Visual Similarity" between the generated posters and the author-designed posters, treated as ground truth, using CLIP image embeddings. This approach is favored over traditional distribution-based metrics (such as FID, used in prior works [5, 37]), as it assesses instance-level semantic consistency. (2) We measure "Figure Relevance" by computing the average CLIP similarity between figures and their corresponding text sections in the original paper. This metric ensures figures are contextually appropriate and effectively integrated, assigning zero relevance to posters lacking visual content. For both metrics, we employ AltCLIP [4] due to its robustness in handling longer-sequence alignment. Detailed definitions of both metrics can be found in Appendix F.1.

(ii) Textual Coherence. Clear and fluent text is essential for poster readability and comprehension. We therefore quantify textual coherence by computing the standard "Perplexity" (PPL) of the entire poster text under Llama-2-7b-hf. Lower PPL indicates more predictable, coherent language. A detailed definition is provided in Appendix F.2.

(iii) Holistic Assessment (VLM-as-Judge). To evaluate overall poster effectiveness along fine-grained dimensions, we prompt a VLM (e.g., GPT-4o) as an automated judge that outputs scores (1–5). For each poster image, the model assigns 6 criterion-level scores: 3 under "Aesthetic Score"—{Element Quality, Layout Balance, Engagement}—and 3 under "Information Score"—{Clarity, Content Completeness, Logical Flow}. This direct, image-centric evaluation preserves fidelity to both visual design and content, while also capturing informativeness. It provides fine-grained feedback to guide future poster design. Full prompt templates and scoring protocols are detailed in Appendix F.3.

Figure 4: Illustration of the PosterAgent pipeline. Given an input paper, PosterAgent generates a structured academic poster through three modules: 1. Parser: Extracts key textual and visual assets using a combination of tools and LLM-based summarization, resulting in a structured asset library. 2. Planner: Matches assets and arranges them into coherent layouts, iteratively generating panels with a zoom-in operation. 3. Painter–Commenter: The Painter generates panel-level bullet content along with executable code and renders the visual output, while the Commenter—a VLM with in-context reference—provides feedback to ensure layout coherence and prevent content overflow.

(iv) PaperQuiz. Given the poster's central role in communicating the content of its source paper—serving as a bridge between authors and readers—we design an evaluation protocol that simulates this communication scenario.
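Before detailing PaperQuiz, the snippet below sketches the two fully automatic metrics defined above: instance-level visual similarity via CLIP image embeddings and poster-text perplexity under a causal language model. It is not the benchmark code; the paper uses AltCLIP and Llama-2-7b-hf, while the checkpoints named here are smaller illustrative stand-ins with the same interfaces.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPModel, CLIPProcessor


def visual_similarity(gen_path, ref_path, name="openai/clip-vit-base-patch32"):
    # Cosine similarity between the generated and author-designed poster embeddings.
    model, proc = CLIPModel.from_pretrained(name), CLIPProcessor.from_pretrained(name)
    imgs = [Image.open(p).convert("RGB") for p in (gen_path, ref_path)]
    with torch.no_grad():
        emb = model.get_image_features(**proc(images=imgs, return_tensors="pt"))
    emb = emb / emb.norm(dim=-1, keepdim=True)
    return float(emb[0] @ emb[1])              # cosine similarity in [-1, 1]


def perplexity(text, name="gpt2"):
    # Standard PPL: exponentiated mean token-level negative log-likelihood.
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return float(torch.exp(loss))
```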
As shown in Fig. 3 (middle), each paper PDF is first submitted to o3, acting as examiner, to generate 100 multiple-choice questions per paper: 50 verbatim questions (directly answerable from the text, spanning 13 content aspects) and 50 interpretive questions (targeting high-level comprehension across 10 conceptual dimensions). Next, as illustrated in Fig. 3 (right), we present each poster