SlowGuess committed on
Commit 4cfe39c · verified · 1 Parent(s): 40ac6d4

Add Batch fc02b068-d87e-4699-aaa1-7d5db5a8626c

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_content_list.json +3 -0
  2. a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_model.json +3 -0
  3. a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_origin.pdf +3 -0
  4. a3ddoesdiffusiondreamabout3dalignment/full.md +512 -0
  5. a3ddoesdiffusiondreamabout3dalignment/images.zip +3 -0
  6. a3ddoesdiffusiondreamabout3dalignment/layout.json +3 -0
  7. abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_content_list.json +3 -0
  8. abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_model.json +3 -0
  9. abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_origin.pdf +3 -0
  10. abencharelmmsmastersatevaluatingaigeneratedimages/full.md +516 -0
  11. abencharelmmsmastersatevaluatingaigeneratedimages/images.zip +3 -0
  12. abencharelmmsmastersatevaluatingaigeneratedimages/layout.json +3 -0
  13. acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_content_list.json +3 -0
  14. acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_model.json +3 -0
  15. acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_origin.pdf +3 -0
  16. acccollabanactorcriticapproachtomultiagentllmcollaboration/full.md +560 -0
  17. acccollabanactorcriticapproachtomultiagentllmcollaboration/images.zip +3 -0
  18. acccollabanactorcriticapproachtomultiagentllmcollaboration/layout.json +3 -0
  19. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_content_list.json +3 -0
  20. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_model.json +3 -0
  21. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_origin.pdf +3 -0
  22. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/full.md +0 -0
  23. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/images.zip +3 -0
  24. acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/layout.json +3 -0
  25. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_content_list.json +3 -0
  26. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_model.json +3 -0
  27. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_origin.pdf +3 -0
  28. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/full.md +471 -0
  29. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/images.zip +3 -0
  30. accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/layout.json +3 -0
  31. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_content_list.json +3 -0
  32. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_model.json +3 -0
  33. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_origin.pdf +3 -0
  34. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/full.md +644 -0
  35. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/images.zip +3 -0
  36. acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/layout.json +3 -0
  37. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_content_list.json +3 -0
  38. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_model.json +3 -0
  39. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_origin.pdf +3 -0
  40. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/full.md +443 -0
  41. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/images.zip +3 -0
  42. acceleratingdiffusiontransformerswithtokenwisefeaturecaching/layout.json +3 -0
  43. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_content_list.json +3 -0
  44. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_model.json +3 -0
  45. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_origin.pdf +3 -0
  46. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/full.md +333 -0
  47. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/images.zip +3 -0
  48. acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/layout.json +3 -0
  49. acceleratingneuralnetworktrainingananalysisofthealgoperfcompetition/bf14fb76-b040-4a88-a8d9-fdd96771aa65_content_list.json +3 -0
  50. acceleratingneuralnetworktrainingananalysisofthealgoperfcompetition/bf14fb76-b040-4a88-a8d9-fdd96771aa65_model.json +3 -0
a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b41e666f9c3ab3accb87af1b87bf1fcccf5b154d6f309ed3c16332bb2a29219
+ size 149947
a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b886740219f9371f923fd14150141bcbf5600d8d046dabc863e09d6ae9beec5
+ size 179865
a3ddoesdiffusiondreamabout3dalignment/f343e28c-96c0-4631-b299-8da12a866be5_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a30a666b242ba49bf3d4ee89be8d4ea9602036ed6643a9c9f07d3a74092e48a
+ size 12948146
a3ddoesdiffusiondreamabout3dalignment/full.md ADDED
@@ -0,0 +1,512 @@
1
+ # A3D: DOES DIFFUSION DREAM ABOUT 3D ALIGNMENT?
2
+
3
+ Savva Ignatyev\*1 Nina Konovalova\*2 Daniil Selikhanovych1 Oleg Voynov1,2 Nikolay Patakin2 Ilya Olkov1 Dmitry Senushkin2 Alexey Artemov3 Anton Konushin2 Alexander Filippov4 Peter Wonka5 Evgeny Burnaev1,2 1Skoltech, Russia 2AIRI, Russia 3Medida AI, Israel 4AI Foundation and Algorithm Lab, Russia 5KAUST, Saudi Arabia \*Savva Ignatyev and Nina Konovalova contributed equally Corresponding author: Savva Ignatyev (e-mail: savva.ignatyev@skoltech.ru)
4
+
5
+ ![](images/3a8c891fd63917a373ede743eb04d3f1429efd2c5d99d8fc182f7f578281a448.jpg)
6
+ Figure 1: Our method A3D enables conditioning the text-to-3D generation process on a set of text prompts to jointly generate a set of 3D objects with a shared structure (top). This enables a user to make "hybrids" composed of different parts from multiple aligned objects (middle), or to perform a text-driven structure-preserving transformation of an input 3D model (bottom).
7
+
8
+ # ABSTRACT
9
+
10
+ We tackle the problem of text-driven 3D generation from a geometry alignment perspective. Given a set of text prompts, we aim to generate a collection of objects with semantically corresponding parts aligned across them. Recent methods based on Score Distillation have succeeded in distilling the knowledge from 2D diffusion models to high-quality representations of the 3D objects. These methods handle multiple text queries separately, and therefore the resulting objects have a high variability in object pose and structure. However, in some applications, such as 3D asset design, it may be desirable to obtain a set of objects aligned with each other. In order to achieve the alignment of the corresponding parts of the generated objects, we propose to embed these objects into a common latent space and optimize the continuous transitions between these objects. We enforce two kinds of properties of these transitions: smoothness of the transition and plausibility of the intermediate objects along the transition. We demonstrate that both of these properties are essential for good alignment. We provide several practical scenarios that benefit from alignment between the objects, including 3D editing and object hybridization, and experimentally demonstrate the effectiveness of our method. voyleg.github.io/a3d
11
+
12
+ ![](images/130150ab9497529ad0a1e3335cfc32e67689823c3287099a729e4c3dc05fed89.jpg)
13
+ Figure 2: Collections of objects generated with existing text-to-3D methods lack structural consistency (left, (Shi et al., 2024)). Shapes obtained with existing text-driven 3D editing methods lack text-to-asset alignment and visual quality (middle, (Chen et al., 2024a)). In contrast, our method enables the generation of structurally coherent, text-aligned assets with high visual quality (right).
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Creating high-quality 3D assets is a time- and labor-intensive process, so even experienced 3D artists commonly break it down into manageable steps. Prior to shaping an asset, an artist might conceptualize its design structure to capture geometric proportions and spatial relationships of its semantically meaningful parts. Then, a series of detailed 3D design instances (e.g., high-resolution geometry and textures) consistent with an established structure can be produced. Recent 3D generation research (Poole et al., 2023; Qiu et al., 2024; Shi et al., 2024) promises to significantly reduce the effort related to manual production of such high-resolution textured shapes, replacing it by an automated AI-based step controlled with natural language. A text-to-3D generation pipeline could potentially be utilized to produce a collection of structurally aligned 3D objects, consisting of a common set of semantic parts and sharing their structure, e.g., the pose or the arrangement of semantic parts.
18
+
19
+ However, existing 3D generation approaches synthesize objects independently and fail to maintain structural alignment across them (Figure 2, left). One may attempt to enforce the alignment in a series of 3D objects by generating an initial one with a text-to-3D pipeline and obtaining the others with text-driven 3D editing methods (Haque et al., 2023; Chen et al., 2024a). Unfortunately, the latter struggle with the visual quality and sometimes fail to perform the necessary edits appropriately, resulting in a low degree of the alignment with the text prompt (Figure 2, middle).
20
+
21
+ To address the limitations of existing approaches, we propose $A3D$ , a method for jointly generating collections of structurally aligned objects from a collection of respective text prompts. The idea of our method is to embed a set of 3D objects and transitions between them into a shared latent space and enforce smoothness and plausibility of these transitions. We take inspiration from the transition trajectory regularization proposed for 2D GANs (Karras et al., 2020, Sec. 3.2). We represent each set of 3D objects and the transitions between them with a single Neural Radiance Field (NeRF) (Mildenhall et al., 2020) and train it with a text-to-image denoising diffusion model via Score Distillation Sampling (SDS) (Poole et al., 2023) to simultaneously correspond to the set of text prompts and enforce the plausibility of the transitions between the objects.
22
+
23
+ Our method is naturally suited for several different scenarios that require control over the structure of the generated objects. (1) Generation of multiple structurally aligned 3D objects (Figure 1, top) enables artists to choose an appropriate 3D asset among a variety of generations, replace 3D objects within existing scenes, or transfer animations across distinct objects. (2) Combining parts of different objects into a "hybrid" (Figure 1, middle) allows adjusting constituent elements without affecting the overall structure of the asset. (3) Structure-preserving transformations of a 3D object (Figure 1, bottom), let a user design a simplified 3D shape with a particular pose and let the automatic generation process fill in complex geometric details and texture while preserving the structure.
24
+
25
+ Sets of objects generated using our method exhibit a high degree of structural alignment and high visual quality scores, outperforming those obtained with state-of-the-art alternatives. Our method is easily adapted to structure-preserving transformation, performing on par with specialized text-driven 3D editing methods. Further, it is effective in combination with different text-to-3D generation frameworks. Overall, our work advances the state of the art in text-driven 3D generation and opens up new possibilities for applications requiring the generation of structurally aligned objects.
26
+
27
+ # 2 RELATED WORK
28
+
29
+ # 2.1 TEXT-DRIVEN 3D ASSET GENERATION
30
+
31
+ Collecting large, high-quality, diverse 3D datasets poses significant challenges, so 3D generation approaches predominantly leverage 2D priors for training. DreamFusion (Poole et al., 2023) introduced Score Distillation Sampling (SDS), which enables training Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020) with the guidance of pre-trained 2D diffusion models. Subsequent research has refined this methodology to improve the quality and speed of 3D generation. Magic3D (Lin et al., 2023) uses a coarse-to-fine optimization strategy to increase the speed and resolution. Fantasia3D (Chen et al., 2023a) disentangles the training of geometry and texture. Several works enhance realism, detail, and optimization speed by utilizing adversarial training (Chen et al., 2024d), 3D-view conditioned diffusion models (Liu et al., 2023; Shi et al., 2023a; 2024; Liu et al., 2024; Ye et al., 2024; Seo et al., 2024), and Gaussian splatting-based models (Tang et al., 2024b; Yi et al., 2024). All these works focus on independent optimization for distinct prompts, resulting in the generation of collections of objects that lack structural alignment, as we show in Figure 2 and in our ablation study. This misalignment issue persists even in amortized frameworks, where a single generative model is trained to handle multiple prompts (Tang et al., 2024a; Jun & Nichol, 2023; Hong et al., 2024; Siddiqui et al., 2024; Ma et al., 2024). Due to the mode-seeking nature of SDS (Poole et al., 2023), these frameworks often produce misaligned objects whose structure is sensitive to subtle variations in the prompt, increasing inconsistency across generated outputs. Unlike the described methods that optimize 3D objects independently or amortized models trained on large-scale datasets, our method optimizes a small set of objects jointly, allowing us to achieve structural consistency between objects.
32
+
33
+ # 2.2 TEXT-DRIVEN 3D ASSET EDITING
34
+
35
+ One straightforward way to produce a collection of aligned 3D objects is to generate an initial object using a text-to-3D pipeline and subsequently modify this object via text-driven 3D editing. Several methods have been proposed to manipulate NeRF-based scene representations using text as guidance (Haque et al., 2023; Park et al., 2024; Bao et al., 2023; Zhuang et al., 2023). DreamBooth3D (Raj et al., 2023) and Magic3D (Lin et al., 2023) provide the capability to edit personalized objects while leveraging the underlying 3D structure. FocalDreamer (Li et al., 2024), Progressive3D (Cheng et al., 2024), and Vox-E (Sella et al., 2023) confine the effect of modifications to specific parts of the object, thus enhancing control of the editing process. Fantasia3D (Chen et al., 2023a) and DreamMesh (Yang et al., 2024) focus on global transformations of one object into another, iteratively optimizing a 3D model to align with a text prompt via SDS. Iterative optimization with SDS does not guarantee preservation of the structure of the transformed object, so several techniques were proposed to improve it. Coin3D (Dong et al., 2024) refines geometric primitives into high-quality assets by imposing deformation constraints through input masks. GaussianDreamer (Yi et al., 2024) and LucidDreamer (Liang et al., 2024) show text-driven editing capabilities for Gaussian splats, which they initialize using a separate pipeline and fine-tune with the help of a diffusion model. Haque et al. (2023) and Palandra et al. (2024) use the SDS loss in combination with InstructPix2Pix (Brooks et al., 2023), a pre-trained 2D image editing network. MVEdit (Chen et al., 2024a) goes one step further: it avoids SDS and proposes a special mechanism that coordinates 2D edits from different viewpoints. Although some of these methods allow obtaining sets of aligned objects sequentially, the editing process is constrained by the configuration of the initially generated object. This limits the visual quality of the generated sets of objects, as we show in Figure 2 and in our experiments. In contrast, our method optimizes the whole transition trajectory between the objects and produces both structurally consistent and high-quality results. Additionally, our method is easily adapted for the task of structure-preserving 3D editing, performing on par with specialized methods.
36
+
37
+ # 2.3 LATENT SPACE REGULARIZATION
38
+
39
+ To achieve the structural alignment between the generated objects, we embed these objects into a common latent space together with the transition trajectories between them. We draw inspiration from works on generative modeling of 2D images that show that alignment, disentanglement, and quality of the generated samples can be improved with regularization of trajectories between them. For example, Berthelot* et al. (2019) and Sainburg et al. (2018) directly optimize the quality of the interpolated samples with adversarial training, and StyleGAN (Karras et al., 2020) explicitly regularizes the smoothness of the trajectories by calculating the perceptual path distance in the VGG feature space. Similarly, we employ a diffusion model as a critic that encourages plausibility of the samples on the trajectories via SDS, leading to smooth transitions and aligned objects.
42
+
43
+ # 3 PRELIMINARIES
44
+
45
+ # 3.1 NEURAL RADIANCE FIELDS
46
+
47
+ Neural radiance field (NeRF) (Mildenhall et al., 2020) is a differentiable volume rendering approach that represents the scene as a radiance function parameterized with a neural network. This network maps a 3D point $\pmb{\mu} \in \mathbb{R}^3$ and a view direction $\mathbf{d} \in \mathbb{S}^2$ into a volumetric density $\tau \in \mathbb{R}^{+}$ and a view-dependent emitted radiance $\mathbf{c} \in \mathbb{R}^3$ at that spatial location. To render an image, NeRF queries 5D coordinates $(\pmb{\mu}, \mathbf{d})$ along camera rays and gathers the output colors and densities using volumetric rendering. The ray color $\mathbf{C}$ is calculated numerically through quadrature approximation:
48
+
49
+ $$
50
+ \mathbf{C} = \sum_{i} \alpha_{i} T_{i} \mathbf{c}_{i}, \quad T_{i} = \prod_{j < i} (1 - \alpha_{j}), \quad \alpha_{i} = 1 - \exp\left(-\tau_{i} \|\boldsymbol{\mu}_{i} - \boldsymbol{\mu}_{i+1}\|\right), \tag{1}
51
+ $$
52
+
53
+ where $\mathbf{c}_i$ and $\tau_{i}$ are the radiance and density queried at the $i$-th position along the ray, and $\alpha_{i}$ and $T_{i}$ are the opacity and the accumulated transmittance, respectively.
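
The quadrature of Equation (1) is easy to sketch in code. Below is a minimal NumPy illustration for a single ray; the sample positions, densities, and colors are hypothetical inputs, not values from the paper.

```python
import numpy as np

def render_ray(mu, tau, c):
    """Composite colors along one ray following Equation (1).

    mu:  (M, 3) sample positions ordered front to back
    tau: (M,)   volumetric densities at the samples
    c:   (M, 3) emitted radiance at the samples
    """
    # Segment lengths ||mu_i - mu_{i+1}||; the last sample gets a zero-length segment.
    delta = np.append(np.linalg.norm(np.diff(mu, axis=0), axis=1), 0.0)
    # Per-sample opacity alpha_i = 1 - exp(-tau_i * delta_i).
    alpha = 1.0 - np.exp(-tau * delta)
    # Accumulated transmittance T_i = prod_{j < i} (1 - alpha_j).
    T = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))
    # Ray color C = sum_i alpha_i * T_i * c_i.
    return np.sum((alpha * T)[:, None] * c, axis=0)
```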
54
+
55
+ Originally, NeRF is iteratively trained from a set of posed images. At each iteration, a batch of camera rays is randomly sampled from the set of all observed pixels and the photometric deviation between the colors $\hat{\mathbf{C}}_k$ observed along the $k$ 'th ray and $\mathbf{C}_k$ rendered via Equation (1) is minimized:
56
+
57
+ $$
58
+ \mathcal{L}_{\mathrm{c}} = \sum_{k} \| \mathbf{C}_{k} - \hat{\mathbf{C}}_{k} \|_{2}^{2}. \tag{2}
59
+ $$
60
+
61
+ # 3.2 SCORE DISTILLATION SAMPLING
62
+
63
+ Score Distillation Sampling (SDS) (Poole et al., 2023) was proposed for fitting a NeRF to a text description of the 3D scene, without any input images, using a pre-trained text-to-image diffusion model. The NeRF is iteratively guided towards consistency with the text prompt by using the diffusion model as a critic for the rendered images. At each iteration, the image $\mathbf{x}$ is rendered for a randomly sampled camera position. A random Gaussian noise $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ is added to the image and the output of the denoising diffusion model $\mathcal{E}$ is obtained via $\hat{\epsilon} = \mathcal{E}(\mathbf{y},t,\alpha_t\mathbf{x} + \sigma_t\boldsymbol {\epsilon})$ , where $\mathbf{y}$ is the embedding of the text prompt, $t\sim \mathcal{U}(0,1)$ is the diffusion timestep, and $\alpha_{t}$ and $\sigma_t$ are weighting factors. The weights $\theta$ of the NeRF network are then updated using the gradient of the SDS loss term:
64
+
65
+ $$
66
+ \nabla_{\theta} \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t, \epsilon} \left[ w(t) \, (\hat{\epsilon} - \epsilon) \, \partial_{\theta} \mathbf{x} \right], \tag{3}
67
+ $$
68
+
69
+ where $w(t)$ is another weighting factor.
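
As a rough PyTorch-style sketch of one SDS update under Equation (3): `render` and `diffusion_eps` stand in for the differentiable renderer and the frozen diffusion model, and the discrete timestep grid is a simplification of the continuous $t \sim \mathcal{U}(0,1)$ used above; none of these names come from the paper's code.

```python
import torch

def sds_step(render, diffusion_eps, y, optimizer, alphas, sigmas, w):
    """One Score Distillation Sampling update (Equation (3)) on the NeRF parameters.

    render()                 -> image x rendered from a random camera (differentiable)
    diffusion_eps(y, t, x_t) -> noise predicted by the frozen diffusion model
    y                        -> text embedding; alphas, sigmas, w -> schedule weights
    """
    x = render()
    t = torch.randint(0, len(alphas), (1,)).item()   # random diffusion timestep
    eps = torch.randn_like(x)                        # Gaussian noise
    x_t = alphas[t] * x + sigmas[t] * eps            # noised render
    with torch.no_grad():
        eps_hat = diffusion_eps(y, t, x_t)           # frozen critic, no gradient through it
    # The SDS gradient w.r.t. x is w(t) * (eps_hat - eps); backpropagate it through the renderer.
    optimizer.zero_grad()
    x.backward(gradient=w[t] * (eps_hat - eps))
    optimizer.step()
```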
70
+
71
+ Poole et al. (2023) use a NeRF-like network $F$ that maps the 3D point $\pmb{\mu}$ into volumetric density $\tau$ and the diffuse RGB reflectance $\pmb{\rho} \in \mathbb{R}^3$ (albedo), i.e., $(\tau, \pmb{\rho}) = F(\pmb{\mu}; \theta)$ . They obtain the emitted radiance $\mathbf{c}$ for Equation (1) via shading with a random lighting:
72
+
73
+ $$
74
+ \mathbf{c} = \boldsymbol{\rho} \odot \mathbf{l}(\boldsymbol{\mu}, \mathbf{n}), \quad \mathbf{n} = - \nabla_{\boldsymbol{\mu}} \tau / \| \nabla_{\boldsymbol{\mu}} \tau \|, \tag{4}
75
+ $$
76
+
77
+ where $\mathbf{l} \in \mathbb{R}^3$ is the radiance received by the scene at the point $\pmb{\mu}$ from the light sources, $\mathbf{n}$ is "surface normal", and $\odot$ is the element-wise product.
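
A small sketch of the shading in Equation (4), assuming the normal is obtained by differentiating the density through autograd; the field `F` and light model `light` are placeholders, not names from the paper.

```python
import torch

def shade(F, mu, light, eps=1e-8):
    """Diffuse shading c = rho * l(mu, n) with n = -grad(tau) / ||grad(tau)|| (Equation (4))."""
    mu = mu.detach().requires_grad_(True)
    tau, rho = F(mu)                                  # density and albedo from the field
    grad_tau = torch.autograd.grad(tau.sum(), mu, create_graph=True)[0]
    n = -grad_tau / (grad_tau.norm(dim=-1, keepdim=True) + eps)
    return rho * light(mu, n)                         # element-wise product with received radiance
```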
78
+
79
+ # 4 METHOD
80
+
81
+ Learning implicit 3D representations of the objects separately often produces non-aligned results (Figure 2). Our method uses a single NeRF-like network to represent a set of aligned objects together with the transitions between them. For this, we introduce a new input parameter $\mathbf{u} \in \mathbb{R}^N$ that represents a point in a latent space. We optimize the network, guiding it with a text-to-image diffusion model via SDS loss. The diffusion model is conditioned on a weighted linear combination of individual text embeddings, where the weight coefficients correspond to the elements of $\mathbf{u}$ . At each iteration, we sample $\mathbf{u}$ randomly from the edges of the probability simplex. At the vertices of the simplex, the SDS loss guides the renders from the network towards consistency with the individual text prompts. At the edges of the simplex, the loss guides the renders towards image plausibility. This leads to plausible transitions between the objects in the respective pairs and, as a result, to the alignment between the objects. The overview of our method is shown in Figure 3.
82
+
83
+ ![](images/711c501c069d9b23ffd1274b512dc4416722f1a7990cb62f5b99b85073cd0002.jpg)
84
+ Figure 3: Overview of our method. The NeRF model, conditioned on the latent code $\mathbf{u}$ sampled from the edges of the latent simplex, produces a render. The render is passed to the diffusion model, conditioned on the linear combination of the embeddings of the text prompts. Finally, the SDS loss is backpropagated to the NeRF model.
85
+
86
+ # 4.1 GENERATION OF MULTIPLE ALIGNED 3D OBJECTS
87
+
88
+ We adopt the SDS method for the joint generation of aligned 3D objects from a set of $N$ text prompts. To do this, we embed all the objects into a common space of 3D reflectance fields represented with a single neural field. We train our network as a small generative model with the latent space built around the given set of text prompts. Specifically, we define the latent code $\mathbf{u}$ on the $(N - 1)$ -dimensional probability simplex $\{\mathbf{u} \in \mathbb{R}^N : u_1 + \dots + u_N = 1, u_i \geq 0\}$ and assign the vertices of this simplex $\{u_i = 1\}$ to the given textual prompts. We add this latent code as an input parameter to the neural field. We train it to represent the individual 3D objects at the respective vertices of the simplex and map the linear interpolations (edges) between the latent codes at the vertices to transitions between the objects. This allows us to regularize these transitions and achieve structural alignment across the objects.
89
+
90
+ Specifically, we iteratively train the network with the SDS loss (Equation (3)). At each iteration, we sample the latent code $\mathbf{u}$ from the vertices and edges of the simplex. We render the image $\mathbf{x}$ following Equations (1) and (4), where the density and albedo now additionally depend on the latent code $(\tau, \rho) = F(\mu, \mathbf{u}; \theta)$ . At the vertices of the simplex, we condition the diffusion model on the text embeddings of the individual prompts $\{\mathbf{y}_i\}$ . At the edges, which represent transitions between the objects, we apply two kinds of regularization inspired by works on training GANs with mixed latent codes (Berthelot* et al., 2019; Karras et al., 2020).
91
+
92
+ First, we encourage the network to produce plausible 3D objects for latent codes sampled on the transition trajectories. We use the text-to-image diffusion model as a critic to evaluate and improve the plausibility through SDS. For this, we obtain the text embedding $\mathbf{y}$ for the diffusion model as the sum of the embeddings of the individual prompts weighted with the components of the latent code, $\mathbf{y}(\mathbf{u}) = u_1\mathbf{y}_1 + \dots + u_N\mathbf{y}_N$. For the edges of the latent simplex, this corresponds to linearly interpolating between the pair of embeddings of the individual text prompts. In the ablation study, we show that a similar, albeit weaker, effect can be achieved by conditioning the diffusion model on some general prompt independent of the objects being generated.
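
A minimal sketch of sampling a latent code from the vertices and edges of the simplex and forming the blended embedding $\mathbf{y}(\mathbf{u})$; the probability of picking a vertex versus an edge is an illustrative choice, not a value from the paper.

```python
import torch

def sample_latent_and_embedding(Y, p_vertex=0.5):
    """Sample u on a vertex or an edge of the (N-1)-simplex and blend text embeddings.

    Y: (N, D) tensor of text embeddings y_1 ... y_N for the N prompts.
    Returns the latent code u of shape (N,) and the conditioning embedding y(u) = sum_i u_i y_i.
    """
    N = Y.shape[0]
    u = torch.zeros(N)
    if N == 1 or torch.rand(()) < p_vertex:
        u[torch.randint(N, (1,))] = 1.0          # vertex: a single prompt
    else:
        i, j = torch.randperm(N)[:2]             # edge: a pair of prompts
        w = torch.rand(())
        u[i], u[j] = w, 1.0 - w
    return u, u @ Y                              # latent code and blended embedding
```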
93
+
94
+ Second, we encourage the transitions between the objects to be smooth. We avoid doing this directly to give our model more flexibility and instead regularize the smoothness of the transitions implicitly. Specifically, we limit the depth of our neural field network, which limits the Lipschitz norm of the function parameterized by this network (Miyato et al., 2018), and enforce smoothness of rendered normal maps with a corresponding loss (see Equation (5)).
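
To illustrate what a deliberately shallow, latent-conditioned field can look like, here is a sketch of an MLP head with a single hidden layer on top of a positional feature encoder; the encoder, layer width, and activations are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class LatentConditionedField(nn.Module):
    """Shallow MLP mapping (positional features, latent code u) to density and albedo."""

    def __init__(self, encoder, feat_dim, n_prompts, hidden=64):
        super().__init__()
        self.encoder = encoder                       # e.g. a multiresolution hash grid
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_prompts, hidden), # a single hidden layer keeps the map smooth
            nn.ReLU(),
            nn.Linear(hidden, 1 + 3),                # density tau and albedo rho
        )

    def forward(self, mu, u):
        h = self.encoder(mu)                                               # (M, feat_dim) features
        out = self.mlp(torch.cat([h, u.unsqueeze(0).expand(h.shape[0], -1)], dim=-1))
        tau = nnf.softplus(out[:, :1])                                     # non-negative density
        rho = torch.sigmoid(out[:, 1:])                                    # albedo in [0, 1]
        return tau, rho
```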
95
+
96
+ These regularization strategies encourage the network to learn a mapping from the edges of the simplex to meaningful transitions between the individual 3D objects. In our ablation study, we show that this is crucial for obtaining structurally aligned objects.
97
+
98
+ # 4.2 HYBRIDIZATION: COMBINING THE ALIGNED 3D OBJECTS
99
+
100
+ The neural network trained with our method not only represents multiple aligned 3D objects but also enables smooth interpolation of the reflectance field between these objects at each point in 3D
101
+
102
+ ![](images/dfa71b9c66d96a3c569fce65e328450e1591ee83fd40aeea0cccd3499164c7a9.jpg)
103
+ Anchor points
104
+
105
+ ![](images/14456a03832c1f5d281b87e5ba7acca6702c4c4c0e6afca103612126e1266818.jpg)
106
+ Latent code
107
+
108
+ ![](images/3e558794e0a9aecdd838ec619113e68f18e7924c7f401e158b9b4bff7347ef0d.jpg)
109
+ seahorse
110
+
111
+ ![](images/78b87f1c93870b1c4826bb439aec17a08b6fbf7329417d51907f7dcb45379159.jpg)
112
+ mermaid
113
+ Figure 4: Our method allows us to blend different objects seamlessly. A proper alignment of multiple 3D models provides the ability to replace parts of one object with similar components of the other objects. We manually select spatial anchor points (left) and assign them to a particular model. The latent code $\mathbf{u}$ is linearly interpolated between anchors at every spatial location, resulting in a smooth distribution over 3D space (second column). The resulting objects are shown on the right.
114
+
115
+ ![](images/b21679fc84256631259b9873c34869ffdf68ff8ede7fa30c086346c0f034020e.jpg)
116
+ Hybrid model
117
+
118
+ ![](images/1f80dbbf7d6849985dd3fd134502e8dc5d7c521711eb10e13c9dda0949c96642.jpg)
119
+ Anchor points
120
+
121
+ ![](images/e66538dc4e6dc9132fe1d0bf6d1ec926ab60f6e92380dc21729a32ce67fb7082.jpg)
122
+ Latent code
123
+
124
+ ![](images/71266951c70f25726714b04b161777a00f85a6dae7d64319738769b093a4ca29.jpg)
125
+ kangaroo
126
+
127
+ ![](images/a40e85ed1d79f883b863f770096f568dcd9dacff7e78ab9b34e6209b905d172b.jpg)
128
+ gopher
129
+
130
+ ![](images/a2a4618544fc30b6b366b18d1ea59bd438397bb89d290903bb74048b7bb7e525.jpg)
131
+ Hybrid model
132
+
133
+ space $\pmb{\mu}$ independently. This allows for a natural and seamless fusion of objects, blending specific parts of individual generated objects into new forms, such as a gopher with a head of a kangaroo shown in Figure 4. To achieve this, the 3D space is partitioned into regions corresponding to specific objects and the reflectance field is smoothly interpolated across the boundaries of these regions. This partitioning is defined by a smooth spatial distribution of the latent code $\mathbf{u}(\pmb{\mu})$ as illustrated in the second column of Figure 4. The new hybrid model is rendered following Equations (1) and (4) with the reflectance field now depending on the spatially varying latent code $(\tau, \pmb{\rho}) = F(\pmb{\mu}, \mathbf{u}(\pmb{\mu}); \theta)$ .
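
One way to realize the spatially varying latent code $\mathbf{u}(\boldsymbol{\mu})$ is sketched below; it interpolates between the two closest anchors, as described later in Section 5.2. The function name and the distance-based weighting are illustrative assumptions.

```python
import torch

def spatial_latent_code(mu, anchor_pos, anchor_u, eps=1e-8):
    """Interpolate the latent code between the two closest anchors at each query point.

    mu:         (M, 3) query positions
    anchor_pos: (K, 3) manually placed anchor points
    anchor_u:   (K, N) latent code assigned to each anchor (e.g. a simplex vertex)
    """
    d = torch.cdist(mu, anchor_pos)                      # (M, K) distances to all anchors
    d2, idx = torch.topk(d, k=2, dim=1, largest=False)   # two closest anchors per point
    w = d2[:, 1:] / (d2.sum(dim=1, keepdim=True) + eps)  # weight of the closest anchor
    return w * anchor_u[idx[:, 0]] + (1.0 - w) * anchor_u[idx[:, 1]]   # (M, N)
```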
134
+
135
+ # 4.3 STRUCTURE-PRESERVING TRANSFORMATION OF 3D MODELS
136
+
137
+ Our method can be easily adapted for the transformation of a given source 3D model into a target 3D model described by a text prompt while preserving the original structure, such as pose and proportions. For this, first, we set up the neural network as described in Section 4.1 for two text prompts ($N = 2$). In this setup, the latent code $\mathbf{u}$ is defined on a one-dimensional segment $\{u_1 \in [0,1]; u_2 = 1 - u_1\}$. Then, we initialize the network with the input 3D model uniformly across the whole latent space. This initialization can be done in different ways depending on the representation of the input model. In our experiments, we obtain renderings of the input model for a random set of viewpoints and fit the network to these renderings photometrically by minimizing the loss function in Equation (2). Afterwards, we select a text prompt describing the input model (chosen manually for simplicity in our experiments) and assign the endpoint of the latent segment $u_1 = 1$ to this prompt and the opposite endpoint $u_2 = 1$ (i.e., $u_1 = 0$) to the target prompt. Finally, we train the network with SDS as described in Section 4.1, additionally keeping the photometric consistency constraint with the input model (Equation (2)) at the respective endpoint of the latent segment $u_1 = 1$.
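
A high-level sketch of the two-stage schedule described above; `fit_photometric`, `sds_loss`, `photometric_loss`, and `step` are placeholder callables standing in for the losses of Equations (2) and (3), and the sampling of the latent segment is an illustrative simplification rather than the paper's exact schedule.

```python
import random

def structure_preserving_transform(fit_photometric, sds_loss, photometric_loss, step,
                                   y_source, y_target, n_init_steps, n_sds_steps,
                                   lam_photo=1.0):
    """Stage 1: fit the field to the source model over the whole latent segment.
       Stage 2: SDS with blended prompts, keeping a photometric anchor at u1 = 1."""
    # Stage 1: initialize the network with the input 3D model uniformly over the segment.
    for _ in range(n_init_steps):
        u1 = random.random()
        step(fit_photometric(u=(u1, 1.0 - u1)))

    # Stage 2: SDS training with linearly blended prompt embeddings.
    for _ in range(n_sds_steps):
        u1 = random.choice([0.0, 1.0, random.random()])      # endpoints and interior points
        y = u1 * y_source + (1.0 - u1) * y_target             # blended text embedding
        loss = sds_loss(u=(u1, 1.0 - u1), text_embedding=y)
        if u1 == 1.0:                                         # source endpoint: stay close to the input model
            loss = loss + lam_photo * photometric_loss(u=(1.0, 0.0))
        step(loss)
```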
138
+
139
+ # 5 EXPERIMENTS
140
+
141
+ We demonstrate the capabilities of our method in the three scenarios described above. We compare our method with alternatives quantitatively for the generation of pairs of aligned 3D objects and for the structure-preserving transformation of 3D models. We discuss the results of the hybridization of aligned objects generated with our method from the qualitative perspective. We implement our method on top of the MVDream text-to-3D generation model (Shi et al., 2024), which represents the 3D scene with Instant-NGP (Müller et al., 2022), an efficient version of NeRF. In Appendix A, we show that our method is also effective in combination with a different model, RichDreamer (Qiu et al., 2024), which represents 3D objects using DMTet (Shen et al., 2021). We show additional applications of our method in Appendix E. We refer the reader to the full set of animated results on the project page for a more complete picture. In Appendix B.4, we provide the computational costs and hardware details.
142
+
143
+ Metrics. We quantify three aspects of the generated pairs of aligned objects and the results of the structure-preserving transformation. The first one is the degree of alignment between the corresponding structural parts of the objects in a generated pair, or of a source 3D model and its transformed version. Measuring such alignment directly would require explicit detection of corresponding structural parts for an arbitrary pair of objects, which is a hard task by itself, even if the objects have similar structure. Recently, Tang et al. (2023) have proposed a method for finding corresponding points in pairs of images of arbitrary similar objects by matching features extracted from pretrained
144
+
145
+ Table 1: Quantitative comparison for the generation of multiple aligned 3D objects.
146
+
147
+ <table><tr><td colspan="8">GPTEval3D, % of comparisons where our method is preferred</td><td></td><td></td></tr><tr><td></td><td>Text-asset alignment</td><td>Text-geometry alignment</td><td>3D plausibility</td><td>Texture details</td><td>Geometry details</td><td>Overall quality</td><td>CLIP ↑</td><td>DIFT distance ↓ % of object size</td><td></td></tr><tr><td>vs. MVEdit</td><td>94</td><td>89</td><td>76</td><td>91</td><td>92</td><td>89</td><td>MVEdit</td><td>27.1</td><td>5.5</td></tr><tr><td>vs. LucidDreamer</td><td>61</td><td>77</td><td>70</td><td>56</td><td>80</td><td>77</td><td>LucidDreamer</td><td>26.4</td><td>11.3</td></tr><tr><td>vs. GaussianEditor</td><td>99</td><td>90</td><td>89</td><td>76</td><td>86</td><td>97</td><td>GaussianEditor</td><td>22.8</td><td>2.4</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>A3D (Ours)</td><td>27.7</td><td>6.1</td></tr></table>
148
+
149
+ 2D diffusion models. Based on this method, called DIFT, we define DIFT distance that we use to measure the structural alignment. To compute this distance for a pair of objects, we render them from the same viewpoint. We densely sample points on one of the renders and find the corresponding points on the other one with DIFT. For an ideally aligned pair of objects, a sample and its corresponding point have identical image coordinates. So, we define the DIFT distance as the average distance between these coordinates across all samples. We normalize it by the size of the objects in image space, for better interpretability. We report the value averaged across multiple viewpoints around the objects and for the points sampled for each of the objects in the pair.
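
A sketch of the DIFT-distance computation for a single viewpoint; it assumes the DIFT correspondences have already been extracted, and only implements the averaging and normalization described above.

```python
import numpy as np

def dift_distance(samples_a, matches_in_b, object_size_px):
    """Average displacement of matched points, as a percentage of the object size.

    samples_a:      (P, 2) pixel coordinates densely sampled on the render of object A
    matches_in_b:   (P, 2) DIFT correspondences of those samples on the render of object B
    object_size_px: scalar size of the objects in image space (pixels)
    """
    dist = np.linalg.norm(samples_a - matches_in_b, axis=1)
    return 100.0 * dist.mean() / object_size_px
```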
150
+
151
+ Second, we measure the semantic coherence between a generated object and the respective text prompt. We measure it following the methodology of GPTEval3D (Mao et al., 2023), that was shown to align with human perception well. Specifically, we ask a Large Multimodal Model GPT-4o (OpenAI, 2024) to compare the 3D objects generated with two methods for the same text prompt and choose the one that is more consistent with the prompt, based on Text-Asset Alignment and Text-Geometry Alignment. We compare our method against each alternative and report the percentage of comparisons in which our method is preferred. Additionally, we measure the coherence between the generated object and the prompt using CLIP similarity (Jain et al., 2022), which is defined as cosine similarity between the CLIP (Radford et al., 2021) embeddings of a render of the object and the respective text prompt.
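
CLIP similarity is the cosine similarity between the CLIP embeddings of a render and its prompt; below is a sketch using the open-source `transformers` CLIP model (the checkpoint name is an assumption, the paper does not state which CLIP variant it uses).

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def clip_similarity(image_path, prompt, name="openai/clip-vit-base-patch32"):
    """Cosine similarity between CLIP embeddings of a rendered view and a text prompt."""
    model = CLIPModel.from_pretrained(name)
    processor = CLIPProcessor.from_pretrained(name)
    inputs = processor(text=[prompt], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum().item()
```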
152
+
153
+ Finally, we evaluate the visual quality of the generated objects and the quality of their surface. For this, we compare the objects generated with two methods using GPTEval3D based on 3D Plausibility, Texture Details, Geometry Details, and Overall quality.
154
+
155
+ # 5.1 GENERATION OF MULTIPLE ALIGNED 3D OBJECTS
156
+
157
+ We evaluate our method in the generation of sets of structurally aligned objects on 15 pairs of prompts describing pairs of objects with similar morphology but different geometry and appearance, such as a car and a carriage. We include various categories of objects, namely different kinds of animals, humans, plants, vehicles, furniture, and buildings, see the list of prompts in Table 5.
158
+
159
+ As no existing method targets the generation of aligned 3D objects, we adopt for comparison several methods of text-driven generation and editing of 3D models. To obtain a pair of aligned objects with such a method, we generate one of the objects from scratch and transform it into the other one. We compare with MVEdit (Chen et al., 2024a), LucidDreamer (Liang et al., 2024), and GaussianEditor (Chen et al., 2024c). MVEdit takes mesh as an input and generates multiple views of the edited object with InstructPix2Pix (Brooks et al., 2023) diffusion network. Then, the mesh is optimized photometrically to be consistent with the edited views, iteratively reducing the level of diffusion noise. LucidDreamer uses Gaussian Splatting and its own version of SDS algorithm inspired by the works on 2D image editing. GaussianEditor also uses Gaussian Splatting in combination with methods of 2D generative guidance, in particular Instruct-Pix2Pix, and additionally develops an anchor loss to control the flexibility of Gaussians. It only performs text-driven editing but not generation, so we generate the initial 3D objects for this method using MVDream (Shi et al., 2024). In our ablation study, we also compare with pairs of objects generated using MVDream independently.
160
+
161
+ We show the quantitative comparison in Table 1 and the qualitative comparison in Figure 5. Our method generates pairs of objects aligned with the text prompts and with high visual and geometric quality. It outperforms all the other methods on all evaluation criteria of GPTEval3D and w.r.t. CLIP similarity. Compared to our method, MVEdit produces less detailed objects. LucidDreamer produces noisier geometry and often struggles with multi-view inconsistency. GaussianEditor often struggles
162
+
163
+ ![](images/b826cba808cdad2e2d36011e736e5980c29935e63b55b96c73fc01577b91253e.jpg)
164
+ Figure 5: Pairs of objects generated with existing methods and our method. The top two rows show the results for one pair of prompts written below, the bottom two rows show the results for another pair of prompts. For each object, we show a color rendering and a rendering of the geometry below it.
165
+
166
+ to obtain an object corresponding to the prompt. This may be due to Instruct-Pix2Pix (used by this method) generating inconsistent guidance from different views for synthesized objects, as it was trained on real-world data.
167
+
168
+ Pairs of objects generated with our method have a high degree of structural alignment, which is confirmed by a low DIFT distance, less than one tenth of the size of an object. W.r.t. this metric, our method is only slightly outperformed by MVEdit and GaussianEditor, which, while obtaining the second object in a pair by transforming the first one, often fail to change the geometry of the initial object. This leads to poor alignment with the text prompt and low quality of the generated objects overall (see the values of GPTEval3D). Moreover, the other methods, which transform one object in a pair into the other one, often generate a variant of the initial object with a geometric structure unsuitable for the other prompt, as in the carriage-car pair produced by MVEdit. This fundamentally limits the quality of sets of objects generated with this sequential approach. In contrast, our method jointly optimizes the set of objects so that they simultaneously share the structure and correspond to their respective text prompts well.
169
+
170
+ # 5.2 HYBRIDIZATION: COMBINING THE ALIGNED 3D OBJECTS
171
+
172
+ We show examples of the hybrid objects combining parts of aligned objects produced by our method, and illustrate the process of getting these hybrids in Figure 4. In these experiments, for better visibility we intentionally choose the hyperparameters of our method to increase the visual difference between the generated objects. To choose which part of each object we want to use, we assign several anchor points to each object and manually place these points in the common 3D space of the objects. We define the spatial distribution of the latent code $\mathbf{u}(\boldsymbol{\mu})$ at the location $\boldsymbol{\mu}$ (described in Section 4.2) via linear interpolation between the latent codes corresponding to the objects associated with the two closest anchors.
173
+
174
+ The examples of hybrids demonstrate that our method generates aligned 3D objects that can be seamlessly blended in different configurations. The coherent appearance of the hybrid models demonstrates a high degree of structural alignment across the generated objects. Remarkably, our method allows us to easily transition between the parts of the objects with different geometries, e.g., the necks of the gopher and kangaroo, which have different diameters, or waists of the seahorse and mermaid, which have fins and hands nearby. This is in contrast to methods that represent 3D objects with a mesh (e.g., MVEdit), which have to be locally adjusted first to be stitched together.
175
+
176
+ Table 2: Quantitative comparison for structure-preserving transformation.
177
+
178
+ <table><tr><td colspan="8">GPTEval3D, % of comparisons where our method is preferred</td></tr><tr><td></td><td>Text-asset alignment</td><td>Text-geometry alignment</td><td>3D plausibility</td><td>Texture details</td><td>Geometry details</td><td>Overall quality</td><td>CLIP ↑ DIFT distance ↓ % of object size</td></tr><tr><td>vs. MVEdit</td><td>76</td><td>66</td><td>50</td><td>80</td><td>84</td><td>83</td><td>MVEdit 27.9 3.7</td></tr><tr><td>vs. GaussianEditor</td><td>94</td><td>94</td><td>83</td><td>97</td><td>97</td><td>100</td><td>GaussianEditor 24.6 1.8</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>A3D (Ours) 27.8 7.9</td></tr></table>
179
+
180
+ ![](images/2073f10d8643b66c5cd65eb91ff8deae65f7f3ee3154603c14376e1b1562c71d.jpg)
181
+ Figure 6: Objects generated with existing methods and our method from an initial 3D model on the left. Each row shows the results obtained for the text prompt below. For each object, we show a color rendering and a rendering of the geometry.
182
+
183
+ # 5.3 STRUCTURE-PRESERVING TRANSFORMATION OF 3D MODELS
184
+
185
+ We evaluate the capability of our method to transform an initial 3D model while preserving its structure on 26 text prompts. For each prompt we find a coarse initial model with the desired structure on the web, or use the SMPL parametric human body model (Loper et al., 2023) in a desired pose. In this way, we obtain, for example, a skeleton from a 3D model of a cat, or a princess on a throne from a simple model of a sitting woman, see the list of text prompts in Table 6. We compare with the same text-driven 3D editing methods as in the generation of pairs of objects.
186
+
187
+ We show the quantitative comparison in Table 2 and the qualitative comparison in Figure 6. LucidDreamer diverged for half of the scenes, so we only compare with it qualitatively. Our method generates objects aligned with the text prompts and with high visual and geometric quality, while preserving the geometric structure of the initial 3D model in terms of pose and proportions. It generally produces results on par with state-of-the-art specialized text-driven 3D editing methods, which is confirmed by the metrics. Our method consistently outperforms MVEdit w.r.t. asset quality and alignment with the prompt by producing more detailed results. Unlike MVEdit, which is restricted to superficial deformations of the surface, our method is able to add or remove significant parts of the object requested by the prompt, e.g., adding the throne and crown for the princess, or shrinking the cat down to its skeleton. This also explains a slightly higher DIFT distance for our method, since these additional parts do not have corresponding parts in the initial 3D model. LucidDreamer produces objects with inconsistent and distorted geometry. It rarely preserves the pose and overall structure of the initial 3D model and often struggles with the Janus problem, producing objects with multiple faces, limbs, etc. On the other hand, it generates a more detailed visual appearance than our method.
188
+
189
+ # 6 ABLATION
190
+
191
+ We compare our method with two branches of baselines for generating pairs of objects. We refer to these baselines as (A), (B), (C), (E), (F), and to our complete method as (D).
192
+
193
+ Table 3: Quantitative ablation study.
194
+
195
+ <table><tr><td colspan="4">DIFT distance ↓, percentage of object size</td></tr><tr><td>(A) MVDream, independently generated objects</td><td>30.1</td><td></td><td></td></tr><tr><td>(B) MVDream + multiple objects in one network</td><td>18.7</td><td>(F) A3D, 3-layer MLP</td><td>14.3</td></tr><tr><td>(C) A3D, transition plausibility with empty prompt</td><td>14.2</td><td>(E) A3D, 2-layer MLP</td><td>13.5</td></tr><tr><td>(D) A3D, transition plausibility with blended prompt (Ours)</td><td>6.1</td><td>(D) A3D, 1-layer MLP (Ours)</td><td>6.1</td></tr></table>
196
+
197
+ In the first branch, we study the effects of embedding a set of objects into a single neural field and the importance of regularizing plausibility of transition between them. We start with the basic version of the text-to-3D framework MVDream (Shi et al., 2024) that our method is based on (A). MVDream generates pairs of objects independently from one another. We modify MVDream to parameterize two objects with a single neural field without regularizing the transition between them (B). We also implement a version of our method that regularizes the transitions using a diffusion model conditioned on an empty text prompt (C), instead of a blending of the input prompts in our full method (D).
198
+
199
+ In the second branch of comparisons we study the importance of the smoothness of the transitions. We achieve this by following the reasoning in (Miyato et al., 2018) and limiting the depth of our neural network. Specifically, in our complete method (D), we parameterize sets of objects with a multilayer perceptron (MLP) with one hidden layer on top of a feature hash grid. We evaluate two alternative designs that use MLPs with two and three hidden layers (E, F).
200
+
201
+ We show the quantitative comparison in Table 3, the qualitative comparison in Figure 8, and provide more details in Appendix D.1. Our results fully support the motivation behind the components of our method. MVDream (A) produces pairs of objects with different poses and proportions, which is confirmed by a high DIFT distance, corresponding to the nearly one third of the size of an object. Version (B) improves alignment across the objects but still does not lead to similar poses and proportions. These results show that independent generation of the objects with a single model, does not guarantee good structural alignment, which can be explained by the mode-seeking nature of SDS (Poole et al., 2023). Using an empty prompt to enforce plausibility of the transitions (C) consistently improves the alignment between the corresponding structural parts of the objects, both qualitatively and quantitatively. This shows that the key property of our method is achieved through regularization of plausibility of the transitions and not through interpolation between the latent codes of individual objects. The interpolation that we use in our complete method (D) additionally makes the objects similar to each other (while sacrificing their realism and making them more stylized if they are naturally notably different from each other) and further improves the structural alignment across the objects. Further tuning of the plausibility loss weight allows one to control the degree of alignment, as we discuss in Appendix D.2. Our experiments with increasing the depth of the network (D-F) show that enforcing the smoothness of the transition between the objects is essential for the proper alignment.
202
+
203
+ # 7 CONCLUSIONS
204
+
205
+ We present A3D, the first method designed to generate a collection of objects structurally aligned with each other. This is achieved by encouraging the transitions between the objects, jointly embedded into a shared latent space, to be smooth and meaningful, which is demonstrated to be an essential property for proper alignment. We show that, when applied to the generation of structurally aligned objects, our method outperforms the editing-based competitors in terms of asset quality and text-object alignment, while keeping the geometric structural alignment at the state-of-the-art level. When applied to the 3D editing task, our method provides results on par with recent methods specialized in this problem. Our method allows composing novel objects by seamlessly combining parts from different aligned objects in the generated collection. Our approach is limited to generating static aligned objects and cannot be applied to pose-changing tasks. In future work, we plan to extend our framework to changing poses and 4D video generation by experimenting with different regularization techniques for the transitions between objects.
206
+
207
+ # ACKNOWLEDGMENTS
208
+
209
+ The authors acknowledge the use of the Skoltech supercomputer Zhores (Zacharov et al., 2019) to obtain the results presented in this paper. The research was partially supported by funding from the Skoltech Applied AI center.
210
+
211
+ # ETHICS STATEMENT
212
+
213
+ Our method is built on the MVDream model, so it inherits all the problematic biases and limitations that this model may have. For example, MVDream is fine-tuned from the open-source Stable Diffusion 2.1 model (Rombach & Esser) on the Objaverse (Deitke et al., 2023) and LAION (Schuhmann et al., 2022) datasets. The LAION-400M subset of the full LAION-5B was found to contain unwanted images (Birhane et al., 2021), including inappropriate and abusive depictions. Our method may have the potential to displace creative workers through automation, while increasing accessibility for the creative and gaming industries. There is a risk that our method could be used to produce fake content.
214
+
215
+ # REPRODUCIBILITY STATEMENT
216
+
217
+ We build our method on the official MVDream implementation using the threestudio framework (Shi et al., 2023c). We discuss the implementation details of our method in Appendix B.1, the details of the experiments with the methods that we compare with in Appendix B.3, and the evaluation details in Appendix C.
218
+
219
+ # REFERENCES
220
+
221
+ Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. Sine: Semantic-driven image-based nerf editing with prior-guided editing field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20919-20929, 2023.
222
+ David Berthelot*, Colin Raffel*, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=S1fQSiCcYm.
223
+ Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv preprint arXiv:2110.01963, 2021.
224
+ Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18392-18402, 2023.
225
+ Hansheng Chen, Ruoxi Shi, Yulin Liu, Bokui Shen, Jiayuan Gu, Gordon Wetzstein, Hao Su, and Leonidas Guibas. Generic 3d diffusion adapter using controlled multi-view editing, 2024a.
226
+ Hansheng Chen, Ruoxi Shi, Yulin Liu, Bokui Shen, Jiayuan Gu, Gordon Wetzstein, Hao Su, and Leonidas Guibas. Mvedit code. https://github.com/Lakonik/MVEDit, 2024b.
227
+ Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. *Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation*. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 22246–22256, 2023a.
228
+ Y. Chen, Z. Chen, C. Zhang, F. Wang, X. Yang, Y. Wang, Z. Cai, L. Yang, H. Liu, and G. Lin. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21476-21485, Los Alamitos, CA, USA, jun 2024c. IEEE Computer Society. doi: 10.1109/CVPR52733.2024.02029.
229
+ Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhonggang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussian editor experiment with bear. https://github.com/buaacyw/GaussianEditor/blob/master.script/edit_n2n_bear_grey.sh, 2023b.
230
+
231
+ Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussian editor code. https://github.com/buaacyw/GaussianEditor, 2023c.
232
+ Yiwen Chen, Chi Zhang, Xiaofeng Yang, Zhongang Cai, Gang Yu, Lei Yang, and Guosheng Lin. It3d: Improved text-to-3d generation with explicit view synthesis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 1237–1244, 2024d.
233
+ Xinhua Cheng, Tianyu Yang, Jianan Wang, Yu Li, Lei Zhang, Jian Zhang, and Li Yuan. Progressive3d: Progressively local editing for text-to-3d content creation with complex semantic prompts. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=O072Rc8uUy.
234
+ Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13142-13153, June 2023.
235
+ Wenqi Dong, Bangbang Yang, Lin Ma, Xiao Liu, Liyuan Cui, Hujun Bao, Yuewen Ma, and Zhaopeng Cui. Coin3d: Controllable and interactive 3d assets generation with proxy-guided conditioning. In ACM SIGGRAPH 2024 Conference Papers, SIGGRAPH '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400705250. doi: 10.1145/3641519.3657425. URL https://doi.org/10.1145/3641519.3657425.
236
+ Ayaan Haque, Matthew Tancik, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.
237
+ Fangzhou Hong, Jiaxiang Tang, Ziang Cao, Min Shi, Tong Wu, Zhaoxi Chen, Tengfei Wang, Liang Pan, Dahua Lin, and Ziwei Liu. 3dtopia: Large text-to-3d generation model with hybrid diffusion priors. arXiv preprint arXiv:2403.02234, 2024.
238
+ Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 867-876, 2022.
239
+ Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023.
240
+ Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8107-8116, 2020. doi: 10.1109/CVPR42600.2020.00813.
241
+ Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. Gaussian splatting code. https://github.com/graphdeco-inria/gaussian-splatting, 2023.
242
+ Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023.
243
+ Yuhan Li, Yishun Dou, Yue Shi, Yu Lei, Xuanhong Chen, Yi Zhang, Peng Zhou, and Bingbing Ni. Focaldreamer: Text-driven 3d editing via focal-fusion assembly. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4):3279-3287, Mar. 2024. doi: 10.1609/aaai.v38i4.28113. URL https://ojs.aaai.org/index.php/AAAI/article/view/28113.
244
+ Y. Liang, X. Yang, J. Lin, H. Li, X. Xu, and Y. Chen. Luciddreamer: Towards high-fidelity text-to-3d generation via interval score matching. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517-6526, Los Alamitos, CA, USA, jun 2024. IEEE Computer Society. doi: 10.1109/CVPR52733.2024.00623. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52733.2024.00623.
245
+ Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer code. https://github.com/EnVision-Research/LucidDreamer, 2023.
246
+
247
+ Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 300–309, 2023.
248
+ Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9298-9309, October 2023.
249
+ Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. Syncdreamer: Generating multiview-consistent images from a single-view image. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=MN3yH2ovHb.
250
+ Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multi-person linear model. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, pp. 851-866. 2023.
251
+ Lykon. Dreamshaper 8 code. https://huggingface.co/Lykon/dreamshaper-8, 2023.
252
+ Zhiyuan Ma, Yuxiang Wei, Yabin Zhang, Xiangyu Zhu, Zhen Lei, and Lei Zhang. Scaledreamer: Scalable text-to-3d synthesis with asynchronous score distillation. arXiv preprint arXiv:2407.02040, 2024.
253
+ Rui Mao, Guanyi Chen, Xulang Zhang, Frank Guerin, and Erik Cambria. Gpteval: A survey on assessments of chatgpt and gpt-4. arXiv preprint arXiv:2308.12488, 2023.
254
+ Luca Medeiros. Language segment-anything code. https://github.com/luca-medeiros/lang-segment-anything, 2024.
255
+ Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
256
+ Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.
257
+ Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 41(4): 1-15, 2022.
258
+ OpenAI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024. Accessed: 2024-05-26.
259
+ Francesco Palandra, Andrea Sanchietti, Daniele Baieri, and Emanuele Rodolà. Gsedit: Efficient text-guided editing of 3d objects via gaussian splatting. arXiv preprint arXiv:2403.05154, 2024.
260
+ JangHo Park, Gihyun Kwon, and Jong Chul Ye. ED-neRF: Efficient text-guided editing of 3d scene with latent space neRF. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=9DvDRTTdlu.
261
+ Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=FjNys5c7VyY.
262
+ Lingteng Qiu, Guanying Chen, Xiaodong Gu, Qi Zuo, Mutian Xu, Yushuang Wu, Weihao Yuan, Zilong Dong, Liefeng Bo, and Xiaoguang Han. Richdreamer: A generalizable normal-depth diffusion model for detail richness in text-to-3d. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9914–9925, June 2024.
263
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
264
+
265
+ Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Ben Mildenhall, Nataniel Ruiz, Shiran Zada, Kfir Aberman, Michael Rubenstein, Jonathan Barron, Yuanzhen Li, and Varun Jampani. Dreambooth3d: Subject-driven text-to-3d generation. ICCV, 2023.
266
+ Robin Rombach and Patrick Esser. Stable diffusion v2-1. https://huggingface.co/stabilityai/stable-diffusion-2-1.
267
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022.
268
+ Tim Sainburg, Marvin Thielk, Brad Theilman, Benjamin Migliori, and Timothy Gentner. Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions. arXiv preprint arXiv:1807.06650, 2018.
269
+ Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5b: An open large-scale dataset for training next generation image-text models. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=M3Y74vmsMcY.
270
+ Etai Sella, Gal Fiebelman, Peter Hedman, and Hadar Averbuch-Elor. Vox-e: Text-guided voxel editing of 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 430–440, October 2023.
271
+ Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Hyeonsu Kim, Jaehoon Ko, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, and Seungryong Kim. Let 2d diffusion model know 3d-consistency for robust text-to-3d generation. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=UbxWjq0U02.
272
+ Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, and Sanja Fidler. Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
273
+ Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023a.
274
+ Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++ v1.2 code. https://huggingface.co/sudo-ai/zero123plus-v1.2/tree/main, 2023b.
275
+ Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. Mvdream code. https://github.com/bytedance/MVDream-threestudio, 2023c.
276
+ Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3d generation. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=FUgrjq2pbB.
277
+ Yawar Siddiqui, Tom Monnier, Filippos Kokkinos, Mahendra Kariya, Yanir Kleiman, Emilien Garreau, Oran Gafni, Natalia Neverova, Andrea Vedaldi, Roman Shapovalov, et al. Meta 3d assetgen: Text-to-mesh generation with high-quality geometry, texture, and pbr materials. arXiv preprint arXiv:2407.02445, 2024.
278
+ Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. Lgm: Large multi-view gaussian model for high-resolution 3d content creation. arXiv preprint arXiv:2402.05054, 2024a.
279
+ Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=UyNXMqnN3c.
280
+
281
+ Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, and Bharath Hariharan. Emergent correspondence from image diffusion. Advances in Neural Information Processing Systems, 36: 1363-1389, 2023.
282
+ Zhengyi Wang, Yikai Wang, Yifei Chen, Chendong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, and Jun Zhu. CRM: Single image to 3d textured mesh with convolutional reconstruction model. In European Conference on Computer Vision, pp. 57-74. Springer, 2025.
283
+ Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Zuxuan Wu, Yu-Gang Jiang, and Tao Mei. Dreammesh: Jointly manipulating and texturing triangle meshes for text-to-3d generation. In ECCV, 2024.
284
+ Jianglong Ye, Peng Wang, Kejie Li, Yichun Shi, and Heng Wang. Consistent-1-to-3: Consistent image to 3d view synthesis via geometry-aware diffusion models. In 2024 International Conference on 3D Vision (3DV), pp. 664–674, 2024. doi: 10.1109/3DV62453.2024.00027.
285
+ Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, and Xinggang Wang. Gaussian dreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6796-6807, June 2024.
286
+ Igor Zacharov, Rinal Arslanov, Maksim Gunin, Daniil Stefanishin, Andrey Bykov, Sergey Pavlov, Oleg Panarin, Anton Maliutin, Sergey Rykovanov, and Maxim Fedorov. “Zhores”-petaflops supercomputer for data-driven modeling, machine learning and artificial intelligence installed in skolkovo institute of science and technology. Open Engineering, 9(1):512-520, 2019.
287
+ Jingyu Zhuang, Chen Wang, Liang Lin, Lingjie Liu, and Guanbin Li. Dreameditor: Text-driven 3d scene editing with neural fields. In SIGGRAPH Asia 2023 Conference Papers, pp. 1-10, 2023.
288
+
289
+ Table 4: Quantitative evaluation of the RichDreamer-based implementation of our method for the generation of multiple aligned 3D objects.
290
+
291
+ <table><tr><td></td><td>Text-asset alignment</td><td>Text-geometry alignment</td><td>3D plausibility</td><td>Texture details</td><td>Geometry details</td><td>Overall quality</td><td>CLIP ↑</td><td>DIFT distance ↓ % of object size</td></tr><tr><td>vs. MVEdit</td><td>86</td><td>81</td><td>72</td><td>80</td><td>81</td><td>82</td><td>MVEdit</td><td>27.1</td></tr><tr><td>vs. LucidDreamer</td><td>51</td><td>59</td><td>66</td><td>46</td><td>50</td><td>53</td><td>LucidDreamer</td><td>26.4</td></tr><tr><td>vs. GaussianEditor</td><td>88</td><td>66</td><td>71</td><td>71</td><td>65</td><td>72</td><td>GaussianEditor</td><td>22.8</td></tr><tr><td rowspan="2" colspan="7">GPTEval3D, % of comparisons where the RichDreamer-based version of our method is preferred</td><td>A3D-MVDream (Ours, main)</td><td>27.7</td></tr><tr><td>A3D-RichDreamer (Ours)</td><td>28.8</td></tr></table>
292
+
293
+ ![](images/749bbe652cdf239c3b0334ec52a80db09367d1c2d39df5bf6d8ea6837972c24c.jpg)
294
+ Figure 7: Pairs of objects generated using the implementation of our method based on RichDreamer (Qiu et al., 2024). The respective pairs of prompts are shown below. For each object, we show a color rendering and a rendering of the geometry below it.
295
+
296
+ ![](images/d2dcf83e30846ff2fae51e87aef919f6c62a188705379083fcaa1e7a4264a825.jpg)
297
+
298
+ ![](images/0186c72a995da2bca1580ff987e023c95e221d27a57eed07ed1c8d1482ad153f.jpg)
299
+
300
+ ![](images/dde2fad595d54104ba28bb5e3e442170d8d155cc2408298090342a3d7bbca9c8.jpg)
301
+
302
+ # A GENERATION OF MULTIPLE ALIGNED 3D OBJECTS WITH RICHDREAMER
303
+
304
+ To evaluate the generalizability of our approach, we performed additional experiments on the generation of pairs of aligned objects using an implementation of our method based on the RichDreamer model (Qiu et al., 2024). RichDreamer employs a hybrid DMTet (Shen et al., 2021) representation for 3D content creation. It additionally integrates a special normal-depth diffusion model alongside the Score Distillation Sampling (SDS) method to refine the geometry. This approach allows for sharper edges, enhanced surface continuity, and a more realistic object appearance. In addition to improving geometry, RichDreamer also employs an albedo diffusion process for texture learning. The optimization process is split into three stages. First, the coarse geometry is optimized to establish the basic shape of the 3D object. Then, this geometry is refined, improving the fidelity and detail of the object surface. Finally, the texture on the surface is generated.
305
+
306
+ In Figure 7 we show examples of the pairs of objects generated with this implementation of our method. In Table 4 we show the quantitative comparison of this implementation with the other methods. Our results with RichDreamer demonstrate the ability of our method to preserve quality when switching the backbone. This version of our method produces 3D objects with a higher degree of structural alignment compared with the main version based on the MVDream model. The reason for this may be that enforcing normal smoothness is easier with an SDF-based rendering backbone than with a backbone based on radiance fields. On the other hand, the version of our method based on RichDreamer produces results with lower visual and geometric quality according to GPTEval3D. One reason for this may be that RichDreamer sometimes struggles with the generation of fine structures.
307
+
308
+ # B IMPLEMENTATION DETAILS
309
+
310
+ # B.1 IMPLEMENTATION OF OUR METHOD BASED ON MVDREAM
311
+
312
+ We use the same NeRF architecture and the majority of hyperparameters for training as MVDream (Shi et al., 2024). The neural density field is parameterized with an MLP network with one hidden layer, built on top of a hierarchical feature hash grid with a total feature dimension of 32. The grid has 16 levels, starting from a resolution of 8, with 2 features per level. For SDS, we use the multi-view diffusion model from MVDream, and decrease the level of noise gradually during training.
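+ For concreteness, the grid configuration described above can be written in an Instant-NGP / tiny-cuda-nn style roughly as follows. This is only an illustrative sketch: the key names follow the tiny-cuda-nn convention, and the values not stated in the text (hash table size, per-level scale, MLP width) are assumptions rather than the exact settings of our implementation.
+
+ ```python
+ # Illustrative Instant-NGP-style encoding and MLP configuration (a sketch, not the exact config).
+ hash_grid_config = {
+     "otype": "HashGrid",
+     "n_levels": 16,             # 16 resolution levels
+     "n_features_per_level": 2,  # 2 features per level -> 32-dimensional grid feature
+     "base_resolution": 8,       # coarsest resolution of the grid
+     "log2_hashmap_size": 19,    # assumed hash table size
+     "per_level_scale": 1.45,    # assumed growth factor between levels
+ }
+
+ density_mlp_config = {
+     "otype": "FullyFusedMLP",
+     "activation": "ReLU",
+     "output_activation": "None",
+     "n_neurons": 64,            # assumed width of the single hidden layer
+     "n_hidden_layers": 1,       # one hidden layer, as described above
+ }
+ ```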
313
+
314
+ To implement the transitions, we introduce an additional latent vector $\mathbf{u}$ that is concatenated with the coordinate embedding obtained from the hash grid and passed to the NeRF network. At each training iteration, we sample the latent vector $\mathbf{u}$ randomly from the latent simplex (described in Section 4.1). With the probability $1 - p$ , we sample the latent vector from the vertices of the simplex, i.e., optimize a single object, and with the probability $p$ , we sample the latent vector from the edges of the simplex, i.e., optimize a transition between two objects. We obtain our main results with $p = 0.5$ . When the latent vector is sampled from an edge of the simplex, we additionally sample a scalar parameter $t$ from the uniform distribution $t \sim \mathcal{U}(0,1)$ and obtain the latent vector $\mathbf{u}$ for the current training iteration via linear interpolation between the latent codes of the individual objects in the pair with this parameter $t$ .
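+ The sampling procedure can be summarized with the following PyTorch sketch; the function name and tensor layout are our own illustration and do not correspond to the released code.
+
+ ```python
+ import torch
+
+ def sample_latent(latent_codes: torch.Tensor, p: float = 0.5) -> torch.Tensor:
+     """Sample the latent vector u from the latent simplex.
+
+     latent_codes: (num_objects, D) tensor of per-object latent codes (simplex vertices).
+     With probability 1 - p, a single vertex is returned (a single object is optimized);
+     with probability p, a point on a random edge is returned (a transition is optimized).
+     """
+     num_objects = latent_codes.shape[0]
+     if torch.rand(()) >= p:
+         # Optimize a single object: pick one vertex of the simplex.
+         return latent_codes[torch.randint(num_objects, ())]
+     # Optimize a transition: pick a random edge and interpolate with t ~ U(0, 1).
+     i, j = torch.randperm(num_objects)[:2]
+     t = torch.rand(())
+     return (1.0 - t) * latent_codes[i] + t * latent_codes[j]
+ ```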
315
+
316
+ We employ two types of regularization on the normal maps: the orientation penalty described in (Poole et al., 2023) and the normal smoothness loss. Given the direction of the normal $N_{i,j}$ at the pixel with the indices $i, j$ , the smoothness loss is defined as
317
+
318
+ $$
319
+ \frac{1}{(H-1)(W-1)} \sum_{i=1,j=1}^{H-1,W-1} \left| N_{i,j+1} - N_{i,j} \right| + \left| N_{i+1,j} - N_{i,j} \right|, \tag{5}
320
+ $$
321
+
322
+ where $H$ and $W$ are the dimensions of the normal map. We gradually increase the weight of the orientation penalty from 100 to 1000. We set the weight of the normal smoothness loss to 10.
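+ A small PyTorch sketch of Eq. (5) is given below for clarity. It assumes an $(H, W, 3)$ normal map and reads $|\cdot|$ as a per-component absolute difference summed over the channels, which is our interpretation of the equation rather than a statement about the released code.
+
+ ```python
+ import torch
+
+ def normal_smoothness_loss(normals: torch.Tensor) -> torch.Tensor:
+     """Normal smoothness loss of Eq. (5) for a normal map of shape (H, W, 3)."""
+     h_diff = (normals[:-1, 1:] - normals[:-1, :-1]).abs()  # N_{i, j+1} - N_{i, j}
+     v_diff = (normals[1:, :-1] - normals[:-1, :-1]).abs()  # N_{i+1, j} - N_{i, j}
+     return (h_diff + v_diff).sum() / ((normals.shape[0] - 1) * (normals.shape[1] - 1))
+ ```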
323
+
324
+ # B.2 IMPLEMENTATION OF OUR METHOD BASED ON RICHDREAMER
325
+
326
+ We mostly use the default configuration of RichDreamer, including the details of the architecture and the optimization process. This involves utilizing an MLP with one hidden layer and 64 neurons for prediction of the SDF and another MLP with the same structure for prediction of the albedo. Similarly to the implementation of our method based on MVDream, we employ the latent code $\mathbf{u}$ concatenated with the encoded points from a hash grid as the input to these networks. We initialize the geometry representation using a uniform sphere with a radius of one, and utilize SDS with Stable Diffusion 2.1 and a depth-normal diffusion model to improve the accuracy of depth predictions. For smoother interpolation between prompts, we incorporate a normal consistency loss, and after experimentation, we found that setting the loss coefficient between 3 and 5 yields better results than the original configuration. To generate textures, we use a material system based on a diffuse and point-light setup without a background. For generating albedo maps, we guide the prediction with an additional diffusion model.
327
+
328
+ # B.3 DETAILS OF TESTING THE OTHER METHODS
329
+
330
+ MVEdit. We used the official implementation of MVEdit (Chen et al., 2024b) with the default values of all hyperparameters except the denoising strength for the text-guided 3D-to-3D pipeline. We changed this value from the default 0.7 to 0.8 to increase the scale of the changes that the method makes to the input 3D mesh. In our experiments, this leads to a higher quality of the results produced by MVEdit.
331
+
332
+ We obtain pairs of aligned objects with MVEdit in three steps, the first two of which follow the text-to-3D generation pipeline of MVEdit.
333
+
334
+ 1. We generate the images of the first object in the pair using the Stable Diffusion 1.5 model (Lykon, 2023; Rombach et al., 2022) conditioned on the respective text prompt.
335
+ 2. We generate the mesh of the first object from these images using the Zero123++ model (Shi et al., 2023b;a). This step includes the extraction of object masks for the generated images, which we obtain using the Language Segment-Anything model (Medeiros, 2024; Kirillov et al., 2023).
336
+ 3. We obtain the mesh of the second object in the pair by following the text-guided 3D-to-3D pipeline of MVEdit initialized with the previously generated mesh of the first object.
337
+
338
+ For the structure-preserving transformation of 3D models, we follow the text-guided 3D-to-3D pipeline of MVEdit in a straightforward way.
339
+
340
+ LucidDreamer. We used the official implementation of LucidDreamer (Liang et al., 2023) with the default values of all hyperparameters.
341
+
342
+ We obtain pairs of aligned objects with LucidDreamer in three steps, the first two of which follow the text-to-3D generation pipeline of LucidDreamer.
343
+
344
+ 1. We generate a coarse point cloud of the first object in the pair using the Shap-E model conditioned on the respective text prompt.
345
+ 2. We obtain the Gaussian Splatting representation of the first object by initializing it using the point cloud from the previous step and optimizing it using SDS conditioned on the same text prompt.
346
+ 3. We obtain the Gaussian Splatting representation of the second object in the pair by initializing it with the previously generated Gaussian splats of the first object and optimizing it using SDS conditioned on the text prompt for the second object.
347
+
348
+ For the structure-preserving transformation of 3D models, we extract the point cloud from the source mesh and use this point cloud for initialization of the Gaussian Splatting in the generation pipeline of LucidDreamer.
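+ A minimal way to obtain such an initialization point cloud is to sample the surface of the source mesh, for example with trimesh; this is our illustration and not necessarily how the official LucidDreamer code performs the initialization.
+
+ ```python
+ import numpy as np
+ import trimesh
+
+ def mesh_to_init_point_cloud(mesh_path: str, num_points: int = 100_000) -> np.ndarray:
+     """Sample a surface point cloud from the source mesh to initialize Gaussian Splatting."""
+     mesh = trimesh.load(mesh_path, force="mesh")
+     points, _ = trimesh.sample.sample_surface(mesh, num_points)
+     return np.asarray(points)
+ ```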
349
+
350
+ GaussianEditor. We used the official implementation of GaussianEditor (Chen et al., 2023c) with the same values of all hyperparameters that the authors use for their experiment with the bear scene (Chen et al., 2023b).
351
+
352
+ GaussianEditor takes as input a Gaussian Splatting representation of the scene and performs text-driven editing of this scene. For the generation of pairs of aligned objects, we obtain the input Gaussian Splatting for the first object in the pair, generated using MVDream. For the structure-preserving transformation of 3D models, we obtain the input Gaussian Splatting for the source mesh. In both cases, we obtain the second object in the pair or the transformed object with the following steps.
353
+
354
+ 1. We obtain the input Gaussian Splatting from 120 renders of the initial object, by following the original implementation of Gaussian Splatting (Kerbl et al., 2023) with the default parameters and 30k training iterations. We render the initial object from 120 camera positions evenly located around it and use the known camera parameters during optimization of the Gaussian Splatting.
355
+ 2. We obtain the second object in the pair by running GaussianEditor with the prompt "Turn the prompt_1 into a prompt_2", where the prompt_1 and prompt_2 describe the initial object and the second object in the pair respectively. For structure-preserving transformation we use the prompt "Turn it into a target_prompt".
356
+
357
+ We note that Gaussian Splatting does not provide an explicit representation of the 3D surface, so we derive the renderings of the surface (e.g., shown in Figure 5) from the renderings of depth maps.
358
+
359
+ # B.4 COMPUTATIONAL COST
360
+
361
+ We run all experiments on a single Nvidia A100 GPU. To generate a single object, MVDream, which we use as the baseline of our method, requires 10k iterations, which corresponds approximately to 45 minutes. To generate a pair of objects, our method typically requires 20k iterations, which corresponds to 1.5 hours. The two main steps of our adaptation of MVEdit, namely text-driven 3D generation to obtain one of the objects in the pair and text-driven editing to obtain the other object, require 40 minutes in total. To generate a pair of objects, LucidDreamer typically requires 2 hours. With GaussianEditor, we generate an initial object in the pair using MVDream, which requires 45 minutes, and then edit the first object into the second one, which requires 15 minutes, so the total time required to generate a pair of objects is 1 hour. Overall, the running time of our method is comparable with the alternatives.
362
+
363
+ We have experimented with the generation of up to 5 aligned objects at a time using our method. We decided not to rely on knowledge sharing and used a simple linear heuristic for scaling the number of iterations: we add 10k optimization iterations (45 minutes) per object, so that the generation of 5 objects requires 50k iterations, which corresponds to 3 hours and 45 minutes. Informally, we have noticed that sublinear scaling also produces results of high quality, so it would also be possible to use fewer iterations.
364
+
365
+
366
+
367
+ # C EVALUATION DETAILS
368
+
369
+ For evaluation, we place the results of all methods into the same coordinate space. We manually rotate the results of MVEdit for better consistency with the other methods. We render the results from 120 camera positions evenly located around the object, consistent with the threestudio format (Shi et al., 2023c).
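+ The evenly spaced viewpoints can be produced, for example, as follows; the radius and elevation are illustrative values and not the exact threestudio settings.
+
+ ```python
+ import numpy as np
+
+ def camera_positions(num_views: int = 120, radius: float = 2.0, elevation_deg: float = 15.0) -> np.ndarray:
+     """Camera centers placed evenly on a circle around the object, looking at the origin."""
+     azimuths = np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False)
+     elevation = np.deg2rad(elevation_deg)
+     x = radius * np.cos(elevation) * np.cos(azimuths)
+     y = radius * np.cos(elevation) * np.sin(azimuths)
+     z = np.full(num_views, radius * np.sin(elevation))
+     return np.stack([x, y, z], axis=1)  # (num_views, 3)
+ ```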
370
+
371
+ The results for the editing-based competitors are split into two groups. The first group, which corresponds to the generative part of their pipelines, contains the "source" 3D objects. The second group consists of the objects obtained by feeding the objects from the first group to the corresponding 3D editing pipeline. A pair consisting of an object and its transformation is then obtained by taking the object generated for some prompt from the first group and the object generated for the complementary prompt of that pair from the second group.
372
+
373
+ DIFT distance. Given a pair of images $I_A, I_B$ and corresponding masks $M_A, M_B$, we denote the DIFT (Tang et al., 2023) mapping from the first image to the second as $F_A$ and from the second to the first as $F_B$. We build two 2D point clouds $P^A, P^B$ by filtering regular 2D grids of points with the masks $M_A, M_B$. We define the DIFT distance as follows: $S_{DIFT} = \frac{1}{2N}\sum_{i=1}^{N}\left(\frac{\|F_A(P_i^A) - P_i^A\|_2}{\sigma_{P_A}} + \frac{\|F_B(P_i^B) - P_i^B\|_2}{\sigma_{P_B}}\right)$, where $\sigma_{P_A}$ and $\sigma_{P_B}$ are the diameters of the 2D point clouds $P^A$ and $P^B$. We average the distance across the 120 viewpoints around the object.
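+ For a single viewpoint, the metric can be computed roughly as in the sketch below. The correspondence functions are assumed to be given (e.g., wrappers around DIFT feature matching), and the point-cloud diameter is approximated by the bounding-box diagonal, which is a simplification for illustration.
+
+ ```python
+ import numpy as np
+
+ def dift_distance(corr_a_to_b, corr_b_to_a, points_a, points_b):
+     """DIFT distance for one pair of renderings.
+
+     points_a, points_b: (N, 2) arrays of 2D points inside the object masks.
+     corr_a_to_b / corr_b_to_a: functions mapping points of one image to their
+     DIFT correspondences in the other image (F_A and F_B above).
+     """
+     def diameter(points):
+         # Approximate the point-cloud diameter by the bounding-box diagonal.
+         return np.linalg.norm(points.max(axis=0) - points.min(axis=0))
+
+     d_a = np.linalg.norm(corr_a_to_b(points_a) - points_a, axis=1) / diameter(points_a)
+     d_b = np.linalg.norm(corr_b_to_a(points_b) - points_b, axis=1) / diameter(points_b)
+     return 0.5 * (d_a.mean() + d_b.mean())
+ ```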
374
+
375
+ GPTEval3D. We follow the procedure proposed by Mao et al. (2023) precisely, with one change. The version of the GPT model used in the original study is no longer available through the OpenAI API, so we utilize a newer version, GPT-4o. We compare each pair of methods based on 90 pairwise comparisons. For each comparison, we randomly sample a pair of objects produced by the two methods for the same text prompt and compose the request to the model. Each request consists of a pair of grids of renderings of the compared objects and a textual description of the questions to the model. Interestingly, we observed that the compressed version of the model, GPT-4o-mini, prefers the left result in the majority of comparisons regardless of the quality.
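+ A single pairwise comparison can be issued through the OpenAI Python client roughly as in the sketch below; the grid images and the question text are prepared by the caller following the GPTEval3D protocol, and the helper names are our own.
+
+ ```python
+ import base64
+ from openai import OpenAI
+
+ client = OpenAI()
+
+ def compare_pair(grid_a_path: str, grid_b_path: str, question: str) -> str:
+     """Send one pairwise comparison (two grids of renderings plus instructions) to GPT-4o."""
+     def as_data_url(path: str) -> str:
+         with open(path, "rb") as f:
+             return "data:image/png;base64," + base64.b64encode(f.read()).decode()
+
+     response = client.chat.completions.create(
+         model="gpt-4o",
+         messages=[{
+             "role": "user",
+             "content": [
+                 {"type": "text", "text": question},
+                 {"type": "image_url", "image_url": {"url": as_data_url(grid_a_path)}},
+                 {"type": "image_url", "image_url": {"url": as_data_url(grid_b_path)}},
+             ],
+         }],
+     )
+     return response.choices[0].message.content
+ ```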
376
+
377
+ CLIP. We use the ViT-L/14 version of the CLIP model. We calculate the CLIP similarity between the RGB render of the object and the respective text prompt for each of the 120 viewpoints around the object and report the average value.
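+ This protocol can be reproduced with the openai/CLIP package roughly as follows; the scaling by 100 is the usual CLIP-score convention and matches the range of the values reported in the tables, but the snippet is only a sketch of the procedure.
+
+ ```python
+ import torch
+ import clip  # https://github.com/openai/CLIP
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-L/14", device=device)
+
+ def clip_similarity(render_paths, prompt):
+     """Average CLIP similarity between a text prompt and renderings from all viewpoints."""
+     with torch.no_grad():
+         text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
+         text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
+         scores = []
+         for path in render_paths:
+             image = preprocess(Image.open(path)).unsqueeze(0).to(device)
+             img_feat = model.encode_image(image)
+             img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
+             scores.append(100.0 * (img_feat @ text_feat.T).item())
+     return sum(scores) / len(scores)
+ ```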
378
+
379
+ # D ADDITIONAL RESULTS
380
+
381
+ In Table 5 we show the quantitative comparison of methods for the generation of multiple aligned 3D objects for each pair of objects. In Table 6 we show the quantitative comparison of methods for structure-preserving transformation for each pair of initial model and text prompt. We note that LucidDreamer diverged for half of the scenes.
382
+
383
+ # D.1 EXTENDED DISCUSSION OF ABLATION RESULTS
384
+
385
+ In Tables 7 and 8 we show the results of the quantitative ablation study for each pair of objects, and in Table 9 we show the comparison using GPTEval3D. In Figure 8, we show a qualitative comparison for one pair of objects, and we refer the reader to the complete set of animated results of our experiments on the project page. We study our two main regularizations: encouraging the network to (1) learn plausible transitions between the objects, and (2) learn smooth transitions.
386
+
387
+ To demonstrate the effects of progressively decreasing the strength of the smoothness regularization, we compare our method with a 1-layer MLP (D) to the versions with a 2-layer MLP (E) and a 3-layer MLP (F). Weakening the regularization (increasing the number of layers) leads to a lower degree of alignment, as confirmed by the DIFT distance, without any significant improvement of the visual and
388
+
389
+ Table 5: Quantitative comparison for the generation of multiple aligned 3D objects for each pair of objects.
390
+
391
+ <table><tr><td rowspan="2">Prompt 1</td><td rowspan="2">Prompt 2</td><td colspan="4">CLIP Similarity ↑</td><td colspan="4">DIFT distance ↓, % of object size</td></tr><tr><td>MVEdit</td><td>LucidDreamer</td><td>GaussianEditor</td><td>A3D (Ours)</td><td>MVEdit</td><td>LucidDreamer</td><td>GaussianEditor</td><td>A3D (Ours)</td></tr><tr><td>ant animal</td><td>crab animal</td><td>29.2</td><td>24.8</td><td>19.9</td><td>28.0</td><td>5.0</td><td>10.7</td><td>1.9</td><td>5.9</td></tr><tr><td>bicycle</td><td>motorcycle</td><td>26.7</td><td>26.2</td><td>22.8</td><td>25.7</td><td>4.5</td><td>7.0</td><td>3.9</td><td>5.5</td></tr><tr><td>bird animal</td><td>dinosaur animal</td><td>24.0</td><td>25.0</td><td>21.5</td><td>26.8</td><td>6.4</td><td>15.7</td><td>1.6</td><td>6.8</td></tr><tr><td>car</td><td>carriage</td><td>21.8</td><td>28.9</td><td>22.2</td><td>29.1</td><td>3.0</td><td>10.5</td><td>1.6</td><td>4.4</td></tr><tr><td>dwarf</td><td>minotaur</td><td>27.2</td><td>24.4</td><td>17.0</td><td>26.7</td><td>6.0</td><td>9.3</td><td>0.9</td><td>5.3</td></tr><tr><td>gopher animal</td><td>kangaroo animal</td><td>25.0</td><td>25.6</td><td>21.2</td><td>28.7</td><td>5.9</td><td>10.2</td><td>3.4</td><td>4.3</td></tr><tr><td>horse animal</td><td>horse skeleton</td><td>26.7</td><td>24.4</td><td>25.9</td><td>29.1</td><td>2.1</td><td>4.2</td><td>1.9</td><td>7.2</td></tr><tr><td>animal</td><td>lego animal</td><td>25.9</td><td>25.4</td><td>26.0</td><td>24.3</td><td>5.1</td><td>16.7</td><td>4.7</td><td>5.5</td></tr><tr><td>magnolia tree</td><td>sakura tree</td><td>30.2</td><td>24.3</td><td>28.0</td><td>30.6</td><td>9.0</td><td>14.3</td><td>5.0</td><td>2.8</td></tr><tr><td>space marine</td><td>ww2 soldier</td><td>28.4</td><td>24.6</td><td>19.1</td><td>25.3</td><td>3.2</td><td>13.7</td><td>0.7</td><td>7.5</td></tr><tr><td>mermaid</td><td>seahorse</td><td>30.4</td><td>27.6</td><td>20.9</td><td>29.9</td><td>10.4</td><td>14.3</td><td>1.6</td><td>7.6</td></tr><tr><td>man standing</td><td>robot standing</td><td>30.7</td><td>26.8</td><td>24.9</td><td>29.9</td><td>3.4</td><td>11.7</td><td>3.6</td><td>5.8</td></tr><tr><td>atakebune ship</td><td>modern yacht</td><td>27.1</td><td>27.8</td><td>23.7</td><td>28.9</td><td>9.7</td><td>8.7</td><td>1.9</td><td>16.6</td></tr><tr><td>gothic cathedral</td><td>hindu temple</td><td>26.6</td><td>31.3</td><td>23.8</td><td>26.6</td><td>4.9</td><td>12.2</td><td>1.8</td><td>3.5</td></tr><tr><td>chair</td><td>gothic throne, royal</td><td>26.6</td><td>28.9</td><td>25.2</td><td>26.5</td><td>3.2</td><td>10.1</td><td>1.7</td><td>2.6</td></tr></table>
392
+
393
+ Table 6: Quantitative comparison for structure-preserving transformation for each pair of initial model and text prompt. We explain how we use the prompt describing the input model with our method in Section 4.3.
394
+
395
+ <table><tr><td rowspan="2">Target text prompt</td><td rowspan="2">Prompt describing the input model</td><td colspan="4">CLIP Similarity ↑</td><td colspan="4">DIFT distance ↓, % of object size</td></tr><tr><td>MVEdit</td><td>LucidDreamer</td><td>GaussianEditor</td><td>A3D (Ours)</td><td>MVEdit</td><td>LucidDreamer</td><td>GaussianEditor</td><td>A3D (Ours)</td></tr><tr><td>arab warrior</td><td>greek hoplite</td><td>27.8</td><td>29.0</td><td>27.0</td><td>27.4</td><td>2.7</td><td>17.2</td><td>1.2</td><td>4.1</td></tr><tr><td>astronaut</td><td>man</td><td>27.1</td><td></td><td>23.1</td><td>25.9</td><td>3.1</td><td></td><td>1.0</td><td>6.8</td></tr><tr><td>avocado chair</td><td>egg chair</td><td>27.7</td><td></td><td>26.4</td><td>32.5</td><td>6.2</td><td></td><td>3.8</td><td>6.9</td></tr><tr><td>skeleton of a cat</td><td>cat animal</td><td>31.3</td><td></td><td>28.6</td><td>31.6</td><td>3.1</td><td></td><td>1.7</td><td>7.4</td></tr><tr><td>clown, sitting</td><td>man, sitting</td><td>32.1</td><td>29.6</td><td>28.7</td><td>31.6</td><td>2.2</td><td>13.7</td><td>1.2</td><td>6.5</td></tr><tr><td>dragon</td><td>parrot</td><td>24.3</td><td></td><td>18.8</td><td>26.0</td><td>8.2</td><td></td><td>1.8</td><td>12.0</td></tr><tr><td>realistic baby duck bird</td><td>yellow duck toy</td><td>28.6</td><td></td><td>29.8</td><td>27.9</td><td>5.0</td><td></td><td>3.2</td><td>8.4</td></tr><tr><td>bearded dwarf with an axe</td><td>man</td><td>26.7</td><td>27.6</td><td>23.8</td><td>30.5</td><td>3.4</td><td>13.1</td><td>1.8</td><td>5.4</td></tr><tr><td>female elf woman sitting</td><td>female sitting</td><td>27.7</td><td>32.7</td><td>30.8</td><td>33.6</td><td>1.6</td><td>16.2</td><td>1.7</td><td>4.5</td></tr><tr><td>saturn planet with rings</td><td>globe on a stand</td><td>24.5</td><td></td><td>19.2</td><td>26.0</td><td>4.1</td><td></td><td>1.2</td><td>11.6</td></tr><tr><td>groot</td><td>man wearing jeans and t-shirt</td><td>31.2</td><td></td><td>22.7</td><td>31.3</td><td>3.6</td><td></td><td>1.7</td><td>7.7</td></tr><tr><td>man hunter holding a gun in both hands</td><td>male human</td><td>28.8</td><td>22.5</td><td>26.3</td><td>25.6</td><td>2.3</td><td>19.5</td><td>1.3</td><td>3.6</td></tr><tr><td>iron throne</td><td>antique wooden chair</td><td>31.9</td><td></td><td>24.9</td><td>27.5</td><td>5.4</td><td></td><td>1.9</td><td>15.5</td></tr><tr><td>jedi with lightsaber</td><td>man</td><td>25.1</td><td></td><td>21.3</td><td>27.7</td><td>2.4</td><td></td><td>1.8</td><td>4.0</td></tr><tr><td>female jedi with lightsaber</td><td>woman</td><td>30.4</td><td>26.2</td><td>22.5</td><td>26.7</td><td>3.8</td><td>14.1</td><td>2.1</td><td>4.0</td></tr><tr><td>female jedi with lightsaber</td><td>woman</td><td>27.7</td><td>23.4</td><td>21.3</td><td>28.7</td><td>3.2</td><td>20.3</td><td>1.0</td><td>5.6</td></tr><tr><td>highly detailed realistic lara croft</td><td>lara croft low poly</td><td>29.3</td><td></td><td>20.5</td><td>26.4</td><td>2.2</td><td></td><td>1.6</td><td>3.7</td></tr><tr><td>female marble statue</td><td>woman</td><td>31.8</td><td></td><td>30.4</td><td>27.2</td><td>2.3</td><td></td><td>2.2</td><td>6.8</td></tr><tr><td>space marine, warhammer</td><td>man</td><td>24.4</td><td>25.9</td><td>20.3</td><td>28.0</td><td>6.5</td><td>16.6</td><td>2.1</td><td>13.1</td></tr><tr><td>my little pony</td><td>horse animal</td><td>29.0</td><td></td><td>26.7</td><td>29.2</td><td>6.5</td><td></td><td>1.3</td><td>15.5</td></tr><tr><td>beautiful princess sitting on a throne</td><td>female 
sitting</td><td>26.7</td><td>27.1</td><td>23.2</td><td>26.5</td><td>2.3</td><td>18.6</td><td>1.7</td><td>9.4</td></tr><tr><td>robot</td><td>man</td><td>25.6</td><td>25.4</td><td>23.2</td><td>24.5</td><td>2.6</td><td>12.9</td><td>2.0</td><td>10.6</td></tr><tr><td>robot</td><td>man</td><td>24.6</td><td>25.7</td><td>24.7</td><td>25.5</td><td>3.4</td><td>14.9</td><td>1.2</td><td>9.6</td></tr><tr><td>robot, standing</td><td>man, standing</td><td>25.8</td><td></td><td>25.4</td><td>23.8</td><td>2.9</td><td></td><td>1.5</td><td>4.8</td></tr><tr><td>man wearing a black tailcoat with red tie</td><td>man wearing jeans and t-shirt</td><td>28.7</td><td></td><td>29.4</td><td>24.6</td><td>4.4</td><td></td><td>1.6</td><td>4.6</td></tr><tr><td>werewolf</td><td>man</td><td>25.7</td><td>27.8</td><td>20.1</td><td>27.8</td><td>2.3</td><td>17.7</td><td>2.1</td><td>12.4</td></tr></table>
396
+
397
+ Table 7: Quantitative ablation study comparing our method (D) with the baselines for each pair of objects using DIFT distance. See descriptions of the baselines in Section 6.
398
+
399
+ <table><tr><td rowspan="2">Prompt 1</td><td rowspan="2">Prompt 2</td><td colspan="6">DIFT distance ↓, % of object size</td></tr><tr><td>(A) MVDream</td><td>(B) MVDream+mult.</td><td>(C) Empty prompt</td><td>(D) Ours</td><td>(E) 2-layer MLP</td><td>(F) 3-layer MLP</td></tr><tr><td>ant animal</td><td>crab animal</td><td>33.8</td><td>34.6</td><td>26.9</td><td>5.9</td><td>15.8</td><td>21.2</td></tr><tr><td>bicycle</td><td>motorcycle</td><td>17.9</td><td>7.6</td><td>5.1</td><td>5.5</td><td>7.9</td><td>9.1</td></tr><tr><td>bird animal</td><td>dinosaur animal</td><td>28.3</td><td>21.9</td><td>13.6</td><td>6.8</td><td>15.1</td><td>13.9</td></tr><tr><td>car</td><td>carriage</td><td>33.0</td><td>29.7</td><td>23.7</td><td>4.4</td><td>18.6</td><td>12.4</td></tr><tr><td>dwarf</td><td>minotaur</td><td>41.2</td><td>9.0</td><td>6.7</td><td>5.3</td><td>8.8</td><td>9.7</td></tr><tr><td>gopher animal</td><td>kangaroo animal</td><td>37.7</td><td>38.2</td><td>27.3</td><td>4.3</td><td>6.7</td><td>8.0</td></tr><tr><td>horse animal</td><td>horse skeleton</td><td>13.0</td><td>8.5</td><td>7.0</td><td>7.2</td><td>7.9</td><td>8.4</td></tr><tr><td>animal</td><td>lego animal</td><td>25.8</td><td>10.8</td><td>4.8</td><td>5.5</td><td>3.6</td><td>2.8</td></tr><tr><td>magnolia tree</td><td>sakura tree</td><td>21.3</td><td>0.9</td><td>1.1</td><td>2.8</td><td>0.7</td><td>1.1</td></tr><tr><td>space marine</td><td>ww2 soldier</td><td>21.7</td><td>10.0</td><td>13.0</td><td>7.5</td><td>9.1</td><td>17.8</td></tr><tr><td>mermaid</td><td>seahorse</td><td>34.1</td><td>35.8</td><td>27.4</td><td>7.6</td><td>24.0</td><td>17.9</td></tr><tr><td>man standing</td><td>robot standing</td><td>33.8</td><td>7.5</td><td>5.4</td><td>5.8</td><td>23.6</td><td>25.8</td></tr><tr><td>atakebune ship</td><td>modern yacht</td><td>39.5</td><td>30.9</td><td>15.3</td><td>16.6</td><td>28.3</td><td>26.8</td></tr><tr><td>gothic cathedral</td><td>hindu temple</td><td>35.7</td><td>8.0</td><td>6.1</td><td>3.5</td><td>8.0</td><td>12.7</td></tr><tr><td>chair</td><td>gothic throne, royal</td><td>34.4</td><td>27.2</td><td>29.7</td><td>2.6</td><td>25.0</td><td>26.9</td></tr><tr><td colspan="2">Average</td><td>30.1</td><td>18.7</td><td>14.2</td><td>6.1</td><td>13.5</td><td>14.3</td></tr></table>
400
+
401
+ Table 8: Quantitative ablation study comparing our method (D) with the baselines for each pair of objects using CLIP similarity. See descriptions of the baselines in Section 6.
402
+
403
+ <table><tr><td rowspan="2">Prompt 1</td><td rowspan="2">Prompt 2</td><td colspan="6">CLIP Similarity ↑</td></tr><tr><td>(A) MVDream</td><td>(B) MVDream+mult.</td><td>(C) Empty prompt</td><td>(D) Ours</td><td>(E) 2-layer MLP</td><td>(F) 3-layer MLP</td></tr><tr><td>ant animal</td><td>crab animal</td><td>29.0</td><td>28.9</td><td>29.8</td><td>28.0</td><td>28.4</td><td>28.5</td></tr><tr><td>bicycle</td><td>motorcycle</td><td>27.3</td><td>26.0</td><td>26.6</td><td>25.7</td><td>26.5</td><td>27.0</td></tr><tr><td>bird animal</td><td>dinosaur animal</td><td>24.8</td><td>24.6</td><td>25.2</td><td>26.8</td><td>25.4</td><td>24.8</td></tr><tr><td>car</td><td>carriage</td><td>30.5</td><td>29.5</td><td>29.0</td><td>29.1</td><td>29.1</td><td>30.7</td></tr><tr><td>dwarf</td><td>minotaur</td><td>27.4</td><td>29.7</td><td>29.4</td><td>26.7</td><td>27.6</td><td>28.4</td></tr><tr><td>gopher animal</td><td>kangaroo animal</td><td>30.4</td><td>29.8</td><td>30.2</td><td>28.7</td><td>29.5</td><td>29.9</td></tr><tr><td>horse animal</td><td>horse skeleton</td><td>28.9</td><td>29.7</td><td>30.5</td><td>29.1</td><td>30.3</td><td>31.1</td></tr><tr><td>animal</td><td>lego animal</td><td>26.4</td><td>25.0</td><td>26.4</td><td>24.3</td><td>24.0</td><td>23.3</td></tr><tr><td>magnolia tree</td><td>sakura tree</td><td>31.7</td><td>28.1</td><td>24.9</td><td>30.6</td><td>28.9</td><td>29.0</td></tr><tr><td>space marine</td><td>ww2 soldier</td><td>27.0</td><td>26.5</td><td>28.7</td><td>25.3</td><td>26.6</td><td>29.3</td></tr><tr><td>mermaid</td><td>seahorse</td><td>29.9</td><td>29.3</td><td>28.9</td><td>29.9</td><td>29.0</td><td>28.8</td></tr><tr><td>man standing</td><td>robot standing</td><td>29.3</td><td>29.9</td><td>30.8</td><td>29.9</td><td>29.0</td><td>29.7</td></tr><tr><td>atakebune ship</td><td>modern yacht</td><td>28.5</td><td>27.7</td><td>28.8</td><td>28.9</td><td>28.4</td><td>27.9</td></tr><tr><td>gothic cathedral</td><td>hindu temple</td><td>28.6</td><td>29.0</td><td>29.7</td><td>26.6</td><td>28.6</td><td>31.4</td></tr><tr><td>chair</td><td>gothic throne, royal</td><td>27.7</td><td>27.5</td><td>27.5</td><td>26.5</td><td>28.1</td><td>28.4</td></tr><tr><td></td><td>Average</td><td>28.5</td><td>28.1</td><td>28.4</td><td>27.7</td><td>28.0</td><td>28.5</td></tr></table>
404
+
405
+ Table 9: Quantitative ablation study comparing our method (D) with the baselines using GPTEval3D. See descriptions of the baselines in Section 6.
406
+
407
+ <table><tr><td rowspan="2"></td><td colspan="6">GPTEval3D, % of comparisons where our method is preferred</td></tr><tr><td>Text-asset alignment</td><td>Text-geometry alignment</td><td>3D plausibility</td><td>Texture details</td><td>Geometry details</td><td>Overall quality</td></tr><tr><td>(A) MVDream, independently generated objects</td><td>60.0</td><td>59.0</td><td>53.2</td><td>59.0</td><td>50.0</td><td>59.0</td></tr><tr><td>(B) MVDream + multiple objects in one network</td><td>38.0</td><td>37.1</td><td>38.7</td><td>38.7</td><td>35.5</td><td>33.9</td></tr><tr><td>(C) A3D, transition plausibility with empty prompt</td><td>47.0</td><td>44.9</td><td>65.6</td><td>54.4</td><td>68.8</td><td>68.8</td></tr><tr><td>(E) A3D, 2-layer MLP</td><td>60.2</td><td>52.1</td><td>36.8</td><td>54.3</td><td>42.6</td><td>42.6</td></tr><tr><td>(F) A3D, 3-layer MLP</td><td>59.9</td><td>51.4</td><td>50.0</td><td>53.6</td><td>50.0</td><td>51.5</td></tr></table>
408
+
409
+ ![](images/4f84ed3bc881bd5af42385dd6a272a558941783b74938febd7b30ae64675a65f.jpg)
410
+ Figure 8: The first two rows show pairs of objects generated with our method (D) and the baselines (see the main text for their description). The last row shows an overlay of the silhouettes of the objects, demonstrating the alignment of their structural parts.
411
+
412
+ geometric quality, as confirmed by GPTEval3D and CLIP. We attribute the preservation of quality to the following: the Instant-NGP 3D representation, which we use for the main implementation of our method, stores most of the neural field in the feature hash grid, while the MLP plays only an auxiliary role.
413
+
414
+ To demonstrate the effects of progressively removing the plausibility regularization, we compare our full method with the regularization conditioned on a blend of the text prompts (D), with the regularization conditioned on an empty text prompt (C), and without the plausibility regularization (B). Quantitatively, decreasing the plausibility of transitions also decreases the degree of alignment between the objects, as confirmed by the DIFT distance, but may slightly increase the visual and geometric quality of the results and their semantic coherence with the text prompts, measured with GPTEval3D and CLIP. Qualitatively, when we relax the restriction on the plausibility of transitions, the objects become less strictly aligned with each other and obtain more characteristic properties corresponding to the text prompts, especially if they are naturally different. For example, the ant and crab get generated with different numbers of legs, and the crab obtains a pair of claws; the car in the car-carriage pair changes from vintage to modern; the proportions of the gopher and kangaroo become more naturalistic.
415
+
416
+ # D.2 VARYING DEGREE OF ALIGNMENT
417
+
418
+ We achieve the alignment between the objects through regularization of transitions between them. Specifically, we encourage the network to learn plausible and smooth transitions. By varying the strength of these regularizations, we can control the degree of alignment between the objects and choose between more strict or more loose alignment.
419
+
420
+ The strength of the plausibility regularization is defined by the probability $p$ of sampling the latent code $u$ from the edges of the latent simplex instead of its vertices (see Appendix B.1 for details). In Figure 9 we compare the results of our full method (D) with the results obtained with a decreased strength of the regularization (G), and without the regularization (B). When we decrease the plausibility of transitions, the objects become less strictly aligned with each other and obtain more characteristic properties corresponding to the text prompts. For example, the ant and crab get generated with different numbers of legs, and the crab obtains a pair of claws, while the proportions of the gopher and kangaroo become more naturalistic.
421
+
422
+ We demonstrate the effects of progressively decreasing the strength of smoothness regularization in our ablation study (Section 6 and Appendix D.1). We compare the results of our method with 1-layer MLP (D), 2-layer MLP (E), and 3-layer MLP (F) in Figure 8 and on the project page. The variants with more layers produce more loosely aligned objects.
423
+
424
+ On the project page, we show examples of transitions between the generated objects. The transitions are generally smooth, gradually transforming one object into another. The plausibility of the intermediate results is higher for pairs of objects with a higher degree of alignment. The implementation of our method based on RichDreamer produces smoother transitions compared to the implementation
425
+
426
+ ![](images/31feb43f35622420fc431f0ea4dc437a58c560e998eb7699a328b563f2e91a45.jpg)
427
+ Figure 9: The first two rows show pairs of objects generated with our method in the default configuration (D), with a decreased strength of the plausibility regularization (G), and without this regularization (B). The last row shows an overlay of the silhouettes of the objects, demonstrating the alignment of their structural parts. Each three columns show the results for one pair of prompts written below.
428
+
429
+ ![](images/71d9d0e191949b4839bf27c83bcc9c91931e5afc174b609d87f7527d3006ebb1.jpg)
430
+
431
+ based on MVDream, which we relate to the implicit surface prior in DMTet that encourages the network to learn smooth geometry.
432
+
433
+ # E OTHER EXPERIMENTS
434
+
435
+ Our method can generate instances of the same object with different details. In Figure 10 we show examples of objects with adjusted accessories.
436
+
437
+ Our method can produce results with some degree of diversity for a fixed set of text prompts, as we show in the top row of Figure 11. The diversity of the results produced by our method is mostly determined by the frameworks that we use for implementation: MVDream and RichDreamer. These frameworks use Score Distillation Sampling, and optimization with Score Distillation Sampling employs high values of the Classifier-Free Guidance scale, which leads to mode-seeking behavior and a lack of diversity. Additionally, the diffusion models used in these frameworks are tuned on the Objaverse dataset, which further decreases the diversity of the results produced by these models. One way to obtain different instances of the same set of objects with our method is to describe the desired differences in the text prompts, as we show in the bottom row of Figure 11.
438
+
439
+ Our method is robust to variations of the text prompts. In the first two rows of Figure 12 we show the results for pairs of text prompts with the same meaning but different phrasing. While not identical, the generated pairs exhibit a high degree of alignment between the objects and correspond well to the text prompts. In the last row of Figure 12 we show the result for a pair of prompts describing two different objects with the same attributes but using different phrasing. These objects are also well aligned and of high quality. Overall, A3D does not require extensive prompt engineering, but one can expect some variation in the results for different formulations of the text prompts.
440
+
441
+ We show a possible approach to applying our method to an image-to-3D model by performing the structure-preserving transformation of a 3D object generated with this model. We show some results obtained in this way in Figure 13. We generate the initial 3D models using the fast image-to-3D pipeline of CRM (Wang et al., 2025) and then transform these models using our method as described in Section 4.3. We use the faster implementation of our method based on the RichDreamer framework. Our method preserves the structure of the initial object generated from the image well.
442
+
443
+ ![](images/bec6559af0b848c00195bb10d1a4f52640447eeb2a9113888f24978f959cb46f.jpg)
444
+ brown monkey animal $\leftrightarrow$ ... with red hat
445
+
446
+ ![](images/8ea2f4c4140e8f71fd2c0a49edf41db3a08113ffae44ad8d5eb88300d8e38e70.jpg)
447
+ Figure 10: Pairs of instances of the same object with adjusted details generated with our method.
448
+
449
+ ![](images/ed0b1fe54a0e5101e1a84e64ab44220ee86af176bb9dc1cf90659e40471bfade.jpg)
450
+ cat animal $\leftrightarrow$ ... wearing red bow
451
+
452
+ ![](images/deb8d93cac1d2a2dcbc786acb21b8db14dd8ee6faae9373aad58a61d782246f3.jpg)
453
+
454
+ ![](images/586b7d07fbcc4f2a0102d943cf75b4eaf2a81bed820d3b89206ccc40c7eccd74.jpg)
455
+ woman in black sneakers $\leftrightarrow$ woman in red boots
456
+
457
+ ![](images/93adc2c06cceeef1335e70af5e5a9bb95e431b6d001d9813a540aa727e60394b.jpg)
458
+
459
+ woman in black sneakers $\leftrightarrow$ woman in red boots
460
+
461
+ ![](images/d80dc1d72455964cac5c37a2745f3422ee9dd2190cc4d95b78f51020f7795b8c.jpg)
462
+ A3D (Ours), instance 1
463
+
464
+ ![](images/603573357a122879af23dc305ea11e0c9470c1d5fecfd5c7338fd6caf552a290.jpg)
465
+ A3D (Ours), instance 2
466
+ running cat animal with bow $\leftrightarrow$ running dog animal with bow
467
+
468
+ ![](images/35505f7aac8215bcee95b2c928f86581778ce7b01cf64818b930730bfbf80079.jpg)
469
+ A3D (Ours), instance 1
470
+ horse animal $\leftrightarrow$ horse skeleton
471
+
472
+ ![](images/de9606dfa438255de20de40f54422421154a6f70fdaee0508599733392920974.jpg)
473
+ A3D (Ours), instance 2
474
+
475
+ ![](images/90363168a8977544b0377a38e3d7ad67d1b9984a46cb1b7300b2195b772e7714.jpg)
476
+ Figure 11: The top row shows two examples of slightly different instances of pairs of objects generated with our method for the fixed pairs of text prompts, written below. For some text prompts, the baseline text-to-3D framework MVDream tends to produce the exact same object every time (bottom row, left). For the respective pairs of prompts, the diversity of the results produced by our method is also limited (bottom row, middle). One way to obtain different instances of the same set of objects with our method in this case is to describe the desired differences in the text prompts (bottom row, right).
477
+
478
+ ![](images/79ac1015ab8619038d030aa0e2c600168db76d61a799163b2f453188d3122d21.jpg)
479
+ MVDream, instance 1
480
+ man standing
481
+ MVDream, instance 2
482
+
483
+ ![](images/a508a795030a2859c219b84543383fc77306c696816236be9aff03a2faec6bd1.jpg)
484
+
485
+ ![](images/52cd207b41dfd392ae2c56f82bbb849f9838ef56a182bc4d373beeef77fe08bf.jpg)
486
+ A3D (Ours), instance 1
487
+ man standing $\leftrightarrow$ robot standing
488
+
489
+ ![](images/fa0d5ba26097a0ed6b27258a0cdb518856a63b830748df15f0f6a4cb91953e6b.jpg)
490
+ A3D (Ours), instance 2
491
+
492
+ ![](images/7b745e07e6b7c6b05cf6f6a930fe0a9fa8ce1fabe8fdb13fe8e7f5ed52616503.jpg)
493
+ A3D (Ours)
494
+ man standing in red T-shirt $\leftrightarrow$ robot standing
495
+
496
+ ![](images/25f8a94320e82d67a06b61d3f1e877822ab40ea0022486c2fcb4f50edd2fd428.jpg)
497
+ Figure 12: Each of the top two rows shows two pairs of objects generated with our method for pairs of text prompts with a similar meaning but a different phrasing. The last row shows the result for a pair of prompts describing two different objects with the same attributes but using different phrasing.
498
+
499
+ ![](images/5df1e377368e9c212b0076c8b22d088e5b5432bef46d5f8e1ba69439e0213927.jpg)
500
+
501
+ ![](images/1fe3aaa6d44ae3cf33deace3739349ee563b135bb57f58e3502e205c1c985568.jpg)
502
+ Target prompt: cat skeleton
503
+
504
+ ![](images/629b057979673bd94fffdedb8ee0115138d18f9c84d1c9c959aef563945643bc.jpg)
505
+ Figure 13: Examples of application of our method to an image-to-3D pipeline. Each triplet of images shows an input image, the 3D model generated with the image-to-3D pipeline, and the result of structure-preserving transformation of this model using our method with the target prompt written below. The input image of the black cat is designed by macrovector / Freepik.
506
+
507
+ ![](images/b442d24276594e789bcd9652c2ea6e8cd7b98a04b7c6f2b6708d15266704ebed.jpg)
508
+
509
+ ![](images/5832ff49e181f4397f7fb115f8d70e5b804392044dc49d77f211fb017da2d76b.jpg)
510
+ Target prompt: robot running
511
+
512
+ ![](images/20f614d47e3b5f122def56a418c467a3527fb2314289dd57e8dd1436954457f9.jpg)
a3ddoesdiffusiondreamabout3dalignment/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48b7c66db458fb98f1e61b7374f711fb76f78dc2ed945cedea63e2dd7e51b881
3
+ size 1184861
a3ddoesdiffusiondreamabout3dalignment/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a94ae8974d20ef1a963902574270f7b85bba47a8f4e5725aae970342c11ffda
3
+ size 661508
abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f071985f01cec94788608f1aed63d8d07afb751eb49c847f6f96d8235ad10adf
3
+ size 123238
abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:530b17712fcf8b06db1235689d330a1c497ac7aedc3908817486a44cc9bb5ca2
3
+ size 170692
abencharelmmsmastersatevaluatingaigeneratedimages/677b8e8b-9e9a-4670-974f-14fe8264da6b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:605865d9a484e349235e1bcf60b7c48b18a9d0ae9a8a43e684e198d8a6a21f00
3
+ size 6023457
abencharelmmsmastersatevaluatingaigeneratedimages/full.md ADDED
@@ -0,0 +1,516 @@
 
 
 
 
1
+ # A-BENCH: ARE LMMs MASTERS AT EVALUATING AI-GENERATED IMAGES?
2
+
3
+ Zicheng Zhang $^{1*}$ , Haoning Wu $^{2*}$ , Chunyi Li $^{1}$ , Yingjie Zhou $^{1}$ , Wei Sun $^{1}$ , Xiongkuo Min $^{1}$ , Zijian Chen $^{1}$ , Xiaohong Liu $^{1}$ , Weisi Lin $^{2}$ , Guangtao Zhai $^{1\dagger}$ ,
4
+
5
+ $^{1}$ Shanghai Jiaotong University, $^{2}$ Nanyang Technological University
6
+
7
+ * Equal contribution. † Corresponding authors. Project Page: https://github.com/Q-Future/A-Bench.
8
+
9
+ ![](images/d2a46f1a4f7fbd7126f2f0aaf984c3357149edb2d36d8a5547d8fe8dab991220.jpg)
+
+ ![](images/1a767842b549a52efbab5c956edf0fbf2d7d0554f90c7222215211384852db06.jpg)
+
+ ![](images/a40e50c005551529333c695c1778aa585469a8ad702507eb1673ada8b83b5ee4.jpg)
+
+ Figure 1: Error cases from the A-Bench, with examples drawn from the Composition Identification, Contradiction Overcome, and Generative Distortion Assessment subsets. Each panel shows an AI-generated image together with a multiple-choice question, the correct answer, and the incorrect responses given by GPT-4o and Gemini 1.5 Pro.
39
+ # ABSTRACT
40
+
41
+ How to accurately and efficiently assess AI-generated images (AIGIs) remains a critical challenge for generative models. Given the high costs and extensive time commitments of user studies, many researchers have turned towards employing large multi-modal models (LMMs) as AIGI evaluators, the precision and validity of which are still questionable. Furthermore, traditional benchmarks often utilize mostly natural-captured content rather than AIGIs to test the abilities of LMMs, leading to a noticeable gap for AIGIs. Therefore, we introduce A-Bench in this paper, a benchmark designed to diagnose whether LMMs are masters at evaluating AIGIs. Specifically, A-Bench is organized under two key principles: 1) Emphasizing both high-level semantic understanding and low-level visual quality perception to address the intricate demands of AIGIs. 2) Various generative models are utilized for AIGI creation, and various LMMs are employed for evaluation, which ensures a comprehensive validation scope. Ultimately, 2,864 AIGIs from 16 text-to-image models are sampled, each paired with question-answers annotated by human experts. We hope that A-Bench will significantly enhance the evaluation process and promote the generation quality for AIGIs.
42
+
43
+ # 1 INTRODUCTION
44
+
45
+ One look is worth a thousand words. Inspired by this age-old adage, numerous researchers dedicate their efforts to developing text-to-image (T2I) models that vividly bring text to life through imagery. These T2I models, driven by free-form text prompts, aim to create images that accurately align with the text and showcase high perceptual quality. Innovations such as AlignDRAW (Mansimov et al., 2015) and the text-conditional GAN (Reed et al., 2016) introduced early differentiable architectures for image generation. The field continues to advance with the development of diffusion models (Saharia et al., 2022; Rombach et al., 2022b), significantly propelling T2I technology forward. On the commercial front, major corporations leverage vast-scale data to launch stunningly effective T2I models, such as DALL-E (Ramesh et al., 2022), Midjourney (Holz, 2023), Parti (Yu et al., 2022), etc. However, despite their diversity and widespread adoption, all these advanced T2I models occasionally face issues of low alignment with prompts and low perceptual quality in creating AI-generated images (AIGIs), necessitating careful evaluation and improvement.
46
+
47
+ ![](images/46547ede32b5f85beb75ece6d2ddfc15db32339b0556b93810f96c5f9d49872b.jpg)
48
+ Figure 2: The proposed A-Bench is designed to find out whether LMMs are reliable for T2I AIGI evaluation. Instead of directly assessing the performance of LMM-based metrics, we evaluate the underlying LMMs themselves by examining whether the fundamental questions regarding semantic understanding and quality perception can be correctly answered. Based on the benchmark results, we can then 'diagnose' the strengths and weaknesses across various LMMs.
49
+
50
+ ![](images/656f147ef4d07b756be6d97e9f91065d9f3689b66aa4784997fa44049a4e9a53.jpg)
51
+
52
+ The alignment and quality evaluation of AIGIs present significant challenges that small expert models attempt to address. Although these small expert models offer some solutions, they possess inherent drawbacks and often fail to meet contemporary demands. Specifically, for alignment assessment, CLIP-based similarity models struggle with accurately judging alignment, particularly with complex text prompts (Radford et al., 2021b). When it comes to quality evaluation, traditional image quality/aesthetic assessment methods (IQA/IAA) are not capable of identifying AIGI-generative distortions (Wu et al., 2023a;b), rendering them unsuitable for this specialized task.
53
+
54
+ Many researchers are increasingly relying on large language models (LLMs) and large multi-modal models (LMMs) for their human-like processing capabilities, which are presumed to enable accurate judgments of alignment and quality in generated content. Consequently, many LMM-based evaluators have been developed, including VIE-Score (Ku et al., 2023), Prometheus (Kim et al., 2023), VQAScore (Lin et al., 2024), GPT4V-Eval (Zhang et al., 2023b), TIFA (Hu et al., 2023), and Davidsonian Graph (Cho et al., 2023), etc. However, a fundamental question remains:
55
+
56
+ Are LMMs reliable for evaluating T2I AIGIs?
57
+
58
+ These LMM-based metrics traditionally employ evaluation criteria such as SRCC/PLCC to determine their reliability. However, this approach only reveals how well the metrics perform, without shedding light on their specific strengths and weaknesses. To address this gap, we propose conducting a comprehensive 'diagnostic' benchmark, A-Bench, focusing on LMMs' capabilities in semantic understanding and quality assessment. Rather than directly evaluating these LMM-based metrics, we focus on the underlying LMMs themselves. We move away from computing SRCC/PLCC criteria and instead examine whether the fundamental perceptual questions can be correctly answered, which is the core basis of all LMM-based evaluators. To initiate our exploration of the AIGI evaluation abilities of LMMs, we first construct the A-Bench centered on a pivotal question:
59
+
60
+ What do we expect from LMMs as AIGI evaluators?
61
+
62
+ The answer lies in the capabilities of semantic alignment and quality evaluation. We then define two key diagnostic subsets: A-Bench $^{P1}$ →high-level semantic understanding, and A-Bench $^{P2}$ →low-level quality perception. For high-level semantic understanding, A-Bench $^{P1}$ targets three critical areas: Basic Recognition, Bag-of-Words Pitfalls Discrimination, and Outside Knowledge Realization, which are designed to progressively test the LMM's capability in AIGI semantic understanding, moving from simple to complex prompt-related content. For low-level quality perception, A-Bench $^{P2}$ concentrates on Technical Quality Perception, Aesthetic Quality Evaluation, and Generative Distortion Assessment, which are designed to cover the common and AIGI-specific quality problems. The aspect selection is meticulously designed to encompass the most prevalent application scenarios. Specifically, a comprehensive dataset of 2,864 AIGIs sourced from various T2I models is compiled, including 1,408 AIGIs for A-Bench $^{P1}$ and 1,456 for A-Bench $^{P2}$ . Each AIGI is
63
+
64
+ paired with a question-answer set annotated by human experts. We then test 23 prominent LMMs, including both open-source and closed-source models, on the A-Bench. Given that even the best LMM still falls behind humans by a large margin, we can derive the following conclusion:
65
+
66
+ LMMs are still not masters at evaluating AIGIs.
67
+
68
+ All LMMs lag behind even the poorest human performance on A-Bench, and there is a substantial disparity between open-source LMMs and closed-source LMMs. The performance across different subcategories fluctuates for both A-Bench $^{P1}$ and A-Bench $^{P2}$ , indicating that LMMs are not yet robust across different AIGI evaluation scenarios. There remains a considerable gap and significant room for improvement before LMMs can be considered masters of evaluating AIGIs.
69
+
70
+ In summary, we systematically explore the capabilities of LMMs in semantic understanding and quality perception, both crucial for their role as AIGI evaluators. These two essential capabilities constitute the core of the proposed A-Bench, the first 'diagnostic' benchmark specifically designed for LMM assessment in AIGI evaluation. Our contributions are summarized as follows:
71
+
72
+ - We carry out the first 'diagnostic' benchmark on AIGI evaluation for LMMs, which consists of 2,864 AIGIs (sampled from various T2I models) paired with question-answer sets on both high-level semantic understanding and low-level quality perception.
73
+ - A detailed discussion is made about what to 'diagnose'. Semantic understanding is subdivided into Basic Recognition, Bag-of-Words Pitfalls Discrimination, and Outside Knowledge Realization, while quality perception is broken down into Technical Quality Perception, Aesthetic Quality Evaluation, and Generative Distortion Assessment.
74
+ - From the benchmark results, several insights are gleaned, which can enable us to diagnose various issues with different LMMs and assist in their improvement for AIGI evaluation.
75
+
76
+ # 2 RELATED WORKS
77
+
78
+ # 2.1 LARGE MULTI-MODAL MODELS
79
+
80
+ Large language models (LLMs), such as GPT-4 (OpenAI, 2023), T5 (Chung et al., 2022), and LLaMA (Touvron et al., 2023), exhibit exceptional linguistic capabilities in general human knowledge domains. By integrating visual input via CLIP (Radford et al., 2021a) and additional adaptation modules, large multi-modal models (LMMs) (Li et al., 2023a; Gao et al., 2023; Liu et al., 2023b; Dai et al., 2023; Zhang et al., 2023a) are capable of addressing diverse multi-modal tasks, including image captioning, visual question answering, visual segmentation, visual classification, visual reasoning, etc. For example, OpenFlamingo (Awadalla et al., 2023) integrates several gated cross-attention dense blocks into the pretrained language encoder layers. InstructBLIP (Dai et al., 2023) extends BLIP-2 (Li et al., 2023c) by incorporating vision-language instruction tuning. To further develop open-source LMMs, many works have employed GPT-4 (OpenAI, 2023) to create data for vision-language tuning, such as the LLaVA series (Liu et al., 2023b;a; 2024). However, whether these LMMs are masters at evaluating T2I AIGIs is still questionable, which needs further investigation.
81
+
82
+ # 2.2 MULTI-MODAL BENCHMARKS
83
+
84
+ Benchmarks such as COCO Caption (Chen et al., 2015) and Nocaps (Agrawal et al., 2019) evaluate the capability of models to generate textual descriptions for images. Subsequently, benchmarks like GQA (Hudson & Manning, 2019) and OK-VQA (Marino et al., 2019) focus on visual question answering, assessing multi-modal models' visual perception and reasoning abilities. Further complexities are added in benchmarks such as TextVQA (Singh et al., 2019) and ScienceQA (Lu et al., 2022), which incorporate OCR tasks and commonsense reasoning, respectively. MME (Fu et al., 2023) and MMbench (Liu et al., 2023c) provide comprehensive evaluations of LMMs across various subtasks. Additionally, MMMU (Yue et al., 2023) targets extensive multi-disciplinary tasks that require college-level knowledge and sophisticated reasoning. More recently, Q-Bench (Wu et al., 2023a) focuses specifically on assessing the low-level visual perception capabilities of LMMs. Despite these efforts, there is still a gap in systematic benchmarks for assessing the abilities of LMMs in AIGI evaluation, prompting the development of A-Bench to address this shortfall.
85
+
86
+ ![](images/10d5b2b81988d496f9c654ff072545487361981fb137bcc1f3f5f1262924329c.jpg)
87
+ (a)
88
+
89
+ ![](images/c4e3231e263a49cea6deee76a0370ad7474b40aa7004efb03b3c6ab17006223d.jpg)
90
+ (b)
91
+
92
+ ![](images/775efbeca95f1fcd283bc5d714d3388a29a74eeed8bd64f3e420aaf4a5858946.jpg)
93
+ (c)
94
+ Figure 3: Illustration of focused aspects and corresponding quality distributions for A-Bench. The focused aspects and the number of AIGIs employed are shown in (a). The quality scores of AIGIs sampled for the Technical Quality Perception and Aesthetic Quality Evaluation subsets are obtained from AIGIQA-20K (Li et al., 2024) and predicted by Q-Align (Wu et al., 2023c), respectively.
95
+
96
+ # 3 CONSTRUCTING THE A-BENCH
97
+
98
+ # 3.1 KEY PRINCIPLES
99
+
100
+ Covering High-level and Low-level Attributes. The demand for generating images has become increasingly stringent, with requirements for not only accurate adherence to prompt specifications but also high visual quality of AIGIs. To ascertain whether LMMs can effectively evaluate whether AIGIs meet these criteria, it is essential to assess their capabilities in both high-level semantic understanding and low-level quality perception. High-level semantic understanding encompasses basic recognition and the integration of external knowledge, whereas low-level quality perception involves the identification of technical quality, aesthetic appeal, and generative distortions. The detailed focused aspects can be overviewed in Fig. 3 (a).
101
+
102
+ Ensuring Diverse AIGI Scope. Considering the variety of current generative models and their application scenarios, we have selected a broad range of mainstream text-to-image (T2I) generation models to produce AI-generated images (AIGIs). To assess high-level semantic understanding, we design prompts rich in content to ensure diversity among the generated images. For evaluating low-level quality perception, we employ uniform sampling to encompass a wide spectrum of visual quality; the corresponding quality distributions are illustrated in Fig. 3 (b) and (c). Throughout the benchmarking process, we test multiple open-source and closed-source LMMs to guarantee a comprehensive evaluation. These measures ensure that our proposed A-Bench encompasses a diverse and extensive scope. More details about AIGI collection can be found in Sec. A.1.
103
+
104
+ # 3.2 FOCUSED ASPECTS
105
+
106
+ The key evaluation aspects of T2I models involve image-text alignment and image visual quality, which correspond to high-level semantic understanding and low-level quality perception abilities. Some representative examples regarding the subcategories discussed below are exhibited in Fig. 4.
107
+
108
+ # 3.2.1 HIGH-LEVEL SEMANTIC UNDERSTANDING
109
+
110
+ To evaluate whether LMMs can effectively assess image-text alignment, we implement the A-Bench $^{P1}$ , which consists of 1,408 challenging multi-modal question-answer pairs that focus on high-level semantic understanding for AIGIs. The high-level semantic understanding can be divided into the following subcategories, moving from simple to complex prompt-related content:
111
+
112
+ ![](images/5a89603b9c1da5283f02d54bd2e2a706f6d7b1846f0f9da952a511578d680b82.jpg)
113
+ Figure 4: Examples of A-Bench. Each AIGI is accompanied by a question-answer pair.
114
+
115
+ Basic Recognition. This aspect concentrates on the fundamental semantic understanding of AIGIs (Nichol et al., 2021; Saharia et al., 2022), which can be subdivided into two distinct areas based on the objects of interest: 1) Major Object Recognition, which involves recognizing the primary objects in the image, such as humans or objects depicted in the foreground. 2) Minor Object Recognition, which pertains to the identification of less-prominent objects within the image, such as background elements or secondary characters.
116
+
117
+ Bag-of-Words Pitfalls Discrimination. This dimension focuses on the discriminative semantic understanding of AIGIs crafted with Bag-of-words prompts (encompassing rich descriptive attributes or complex object relationships (Qu et al., 2024)). This can be subdivided into the following aspects related to the crucial points of T2I generation alignment: 1) Attributes Awareness, defined as the capability to accurately identify the attributes of objects in AIGIs (Xu et al., 2023; Liu et al., 2023c). 2) Additionally, given that T2I models may incorrectly interpret nouns as adjectives, resulting in the unwanted generation of objects instead of the intended attributes (Chatterjee et al., 2025; Motamed et al., 2023), we have also introduced a dimension called Nouns as Adjectives Awareness to address this issue. 3) Composition Identification, recognized as the ability to correctly comprehend the compositional relationships such as orientation, occlusion, size comparison, and spatial arrangement (Wang et al., 2024b; Zhang et al., 2024). 4) Number of Objects Counting, regarded as the ability to accurately count the specified objects in the image, which is crucial for assessing whether the AIGI aligns with the numerical specifications of the prompt (Binyamin et al., 2024).
118
+
119
+ Outside Knowledge Realization. This aspect emphasizes the reasoning ability to utilize external knowledge not directly depicted in the images (Schwenk et al., 2022), and can be broken down into the following dimensions: 1) Specific Terms Recognition: This involves identifying specific scenes and objects related to distinct domains such as geography, sports, science, materials, food, everyday life, creatures, brands, and styles. 2) Contradiction Overcome, recognized as the ability to correctly interpret AIGIs even when their content contradicts established world knowledge, which is particularly crucial for evaluating AIGIs generated from controversial prompts (Hou et al., 2024).
120
+
121
+ # 3.2.2 LOW-LEVEL QUALITY PERCEPTION
122
+
123
+ Conversely, to determine the ability of LMMs to perceive image visual quality, we conduct the A-Bench $^{P2}$ , comprising 1,456 challenging multi-modal question-answer pairs centered on low-level quality perception for AIGIs, which can be categorized into the following aspects: 1) Technical Quality Perception: This indicates the low-level characteristics that directly degrade the quality of images, such as blur, noise, exposure, etc. (Su et al., 2021; Ying et al., 2020). 2) Aesthetic Quality Evaluation: This indicates the attributes that affect the aesthetic appeal of AIGIs and evoke varied human feelings, which include color, lighting, etc. (Huang et al., 2024). 3) Generative Distortion Assessment: This indicates the unexpected AIGI-specific distortions (Chen et al., 2023b; Li et al., 2023b; 2024), such as generative blur caused by low completion, confusing geometry structure, unnaturalness, etc.
124
+
125
+ # 3.3 QUESTION COLLECTION
126
+
127
+ Question Type In the A-Bench, two types of question formats are utilized, including Yes-or-No questions and What questions. The Yes-or-No questions (accounting for $25.9\%$ ) are used to evaluate the fundamental judgment abilities of LMMs while the What questions (accounting for $74.1\%$ ) are more complicated and require LMMs to gain a more comprehensive understanding of the AIGIs.
128
+
129
+ Human Expert Annotation We have assembled a team of 15 human annotators, each with expert experience in AIGI evaluation, to develop questions for A-Bench. This annotation process is conducted in a controlled laboratory environment, ensuring consistency and reliability. Annotators are tasked with designing questions specific to the sub-categories of the AIGIs under review, utilizing their extensive knowledge to determine the content and format of each question. To ensure the highest quality and suitability, each question undergoes a rigorous review process, with at least three other expert annotators double-checking it. More details can be acquired in Sec. A.3.
130
+
131
+ Question Response Specifically, an example input query to LMMs is formatted as follows:
132
+
133
+ #User: What painting style is represented in the image? |IMAGE_TOKEN|
134
+
135
+ A. Abstract B. Surrealism C. Expressionism D. Impressionism
136
+
137
+ Answer with the option's letter from the given choices directly.
138
+
139
+ The answer candidates and correct answers are shuffled during the evaluation process. Since the responses from LMMs can be in various forms (if the correct choice is C) such as 'C', 'Expressionism', 'The painting style of the image is expressionism', etc., we employ a GPT-assisted choice evaluation technique proposed in (Liu et al., 2023c; Wu et al., 2023a) to validate the correctness of LMMs' responses. More details are shown in Sec. A.4.
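+
+ To make this pipeline concrete, the following sketch (ours, for illustration only, not the released A-Bench code) shows how an item can be turned into the multiple-choice query above with shuffled candidates, and how a free-form reply can be mapped back to an option letter before deferring to the GPT-assisted check; all function and variable names here are assumptions.
+
+ ```python
+ # Minimal sketch (assumed names): build the multiple-choice query with shuffled
+ # answer candidates and map a free-form LMM reply back to an option letter;
+ # ambiguous replies would be deferred to the GPT-assisted choice evaluation.
+ import random
+ import string
+
+ def build_query(question, options, image_token="|IMAGE_TOKEN|"):
+     """Shuffle the answer candidates and format the query shown above."""
+     shuffled = random.sample(options, k=len(options))
+     letters = string.ascii_uppercase[:len(shuffled)]
+     choice_line = " ".join(f"{letter}. {opt}" for letter, opt in zip(letters, shuffled))
+     query = (f"#User: {question} {image_token}\n{choice_line}\n"
+              "Answer with the option's letter from the given choices directly.")
+     return query, shuffled
+
+ def match_choice(response, shuffled):
+     """Return the chosen letter, or None to defer to GPT-assisted matching."""
+     letters = string.ascii_uppercase[:len(shuffled)]
+     text = response.strip()
+     if text and text[0].upper() in letters and (len(text) == 1 or not text[1].isalnum()):
+         return text[0].upper()  # replies like "C" or "C. Expressionism"
+     hits = [letter for letter, opt in zip(letters, shuffled) if opt.lower() in text.lower()]
+     return hits[0] if len(hits) == 1 else None  # unique textual match, otherwise defer
+
+ query, shuffled = build_query(
+     "What painting style is represented in the image?",
+     ["Abstract", "Surrealism", "Expressionism", "Impressionism"])
+ print(match_choice("The painting style of the image is expressionism", shuffled))
+ ```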
140
+
141
+ # 4 EXPERIMENT RESULTS
142
+
143
+ # 4.1 BENCHMARK CANDIDATES
144
+
145
+ To ensure the results are comprehensive and up-to-date, we select the widely used LMMs for benchmarking. The Proprietary LMMs (closed-source) include Gemini 1.5 Pro (Reid et al., 2024), GPT-4v (OpenAI, 2023), GPT-4o (2024-05-13) (OpenAI, 2024), and Qwen-VL-Max (Bai et al., 2023). The Open-source LMMs include Qwen2-VL-72B (Qwen2-72B) (Wang et al., 2024a), MiniCPM-V2.6 (Qwen2-7B) (Yao et al., 2024), InternVL2-40B (Nous-Hermes-2-Yi-34B) (Chen et al., 2024), Ovis1.5 (Llama3-8B) (Lu et al., 2024b), LLaVA-OneVision (Qwen2-7B), CogVLM2-19B (Llama3-8B) (Wang et al., 2023), IDEFICS-2 (Mistral-7B-Instruct-v0.2) (Huggingface, 2023), DeepSeek-VL-7B (Lu et al., 2024a), InternLM-XComposer2-VL (Dong et al., 2024), LLaVA-NeXT (Llama3-8B),
146
+
147
+ ![](images/9d827cc600f725104f0a81b7670dbd9f8acd7efe11c26b9cf7674cef1a1292df.jpg)
148
+ (a) Overall results of A-Bench.
149
+
150
+ ![](images/017d9264d22d37422663cfe455247785b9191a0012581b579f98a0ecb66dc054.jpg)
151
+ (b) Detailed results of A-Bench.
152
+ Figure 5: A Quick Look of the A-Bench outcomes. (a) showcases a comparative analysis of the overall accuracy between humans, 23 selected LMMs (both closed-source and open-source), and random guess. (b) displays a radar chart that details the accuracy performance (subtracting the accuracy of random guess) of the top-7 LMMs across various subcategories of A-Bench.
153
+
154
+ LLaVA-NeXT (Qwen-72B), LLaVA-NeXT (Qwen-110B) (Liu et al., 2024), mPLUG-Owl2 (LLaMA-7B) (Ye et al., 2023), LLaVA-v1.5 (Vicuna-v1.5-7B), LLaVA-v1.5 (Vicuna-v1.5-13B) (Liu et al., 2023a), CogVLM-17B (Vicuna-v1.5-7B) (Wang et al., 2023), Qwen-VL (Qwen-7B) (Bai et al., 2023), BakLLava (Mistral-7B) (Liu et al., 2023b), and Fuyu-8B (Persimmon-8B) (Adept, 2023). All LMMs are tested in a zero-shot setting. Note that the instruction prompt may differ slightly across LMMs according to their official settings.
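+
+ As a rough illustration of this zero-shot protocol (a sketch under assumed names, not the released evaluation harness), each model answers every question once with no in-context examples, and accuracy is then aggregated per subcategory and overall, as reported in Tables 1 and 2:
+
+ ```python
+ # Sketch of zero-shot accuracy aggregation (assumed item fields and `ask_lmm`
+ # backend; letter extraction is simplified relative to the GPT-assisted check).
+ from collections import defaultdict
+
+ def evaluate(items, ask_lmm):
+     """items: dicts with 'image', 'query', 'answer' (correct letter), 'subcategory'."""
+     correct, total = defaultdict(int), defaultdict(int)
+     for item in items:
+         pred = ask_lmm(item["query"], item["image"]).strip()[:1].upper()
+         for key in (item["subcategory"], "overall"):
+             total[key] += 1
+             correct[key] += int(pred == item["answer"])
+     return {key: correct[key] / total[key] for key in total}
+
+ # Dummy run with a model that always answers "A":
+ acc = evaluate(
+     [{"image": "img.png", "query": "...", "answer": "A", "subcategory": "Basic Recognition"}],
+     ask_lmm=lambda query, image: "A")
+ print(acc)  # {'Basic Recognition': 1.0, 'overall': 1.0}
+ ```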
155
+
156
+ # 4.2 HUMAN PERFORMANCE
157
+
158
+ For human performance on A-Bench, we conduct a user-study experiment with five ordinary people in a controlled laboratory setting. Initially, participants familiarize themselves with the tasks through exposure to similar cases. Subsequently, they select the appropriate responses for the questions posed in the A-Bench. To maintain consistency with the conditions experienced by LMMs, the order of questions is randomized, and participants receive no additional information beyond the AIGIs, questions, and answer options. The best and worst performances are included for comparison. More details about acquiring human performance can be found in Sec. A.5.
159
+
160
+ # 4.3 FINDINGS OF A-BENCH
161
+
162
+ General Observation: Proprietary LMMs deliver performance comparable to the best open-source LMMs. A concise overview of the A-Bench results is provided in Fig. 5, revealing several general insights: 1) All LMMs significantly outperform the random guess, indicating their capabilities in handling AIGI evaluation, with Qwen2-VL-72B leading, closely followed by Gemini 1.5 Pro and GPT-4o. Notably, among the open-source LMMs, which are preferred for AIGI evaluations due to their accessibility and modifiability, Qwen2-VL-72B stands out, even outperforming the best closed-source competitors. 2) Even the lowest performance by humans surpasses that of all LMMs, with a noticeable $14.62\%$ gap compared to the top-performing LMM, Qwen2-VL-72B, indicating that LMMs are still far from performing AIGI evaluation as well as humans. 3) A closer examination of the radar chart in Fig. 5 (b) shows that top LMMs exhibit varied performances across different sub-categories, suggesting a lack of robustness, while humans show more consistent and balanced performance across these categories, highlighting areas where LMMs need further improvement.
163
+
164
+ Findings of A-Bench<sup>P1</sup>: LMMs excel at basic recognition tasks but tend to be less effective when it comes to nuanced semantic understanding. The performance results of LMMs on the A-Bench<sup>P1</sup> subset, as detailed in Table 1, reveal several key insights: 1) Almost all LMMs show
165
+
166
+ Table 1: Benchmark results on the A-Bench $^{P1}$ subset, which reveal the high-level semantic understanding abilities across LMMs. The best performance is marked in bold and the second-best is underlined for proprietary and open-source LMMs, respectively.
167
+
168
+ <table><tr><td>Categories</td><td colspan="2">Basic Recognition</td><td colspan="4">Bag-of-Words</td><td colspan="2">Outside Knowledge</td><td rowspan="2">Overall↑</td></tr><tr><td>LMM (LLM)</td><td>Major↑</td><td>Minor↑</td><td>Attr.↑</td><td>N. Adj.↑</td><td>Comp.↑</td><td>Number↑</td><td>Term↑</td><td>Contra.↑</td></tr><tr><td>HUMAN (WORST)</td><td>95.18%</td><td>94.24%</td><td>96.78%</td><td>88.70%</td><td>85.49%</td><td>82.46%</td><td>81.76%</td><td>88.91%</td><td>92.40%</td></tr><tr><td>HUMAN (BEST)</td><td>95.40%</td><td>95.21%</td><td>99.42%</td><td>95.17%</td><td>93.34%</td><td>91.73%</td><td>84.29%</td><td>96.05%</td><td>94.02%</td></tr><tr><td colspan="10">Proprietary LMMs:</td></tr><tr><td>GEMINI 1.5 PRO</td><td>93.82%</td><td>95.18%</td><td>94.35%</td><td>80.27%</td><td>72.14%</td><td>79.35%</td><td>72.88%</td><td>61.56%</td><td>84.70%</td></tr><tr><td>GPT-4v</td><td>92.95%</td><td>96.00%</td><td>87.40%</td><td>82.67%</td><td>64.39%</td><td>68.84%</td><td>77.60%</td><td>66.73%</td><td>83.60%</td></tr><tr><td>GPT-4o (2024-05-13)</td><td>94.34%</td><td>95.14%</td><td>91.99%</td><td>79.54%</td><td>76.40%</td><td>73.30%</td><td>77.47%</td><td>68.59%</td><td>85.44%</td></tr><tr><td>Qwen-VL-MAX</td><td>92.56%</td><td>94.75%</td><td>91.99%</td><td>85.78%</td><td>68.94%</td><td>75.85%</td><td>78.94%</td><td>65.05%</td><td>84.47%</td></tr><tr><td colspan="10">Open-source LMMs:</td></tr><tr><td>Qwen2-VL-72B (Qwen2-72B)</td><td>95.15%</td><td>94.61%</td><td>92.31%</td><td>83.66%</td><td>71.37%</td><td>78.20%</td><td>79.12%</td><td>68.99%</td><td>86.02%</td></tr><tr><td>MiniCPM-V2.6 (Qwen2-7B)</td><td>93.01%</td><td>93.22%</td><td>93.44%</td><td>81.21%</td><td>78.31%</td><td>77.06%</td><td>79.32%</td><td>67.86%</td><td>84.98%</td></tr><tr><td>InternVL2-40B (Nous-Hermes-2-Yi-34B)</td><td>94.86%</td><td>93.87%</td><td>93.56%</td><td>80.32%</td><td>79.88%</td><td>78.01%</td><td>77.44%</td><td>69.54%</td><td>85.17%</td></tr><tr><td>Ovis1.5 (Llama3-8B)</td><td>92.79%</td><td>92.26%</td><td>92.12%</td><td>80.55%</td><td>78.61%</td><td>78.59%</td><td>78.34%</td><td>69.87%</td><td>85.08%</td></tr><tr><td>LLaVA-OneVision (Qwen2-7B)</td><td>92.53%</td><td>92.01%</td><td>92.07%</td><td>81.12%</td><td>79.33%</td><td>77.98%</td><td>79.02%</td><td>69.91%</td><td>84.88%</td></tr><tr><td>CogVLM2-19B (Llama3-8B)</td><td>93.31%</td><td>92.70%</td><td>89.97%</td><td>75.41%</td><td>64.63%</td><td>66.63%</td><td>75.88%</td><td>61.54%</td><td>82.55%</td></tr><tr><td>IDEFICS-2 (Mistral-7B-Instruct-v0.2)</td><td>89.92%</td><td>91.87%</td><td>86.50%</td><td>75.45%</td><td>61.36%</td><td>71.04%</td><td>73.31%</td><td>62.91%</td><td>80.14%</td></tr><tr><td>DeepSeek-VL-7B</td><td>91.48%</td><td>91.15%</td><td>82.44%</td><td>83.73%</td><td>63.38%</td><td>69.91%</td><td>75.40%</td><td>60.32%</td><td>81.42%</td></tr><tr><td>InternLM-XComposer2-VL (InternLM2)</td><td>92.79%</td><td>95.21%</td><td>86.38%</td><td>82.64%</td><td>68.87%</td><td>72.22%</td><td>70.77%</td><td>64.35%</td><td>81.89%</td></tr><tr><td>LLaVA-NeXT (Llama3-8B)</td><td>92.72%</td><td>92.40%</td><td>91.15%</td><td>83.62%</td><td>61.04%</td><td>67.07%</td><td>76.23%</td><td>62.94%</td><td>82.88%</td></tr><tr><td>LLaVA-NeXT (Qwen-72B)</td><td>94.37%</td><td>92.72%</td><td>91.49%</td><td>81.61%</td><td>62.40%</td><td>73.39%</td><td>77.15%</td><td>61.44%</td><td>83.99%</td></tr><tr><td>LLaVA-NeXT 
(Qwen-110B)</td><td>93.83%</td><td>91.10%</td><td>90.43%</td><td>84.71%</td><td>67.76%</td><td>67.70%</td><td>76.25%</td><td>64.28%</td><td>83.66%</td></tr><tr><td>mPLUG-Owl2 (LLaMA-7B)</td><td>85.29%</td><td>86.26%</td><td>83.87%</td><td>79.66%</td><td>53.73%</td><td>57.85%</td><td>71.14%</td><td>58.47%</td><td>76.40%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-7B)</td><td>87.82%</td><td>88.65%</td><td>83.86%</td><td>75.41%</td><td>61.39%</td><td>65.67%</td><td>74.76%</td><td>62.69%</td><td>78.86%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-13B)</td><td>88.60%</td><td>89.57%</td><td>86.48%</td><td>79.52%</td><td>62.33%</td><td>58.82%</td><td>74.81%</td><td>61.56%</td><td>79.72%</td></tr><tr><td>CogVLM-17B (Vicuna-v1.5-7B)</td><td>90.38%</td><td>95.17%</td><td>85.89%</td><td>77.47%</td><td>49.56%</td><td>47.82%</td><td>73.34%</td><td>61.34%</td><td>78.61%</td></tr><tr><td>Qwen-VL (Qwen-7B)</td><td>86.14%</td><td>86.32%</td><td>81.38%</td><td>77.47%</td><td>52.72%</td><td>61.22%</td><td>71.61%</td><td>57.32%</td><td>76.39%</td></tr><tr><td>BakLLava (Mistral-7B)</td><td>88.91%</td><td>81.31%</td><td>77.42%</td><td>73.81%</td><td>52.18%</td><td>62.32%</td><td>68.37%</td><td>49.02%</td><td>74.33%</td></tr><tr><td>Fuyu-8B (Persimmon-8B)</td><td>81.41%</td><td>68.27%</td><td>66.72%</td><td>57.45%</td><td>42.24%</td><td>48.32%</td><td>61.16%</td><td>29.65%</td><td>63.12%</td></tr><tr><td>random guess</td><td>32.27%</td><td>37.22%</td><td>31.03%</td><td>42.82%</td><td>29.85%</td><td>29.78%</td><td>26.51%</td><td>32.13%</td><td>30.80%</td></tr></table>
169
+
170
+ good performance in Basic Recognition, suggesting that they are quite adept at fundamental semantic understanding, which includes recognizing foreground and background objects in AIGIs. 2) However, their effectiveness diminishes in more complex tasks such as Bag-of-Words, particularly in subcategories like Nouns as Adjectives Awareness, Composition Identification, and Number of Objects Counting. These areas require deeper semantic understanding and reasoning, which is critical as users often employ complex prompts that include such nuanced elements. The LMMs' underperformance here indicates potential challenges in accurately aligning AIGIs with user prompts. 3) Additionally, Outside Knowledge poses significant challenges, with LMMs generally achieving unsatisfactory performance in the Contradiction Overcome subcategory, where AIGIs contain content that defies common sense, requiring LMMs to override their prior knowledge to respond correctly. The subcategory Specific Terms tests the knowledge base of LMMs, where proprietary LMMs generally perform better due to being trained on more recent and extensive datasets. 4) Therefore, to improve the evaluation of text alignment in AIGIs using LMMs, it is recommended to simplify overly complex prompts. By employing a divide-and-conquer approach to break down complex prompts into shorter ones, sequential judgments can effectively enhance accuracy.
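+
+ As a toy illustration of this divide-and-conquer strategy (our sketch under assumed names, not a method released with A-Bench), a long prompt is split into short clauses, each clause is checked with its own Yes-or-No query, and the verdicts are aggregated:
+
+ ```python
+ # Sketch of divide-and-conquer alignment checking; the comma/"and" splitting rule
+ # and the `ask_lmm` callable are illustrative assumptions.
+ import re
+
+ def decompose(prompt):
+     """Split a complex prompt into short, independently checkable clauses."""
+     parts = re.split(r",| and ", prompt)
+     return [p.strip() for p in parts if p.strip()]
+
+ def alignment_score(prompt, image, ask_lmm):
+     """Fraction of clauses that the LMM judges to be satisfied by the image."""
+     verdicts = []
+     for clause in decompose(prompt):
+         question = f"Does the image show: {clause}? Answer Yes or No."
+         verdicts.append(ask_lmm(question, image).strip().lower().startswith("yes"))
+     return sum(verdicts) / len(verdicts) if verdicts else 0.0
+
+ # Dummy backend that always answers "Yes":
+ score = alignment_score(
+     "A red bicycle leaning on a fence, two birds on the handlebar and snow falling",
+     "example.png",
+     ask_lmm=lambda question, image: "Yes")
+ print(score)  # 1.0
+ ```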
171
+
172
+ Findings of A-Bench $^{P2}$ : LMMs are poor quality evaluators. The performance results of LMMs on the A-Bench $^{P2}$ subset, as shown in Table 2, illustrate a notable disparity in capabilities: 1) There is a significant performance gap of approximately $23.10\%$ between the top-performing LMMs and human evaluators, highlighting that LMMs lag considerably in quality perception and struggle to accurately assess the quality of AIGIs. 2) Furthermore, most LMMs exhibit their weakest performance in the Generative Distortion Assessment subcategory (except Qwen2-VL-72B), suggesting their ineffectiveness at identifying unexpected generative distortions, such as unnatural appearances and incorrect geometric structures. 3) Interestingly, while humans generally perform better in Technical Quality Perception compared to Aesthetic Quality Evaluation, LMMs show similar performance levels in both subcategories (except Qwen2-VL-72B and MiniCPM-V2.6). This difference likely stems from the more objective nature of technical quality assessments, which leads to more consistent
173
+
174
+ Table 2: Benchmark results on the A-Bench $^{P2}$ subset, which reflect the low-level quality perception abilities across LMMs. The best performance is marked in bold and the second-best is underlined for proprietary and open-source LMMs, respectively.
175
+
176
+ <table><tr><td>Categories
177
+ LMM (LLM)</td><td>Technical↑</td><td>Aesthetic↑</td><td>Generative↑</td><td>Overall↑</td></tr><tr><td>HUMAN (WORST)</td><td>94.32%</td><td>84.49%</td><td>86.25%</td><td>90.56%</td></tr><tr><td>HUMAN (BEST)</td><td>94.69%</td><td>86.01%</td><td>93.00%</td><td>92.22%</td></tr><tr><td colspan="5">Proprietary LMMs:</td></tr><tr><td>GEMINI 1.5 PRO</td><td>71.22%</td><td>77.61%</td><td>59.07%</td><td>69.12%</td></tr><tr><td>GPT-4v</td><td>67.82%</td><td>68.34%</td><td>58.02%</td><td>64.31%</td></tr><tr><td>GPT-4o (2024-05-13)</td><td>70.59%</td><td>61.61%</td><td>67.92%</td><td>66.88%</td></tr><tr><td>Qwen-VL-MAX</td><td>71.31%</td><td>69.77%</td><td>58.56%</td><td>66.21%</td></tr><tr><td colspan="5">Open-source LMMs:</td></tr><tr><td>Qwen2-VL-72B (Qwen2-72B)</td><td>74.22%</td><td>60.31%</td><td>70.23%</td><td>68.99%</td></tr><tr><td>MiniCPM-V2.6 (Qwen2-7B)</td><td>69.10%</td><td>60.14%</td><td>60.47%</td><td>64.01%</td></tr><tr><td>InternVL2-40B (Nous-Hermes-2-Yi-34B)</td><td>66.28%</td><td>63.21%</td><td>50.10%</td><td>59.22%</td></tr><tr><td>Ovis1.5 (Llama3-8B)</td><td>70.83%</td><td>67.82%</td><td>55.39%</td><td>64.50%</td></tr><tr><td>LLaVA-OneVision (Qwen2-7B)</td><td>68.84%</td><td>67.79%</td><td>54.27%</td><td>63.78%</td></tr><tr><td>CogVLM2-19B (Llama3-8B)</td><td>64.21%</td><td>61.33%</td><td>56.75%</td><td>60.73%</td></tr><tr><td>IDEFICS-2 (Mistral-7B-Instruct-v0.2)</td><td>62.00%</td><td>68.76%</td><td>47.12%</td><td>59.11%</td></tr><tr><td>DeepSeek-VL-7B</td><td>55.91%</td><td>53.79%</td><td>47.59%</td><td>52.36%</td></tr><tr><td>InternLM-XComposer2-VL (InternLM2)</td><td>62.29%</td><td>63.37%</td><td>50.26%</td><td>58.58%</td></tr><tr><td>LLaVA-NeXT (Llama3-8B)</td><td>58.59%</td><td>48.57%</td><td>52.00%</td><td>53.13%</td></tr><tr><td>LLaVA-NeXT (Qwen-72B)</td><td>59.91%</td><td>55.51%</td><td>59.80%</td><td>58.42%</td></tr><tr><td>LLaVA-NeXT (Qwen-110B)</td><td>64.69%</td><td>57.20%</td><td>63.64%</td><td>61.89%</td></tr><tr><td>mPLUG-Owl2 (LLaMA-7B)</td><td>57.90%</td><td>54.47%</td><td>53.81%</td><td>55.45%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-7B)</td><td>45.90%</td><td>41.33%</td><td>54.59%</td><td>47.12%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-13B)</td><td>46.08%</td><td>41.22%</td><td>48.10%</td><td>45.54%</td></tr><tr><td>CogVLM-17B (Vicuna-v1.5-7B)</td><td>54.76%</td><td>48.45%</td><td>52.47%</td><td>51.36%</td></tr><tr><td>Qwen-VL (Qwen-7B)</td><td>49.46%</td><td>34.34%</td><td>50.49%</td><td>44.99%</td></tr><tr><td>BakLLava (Mistral-7B)</td><td>47.88%</td><td>33.37%</td><td>48.46%</td><td>43.39%</td></tr><tr><td>Fuyu-8B (Persimmon-8B)</td><td>44.61%</td><td>30.23%</td><td>45.65%</td><td>40.20%</td></tr><tr><td>random guess</td><td>31.87%</td><td>32.92%</td><td>33.14%</td><td>32.63%</td></tr></table>
178
+
179
+ evaluations among humans, whereas aesthetic quality, being more subjective, results in a broader range of opinions and consequently lower performance scores.
180
+
181
+ Human vs. Proprietary LMMs Proprietary (closed-source) LMMs are regarded as closely mirroring human perception and demonstrate superior performance, particularly in zero-shot settings for evaluating AIGI. Therefore, we provide a finer-grained comparison between humans and proprietary LMMs here. 1) Beginning with a detailed comparison of human and proprietary LMMs, we observe that proprietary LMMs achieve human-level performance in Basic Recognition, indicating their ability to correctly assess AIGI alignment when prompts are simple. 2) Despite this, LMMs encounter difficulties in the Bag-of-Words aspect, especially in identifying composition and counting objects, which highlights their limitations in handling complex compositional relationships and specific object counts. 3) In the Outside Knowledge domain, proprietary LMMs show only a slight performance gap compared to humans on Specific Terms, demonstrating comprehensive prior knowledge about specific terms, but they notably lag behind in identifying controversial content. While humans can easily recognize contradictory elements, proprietary LMMs often struggle due to their reliance on common sense, making accurate responses challenging. To conclude, according to the results shown in Table 1, proprietary LMMs are competent as evaluators for simple prompts in AIGI, yet they require further improvements for AIGI content generated from more complex prompts. 4) On the other hand, Table 2 reveals that LMMs have significant shortcomings in low-level quality perception compared to humans, with an uneven performance across different quality dimensions. Surprisingly, GPT-4o shows a distinct advantage over other proprietary LMMs in recognizing generative distortions, suggesting its superior capability in this area. However, the substantial overall difference in quality perception between proprietary LMMs and humans underlines that these models are currently unsuitable for assessing the visual quality of AIGI.
182
+
183
+ # 5 CONCLUSION
184
+
185
+ In conclusion, the ambition to employ LMMs for evaluating AIGIs exposes considerable deficiencies in their capabilities, as revealed by the diagnostic benchmark A-Bench. This benchmark scrutinizes
186
+
187
+ the core capabilities of LMMs themselves, focusing on their ability to accurately address fundamental questions related to high-level semantic understanding and low-level quality perception. Our findings from A-Bench serve as a stark reminder of the current limitations faced by LMMs in the realm of AIGI evaluation. The results underscore that while LMMs provide valuable insights, their evaluation capacity remains notably inferior to human performance, especially in tasks that demand deep semantic comprehension and detailed quality assessment. By identifying specific areas for enhancement and charting a course for future research, this study not only underscores the urgent need for further development but also aids in refining the application of LMMs in AIGI evaluation tasks. Future initiatives should focus on augmenting the capabilities of LMMs to reliably match or surpass human performance in these intricate evaluation scenarios.
188
+
189
+ # 6 ETHICS STATEMENT
190
+
191
+ This submission complies fully with the ethical guidelines set by ICLR 2025. We follow ICLR's principles for responsible AI development, ensuring that our research avoids any potential harm, bias, or discrimination. The data utilized in this work is sourced exclusively from publicly available open-source datasets and models. Furthermore, our methods prioritize fairness, accountability, and transparency in the evaluation of AI-generated images.
192
+
193
+ # 7 LIMITATIONS
194
+
195
+ Timeliness Concern Creating a benchmark involves generating images, collecting data, training evaluators, and verifying data quality, making the process both time-consuming and costly. As a result, it is inevitable that AIGI benchmarks may not always keep pace with the latest technologies or models. However, the insights provided by the benchmark in evaluating AIGI remain valuable and offer useful guidance. We are committed to ongoing updates and expansions to ensure the benchmark remains current.
196
+
197
+ Scale-up Concern Since the A-Bench dataset is fully manually annotated and requires validation by at least three other humans, the annotation process is both costly and time-consuming. As such, it is quite challenging to scale up.
198
+
199
+ # 8 ACKNOWLEDGEMENT
200
+
201
+ This work is supported in part by the National Natural Science Foundation of China (623B2073, 62101326, 62225112, & 62301316).
202
+
203
+ # REFERENCES
204
+
205
+ Adept. Fuyu-8b: A multimodal architecture for ai agents, 2023. URL https://www.adept.ai/blog/fuyu-8b. Accessed: 2024-05-13.
206
+ Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. nocaps: novel object captioning at scale. In ICCV, 2019.
207
+ Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
208
+ Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023.
209
+ Lital Binyamin, Yoad Tewel, Hilit Segev, Eran Hirsch, Royi Rassin, and Gal Chechik. Make it count: Text-to-image generation with an accurate number of objects. arXiv preprint arXiv:2406.10210, 2024.
210
+
211
+ Agneet Chatterjee, Gabriela Ben Melech Stan, Estelle Aflalo, Sayak Paul, Dhruba Ghosh, Tejas Gokhale, Ludwig Schmidt, Hannaneh Hajishirzi, Vasudev Lal, Chitta Baral, et al. Getting it right: Improving spatial consistency in text-to-image models. In European Conference on Computer Vision, pp. 204-222. Springer, 2025.
212
+ Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. 2310.00426, 2023a.
213
+ Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server, 2015.
214
+ Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024.
215
+ Zijian Chen, Wei Sun, Haoning Wu, Zicheng Zhang, Jun Jia, Xiongkuo Min, Guangtao Zhai, and Wenjun Zhang. Exploring the naturalness of ai-generated images. arXiv preprint arXiv:2312.05476, 2023b.
216
+ Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, and Su Wang. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation. arXiv preprint arXiv:2310.18235, 2023.
217
+ Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instructionfinetuned language models, 2022.
218
+ Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023.
219
+ DeepFloyd. IF-I-XL-v1.0. https://www.deepfloyd.ai, 2023.
220
+ Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Xilin Wei, Songyang Zhang, Haodong Duan, Maosong Cao, Wenwei Zhang, Yining Li, Hang Yan, Yang Gao, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, and Jiaqi Wang. Internlm-xcomposer2: Mastering free-form text-image composition and comprehension in vision-language large model. arXiv preprint arXiv:2401.16420, 2024.
221
+ dreamlike art. dreamlike-photoreal-2.0. https://dreamlike.art, 2023.
222
+ Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2023.
223
+ Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023.
224
+ Yatharth Gupta, Vishnu V. Jaddipal, Harish Prabhala, Sayak Paul, and Patrick Von Platen. Progressive knowledge distillation of stable diffusion xl using layer level loss. 2401.02677, 2024.
225
+ David Holz. Midjourney. https://www.midjourney.com, 2023.
226
+ Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. Wikicontradict: A benchmark for evaluating llms on real-world knowledge conflicts from wikipedia. arXiv preprint arXiv:2406.13805, 2024.
227
+
228
+ Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 20406-20417, 2023.
229
+ Yipo Huang, Quan Yuan, Xiangfei Sheng, Zhichao Yang, Haoning Wu, Pengfei Chen, Yuzhe Yang, Leida Li, and Weisi Lin. Aesbench: An expert benchmark for multimodal large language models on image aesthetics perception. arXiv preprint arXiv:2401.08276, 2024.
230
+ Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
231
+ Huggingface. Introducing idefics: An open reproduction of state-of-the-art visual language model, 2023. URL https://huggingface.co/blog/idefics.
232
+ Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. Prometheus: Inducing fine-grained evaluation capability in language models. arXiv preprint arXiv:2310.08491, 2023.
233
+ Max Ku, Dongfu Jiang, Cong Wei, Xiang Yue, and Wenhu Chen. Viescore: Towards explainable metrics for conditional image synthesis evaluation. arXiv preprint arXiv:2312.14867, 2023.
234
+ Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023a.
235
+ Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, and Weisi Lin. Agiqa-3k: An open database for ai-generated image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology, 2023b.
236
+ Chunyi Li, Tengchuan Kou, Yixuan Gao, Yuqin Cao, Wei Sun, Zicheng Zhang, Yingjie Zhou, Zhichao Zhang, Weixia Zhang, Haoning Wu, et al. Aigiqa-20k: A large database for ai-generated image quality assessment. arXiv preprint arXiv:2404.03407, 2024.
237
+ Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pp. 19730-19742. PMLR, 2023c.
238
+ Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. arXiv preprint arXiv:2404.01291, 2024.
239
+ Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023a.
240
+ Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023b.
241
+ Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, January 2024. URL https://llava-vl.github.io/blog/2024-01-30-llava-next/.
242
+ Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player?, 2023c.
243
+ Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, and Chong Ruan. Deepseek-vl: Towards real-world vision-language understanding, 2024a.
244
+ Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
245
+
246
+ Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye. Ovis: Structural embedding alignment for multimodal large language model. arXiv:2405.20797, 2024b.
247
+ Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu, Patrick von Platen, Apolinário Passos, Longbo Huang, Jian Li, and Hang Zhao. Lcm-lora: A universal stable-diffusion acceleration module, 2023.
248
+ Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.
249
+ Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
250
+ Saman Motamed, Danda Pani Paudel, and Luc Van Gool. Lego: Learning to disentangle and invert concepts beyond object appearance in text-to-image diffusion models. arXiv e-prints, pp. arXiv-2311, 2023.
251
+ Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
252
+ OpenAI. Gpt-4 technical report, 2023.
253
+ OpenAI. Hello gpt-4o, 2024. URL https://openai.com/index/hello-gpt-4o/. Accessed: 2024-05-13.
254
+ PlaygroundAI. playground-v2-1024px-aesthetic. https://playground.com, 2023.
255
+ Leigang Qu, Wenjie Wang, Yongqi Li, Hanwang Zhang, Liqiang Nie, and Tat-Seng Chua. Discriminative probing and tuning for text-to-image generation. arXiv preprint arXiv:2403.04321, 2024.
256
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021a.
257
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021b.
258
+ Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. 2204.06125, 2022.
259
+ Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International conference on machine learning, pp. 1060-1069. PMLR, 2016.
260
+ Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
261
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022a.
262
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022b.
263
+
264
+ Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022.
265
+ Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation, 2023.
266
+ Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. arXiv, 2022.
267
+ Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8317-8326, 2019.
268
+ Shaolin Su, Vlad Hosu, Hanhe Lin, Yanning Zhang, and Dietmar Saupe. Koniq++: Boosting no-reference image quality assessment in the wild by jointly predicting image quality and defects. In The British Machine Vision Conference (BMVC), pp. 1-12, 2021.
269
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
270
+ Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024a.
271
+ Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. Cogvlm: Visual expert for pretrained language models, 2023.
272
+ Yunnan Wang, Ziqiang Li, Zequn Zhang, Wenyao Zhang, Baoxie, Xihui Liu, Wenjun Zeng, and Xin Jin. Scene graph disentanglement and composition for generalizable complex image generation. arXiv preprint arXiv:2410.00447, 2024b.
273
+ Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, and Weisi Lin. Q-bench: A benchmark for general-purpose foundation models on low-level vision. 2023a.
274
+ Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, et al. Q-instruct: Improving low-level visual abilities for multi-modality foundation models. arXiv preprint arXiv:2311.06783, 2023b.
275
+ Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023c.
276
+ Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023.
277
+ Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024.
278
+ Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. arXiv preprint arXiv:2311.04257, 2023.
279
+ Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, and Alan Bovik. From patches to pictures (paq-2-piq): Mapping the perceptual space of picture quality. In CVPR, 2020.
280
+
281
+ Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022.
282
+ Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. arXiv preprint arXiv:2311.16502, 2023.
283
+ Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Shuangrui Ding, Songyang Zhang, Haodong Duan, Wenwei Zhang, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, and Jiaqi Wang. Internlm-xcomposer: A vision-language large model for advanced text-image comprehension and composition, 2023a.
284
+ Xinchen Zhang, Ling Yang, Guohao Li, Yaqi Cai, Jiake Xie, Yong Tang, Yujiu Yang, Mengdi Wang, and Bin Cui. Itercomp: Iterative composition-aware feedback learning from model gallery for text-to-image generation. arXiv preprint arXiv:2410.07171, 2024.
285
+ Xinlu Zhang, Yujie Lu, Weizhi Wang, An Yan, Jun Yan, Lianke Qin, Heng Wang, Xifeng Yan, William Yang Wang, and Linda Ruth Petzold. Gpt-4v(ision) as a generalist evaluator for vision-language tasks. arXiv preprint arXiv:2311.01361, 2023b.
286
+
287
+ # A APPENDIX
288
+
289
+ ![](images/57273f81da8a82523e99a7aaf3299e01cf4425e016836fe1e9946224416f1175.jpg)
290
+ Figure 6: Overview of the AIGIs from A-Bench$^{P1}$.
291
+
292
+ # A.1 AIGIS COLLECTION
293
+
294
+ AIGI collection for A-Bench $^{P1}$ To ensure that the AIGIs meet the specific subcategory requirements, we have gathered 2,000 manually-written prompts to serve as the textual foundation. Below, we provide examples of these prompts:
295
+
296
+ 1. Basic Recognition -> Major Object Recognition: An elaborate treehouse in a thick forest, with children playing inside, rope bridges connecting to other trees, and birds chirping around.
297
+ 2. Basic Recognition ->Minor Object Recognition: A magical fairy ring in a moonlit forest, with tiny glowing fairies dancing and mystical plants all around.
298
+ 3. Bag-of-Words ->Attributes Awareness: A delicate, frosty, crystal snowflake beside a warm, glowing, amber ember on a smooth, slate-gray stone.
299
+ 4. Bag-of-Words ->Nouns as Adjectives Awareness: Shark-sleek submarine exploring ocean depths.
300
+ 5. Bag-of-Words ->Composition Identification: A gamer's setup with consoles and controllers on a desk, multiple screens above, and game boxes and snacks partially obscured beneath the desk.
301
+ 6. Bag-of-Words ->Number of Objects Counting: Six logs in a woodpile, stacked so tightly that they seem to form a solid block.
302
+ 7. Outside Knowledge ->Specific Terms Recognition: A barometer showing a rapid decrease in pressure.
303
+ 8. Outside Knowledge ->Contradiction Overcome: A ship floating above the clouds, sails made of sunlight.
304
+
305
+ Afterward, we use the collected prompts to create AIGIs. Fifteen text-to-image generation models are selected: Dreamlike (dreamlike art, 2023), Pixart-$\alpha$ (Chen et al., 2023a), Playground v2 (PlaygroundAI, 2023), SD1.4 (Rombach et al., 2022a), SD1.5 (Rombach et al., 2022a), SDXL (Rombach et al., 2022a), SSD1B (Gupta et al., 2024), LCM Pixart (Luo et al., 2023), LCM SD1.5 (Luo et al., 2023), LCM SDXL (Luo et al., 2023), SDXL Turbo (Sauer et al., 2023), DALLE2 (Ramesh et al., 2022), DALLE3 (Ramesh et al., 2022), IF (DeepFloyd, 2023), and Midjourney v5.2 (Holz, 2023). In total, $15 \times 2,000 = 30,000$ AIGIs are collected. To guarantee diversity, we randomly select 2,000 AIGIs, choosing one AIGI per prompt. We then manually review these AIGIs to remove any that failed to generate correctly or are unsuitable for annotation. This process yields the final set of AIGIs for A-Bench$^{P1}$, an overview of which is shown in Fig. 6.
306
+
307
+ ![](images/3f0295ee847a6d5b060bd51bb493cef1669f14e7cde786de3e92c645cc35b1a9.jpg)
308
+ (a)
309
+
310
+ ![](images/80afb3fb35f0d779328b801ba4269812c2cfc8eb9a6ee1d34e4b992cf684f41b.jpg)
311
+ (b)
312
+
313
+ ![](images/f457f9c28e4ce6372f37350c175605eaf7a034840a80463ed2e401c1e43b9c56.jpg)
314
+ (c)
315
+
316
+ ![](images/101898d649b8c3c98e996f4ac088f7f567797a79d78aa7a4540fccad48945379.jpg)
317
+ (d)
318
+
319
+ ![](images/85f92affd80086d339645c520b22bb47da0b1c9d22d3d5b804716546f12cf5fc.jpg)
320
+ Figure 7: Illustration of the quality distribution transformation.
321
+ Figure 8: Overview of the AIGIs from A-Bench$^{P2}$.
322
+
323
+ AIGI collection for A-Bench$^{P2}$. A-Bench$^{P2}$ is designed for the quality evaluation of AIGIs. Consequently, it is essential that the collected AIGIs span a wide quality range so as to cover various practical scenarios. For Technical Quality, we sample 500 AIGIs from the AIGIA-20K dataset (Li et al., 2024) using a uniform sampling strategy. Specifically, each AIGI in the AIGIA-20K dataset is assigned a mean opinion score (MOS) for technical quality, and we sample uniformly across the MOS range to obtain a more even quality distribution, as illustrated in Fig. 7. For Aesthetic Quality, in the absence of provided aesthetic scores, we utilize q-align (Wu et al., 2023c), an effective quality predictor, to infer the aesthetic values of AIGIs, and then apply the same uniform sampling to obtain 500 AIGIs for aesthetic evaluation. For Generative Distortion, we manually select 500 AIGIs exhibiting unexpected AIGI-specific distortions. There is no content overlap among the selected AIGIs; an overview is shown in Fig. 8.
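+ As an illustration of the uniform sampling step, the following minimal sketch bins AIGIs by their (technical or predicted aesthetic) scores and draws the same number of samples from each bin; the function name, the number of bins, and the (aigi_id, score) input format are illustrative assumptions rather than the exact procedure used to build the benchmark.
+
+ ```python
+ import random
+ from collections import defaultdict
+
+ def uniform_sample_by_score(items, num_bins=10, target=500, seed=0):
+     """Sample `target` AIGIs roughly uniformly across the score (e.g., MOS) range.
+
+     `items` is a list of (aigi_id, score) pairs; bins with fewer items than the
+     per-bin quota simply contribute fewer samples.
+     """
+     random.seed(seed)
+     scores = [s for _, s in items]
+     lo, hi = min(scores), max(scores)
+     width = (hi - lo) / num_bins or 1.0
+
+     bins = defaultdict(list)
+     for aigi_id, score in items:
+         b = min(int((score - lo) / width), num_bins - 1)
+         bins[b].append(aigi_id)
+
+     per_bin = target // num_bins
+     selected = []
+     for b in range(num_bins):
+         pool = bins[b]
+         random.shuffle(pool)
+         selected.extend(pool[:per_bin])
+     return selected
+ ```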
324
+
325
+ # Composition Identification -> Orientation
326
+
327
+ ![](images/0f87659252b3ecbbd8c87eac6c412efa3fa31699917d2a90d189ed3913a7f5a7.jpg)
328
+
329
+ What is located to the left of the desks in the classroom?
330
+
331
+ A. Educational posters on the walls (correct)
332
+ B. A teacher's desk
333
+ C. A blackboard
334
+ D. A bookcase
335
+
336
+ # Composition Identification -> Occlusion
337
+
338
+ ![](images/2732109385a83b03ccccd537f2505fb6975c9d7cb9832bd971cd8ff4b9a7b97a.jpg)
339
+
340
+ What is partially covered by cloaks hanging in the background of the magic workshop?
341
+ A. Shelves (correct)
342
+ B. Tricks and hats on a table
343
+ C. A magic wand
344
+ D. Cards spread out
345
+
346
+ # Composition Identification -> Size Comparison
347
+
348
+ ![](images/42e911d6a2cb09e4dc963146a8661354ecc2f19e5d6b1a2fd4f90625d9d8f49d.jpg)
349
+
350
+ In the paleontologist's dig site, which seems to be the largest?
351
+
352
+ A. Human on the right
353
+ B. Human in the middle
354
+ C. The field journal (correct)
355
+ D. Pens
356
+
357
+ # Composition Identification -> Spatial Arrangement
358
+
359
+ ![](images/a71804e3dc000a1c90c2d4047a7cc869a0a590f2aede7f14f761fb2cc9d0fb1b.jpg)
360
+
361
+ Are strategy boards placed in all four corners?
362
+
363
+ A. No (correct)
364
+ B. Yes
365
+
366
+ # Specific Terms $\rightarrow$ Geography
367
+
368
+ ![](images/73dd74758fe8b42a321dce671b15940f636b1bd04197fc8b23b7e81f1b8354c2.jpg)
369
+
370
+ What geographic feature is depicted here?
371
+
372
+ A. Sandy beach
373
+ B. Mangrove forest
374
+ C. Coral reef
375
+ D. Rugged coastline (correct)
376
+
377
+ # Specific Terms $\rightarrow$ Brand
378
+
379
+ ![](images/5ad511f23a54d0c782d08d7a485107d6a43507e1e2f7793d039ae77ebd5ff172.jpg)
380
+
381
+ Which brand is famous for this item?
382
+
383
+ A. Shell
384
+ B. Sony
385
+ C. Nike (correct)
386
+ D. Amazon
387
+
388
+ # Specific Terms $\rightarrow$ Food
389
+
390
+ ![](images/19eb86cb43e532622230472bb13ecc5d53ae545284d7c45ce8b8b4fabedbaa7b.jpg)
391
+ Figure 9: Some finer cases for the 'Bag-of-Words ->Composition Identification' and 'Outside Knowledge ->Specific Terms' subcategories.
392
+
393
+ What is the main cooking technique used for the meat in this dish?
394
+
395
+ A. Poaching
396
+ B. Frying
397
+ C. Grilling (correct)
398
+ D. Roasting
399
+
400
+ # Specific Terms -> Style
401
+
402
+ ![](images/a57d9ecee5e3f3c5733d2b2f3d9cfa57f7daff989087cc03ce9e698ccfee1f7a.jpg)
403
+
404
+ What painting style is represented in the image?
+
407
+ A. Baroque
408
+ B. Rococo (correct)
409
+ C. Neoclassicism
410
+ D. Art Nouveau
411
+
412
+ # A.2 FINER EXPLANATION FOR SOME SUBCATEGORIES
413
+
414
+ For certain subcategories that require additional clarification for better understanding, we provide detailed explanations here (the corresponding cases are shown in Fig. 9):
415
+
416
+ 1. Bag-of-Words ->Nouns as Adjectives Awareness. The 'Nouns as Adjectives' subcategory illustrates the use of nouns as adjectives to modify objects in AIGIs. Essentially, we aim for the descriptive effect; the nouns themselves should not be visually represented in the AIGIs. For instance, as shown in Fig. 4 (row 2, column 2), when we describe a submarine as 'shark-sleek,' we do not intend to generate an image of an actual shark. This subcategory is designed to test whether LMMs can correctly identify such misunderstandings.
417
+ 2. Bag-of-Words ->Composition Identification. We categorize composition into four distinct types: 1) Orientation, which assesses the ability to correctly determine the relative spatial positions of objects; 2) Occlusion, which involves evaluating the accuracy in discerning the overlapping relationships between objects; 3) Size Comparison, which tests the ability to accurately judge the size relationships among objects; and 4) Spatial Arrangement, which examines the ability to accurately assess the arrangement of objects within the AIGI.
418
+ 3. Outside Knowledge ->Specific Terms. This subcategory covers many aspects, including geography, sports, science, materials, food, everyday life, creatures, brands, and styles. It primarily investigates whether LMMs can infer and deduce specific knowledge within these fields based on the content of AIGIs, such as identifying a location's features from geographical attributes, deducing the brand from the characteristics of a product, or recognizing the cooking technique used for a dish.
419
+
420
+ Figure 10: Illustration depicting the annotation interface, where experts are presented with the sub-category and are able to record their questions and answers.
421
+ ![](images/4402c77256853b7ee57cdfe514687e39beb892086ab4b5167673e3f0629dc58a.jpg)
422
+ (Interface text from Fig. 10.) Guidelines: click 'Save and Next' to proceed to the next item, 'Previous' to review the last item, 'Save' to save the results, and 'Exit' to close the application. Example item shown: Subcategory 'Part2 -> Outside Knowledge -> Contradiction Overcome', question 'What is in the ice cube?', with answer candidates Water, Ice, Fire, and Frost.
442
+
443
+ # A.3 HUMAN EXPERT ANNOTATION
444
+
445
+ A total of fifteen experts, each possessing professional skills and extensive experience in photography and AIGIs, participate in the subjective labeling experiment of A-Bench. All experts are informed that their annotation data will be publicly released, and they all agree to this arrangement. The hourly wage for each expert is approximately 12 US dollars, resulting in a total expense of about 2,400 US dollars for the whole subjective experiment.
446
+
447
+ The experiment takes place in a laboratory environment with standard indoor lighting. A Dell 4K monitor with a resolution of $3840 \times 2160$ is used to display the interfaces; screenshots are shown in Fig. 10. Each expert annotates up to 30 AIGIs per day to avoid fatigue, and every annotation is carefully reviewed by at least three other experts before acceptance. This approach ensures the accuracy and rigor of the A-Bench labels, making the benchmark's performance measurements precise and meaningful.
448
+
449
+ # A.4 GPT EVALUATION FOR CHOICE JUDGMENT
450
+
451
+ For some LMMs, the response to a question may vary in format. For example, given the correct answer C. 'Blurry' to the question 'What is the most severe technical distortion of this image?', LMMs may respond in different formats: 'The image is blurry', 'There is blur in this image', or 'low clarity'. To address the impact of such variations on our evaluation, we implement a 5-round voting strategy (Wu et al., 2023a). Under this strategy, we pose the same prompt, as defined in the templates, five times and determine the final outcome based on the majority of GPT's responses.
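+ A minimal sketch of this voting procedure is shown below; `grade_once` is a placeholder for a single GPT grading call that returns 0 or 1 (e.g., by sending the template prompt and parsing the 'Result:' line), not a specific API.
+
+ ```python
+ from collections import Counter
+
+ def majority_vote_grade(grade_once, question, options, lmm_answer, rounds=5):
+     """Query a 0/1 grader `rounds` times and return the majority verdict.
+
+     `grade_once(question, options, lmm_answer)` is a placeholder callable that
+     wraps one GPT grading request, where options[0] is the correct answer.
+     """
+     votes = [grade_once(question, options, lmm_answer) for _ in range(rounds)]
+     return Counter(votes).most_common(1)[0][0]
+ ```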
452
+
453
+ # GPT Evaluation Prompt Template
454
+
455
+ System: You are a helpful assistant that grades answers related to image perception. There are a lot of special terms or keywords related to image processing and photography.
456
+
457
+ #User: Assuming you are a grader, you will now be provided with a question [question] and a set of options [options] with option [options[0]] being the correct answer. Additionally, there will be an answer [answer] provided by a respondent. Please determine whether the respondents answer is correct considering the context of the question. Even if the word choice is not completely the same, you can decide based on the given options and see whether the one in the answer is close enough to the given correct answer. The result is 1 if the answer is correct and else the result is 0. Please only provide the result in the following format: Result:
458
+
459
+ # Example for GPT Evaluation
460
+
461
+ Question: Which is the most blurry part of this image?
462
+
463
+ Choices: [‘The house on the left’, ‘The person in the middle’, ‘The background’, ‘The tree on the left’]
464
+
465
+ ![](images/12edd404983aeb24e6b43894d4909c603b57f921926b37be3fb02db8b2956777.jpg)
466
+ Figure 11: Illustration of the interface for the user-study.
467
+
468
+ # LMM Answer:
469
+
470
+ The most blurry part in this image is the house to the left of the person.
471
+
472
+ # 5-Round GPT Answers:
473
+
474
+ [“Score: 1”, “Score: 1”, “Score: 1”, “Score: 1”]
475
+
476
+ $\rightarrow$ Final Correctness after Voting: $\checkmark$
477
+
478
+ # A.5 USER-STUDY ON A-BENCH
479
+
480
+ To provide a human performance reference on A-Bench, we employ five ordinary (non-expert) participants in a controlled laboratory setting. Participants first familiarize themselves with the tasks through exposure to similar cases, and then select answers to the questions posed in A-Bench. The user-study interface is shown in Fig. 11.
481
+
482
+ # A.6 LMM EXPERIMENT DETAILS
483
+
484
+ The LMMs undergo testing in a zero-shot setting. Proprietary LMMs are evaluated via official APIs, whereas the open-source LMMs (with the exceptions of LLaVA-NeXT Qwen-72B and LLaVA-NeXT Qwen-110B) run on an NVIDIA RTX 6000 Ada with 48 GB of memory. The LLaVA-NeXT Qwen-72B and LLaVA-NeXT Qwen-110B are operated on 4 NVIDIA H100 with 320 GB of memory. All LMMs operate with default parameters, ensuring that the A-Bench results are readily reproducible.
485
+
486
+ Table 3: Benchmark results on the question types. The best performance is marked in bold and the second performance is underlined for both proprietary and open-source LMMs respectively.
487
+
488
+ <table><tr><td>Categories</td><td colspan="2">A-BenchP1</td><td colspan="2">A-BenchP2</td><td colspan="2">Overall</td></tr><tr><td>LMM (LLM)</td><td>Yes-or-no↑</td><td>What↑</td><td>Yes-or-no↑</td><td>What↑</td><td>Yes-or-no↑</td><td>What↑</td></tr><tr><td>HUMAN (WORST)</td><td>91.21%</td><td>92.77%</td><td>89.45%</td><td>91.02%</td><td>91.23%</td><td>91.88%</td></tr><tr><td>HUMAN (BEST)</td><td>93.55%</td><td>94.25%</td><td>91.80%</td><td>92.64%</td><td>92.77%</td><td>93.39%</td></tr><tr><td colspan="7">Proprietary LMMs:</td></tr><tr><td>GEMINI 1.5 PRO</td><td>81.96%</td><td>86.91%</td><td>74.08%</td><td>65.57%</td><td>76.50%</td><td>76.82%</td></tr><tr><td>GPT-4v</td><td>82.37%</td><td>85.86%</td><td>71.11%</td><td>60.09%</td><td>75.51%</td><td>73.23%</td></tr><tr><td>GPT-4o</td><td>84.39%</td><td>85.76%</td><td>69.76%</td><td>65.15%</td><td>76.28%</td><td>75.81%</td></tr><tr><td>Qwen-VL-MAX</td><td>86.70%</td><td>84.02%</td><td>68.13%</td><td>64.60%</td><td>75.79%</td><td>74.91%</td></tr><tr><td colspan="7">Open-source LMMs:</td></tr><tr><td>CogVLM2-19B (Llama3-8B)</td><td>81.77%</td><td>83.26%</td><td>63.70%</td><td>58.65%</td><td>70.55%</td><td>71.61%</td></tr><tr><td>IDEFICS-2 (Mistral-7B-Instruct-v0.2)</td><td>78.32%</td><td>83.84%</td><td>63.87%</td><td>55.63%</td><td>68.91%</td><td>69.96%</td></tr><tr><td>DeepSeek-VL-7B</td><td>80.72%</td><td>82.00%</td><td>60.00%</td><td>47.15%</td><td>66.88%</td><td>66.48%</td></tr><tr><td>InternLM-XComposer2-VL (InternLM2)</td><td>82.08%</td><td>81.53%</td><td>66.49%</td><td>53.06%</td><td>70.90%</td><td>69.83%</td></tr><tr><td>LLaVA-NeXT (Llama3-8B)</td><td>81.17%</td><td>84.11%</td><td>52.10%</td><td>53.77%</td><td>63.89%</td><td>68.82%</td></tr><tr><td>LLaVA-NeXT (Qwen-72B)</td><td>83.22%</td><td>84.31%</td><td>57.91%</td><td>60.01%</td><td>70.22%</td><td>71.55%</td></tr><tr><td>LLaVA-NeXT (Qwen-110B)</td><td>82.99%</td><td>83.91%</td><td>59.78%</td><td>62.87%</td><td>71.76%</td><td>73.05%</td></tr><tr><td>mPLUG-Owl2 (LLaMA-7B)</td><td>74.92%</td><td>78.00%</td><td>56.97%</td><td>54.36%</td><td>64.38%</td><td>67.81%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-7B)</td><td>78.27%</td><td>82.74%</td><td>46.39%</td><td>47.97%</td><td>58.85%</td><td>66.21%</td></tr><tr><td>LLaVA-v1.5 (Vicuna-v1.5-13B)</td><td>79.51%</td><td>81.47%</td><td>47.23%</td><td>43.90%</td><td>61.41%</td><td>63.61%</td></tr><tr><td>CogVLM-17B (Vicuna-v1.5-7B)</td><td>76.77%</td><td>80.11%</td><td>55.13%</td><td>49.71%</td><td>64.33%</td><td>65.65%</td></tr><tr><td>Qwen-VL (Qwen-7B)</td><td>72.77%</td><td>80.95%</td><td>46.22%</td><td>44.02%</td><td>56.60%</td><td>63.39%</td></tr><tr><td>BakLLava (Mistral-7B)</td><td>71.01%</td><td>78.77%</td><td>42.11%</td><td>44.11%</td><td>55.61%</td><td>60.03%</td></tr><tr><td>Fuyu-8B (Persimmon-8B)</td><td>61.56%</td><td>64.22%</td><td>38.76%</td><td>41.66%</td><td>50.06%</td><td>52.31%</td></tr></table>
489
+
490
+ # A.7 QUESTION TYPE PERFORMANCE
491
+
492
+ We assess the performance disparity between Yes-or-no and What questions among LMMs. Yes-or-no questions gauge the fundamental judgment capabilities of LMMs, whereas What questions demand a more comprehensive understanding. According to the results in Table 3, most LMMs perform better on What questions within A-Bench$^{P1}$, suggesting proficiency in processing semantic content. Conversely, on A-Bench$^{P2}$, where LMMs generally perform worse, they exhibit limited in-depth perception: they retain basic evaluative capabilities but lack comprehensive understanding, leading to poorer performance on What questions. Interestingly, human performance consistently excels on What questions across both A-Bench$^{P1}$ and A-Bench$^{P2}$, likely because the broader range of options facilitates easier inference. Human performance is also more balanced than that of LMMs, some of which exhibit significant variance; for example, IDEFICS-2 shows over a $5\%$ accuracy difference between question types, indicating less robustness.
493
+
494
+ # A.8 RESPONSE VARIANCE FOR LMMS
495
+
496
+ Because the accuracy and stability of the benchmark directly affect the quality of the evaluation, we conduct a response variance experiment. First, we use a consistent prompt instruction format to minimize any misunderstanding by LMMs and to standardize the output. Additionally, we set the model's temperature to 0, so the LMM's output is no longer affected by randomness. As a result, the model gives the same response to the same question each time, eliminating variance.
497
+
498
+ It is also worth noting that increasing the model's temperature to encourage more diverse and exploratory answers is an interesting consideration. To further address the concern about the statistical significance of the experiment, we repeat the A-Bench experiment for 5 rounds with different temperature settings across several popular 7B-8B LMMs. The performance is listed in Table 4 below, with results reported as mean accuracy $\pm$ standard error.
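+ For reference, the mean $\pm$ standard error entries in Table 4 can be computed from per-round accuracies as in the short sketch below; the accuracy values are made up purely for illustration.
+
+ ```python
+ import statistics
+
+ def mean_and_standard_error(accuracies):
+     """Summarize repeated-run accuracies as (mean, standard error of the mean)."""
+     mean = statistics.mean(accuracies)
+     # Standard error of the mean: sample standard deviation / sqrt(n).
+     se = statistics.stdev(accuracies) / len(accuracies) ** 0.5
+     return mean, se
+
+ # Example with illustrative accuracies from 5 repeated rounds at one temperature:
+ runs = [65.2, 64.1, 66.8, 63.9, 65.5]
+ m, se = mean_and_standard_error(runs)
+ print(f"{m:.2f} ± {se:.2f}")
+ ```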
499
+
500
+ Based on the results, we observe that when the temperature is set to zero, the accuracy results for all LMMs remain identical across all 5 rounds.
501
+
502
+ <table><tr><td>Temperature</td><td>DeepSeek-VL-7B</td><td>LLaVA-NeXT-8B</td><td>LLaVA-v1.5-7B</td><td>Qwen-VL-7B</td></tr><tr><td>0.0</td><td>66.58±0.00</td><td>67.75±0.00</td><td>62.97±0.00</td><td>60.41±0.00</td></tr><tr><td>0.5</td><td>65.11±1.72</td><td>66.43±2.09</td><td>60.61±2.23</td><td>58.17±1.89</td></tr><tr><td>1.0</td><td>62.04±4.51</td><td>63.77±3.86</td><td>59.22±4.01</td><td>55.22±6.04</td></tr></table>
503
+
504
+ Table 4: Performance comparison at different temperatures for various LMM models.
505
+
506
+ As the temperature increases, the average performance declines and the results become more unstable, with higher standard errors. Therefore, to ensure reproducibility and performance stability, we prefer the zero-temperature setting, as it more accurately and reliably reflects the performance of LMMs, making it more suitable for practical applications.
507
+
508
+ # A.9 DATA STATEMENT
509
+
510
+ The A-Bench dataset is released under the CC BY 4.0 license. This includes all associated AIGIs, questions, and answer candidates. However, to prevent incorporation into the training sets of any LMMs, the correct answers remain confidential. We believe this precaution will ensure that A-Bench retains its long-term value as a benchmark for assessing AIGI evaluation capabilities.
511
+
512
+ # A.10 FURTHER LIMITATIONS AND SOCIAL IMPACT
513
+
514
+ Limitations While A-Bench uses a diverse set of generative models and LMMs for evaluation, the choice and number of models might still limit the generalizability of the results. The performance of untested models or newer generative approaches might differ significantly. The rapid advancement in AI and generative models may quickly outpace the current setup of A-Bench, necessitating frequent updates or redesigns of the benchmark to stay relevant.
515
+
516
+ Social Impact By improving the evaluation metrics for AIGIs, A-Bench could lead to more reliable and trustworthy AI-generated content, which is crucial as these technologies increasingly intersect with areas like media, entertainment, and education. Moreover, improved benchmarks and evaluation methods can drive industry standards, potentially lowering the barrier to entry for smaller developers and promoting innovation through clearer performance targets.
abencharelmmsmastersatevaluatingaigeneratedimages/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6732d7041734934d82252f50a8184bc2e3b0a418ce656dff1efe7627b62c2499
3
+ size 1519918
abencharelmmsmastersatevaluatingaigeneratedimages/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7aaa503a5b13d7d38a49cbab277f55cb519a6ddd5e5acb54958461546511f823
3
+ size 640604
acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7251aaa921db932c450af2771ae4898982cba8085d12489b9ea189812a4315b3
3
+ size 130478
acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ca7737f6bcefb5eed35fd72cb24be557479cee3577f3d18d936c3ea864786de
3
+ size 161478
acccollabanactorcriticapproachtomultiagentllmcollaboration/9e3ba617-0b5d-4804-8487-12e426f6826e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:64acfd92b072a792980363bf9a2e24cf3ade12ce9c7c69395f5d42ab5bf7da26
3
+ size 7129892
acccollabanactorcriticapproachtomultiagentllmcollaboration/full.md ADDED
@@ -0,0 +1,560 @@
 
 
 
 
 
 
 
 
 
 
1
+ # ACC-COLLAB: AN ACTOR-CRITIC APPROACH TO MULTI-AGENT LLM COLLABORATION
2
+
3
+ Andrew Estornell *
4
+
5
+ ByteDance Research
6
+
7
+ andrew.estornell@bytedance.com
8
+
9
+ Jean-François Ton *
10
+
11
+ ByteDance Research
12
+
13
+ jeanfrancois@bytedance.com
14
+
15
+ Yuanshun Yao
16
+
17
+ Meta GenAI
18
+
19
+ kevinyao@meta.com
20
+
21
+ Yang Liu
22
+
23
+ University of California, Santa Cruz
24
+
25
+ yangliu@ucsc.edu
26
+
27
+ # ABSTRACT
28
+
29
+ Large language models (LLMs) have demonstrated a remarkable ability to serve as general-purpose tools for various language-based tasks. Recent works have demonstrated that the efficacy of such models can be improved through iterative dialog between multiple models. While these paradigms show promise in improving model efficacy, most works in this area treat collaboration as an emergent behavior, rather than a learned behavior. In doing so, current multi-agent frameworks rely on collaborative behaviors to have been sufficiently trained into off-the-shelf models. To address this limitation, we propose ACC-Collab, an Actor-Critic based learning framework to produce a two-agent team (an actor-agent and a critic-agent) specialized in collaboration. We demonstrate that ACC-Collab outperforms SotA multi-agent techniques on a wide array of benchmarks.
30
+
31
+ # 1 INTRODUCTION
32
+
33
+ Recently, large language models (LLMs) have rapidly become a cornerstone in various applications, redefining how we process and generate language at scale (Thirunavukarasu et al., 2023; Hadi et al., 2023; Jiang et al., 2024). Their ability to handle diverse tasks, from translation (Zhu et al., 2024; Otter et al., 2020) to answering complex questions (Zhang et al., 2024; Hao et al., 2024; Havrilla et al., 2024), has attracted the attention of both industry as well as academia. However, despite these advancements, LLMs still exhibit notable weaknesses, particularly when it comes to answering factual questions and reasoning (Tonmoy et al., 2024; Rawte et al., 2023; Huang et al., 2023).
34
+
35
+ To address these limitations, several techniques have been proposed, such as Chain-of-Thought (CoT) prompting (Wei et al., 2022), Self-Reflection (Ji et al., 2023; Shinn et al., 2023), and Multi-Agent Debate (MAD) (Du et al., 2023), to name a few. These approaches aim to improve the reasoning abilities of LLMs by guiding them toward more accurate answers through structured thinking or discourse. However, the majority of these techniques do not involve training the model specifically for these tasks but instead rely on zero-shot or few-shot capabilities.
36
+
37
+ Similar to most multi-agent paradigms, MAD approaches make use of off-the-shelf general-purpose LLMs, which are not trained to collaborate. Such approaches rely on collaboration as an emergent, rather than a learned, behavior. While, in some cases, these emergent behaviors are sufficient, the question remains: Can these methods be improved by imbuing models directly with collaborative abilities? To answer this, we propose training teams of LLMs to solve tasks collaboratively.
38
+
39
+ A particularly relevant work is DebateGPT (Subramaniam et al., 2024), which employs debate as a mechanism to generate higher-quality fine-tuning data. Unlike our approach, which optimizes LLMs for multi-round collaborative problem-solving, their method focuses on using debate to enhance training data for a single model that produces individual responses.
40
+
41
+ In this paper, we propose a novel framework Actor-Critic Collaboration (ACC-Collab) which jointly trains a two-agent team to collaboratively solve problems through iterative conversation; this team consists of an actor-agent, responsible for providing answers for a given task, and a critic-agent, responsible for assisting the actor-agent with feedback on its answers. In our training pipeline, we introduce a novel off-policy learning scheme called "Guided-Collaboration" to generate high-quality multi-turn training data to enhance the actor's and critic's performance on challenging tasks.
42
+
43
+ To summarize, our contributions are as follows:
44
+
45
+ - We are the first to propose a framework for jointly training a team of LLM agents (Actor-Critic) within the context of collaborative problem solving.
46
+ - We introduce a novel data generation scheme, "Guided Collaboration Trajectories", which enables the efficient creation of high-quality training data for both the actor and critic roles.
47
+ - Our extensive experiments demonstrate that our method, ACC-Collab, significantly outperforms existing state-of-the-art approaches.
48
+
49
+ # 2 RELATED WORK
50
+
51
+ Our research is closely related to the emerging field of multi-agent deliberation, sometimes called Multi-Agent Debate (MAD), which examines how to use groups of models to solve tasks through iterative discussion Chan et al. (2023); Liang et al. (2023); Du et al. (2023); Li et al. (2023c); Khan et al. (2024); Michael et al. (2023); Rasal (2024); Pham et al. (2023); Abdelnabi et al. (2023); Hong et al. (2023); Irving et al. (2018); Li et al. (2023b;d; 2024a); Wang et al. (2023a); Zhang et al. (2023). Many of these works find that language models have naturally collaborative abilities Singhal et al. (2023); Du et al. (2023); Chan et al. (2023), while others have noted that the collaborative ability of off-the-shelf models can be quite limited Wang et al. (2024); Smit et al..
52
+
53
+ Current approaches to multi-agent deliberation can be broadly cast into two main categories: those that modify model prompts and responses during the discussion Liang et al. (2023); Khan et al. (2024); Rasal (2024); Feng et al. (2024); Yang et al. (2024), and those that modify the structure of the deliberation process Li et al. (2023a); Hong et al. (2023); Liu et al. (2023); Li et al. (2024c); Wang et al. (2023b); Wu et al. (2023); Chen et al. (2023); Chang (2024b). Importantly, both categories use off-the-shelf language models (which have not been trained to collaborate) and work by modifying either the inputs or outputs of these models. Deviating from this line of work, we aim to specifically train a team of models to collaboratively solve tasks.
54
+
55
+ Two works of particular note are that of Subramaniam et al. (2024), which proposes to use debate data to fine-tune models, and Li et al. (2024b), which trains models for adversarial debate. In the former, debate is used to generate higher-quality fine-tuning data and is not used at inference time; differing from this work, we train models directly to collaborate and use multi-agent discussion both during training and inference. In the latter, models are trained to be effective arguers rather than collaborators, i.e., models are trained to give convincing arguments so that they can win a debate against other LLMs. Differing from this work, we train models to collaboratively solve tasks.
56
+
57
+ In the context of multi-agent deliberation, the concept of divergent opinions is highly relevant to our method. Several approaches to multi-agent deliberation aim to control the level of disagreement among the agents Liang et al. (2023); Khan et al. (2024); Chang (2024a). Often, these works dynamically increase disagreement to prevent early convergence of deliberation. In our study, we leverage divergent opinions to generate high-quality training data. In particular, we have agents change their opinion during the discussion and measure whether or not that change increases or decreases the likelihood that the agents' discussion converges to a correct answer. Using this signal we can then assess the value of a given training example for training the models.
58
+
59
+ Also closely related to our work are paradigms that aim to use self-generated data to improve model performance, often in the context of reasoning or chain of thought Trung et al. (2024); Huang et al. (2024); Xiong et al. (2024); Chen et al. (2024); Pang et al. (2024b). Similar to this line of research, we make use of model generations as training data. However, we are the first work to use such data in the context of multiple models debating collaboratively to solve a given task.
60
+
61
+ # 3 PRELIMINARIES AND NOTATION
62
+
63
+ In this section, we formalize multi-agent collaboration between an Actor (an agent that provides answers) and a Critic (an agent that provides feedback to the actor) while also introducing the notation that will be used throughout the remainder of the paper.
64
+
65
+ Let $(x,y)\sim \mathcal{D}$ be a task-answer pair sampled from a distribution of tasks and answers $\mathcal{D}$ . For a given task $x$ , two agents - an actor agent responsible for providing answers and a critic agent responsible for providing feedback and assistance to the actor agent - engage in an iterative discussion over $T$ rounds to correctly infer the answer $y$ . Let $\theta_{a}$ and $\theta_{c}$ be the parameters of the actor and critic agents, respectively. The iterative discussion between these two agents proceeds as follows:
66
+
67
+ 1. At round $t = 0$ a task $x$ is given to the actor $\theta_{a}$ who provides an initial response $z_{a}^{(0)}$ .
68
+ 2. Next, still at round $t = 0$ , the critic $\theta_c$ views task $x$ and $z_a^{(0)}$ , then provides feedback $z_c^{(0)}$ .
69
+ 3. For each round $t > 0$ , the actor views the task $x$ , its own previous response $z_{a}^{(t - 1)}$ and the critic's feedback $z_{c}^{(t - 1)}$ , then provides an updated response $z_{a}^{(t)}$ .
70
+ 4. After the actor's new response $z_{a}^{(t)}$ , the critic provides the feedback $z_{c}^{(t)}$ based on $z_{a}^{(t)}$ .
71
+
72
+ The accuracy of this procedure is measured via the correctness of the actor's final response, i.e., $\mathbb{I}\big[\zeta (z_a^{(T)}) = y\big]$ , where $\zeta$ is a function that extracts answers from text-based responses. For example, if $z_{a}^{(T)} =$ "The sky is blue", then $\zeta (z_a^{(T)}) =$ "blue". With this notation and formalization of multi-agent collaboration, we introduce our framework for training actor-critic teams.
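+ To make the round structure concrete, the following is a minimal sketch of the $T$ -round loop described above; `actor` and `critic` are placeholder callables wrapping the two LLMs, not a specific implementation.
+
+ ```python
+ def deliberate(actor, critic, x, T):
+     """Run rounds 0..T of actor-critic discussion on task x and return z_a^(T).
+
+     `actor(x, prev_answer, prev_feedback)` and `critic(x, answer)` are
+     placeholders that return text responses.
+     """
+     z_a = actor(x, None, None)    # round 0: initial answer z_a^(0)
+     z_c = critic(x, z_a)          # round 0: initial feedback z_c^(0)
+     for t in range(1, T + 1):
+         z_a = actor(x, z_a, z_c)  # revise answer given previous answer and feedback
+         z_c = critic(x, z_a)      # critique the revised answer
+     return z_a                    # final response z_a^(T)
+ ```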
73
+
74
+ ![](images/f2d73ee21bf1c51501011345b613d0fcfad5cc662708de77e6252c6ab3aa82ab.jpg)
75
+ Figure 1: ACC-Collab training pipeline, exemplified for the actor. 1) We generate data from both natural deliberation as well as guided deliberation towards and away from the ground truth answer $y$ using the actor and critic. 2) We compute the relative quality of each trajectory based on the expected quality difference $\Delta_y$ , $\Delta_{!y}$ w.r.t. to the natural response. 3) We store all high-quality pairwise data in our database and train the actor agent. 4) We alternate this procedure for the actor and critic. See Figure 5 of the supplement for the corresponding procedure applied to the critic.
76
+
77
+ # 4 METHODOLOGY
78
+
79
+ In this section, we outline our procedure for training a two-agent team, consisting of an actor agent $f_{\theta_a}$ (responsible for providing answers to a given task $x$ ) and a critic agent $f_{\theta_c}$ (responsible for providing feedback and assistance to the actor). At inference time, the two trained agents engage in iterative discussion to solve a given task $x$ , generating the final response $z_a^{(T)}$ .
80
+
81
+ # 4.1 AN ACTOR-CRITIC COLLABORATION FRAMEWORK
82
+
83
+ Building upon our established notation from the previous section and the general actor-critic framework, we formally define our optimization objective as follows<sup>1</sup>:
84
+
85
+ $$
86
+ \theta_{a}^{*}, \theta_{c}^{*} = \arg \max_{\theta_{a}} \max_{\theta_{c}} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \zeta \left(\underbrace{f_{\theta_{a}}\left(x, z_{a}^{(T-1)}, z_{c}^{(T-1)}\right)}_{\text{actor's final response } z_{a}^{(T)}}\right) = y \right] \tag{1}
87
+ $$
88
+
89
+ Intuitively, Eq. 1 aims to simultaneously optimize the actor's parameters $\theta_{a}$ and the critic's parameters $\theta_{c}$ , ensuring that the actor's final output at iteration $T$ matches the correct answer $y$ . In other words, we optimize the accuracy of the actor's response at time $T$ , namely
90
+
91
+ $$
92
+ z _ {a} ^ {(T)} = f _ {\theta_ {a}} \left(x, z _ {a} ^ {(T - 1)}, z _ {c} ^ {(T - 1)}\right),
93
+ $$
94
+
95
+ where accuracy is measured as $\mathbb{E}\big[\zeta \big(z_a^{(T)}\big) = y\big]$ .
96
+
97
+ It is important to note that the recursive nature of multi-agent deliberation introduces significant complexity to the optimization process. Each response $z_{a}^{(t)}$ depends not only on the actor's previous output $z_{a}^{(t-1)}$ but also on the critic's previous output $z_{c}^{(t-1)}$ . This interaction closely resembles a cooperative dynamic Stackelberg game (Li & Sethi, 2017), where two players engage in hierarchical decision-making over time, leading us to adopt an iterative best-response approach (Fiez et al., 2019). In other words, we first train the critic agent, followed by training the actor to best respond to the critic's output. We can then update the critic to adapt to the newly trained actor, and so on. More formally, this process works by first fixing $\theta_{a}$ , and solving,
98
+
99
+ $$
100
+ \theta_{c}^{*} = \arg \max_{\theta_{c}} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \zeta \left(f_{\theta_{a}}\left(x, z_{a}^{(T-1)}, \underbrace{f_{\theta_{c}}\left(x, z_{a}^{(T-1)}\right)}_{\text{critic's feedback}}\right)\right) = y \right] \tag{2}
101
+ $$
102
+
103
+ then fixing $\theta_c^*$ from above, we solve
104
+
105
+ $$
106
+ \theta_ {a} ^ {*} = \arg \max _ {\theta_ {a}} \mathbb {E} _ {(x, y) \sim D} \left[ \zeta \left(f _ {\theta_ {a}} \left(x, z _ {a} ^ {(T - 1)}, f _ {\theta_ {c} ^ {*}} \left(x, z _ {a} ^ {(T - 1)}\right)\right)\right) = y \right] \tag {3}
107
+ $$
108
+
109
+ This process then repeats until a desired stopping criterion is reached. In practice, we find that a single iteration is sufficient to produce a high-quality collaborative team.
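+ A minimal sketch of this alternating best-response scheme is given below; `train_critic` and `train_actor` are placeholders for the preference-optimization steps of Eq. 2 and Eq. 3 described in the following sections.
+
+ ```python
+ def train_team(train_actor, train_critic, theta_a, theta_c, num_iters=1):
+     """Iterative best-response training of the actor-critic team (sketch).
+
+     `train_critic(theta_a, theta_c)` returns updated critic parameters with the
+     actor held fixed (Eq. 2); `train_actor(theta_a, theta_c)` then updates the
+     actor against the new critic (Eq. 3). One iteration suffices in practice.
+     """
+     for _ in range(num_iters):
+         theta_c = train_critic(theta_a, theta_c)
+         theta_a = train_actor(theta_a, theta_c)
+     return theta_a, theta_c
+ ```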
110
+
111
+ While this alternating scheme allows us to optimize the actor and critic separately, the objectives of each agent still cannot be optimized directly due to the recursive nature of agent responses: responses at round $t$ depend on those given at round $t - 1$ , which themselves depend on the responses given at round $t - 2$ , and so on. To deal with this temporal dependency, we next introduce the concept of Partial Trajectory rewards, which allows us to capture the signal of each response $z^{(t)}$ for each $t \leq T$ .
112
+
113
+ # 4.2 PARTIAL TRAJECTORY REWARD
114
+
115
+ To address the inter-round dependencies of the above optimization, we propose a scheme that allows us to determine the "goodness" of a given response $z^{(t)}$ (from either the actor or the critic) for any $t \leq T$ . Consider a conversation between the actor and the critic that is paused at time $t$ , i.e., the most recent response is $z^{(t)}$ . To assess the goodness of $z^{(t)}$ , one might ask how likely it is that the deliberation procedure will converge to the correct answer $y$ at round $T$ , given that the procedure has already produced response $z^{(t)}$ . Formally, we can define this as
116
+
117
+ $$
118
+ r \left(z ^ {(t)}, x, y\right) = \mathbb {E} \left[ \zeta \left(z _ {a} ^ {(T)}\right) = y \mid x, z ^ {(t)} \right] \tag {4}
119
+ $$
120
+
121
+ Intuitively, the partial reward captures the expectation of arriving at the correct answer $y$ through deliberation starting at round $t$ with generation $z^{(t)}$ . In practice, $r(z^{(t)}, x, y)$ can be estimated by learning the reward $r$ or by using heuristics such as one-step roll-out, i.e., Monte Carlo estimation.
122
+
123
+ In our experiments, we use one-step roll-out heuristics, i.e. simulating an additional deliberation round multiple times from response $z^{(t)}$ . The reward $r(z^{(t)},x,y)$ is set as the average accuracy of these simulations. Empirically, we find this approach effective for generating high-quality training data. We leave learning-based reward functions for future work.
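+ The one-step roll-out estimate can be sketched as follows; `continue_deliberation` (simulating a continuation of the discussion) and `extract_answer` (playing the role of $\zeta$ ) are placeholder callables, and the number of simulations is an assumed hyperparameter.
+
+ ```python
+ def estimate_partial_reward(continue_deliberation, extract_answer, x, y, z_t, num_sims=8):
+     """Monte Carlo estimate of r(z^(t), x, y) via simulated roll-outs.
+
+     `continue_deliberation(x, z_t)` is a placeholder that resumes deliberation
+     from response z_t and returns the actor's final response.
+     """
+     hits = 0
+     for _ in range(num_sims):
+         final_response = continue_deliberation(x, z_t)
+         hits += int(extract_answer(final_response) == y)
+     return hits / num_sims
+ ```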
124
+
125
+ Our objective will then be to optimize the parameters of the actor and critic, $\theta_{a},\theta_{c}$ , so that the responses produced by these agents at each timestep $t$ , namely $z^{(t)}$ , maximize $r(z^{(t)},x,y)$ . That is, we optimize the actor and the critic so that at each timestep $t$ , they give a response $z^{(t)}$ which has a high probability of leading the deliberation to converge to the correct answer at time $T$ .
126
+
127
+ To optimize the objective in Eq. 1, we will utilize preference optimization, a standard technique in LLM training. Using the iterative maximization scheme described above, we first have to gather pairwise preference data for both the actor and the critic. In the following sections, we first detail our process for generating this preference data before delving into the optimization procedure.
128
+
129
+ Algorithm 1: Trajectory generation and selection
+ Data: Actor and critic $\theta_{a},\theta_{c}$ ; distribution of tasks $\mathcal{D}$ ; reward threshold $\varepsilon$
+ Result: A dataset of trajectory pairs $D$
+ $D\gets \emptyset$ /* set of trajectory pairs to use */
+ for $(x,y)\sim \mathcal{D}$ do
+ $\mathbf{z}^{(0)}\leftarrow$ OneDeliberationRound $(x)$ /* actor and critic responses, i.e. $\mathbf{z}^{(0)} = \langle z_a^{(0)},z_c^{(0)}\rangle$ */
+ for $t$ in $[1,T]$ do
+ $\mathbf{z}^{(t)}\leftarrow$ OneDeliberationRound $(x,\mathbf{z}^{(t-1)})$ /* updated natural responses */
+ /* guided deliberation towards, and away from, the correct answer $y$ */
+ $\mathbf{z}_{+}^{(t)}\leftarrow$ OneGuidedDeliberationRound $(x,\mathbf{z}^{(t-1)},y)$
+ $\mathbf{z}_{-}^{(t)}\leftarrow$ OneGuidedDeliberationRound $(x,\mathbf{z}^{(t-1)},!y)$
+ /* estimate final-round accuracy if deliberation continues from each response, i.e. $r(z^{(t)},x,y)$ , $r(z_{+}^{(t)},x,y)$ , $r(z_{-}^{(t)},x,y)$ */
+ $v\gets$ EstimateFinalAccuracy $(\mathbf{z}^{(t)})$ ; $v_{+}\gets$ EstimateFinalAccuracy $(\mathbf{z}_{+}^{(t)})$ ; $v_{-}\gets$ EstimateFinalAccuracy $(\mathbf{z}_{-}^{(t)})$
+ /* compute the expected improvement for each guided trajectory and save pairs that yield a sufficient accuracy improvement */
+ if $v_{+} - v\geq \varepsilon$ then $D$ .add(pos $= \mathbf{z}_+^{(t)}$ , neg $= \mathbf{z}^{(t)}$ )
+ else if $v - v_{-}\geq \varepsilon$ then $D$ .add(pos $= \mathbf{z}^{(t)}$ , neg $= \mathbf{z}_{-}^{(t)}$ )
+ end
+ end
144
+
145
+ # 4.3 OFF-POLICY TRAJECTORY GENERATION
146
+
147
+ In this section, we describe how to generate the preference data needed to optimize the objective in Eq. 1. The classification of a sample as positive or negative is determined by the deliberation trajectory it follows. Specifically, a positive sample for training the actor corresponds to a trajectory likely to lead to the correct answer at round $T$ , while a negative sample corresponds to one that leads to an incorrect answer at round $T$ . Intuitively, we aim to push the actor agent to generate responses that lead to correct answers while reducing responses that are unlikely to do so, thus optimizing for Eq. 3. The same principle applies to the critic when optimizing Eq. 2.
148
+
149
+ With this intuition in mind, we now describe how such data is generated. A deliberation trajectory can be defined as a sequence of responses $\langle z_a^{(0)},z_c^{(0)},z_a^{(1)},z_c^{(1)},\ldots ,z_a^{(T)},z_c^{(T)}\rangle$ for a given task $x$ . A straightforward way to generate preference data would be to generate multiple rollouts at each round and select the trajectories with the highest $r(z^{(t)},x,y)$ as positive samples and those with the lowest $r(z^{(t)},x,y)$ as negative samples. This approach could enforce the desired behavior for both the actor and the critic if enough samples are collected.
150
+
151
+ However, this approach is not without its limitations. In particular, if the agent performs poorly on a given dataset, it may be difficult to collect enough positive samples, resulting in low training signals. Additionally, even if the agent performs adequately, generating sufficient responses for both the actor and critic requires significant computational resources, especially to ensure that high $r(z^{(t)},x,y)$ values are used for positive samples and low values for negative samples.
152
+
153
+ # 4.4 GUIDED COLLABORATIVE TRAJECTORIES
154
+
155
+ To address these limitations and improve efficiency, we propose Guided-Collaborative Trajectories, which steer the deliberation procedure in two opposing directions: one towards, and another away from, the correct answer. By comparing these guided trajectories with the natural deliberation trajectory, we can assess the relative goodness of each trajectory using an estimate of the reward $r$ .
156
+
157
+ Specifically, for task $x$ with answer $y$ , let $\mathbf{z}^{(t - 1)} = (z_a^{(t - 1)}, z_c^{(t - 1)})$ be the agents' responses at time $t - 1$ . Let $(z_a^{(t)}, z_c^{(t)})$ be the agents' natural responses (i.e., without guidance), let $(z_{y,a}^{(t)}, z_{y,c}^{(t)})$ and $(z_{!y,a}^{(t)}, z_{!y,c}^{(t)})$ be the agents' responses when guided towards, and away from, supporting answer $y$ respectively. Thus, each guided response is an off-policy generation. In practice, we want guided responses to be different enough from natural responses so that learning the guided responses results in consequential changes to the agent, but not so different that they are challenging to learn; we find that prompt modification is an effective tool for striking this balance. To guide the generations of $(z_{y,a}^{(t)}, z_{y,c}^{(t)})$ and $(z_{!y,a}^{(t)}, z_{!y,c}^{(t)})$ , we will simply provide a correct and wrong target answer in the prompt, respectively - see "Guided Collaborative Trajectory Prompts" in Section C for further details.
158
+
159
+ For each guided response, we consider how influential this response was in altering the accuracy of the final response, i.e., in the case of the actor, we define
160
+
161
+ $$
162
+ \Delta_{y} = r(z_{y,a}^{(t)}, x, y) - r(z_{a}^{(t)}, x, y) \qquad \mathrm{and} \qquad \Delta_{!y} = r(z_{a}^{(t)}, x, y) - r(z_{!y,a}^{(t)}, x, y)
163
+ $$
164
+
165
+ The terms $\Delta_y$ and $\Delta_{!y}$ give the expected accuracy difference if at round $t$ the actor had given response $z_{y,a}^{(t)}$ (or $z_{!y,a}^{(t)}$ ) instead of response $z_a^{(t)}$ . Large $\Delta_y$ indicates that a one-response difference during the deliberation was sufficient to push the procedure toward the correct answer. Such responses would be desirable for the agent to learn. On the other hand, large values of $\Delta_{!y}$ indicate that a one-response difference easily causes the agents to converge to the incorrect answer; this indicates that the deliberation procedure is particularly fragile at timestep $t$ .
166
+
167
+ With these observations in hand, we use $\Delta_y$ and $\Delta_{!y}$ to define positive and negative examples; in particular, for a threshold $\varepsilon$ ,
168
+
169
+ $$
170
+ \left(z_{+}^{(t)}, z_{-}^{(t)}\right) = \left\{ \begin{array}{ll} \left(z_{y}^{(t)}, z^{(t)}\right) & \text{if } \varepsilon \leq \Delta_{y} = r\left(z_{y,a}^{(t)}, x, y\right) - r\left(z_{a}^{(t)}, x, y\right) \\ \left(z^{(t)}, z_{!y}^{(t)}\right) & \text{if } \varepsilon \leq \Delta_{!y} = r\left(z_{a}^{(t)}, x, y\right) - r\left(z_{!y,a}^{(t)}, x, y\right) \end{array} \right. \tag{5}
171
+ $$
172
+
173
+ If neither value is above the threshold, the example is discarded.
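+ The selection rule of Eq. 5 amounts to the following check, sketched here with placeholder variable names; the partial-reward estimates could come from the roll-out sketch above.
+
+ ```python
+ def select_preference_pair(r_nat, r_pos, r_neg, z_nat, z_pos, z_neg, eps):
+     """Apply the Eq. 5 rule to one round's natural and guided responses.
+
+     r_nat, r_pos, r_neg are partial-reward estimates for the natural response,
+     the response guided towards y, and the response guided away from y.
+     Returns a (positive, negative) pair, or None if the example is discarded.
+     """
+     if r_pos - r_nat >= eps:   # guidance towards y helped by at least eps
+         return z_pos, z_nat
+     if r_nat - r_neg >= eps:   # guidance away from y hurt by at least eps
+         return z_nat, z_neg
+     return None
+ ```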
174
+
175
+ Remark 1 Under Eq. 5, a positive example $z_{+}^{(t)}$ can be interpreted as a guided response $z_{y}^{(t)}$ which increased the probability of deliberation converging to the correct answer by at least $\varepsilon$ , when compared with the natural response $z^{(t)}$ . Similarly, a negative example $z_{-}^{(t)}$ is a guided response $z_{!y}^{(t)}$ which decreased the probability of deliberation converging to the correct answer by at least $\varepsilon$ .
176
+
177
+ Now that we have a procedure for generating high-quality training examples consisting of positive and negative pairs, we next discuss how to use those positive and negative pairs to train both agents.
178
+
179
+ # 4.5 LEARNING ON GUIDED TRAJECTORIES
180
+
181
+ In order to optimize each objective (Eq. 2 and 3), we use a standard preference optimization technique, Direct Preference Optimization (DPO) (Rafailov et al., 2024). We choose DPO for its efficiency, but any preference optimization scheme could be used (see Section B of the supplement for details on how other preference optimization schemes may be incorporated).
182
+
183
+ Hence, given a preference dataset of positive and negative examples for both the actor and critic agent, of the form $(z_{-}^{(t)}, z_{+}^{(t)})$ , the DPO loss is defined as,
184
+
185
+ $$
186
+ \mathcal {L} _ {\mathrm {D P O}} = \sum_ {t = 0} ^ {T} \mathbb {E} _ {(x, y, z _ {-} ^ {(t)}, z _ {+} ^ {(t)}) \sim D} \left[ \log \sigma \left(\frac {\pi_ {\theta} \left(z _ {+} ^ {(t)} \mid x , \mathbf {z} ^ {(t - 1)}\right)}{\pi_ {\operatorname {r e f}} \left(z _ {+} ^ {(t)} \mid x , \mathbf {z} ^ {(t - 1)}\right)} - \frac {\pi_ {\theta} \left(z _ {-} ^ {(t)} \mid x , \mathbf {z} ^ {(t - 1)}\right)}{\pi_ {\operatorname {r e f}} \left(z _ {-} ^ {(t)} \mid x , \mathbf {z} ^ {(t - 1)}\right)}\right) \right] \tag {6}
187
+ $$
188
+
189
+ Here, $\pi_{\theta}$ is the policy induced by parameters $\theta_{a}$ or $\theta_{c}$ , $\pi_{\mathrm{ref}}$ is the reference policy, and $\mathbf{z}^{(t - 1)}$ denotes the agents' responses at the previous round (i.e., the responses prior to giving either $z_{-}^{(t)}$ or $z_{+}^{(t)}$ ).
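+ For intuition, the per-pair loss can be sketched as below; this follows the standard DPO formulation of Rafailov et al. (2024) with an explicit temperature $\beta$ (not shown in Eq. 6), computed from sequence log-probabilities, and is an illustration rather than the exact training code.
+
+ ```python
+ import math
+
+ def dpo_pair_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
+     """Standard DPO loss for one (positive, negative) preference pair.
+
+     Inputs are log-probabilities of the positive/negative responses under the
+     trained policy and the frozen reference policy.
+     """
+     margin = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
+     # -log(sigmoid(margin)), written in a numerically simple form
+     return math.log1p(math.exp(-margin))
+
+ # Toy usage with made-up log-probabilities:
+ print(dpo_pair_loss(logp_pos=-12.0, logp_neg=-15.0, ref_logp_pos=-13.0, ref_logp_neg=-14.0))
+ ```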
190
+
191
+ Remark 2 Recall that, given our generated preference data, this loss implicitly optimizes the reward $r(z^{(t)}, x, y)$ , which itself is equivalent to final accuracy (i.e., the quantity being maximized by Eq. 1). By summing across all rounds, we implicitly maximize the probability that each round $t$ yields a response $z^{(t)}$ which causes deliberation to converge to the correct answer at time $T$ .
192
+
193
+ # 5 EXPERIMENTS
194
+
195
+ Benchmarks To evaluate the efficacy of ACC-Collab we make use of 5 standard benchmark tasks: BoolQ Clark et al. (2019), $\sim$ 12k yes-no reading comprehension questions; MMLU Hendrycks et al. (2020), $\sim$ 15k multiple-choice questions covering a wide array of subjects and difficulty levels; BBH Suzgun et al. (2022), $\sim$ 5k mixed-type questions; SCIQ Welbl et al. (2017), $\sim$ 13k multiple-choice science questions; and ARC Chollet (2019), $\sim$ 7k multiple-choice reasoning-based questions.
196
+
197
+ Baselines We compare ACC-Collab to several multi-agent and single-agent baselines. For inference-based methods, we compare to Society of Minds SoM Du et al. (2023), Persona Chan et al. (2023). For training-based methods, we compare to supervised fine-tuning SFT Radford (2018), DebateTune Li et al. (2024b), DebateGPT Subramaniam et al. (2024). We use three different base models: Llama-3-8B-Instruct Llama-3 Dubey et al. (2024), Mistral-7B-Instruct Mistral Jiang et al. (2023), and Gemma-2-2B-Instruct Gemma-2 Team et al. (2024).
198
+
199
+ # 5.1 ACC-COLLAB PERFORMANCE
200
+
201
+ Final Answer Accuracy We begin by examining the performance of our method ACC-Collab (a single round of training) and ACC-Collab+ (two rounds of training). In Table 1, we report the average accuracy of each method after five rounds of deliberation. Our method attains superior performance compared with baseline methods in most cases. The high efficacy of ACC-Collab relative to the baselines indicates that, in most cases, only a single round of training is necessary to produce a high-quality collaborative team. It is worth noting that in some cases, further training rounds may decrease performance (i.e., ACC-Collab+ can perform worse than ACC-Collab). Hence, we use a hold-out set of tasks to determine whether further training degrades performance.
202
+
203
+ Process Accuracy In addition to measuring the correctness of the actor's final answer (i.e., outcome accuracy), we also analyze the correctness of the team's reasoning and discussion steps (i.e., process accuracy). Since ground truth is not available for reasoning and discussion steps, we use GPT-4o as an oracle to evaluate whether the agents follow the correct steps to reach the final answer. We provide an outline of the experimental setup and full results in Section A.1 of the supplement. We find that our method improves or maintains the process accuracy of the actor and critic.
204
+
205
+ # 5.2 PERFORMANCE INCREASE OF MULTI-AGENT COLLABORATION
206
+
207
+ Average Improvement As noted in Du et al. (2023), the key mechanism behind the success of multi-agent deliberation (or any of its many variants) is that discussion over multiple rounds allows
208
+
209
+ <table><tr><td colspan="7">Llama-3</td><td>(Ours)</td><td>(Ours)</td></tr><tr><td>BoolQ MMLU BBH SCIQ ARC</td><td>SoM (2x) .812±.01 .62±.004 .508±.003 .925±.002 .874±.001</td><td>SoM (4x) .811±.007 .635±.004 .514±.005 .923±.002 .874±.001</td><td>Persona .781±.002 .639±.004 .509±.013 .925±.004 .87±.003</td><td>DebateTune .775±.033 .63±.004 .508±.005 .924±.004 .871±.002</td><td>SFT .798±.006 .642±.005 .552±.006 .925±.003 .879±.004</td><td>DebateGPT .815±.005 .654±.005 .551±.008 .932±.001 .876±.002</td><td>ACC-Collab .887±.005 .644±.01 .593±.006 .952±.0 .881±.004</td><td>ACC-Collab+ .894±.003 .683±.012 .574±.003 .948±.003 .869±.002</td></tr><tr><td colspan="7">Mistral</td><td>(Ours)</td><td>(Ours)</td></tr><tr><td>BoolQ MMLU BBH SCIQ ARC</td><td>SoM (2x) .801±.005 .57±.003 .428±.002 .856±.002 .824±.001</td><td>SoM (4x) .798±.004 .562±.005 .462±.003 .856±.002 .823±.002</td><td>Persona .831±.003 .574±.002 .465±.011 .86±.002 .827±.001</td><td>DebateTune .83±.003 .562±.005 .456±.005 .863±.003 .834±.0</td><td>SFT .84±.003 .594±.004 .439±.006 .858±.004 .825±.003</td><td>DebateGPT .848±.002 .577±.002 .48±.012 .871±.003 .822±.002</td><td>ACC-Collab .877±.002 .61±.005 .519±.009 .902±.005 .843±.003</td><td>ACC-Collab+ .893±.002 .672±.004 .601±.004 .905±.002 .856±.003</td></tr><tr><td colspan="7">Gemma-2</td><td>(Ours)</td><td>(Ours)</td></tr><tr><td>BoolQ MMLU BBH SCIQ ARC</td><td>SoM (2x) .75±.011 .58±.002 .454±.007 .903±.002 .841±.003</td><td>SoM (4x) .759±.004 .578±.002 .449±.01 .903±.002 .843±.005</td><td>Persona .716±.015 .577±.002 .447±.006 .908±.003 .847±.003</td><td>DebateTune .767±.003 .578±.001 .447±.007 .903±.001 .847±.003</td><td>SFT .783±.011 .579±.002 .498±.006 .913±.002 .848±.002</td><td>DebateGPT .812±.003 .582±.002 .491±.01 .914±.002 .851±.003</td><td>ACC-Collab .84±.005 .51±.016 .513±.006 .918±.003 .852±.003</td><td>ACC-Collab+ .845±.005 .555±.003 .475±.008 .909±.003 .849±.002</td></tr></table>
210
+
211
+ Table 1: Average accuracy (with $95\%$ confidence intervals) after 5 rounds of deliberation. For each dataset, the highest accuracy is shown in bold.
212
+
213
+ ![](images/b0a072ee71059009fc23e4ed0414058e77d366946e7ea90781fbab5e572b1ea9.jpg)
214
+ Figure 2: Percent improvement in accuracy after five rounds of deliberation, compared to a single round. Percent improvement (Eq. 7) for each method is averaged across all five datasets.
215
+
216
+ the models to iteratively refine their answers. Thus, a natural question for any iterative multi-agent method is: how much does accuracy improve from the initial round $t = 0$ to the final round $t = T$ (where $T = 4$ in our experiments)? To measure this, we look at the percent improvement in model accuracy from round $t = 0$ to round $t = 4$ , calculated as,
217
+
218
+ $$
219
+ \frac{\mathrm{acc}_{4} - \mathrm{acc}_{0}}{\mathrm{acc}_{0}} \quad \text{where } \mathrm{acc}_{t} \text{ is the accuracy at round } t \tag{7}
220
+ $$
221
+
222
+ In Figure 2 we see the average percent improvement for each method, averaged across all 5 datasets. For each of the three base models, ACC-Collab+ has the highest average improvement compared to all other methods. Additionally, the improvement gained by methods such as SoM, SFT, or DebateGPT is far less stable than that of ACC-Collab+. In particular, for Mistral, SoM yields nearly no improvement; similarly, SFT and DebateGPT offer little improvement when applied to Llama-3.
223
+
224
+ Per-Round Accuracy Next, we look more closely at the accuracy of each method across five rounds of deliberation. Figure 3 shows per-round accuracy on the BoolQ (top) and SCIQ (bottom) datasets. As already illustrated by Table 1, ACC-Collab and ACC-Collab+ achieve higher final-round accuracy than the other methods. Notably, our method has higher accuracy both at the final round $t = 4$ and at round $t = 0$ . Recall that at round $t = 0$ , the actor's response is independent of the critic. This indicates that, in some cases, our training pipeline produces actor agents with superior zero-shot accuracy (without deliberation) compared to models produced by SFT and DebateGPT.
225
+
226
+ Interestingly, we observe that in some cases, SFT and DebateGPT are not much better than simple deliberation with untrained models (i.e., SoM-4x), e.g., BoolQ with Llama-3. This is primarily because these methods are designed to improve the model's single-shot performance but do
227
+
228
+ ![](images/7f1ffcff3886ae5acf3ab6d68844d76802dbafa6e5763ef9aa54c0d90ce1d0c5.jpg)
229
+ Figure 3: Accuracy over five rounds of deliberation on BoolQ (top) and SCIQ (bottom).
230
+
231
+ little to improve their collaborative abilities. Cases such as this outline the necessity of training to improve collaboration (ACC-Collab) rather than training for raw accuracy (SFT and DebateGPT).
232
+
233
+ # 5.3 INDIVIDUAL PERFORMANCE
234
+
235
+ Next, we examine the relative effectiveness of both the actor and the critic. To do this, we train an actor and critic via ACC-Collab. Then, during deliberation, we pair the trained actor with an untrained critic and pair the trained critic with an untrained actor. In Table 2, the column "Actor" corresponds to the former, while "Critic" corresponds to the latter. On average, the trained actor attains higher accuracy than the trained critic; this aligns with intuition, as the actor is responsible for providing answers while the critic plays a supporting role. In most cases, the trained actor (paired with an untrained critic) outperforms SFT and DebateGPT. When the trained actor is paired with a trained critic (either ACC-Collab or ACC-Collab+), its performance is further improved.
236
+
237
+ ![](images/6e10fe204c7d6d343a9bb63f1380efea7a5aad148bf5b3fbd3eeeacbecae240b.jpg)
238
+ Figure 4: Comparison of responses from the critic model before and after training with ACC-Collab.
239
+
240
+ # 5.4 WHAT DO THE AGENTS ACTUALLY LEARN?
241
+
242
+ Lastly, we are interested in understanding how ACC-Collab improves the Actor-Critic Team. Figure 4 demonstrates an example of the difference in responses between an untrained critic and a critic trained through ACC-Collab. Although the actor provides a wrong answer, the untrained critic is too agreeable and does not provide substantive feedback for the actor to correct their answer.
243
+
244
+ Llama-3
245
+
246
+ <table><tr><td></td><td>SoM (4x)</td><td>DebateGPT</td><td>Actor</td><td>Critic</td><td>ACC-Collab</td><td>ACC-Collab+</td></tr><tr><td>BoolQ</td><td>.811±.007</td><td>.815±.005</td><td>.867±.003</td><td>.834±.005</td><td>.887±.005</td><td>.894±.003</td></tr><tr><td>MMLU</td><td>.635±.004</td><td>.654±.005</td><td>.651±.012</td><td>.65±.008</td><td>.644±.01</td><td>.683±.012</td></tr><tr><td>BBH</td><td>.514±.005</td><td>.551±.008</td><td>.583±.01</td><td>.55±.015</td><td>.593±.006</td><td>.574±.003</td></tr><tr><td>SCIQ</td><td>.923±.002</td><td>.932±.001</td><td>.947±.002</td><td>.945±.001</td><td>.952±.0</td><td>.948±.003</td></tr><tr><td>ARC</td><td>.874±.001</td><td>.876±.002</td><td>.885±.003</td><td>.866±.002</td><td>.881±.004</td><td>.869±.002</td></tr><tr><td colspan="7">Mistral</td></tr><tr><td></td><td>SoM (4x)</td><td>DebateGPT</td><td>Actor</td><td>Critic</td><td>ACC-Collab</td><td>ACC-Collab+</td></tr><tr><td>BoolQ</td><td>.798±.004</td><td>.848±.002</td><td>.873±.002</td><td>.843±.002</td><td>.877±.002</td><td>.893±.002</td></tr><tr><td>MMLU</td><td>.562±.005</td><td>.577±.002</td><td>.598±.002</td><td>.611±.001</td><td>.61±.005</td><td>.672±.004</td></tr><tr><td>BBH</td><td>.462±.003</td><td>.48±.012</td><td>.493±.012</td><td>.518±.005</td><td>.519±.009</td><td>.601±.004</td></tr><tr><td>SCIQ</td><td>.856±.002</td><td>.871±.003</td><td>.891±.001</td><td>.891±.002</td><td>.902±.005</td><td>.905±.002</td></tr><tr><td>ARC</td><td>.823±.002</td><td>.822±.002</td><td>.833±.003</td><td>.842±.003</td><td>.843±.003</td><td>.856±.003</td></tr><tr><td colspan="7">Gemma-2</td></tr><tr><td></td><td>SoM (4x)</td><td>DebateGPT</td><td>Actor</td><td>Critic</td><td>ACC-Collab</td><td>ACC-Collab+</td></tr><tr><td>BoolQ</td><td>.759±.004</td><td>.812±.003</td><td>.839±.005</td><td>.774±.014</td><td>.84±.005</td><td>.845±.005</td></tr><tr><td>MMLU</td><td>.578±.002</td><td>.582±.002</td><td>.519±.026</td><td>.566±.002</td><td>.51±.016</td><td>.555±.003</td></tr><tr><td>BBH</td><td>.449±.01</td><td>.491±.01</td><td>.51±.011</td><td>.475±.004</td><td>.513±.006</td><td>.475±.008</td></tr><tr><td>SCIQ</td><td>.903±.002</td><td>.914±.002</td><td>.923±.002</td><td>.912±.001</td><td>.918±.003</td><td>.909±.003</td></tr><tr><td>ARC</td><td>.843±.005</td><td>.851±.003</td><td>.85±.002</td><td>.855±.002</td><td>.852±.003</td><td>.849±.002</td></tr></table>
247
+
248
+ Table 2: Accuracy after 5 rounds of deliberation. The Actor (Critic) column corresponds to an actor-agent (critic-agent) trained via ACC-Collab and paired with an untrained Critic (Actor) during deliberation.
249
+
250
+ In contrast, the trained critic is more willing to disagree with the actor and provides more detailed feedback. Largely, we observe that this trend is common: untrained critics are too agreeable and are thus less able to change the actor's mind, while trained critics are more willing to disagree (see Section C.2 for more examples). When examining the trained actor's responses, we do not find a notable qualitative change compared to the responses of an untrained actor. However, as shown previously, we do observe a quantitative change: the trained actor's responses become more accurate.
251
+
252
+ # 6 CONCLUSION, LIMITATIONS AND IMPACT
253
+
254
+ In this paper, we propose ACC-Collab, a novel framework for jointly training a two-agent team (one actor-agent and one critic-agent) to collaboratively solve problems through iterative discussion. To train these agents, we developed an off-policy data generation scheme dubbed "Guided-Collaboration", which produces high-quality preference data for collaborative models. We found that ACC-Collab outperforms all baselines on a wide array of domains. In particular, even a single round of training for both the actor and critic results in a high-quality team. Of particular note are the effects that ACC-Collab has on the critic model. Without ACC-Collab, the critic model is often too agreeable and offers little substantive feedback in its responses. In contrast, after training with ACC-Collab, the critic is far more likely to provide detailed disagreements during discussion.
255
+
256
+ However, our framework ACC-Collab also comes with limitations. Firstly, even though ACC-Collab attains superior performance compared to baselines on a wide array of domains, we conduct experiments mainly on question-answering tasks; thus, it remains to be seen whether the framework is effective on other types of tasks. Moreover, in our experiments, we train and test models on the same task (partitioning each task into a training and testing set). As such, the generalizability of each actor-critic team to unseen domains is unknown. In addition, our method relies on the fact that, for each question, correct and incorrect answers can be easily established, which may not hold for more open-ended tasks. Secondly, while we provide results for three families of models, these experiments are performed on 2B, 7B, and 8B models. While our method is effective at these sizes (standard for open-source models), it remains to be seen whether this effectiveness will scale to larger models.
257
+
258
+ # ACKNOWLEDGMENTS
259
+
260
+ We would like to thank Li Hang for his valuable insights and guidance during the development of this work.
261
+
262
+ # REPRODUCIBILITY STATEMENT
263
+
264
+ Here, we outline the details necessary to reproduce our results. We provide an algorithm for our data generation procedure (Algorithm 1), as well as a description of our training procedure in Section 4.3. Each dataset, baseline method, and base model used is specified at the beginning of Section 5. We provide additional experimental details in Section A. Prompts used for our method can be found in Section C. Lastly, we publicly release the code used for our method.
265
+
266
+ # REFERENCES
267
+
268
+ Sahar Abdelnabi, Amr Gomaa, Sarath Sivaprasad, Lea Schonherr, and Mario Fritz. Llm-deliberation: Evaluating llms with interactive multi-agent negotiation games. arXiv preprint arXiv:2309.17234, 2023.
269
+ Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201, 2023.
270
+ Edward Y Chang. Evince: Optimizing adversarial llm dialogues via conditional statistics and information theory. arXiv preprint arXiv:2408.14575, 2024a.
271
+ Edward Y Chang. Socrasynth: Multi-llm reasoning with conditional statistics. arXiv preprint arXiv:2402.06634, 2024b.
272
+ Justin Chih-Yao Chen, Swarnadeep Saha, and Mohit Bansal. Reconcile: Round-table conference improves reasoning via consensus among diverse llms. arXiv preprint arXiv:2309.13007, 2023.
273
+ Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models, 2024. URL https://arxiv.org/abs/2401.01335.
274
+ François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019.
275
+ Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
276
+ Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023.
277
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
278
+ Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. Don't hallucinate, abstain: Identifying llm knowledge gaps via multi-llm collaboration. arXiv preprint arXiv:2402.00367, 2024.
279
+ Tanner Fiez, Benjamin Chasnov, and Lillian J Ratliff. Convergence of learning dynamics in stackelberg games. arXiv preprint arXiv:1906.01217, 2019.
280
+ Muhammad Usman Hadi, Rizwan Qureshi, Abbas Shah, Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, Seyedali Mirjalili, et al. A survey on large language models: Applications, challenges, limitations, and practical usage. Authorea Preprints, 2023.
281
+
282
+ Shibo Hao, Yi Gu, Haotian Luo, Tianyang Liu, Xiyan Shao, Xinyuan Wang, Shuhua Xie, Haodi Ma, Adithya Samavedhi, Qiyue Gao, et al. Llm reasoners: New evaluation, library, and analysis of step-by-step reasoning with large language models. arXiv preprint arXiv:2404.05221, 2024.
283
+ Alex Havrilla, Sharath Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, and Roberta Raileanu. Glore: When, where, and how to improve llm reasoning via global and local refinements. arXiv preprint arXiv:2402.10963, 2024.
284
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
285
+ Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. Metagpt: Meta programming for multiagent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
286
+ Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv preprint arXiv:2311.05232, 2023.
287
+ Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, and Weizhu Chen. Key-point-driven data synthesis with its enhancement on mathematical reasoning. arXiv preprint arXiv:2403.02333, 2024.
288
+ Geoffrey Irving, Paul Christiano, and Dario Amodei. Ai safety via debate. arXiv preprint arXiv:1805.00899, 2018.
289
+ Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. Towards mitigating llm hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 1827-1843, 2023.
290
+ Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
291
+ Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, et al. MegaScale: Scaling large language model training to more than 10,000 GPUs. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), pp. 745-760, 2024.
292
+ Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R Bowman, Tim Rocktäschel, and Ethan Perez. Debating with more persuasive llms leads to more truthful answers. arXiv preprint arXiv:2402.06782, 2024.
293
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pp. 611-626, 2023.
294
+ Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023a.
295
+ Huao Li, Yu Quan Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Michael Lewis, and Katia Sycara. Theory of mind for multi-agent collaboration via large language models. arXiv preprint arXiv:2310.10701, 2023b.
296
+ Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. More agents is all you need. arXiv preprint arXiv:2402.05120, 2024a.
297
+ Ming Li, Jiuhai Chen, Lichang Chen, and Tianyi Zhou. Can llms speak for diverse people? tuning llms via debate to generate controllable controversial statements. arXiv preprint arXiv:2402.10614, 2024b.
298
+
299
+ Ruosen Li, Teerth Patel, and Xinya Du. Prd: Peer rank and discussion improve large language model based evaluations. arXiv preprint arXiv:2307.02762, 2023c.
300
+ Tao Li and Suresh P Sethi. A review of dynamic stackelberg game models. Discrete & Continuous Dynamical Systems-B, 22(1):125, 2017.
301
+ Yuan Li, Yixuan Zhang, and Lichao Sun. Metaagents: Simulating interactions of human behaviors for llm-based task-oriented coordination via collaborative generative agents. arXiv preprint arXiv:2310.06500, 2023d.
302
+ Yunxuan Li, Yibing Du, Jiageng Zhang, Le Hou, Peter Grabowski, Yeqing Li, and Eugene Ie. Improving multi-agent debate with sparse communication topology. arXiv preprint arXiv:2406.11776, 2024c.
303
+ Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multiagent debate. arXiv preprint arXiv:2305.19118, 2023.
304
+ Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. Dynamic llm-agent network: An llm-agent collaboration framework with agent team optimization. arXiv preprint arXiv:2310.02170, 2023.
305
+ Julian Michael, Salsabila Mahdi, David Rein, Jackson Petty, Julien Dirani, Vishakh Padmakumar, and Samuel R Bowman. Debate helps supervise unreliable experts. arXiv preprint arXiv:2311.08702, 2023.
306
+ Daniel W Otter, Julian R Medina, and Jugal K Kalita. A survey of the usages of deep learning for natural language processing. IEEE transactions on neural networks and learning systems, 32(2): 604-624, 2020.
307
+ Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024a.
308
+ Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization, 2024b. URL https://arxiv.org/abs/2404.19733.
309
+ Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A Plummer, Zhaoran Wang, and Hongxia Yang. Let models speak ciphers: Multiagent debate through embeddings. arXiv preprint arXiv:2310.06272, 2023.
310
+ Alec Radford. Improving language understanding by generative pre-training. 2018.
311
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
312
+ Sumedh Rasal. Llm harmony: Multi-agent communication for problem solving. arXiv preprint arXiv:2401.01312, 2024.
313
+ Vipula Rawte, Amit Sheth, and Amitava Das. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922, 2023.
314
+ Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. URL https://arxiv.org/abs/2303.11366.
315
+ Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, et al. Towards expert-level medical question answering with large language models. arXiv preprint arXiv:2305.09617, 2023.
316
+ Andries Petrus Smit, Nathan Grinsztajn, Paul Duckworth, Thomas D Barrett, and Arnu Pretorius. Should we be going mad? a look at multi-agent debate strategies for llms. In Forty-first International Conference on Machine Learning.
317
+
318
+ Vighnesh Subramaniam, Antonio Torralba, and Shuang Li. Debategpt: Fine-tuning large language models with multi-agent debate supervision. 2024.
319
+ Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.
320
+ Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
321
+ Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature medicine, 29(8):1930-1940, 2023.
322
+ SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313, 2024.
323
+ Luong Trung, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. Reft: Reasoning with reinforced fine-tuning. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024.
324
+ Boshi Wang, Xiang Yue, and Huan Sun. Can chatgpt defend its belief in truth? evaluating llm reasoning via debate. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 11865-11881, 2023a.
325
+ Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, and Yangqiu Song. Rethinking the bounds of llm reasoning: Are multi-agent discussions the key? arXiv preprint arXiv:2402.18272, 2024.
326
+ Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-personal self-collaboration. arXiv preprint arXiv:2307.05300, 2023b.
327
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022.
328
+ Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. arXiv preprint arXiv:1707.06209, 2017.
329
+ Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multiagent conversation framework. arXiv preprint arXiv:2308.08155, 2023.
330
+ Weimin Xiong, Yifan Song, Xiutian Zhao, Wenhao Wu, Xun Wang, Ke Wang, Cheng Li, Wei Peng, and Sujian Li. Watch every step! llm agent learning via iterative step-level process refinement, 2024. URL https://arxiv.org/abs/2406.11176.
331
+ Ruixin Yang, Dheeraj Rajagopa, Shirley Anugrah Hayati, Bin Hu, and Dongyeop Kang. Confidence calibration and rationalization for llms via multi-agent deliberation. arXiv preprint arXiv:2404.09127, 2024.
332
+ Jintian Zhang, Xin Xu, and Shumin Deng. Exploring collaboration mechanisms for llm agents: A social psychology view. arXiv preprint arXiv:2310.02124, 2023.
333
+ Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Adrian de Wynter, Yan Xia, Wenshan Wu, Ting Song, Man Lan, and Furu Wei. Llm as a mastermind: A survey of strategic reasoning with large language models. arXiv preprint arXiv:2404.01230, 2024.
334
+ Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. Multilingual machine translation with large language models: Empirical results and analysis, 2024. URL https://arxiv.org/abs/2304.04675.
335
+
336
+ # APPENDIX
337
+
338
+ # A EXPERIMENTS
339
+
340
+ Here, we provide additional details regarding our experiments.
341
+
342
+ Datasets Each dataset is split into a training set, a validation set, and a testing set. For datasets that come with an explicit partition of these sets we use the given partitions; this includes BoolQ, MMLU, SCIQ, and ARC. For BBH, we randomly sample roughly $25\%$ and $10\%$ of the questions from each category in BBH to create a test and validation set, respectively; this comes out to 1260 questions for the test set and 500 questions for the validation set. All results are reported on questions in the test set.
343
+
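+ As a concrete illustration of the BBH split described above, here is a minimal sketch; the per-category grouping via a "category" field and the fixed seed are assumptions, not the paper's exact code.
+
+ ```python
+ import random
+ from collections import defaultdict
+
+ def split_bbh(questions, test_frac=0.25, val_frac=0.10, seed=0):
+     """Sample roughly 25% / 10% of each BBH category for test / validation."""
+     rng = random.Random(seed)
+     by_cat = defaultdict(list)
+     for q in questions:  # each question is assumed to carry a "category" field
+         by_cat[q["category"]].append(q)
+     train, val, test = [], [], []
+     for qs in by_cat.values():
+         rng.shuffle(qs)
+         n_test, n_val = int(len(qs) * test_frac), int(len(qs) * val_frac)
+         test += qs[:n_test]
+         val += qs[n_test:n_test + n_val]
+         train += qs[n_test + n_val:]
+     return train, val, test
+ ```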
344
+ Compute All training was performed on a single Nvidia H800 GPU. Inference for Llama-3- and Mistral-based models is performed on a single Nvidia V100 GPU; for Gemma-2-based models, we use a single Nvidia H800. All inference is performed with the vLLM library (Kwon et al., 2023).
345
+
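+ A minimal sketch of how inference can be run with the vLLM library; the model name and sampling parameters below are illustrative placeholders, not the paper's settings.
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # illustrative base model
+ params = SamplingParams(temperature=0.7, max_tokens=512)
+
+ prompts = ["You will be given a yes-no question which is based on a passage. ..."]
+ for output in llm.generate(prompts, params):
+     print(output.outputs[0].text)
+ ```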
346
+ Training Training for all models was performed via the trl library, using LoRA adapters of rank 256. When training ACC-Collab with DPO, we use a negative log-likelihood (NLL) regularization term (with weight 1), as outlined in Pang et al. (2024a).
347
+
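+ The following is a minimal sketch of this kind of DPO setup with trl and peft, assuming a recent trl version in which DPOConfig exposes an rpo_alpha-style NLL weight (older versions pass tokenizer= instead of processing_class=). The dataset fields, model name, and all hyperparameters other than the LoRA rank are illustrative assumptions, not the paper's exact configuration.
+
+ ```python
+ from datasets import Dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative base model
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Preference pairs (e.g., produced by Guided-Collaboration) in trl's expected format.
+ pairs = Dataset.from_list([
+     {"prompt": "...", "chosen": "...", "rejected": "..."},
+ ])
+
+ peft_config = LoraConfig(r=256, lora_alpha=256, task_type="CAUSAL_LM")  # rank-256 LoRA
+ args = DPOConfig(output_dir="acc-collab-actor", rpo_alpha=1.0)  # NLL term with weight 1
+
+ trainer = DPOTrainer(
+     model=model,
+     args=args,
+     train_dataset=pairs,
+     processing_class=tokenizer,
+     peft_config=peft_config,
+ )
+ trainer.train()
+ ```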
348
+ ![](images/f6859f5f5c42fff1926afa3ccdcbf1f42f9e86719dea2fd8e613e0917c098c74.jpg)
349
+
350
+ ![](images/9111b7bb58907b2f14b0f27c2d97e3d15fb1955cf0ab13209a1cc0f6b0a01093.jpg)
351
+ Figure 5: ACC-Collab training pipeline, exemplified for the actor (top) and critic (bottom). The process: 1) we generate data from both natural deliberation and deliberation guided towards and away from the ground-truth answer $y$ using the actor and critic; 2) we compute the relative quality of each trajectory based on the expected quality differences $\Delta_y$ , $\Delta_{!y}$ w.r.t. the natural response; 3) we store all high-quality pairwise data in our database and train the actor model; 4) we alternate this procedure between the actor and critic. As outlined in Section 4, both the guidance procedure and the computation of $\Delta_y$ , $\Delta_{!y}$ differ between the actor and critic.
352
+
353
+ # A.1 CORRECTNESS OF REASONING STEPS
354
+
355
+ In addition to measuring outcome accuracy (i.e., the accuracy of the actor's final answer), we also measure process accuracy (i.e., the accuracy of the reasoning and discussion steps taken by the actor and the critic model). These results are displayed in Table 3. We observe that our training pipeline typically maintains or increases the accuracy of the reasoning steps.
356
+
357
+ Unlike outcome accuracy, process accuracy does not have ground-truth answers. Thus, we utilize GPT-4o to serve as an oracle. We use the following steps to evaluate the correctness of the reasoning and discussion steps (a sketch of the judging call is given after the list).
358
+
359
+ 1. For both an untrained actor-critic team and a trained actor-critic team (trained via ACC-Collab), we randomly sample 500 deliberations that resulted in the correct answer after five rounds of deliberation.
360
+ 2. We then prompt GPT-4o to evaluate the correctness of the reasoning and discussion steps taken by the actor-critic teams in each of the 500 deliberations.
361
+ 3. We then compute the average accuracy of the reasoning and discussion steps as evaluated by GPT-4o.
362
+
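+ A minimal sketch of the judging call in step 2, using the OpenAI Python client; the judging prompt and Yes/No parsing are simplified assumptions rather than the exact prompt used in our evaluation.
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI()
+
+ def judge_steps(question: str, transcript: str) -> bool:
+     """Ask GPT-4o whether the reasoning and discussion steps in a transcript are correct."""
+     resp = client.chat.completions.create(
+         model="gpt-4o",
+         messages=[{
+             "role": "user",
+             "content": ("Evaluate whether the reasoning and discussion steps below are correct. "
+                         "Answer only Yes or No.\n"
+                         f"Question: {question}\nTranscript: {transcript}"),
+         }],
+     )
+     return resp.choices[0].message.content.strip().lower().startswith("yes")
+ ```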
363
+ <table><tr><td>Dataset</td><td>ACC-Collab+ Accuracy</td><td>ACC-Collab Accuracy</td><td>Untrained Accuracy</td></tr><tr><td colspan="4">Llama-3</td></tr><tr><td>BoolQ</td><td>.890±.035</td><td>.894±.034</td><td>.794±.045</td></tr><tr><td>MMLU</td><td>.968±.020</td><td>.976±.017</td><td>.909±.032</td></tr><tr><td>BBH</td><td>.519±.064</td><td>.575±.055</td><td>.610±.055</td></tr><tr><td>SCIQ</td><td>.958±.022</td><td>.981±.015</td><td>.937±.027</td></tr><tr><td>ARC</td><td>.882±.046</td><td>.833±.042</td><td>.883±.036</td></tr><tr><td colspan="4">Mistral</td></tr><tr><td>BoolQ</td><td>.907±.035</td><td>.928±.029</td><td>.894±.034</td></tr><tr><td>MMLU</td><td>.882±.042</td><td>.884±.036</td><td>.878±.037</td></tr><tr><td>BBH</td><td>.766±.045</td><td>.818±.043</td><td>.538±.056</td></tr><tr><td>SCIQ</td><td>.973±.018</td><td>.972±.018</td><td>.933±.028</td></tr><tr><td>ARC</td><td>.927±.029</td><td>.952±.024</td><td>.852±.040</td></tr><tr><td colspan="4">Gemma-2</td></tr><tr><td>BoolQ</td><td>.869±.038</td><td>.872±.037</td><td>.861±.039</td></tr><tr><td>MMLU</td><td>.844±.041</td><td>.843±.041</td><td>.800±.045</td></tr><tr><td>BBH</td><td>.597±.056</td><td>.637±.054</td><td>.514±.056</td></tr><tr><td>SCIQ</td><td>.984±.014</td><td>.978±.016</td><td>.985±.014</td></tr><tr><td>ARC</td><td>.850±.040</td><td>.854±.039</td><td>.834±.042</td></tr></table>
364
+
365
+ Table 3: Accuracy of the actor's and critic's reasoning and discussion steps as evaluated by GPT-4o.
366
+
367
+ # B PREFERENCE OPTIMIZATION
368
+
369
+ Here, we remark on how other preference optimization schemes can be used in place of DPO. Broadly speaking, preference optimization schemes can be broken into two categories: those which optimize reward directly (e.g., PPO) and those which optimize reward indirectly (e.g., DPO). In the case of the latter, the positive and negative pairs produced by our method (Algorithm 1) can be directly plugged into the preference optimizer.
370
+
371
+ In the case of explicit reward maximization, the reward function $r(z^{(t)},x,y)$ can first be learned automatically by simply simulating deliberation. The most straightforward way to do this is to first simulate one full deliberation between the actor and critic, i.e., $\langle (z_a^{(0)},z_c^{(0)}),\ldots ,(z_a^{(T)},z_c^{(T)})\rangle$ . Then, for each $z_{a}^{(t)},z_{c}^{(t)}$ , the remaining $T - t$ deliberation steps can be resampled to estimate the corresponding values of $r(z_{a}^{(t)},x,y)$ and $r(z_c^{(t)},x,y)$ . These pairs, namely $\left(z_{a}^{(t)},r(z_{a}^{(t)},x,y)\right)$ and
372
+
373
+ $\left(z_{c}^{(t)},r(z_{c}^{(t)},x,y)\right)$ can then be used to learn the reward function. This reward function can then be plugged into the desired preference optimization scheme.
374
+
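+ A sketch of this rollout-based reward estimation is given below; `deliberate_from` is a placeholder for resampling the remaining rounds with the actor-critic team, and the 0/1 scoring of final answers is an assumption for illustration.
+
+ ```python
+ import statistics
+
+ def estimate_reward(z_t, x, y, deliberate_from, n_rollouts=8, remaining_rounds=2):
+     """Monte Carlo estimate of r(z^(t), x, y): resample the remaining T - t rounds
+     of deliberation starting from message z^(t) and score the final answers."""
+     scores = []
+     for _ in range(n_rollouts):
+         final_answer = deliberate_from(z_t, x, rounds=remaining_rounds)  # resampled continuation
+         scores.append(1.0 if final_answer == y else 0.0)
+     return statistics.mean(scores)
+ ```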
375
+ ![](images/ea1af199c584993212964a44b54bc61e2f34e121c8bb5757a7af75a88a57e859.jpg)
376
+ Figure 6: Accuracy over five rounds of deliberation on BoolQ.
377
+
378
+ ![](images/b0e7337d75b2e31c880e236549f325ac3b9713bc1585335726a90807d8bce311.jpg)
379
+ Figure 7: Accuracy over five rounds of deliberation on MMLU.
380
+
381
+ ![](images/3a515182787a6090e3a280915550764e2aa1a66cfbfd5fee862ef355f0c8435b.jpg)
382
+ Figure 8: Accuracy over five rounds of deliberation on BBH.
383
+
384
+ # C EXAMPLES
385
+
386
+ # C.1 PROMPTS
387
+
388
+ Here, we provide examples of the prompts used in our experiments. For illustration, we show the prompts for the BoolQ dataset, in which agents are asked a yes-no question about a passage.
389
+
390
+ # Single-Shot Prompt (No Deliberation)
391
+
392
+ ```python
393
+ prompt = ('You will be given a yes-no question which is based on a passage.'
+           'You should use the passage to help you answer the question.'
+           'You should give a brief justification for your answer,'
+           'and you must provide a final answer of either Yes or No.'
+           '\nQuestion: {question}?'
+           '\nPassage: {passage}')
394
+ ```
395
+
396
+ ![](images/1859fb6c8c4b68e3a452cdb68bc080c0c3b089d155e3a0a5757119c5b7be985e.jpg)
397
+ Figure 9: Accuracy over five rounds of deliberation on SCIQ.
398
+
399
+ ![](images/e59ec8205f536cdb6be927a03b7503da6b9eb1dd36e75a8420f6578440a6e4c5.jpg)
400
+ Figure 10: Accuracy over five rounds of deliberation on ARC.
401
+
402
+ # Guided Single-Shot Prompt (No Deliberation)
403
+
404
+ ```python
405
+ prompt = ('You will be given a yes-no question which is based on a passage.'
+           'You should use the passage to help you answer the question'
+           'with a {target_answer}. '
+           'You should give a brief justification for your answer of {target_answer}, and you must state that your final answer is {target_answer}.'
+           '\nQuestion: {question}?'
+           '\nPassage: {passage}')
406
+ ```
407
+
408
+ # Deliberation Prompt for Actor
409
+
410
+ ```python
411
+ prompt = ('Several people have provided answers to a yes-no question.'
+           'Below are their responses:'
+           '\nPerson {1} said: {responses[1]}'
+           '\nPerson {2} said: {responses[2]}'
+           ...
+           '\nPerson {n} said: {responses[n]}'
+           '\nYou should take these answers into consideration when answering '
+           'the following yes-no question, which is based on a passage.'
+           'You should give a brief justification for your answer, and you must '
+           'provide a final answer of either Yes or No.'
+           '\nQuestion: {question}'
+           '\nPassage: {passage}')
412
+ ```
413
+
414
+ # Guided Deliberation Prompt for Actor
415
+
416
+ ```txt
417
+ prompt = ('Several people have provided answers to a yes-no question.' 'Below are their responses:'
418
+ ```
419
+
420
+ ```txt
421
+ \`nPerson {1} said: \{responses[1]}'
422
+ \`nPerson {2} said: \{responses[2]}'
423
+ .
424
+ .
425
+ \`nPerson {n} said: \{responses[n]}'
426
+ 'You should take these answers and the passage into consideration when answering the following question with {target_answer}. '
427
+ 'You should give a brief justification for your answer of {target_answer},
428
+ \`nQuestion: \{question\}'
429
+ \`nPassage: \{passage\}'
430
+ ```
431
+
432
+ # Deliberation Prompt for Critic
433
+
434
+ ```python
435
+ prompt = ('I am answering a question based on a passage.'
436
+ 'I would like you to help me improve the correctness of my response'
437
+ 'by briefly providing some additional details my original response'
438
+ 'may have missed.'
439
+ '\nQuestion: {question}'
440
+ '\nPassage: {passage}'
441
+ '\nMy Response: {actor_response}'
442
+ )
443
+ ```
444
+
445
+ # Guided Deliberation Prompt for Critic
446
+
447
+ ```python
448
+ prompt = ('I would like you to be a deliberation assistant.'
+           'You will be given a question based on a passage and my response'
+           'to the question.'
+           'You should use the passage and my response to develop additional details'
+           'for why the correct answer is {target_answer}.'
+           'Your details must be brief and must support the fact that the'
+           'correct answer is {target_answer}.'
+           '\nQuestion: {question}?'
+           '\nPassage: {passage}'
+           '\nMy Response: {actor_response}')
449
+ ```
450
+
451
+ # C.2 CRITIC RESPONSES
452
+
453
+ Here, we present several examples of the critic's responses when the actor provides the wrong answer; critic responses are shown before and after training with ACC-Collab. We observed a qualitative change in the critic's responses after training with our pipeline; the critic was more willing to disagree with the actor, and the critic provided more detailed feedback.
454
+
455
+ ACTOR: [Wrong] Final answer, No. Non-residents cannot bring a handgun into New York for possession or carry within the state based on the passage. They can only transport it through New York to a destination outside of the state where it is legal to possess and carry.
456
+
457
+ UNTRAINED CRITIC: Your response is mostly correct. I would add that New York has strict gun control laws, so even if a non-resident is allowed to transport a firearm through the state, they should be aware that certain areas may have additional restrictions, such as prohibitions on carrying the firearm in certain public places or parks. It's always a good idea to check the specific laws of the jurisdiction they will be passing through and their final destination.
458
+
459
+ TRAINED CRITIC: Details refuting your response:
460
+
461
+ 1. The passage mentions that non-residents can transport any lawful firearm through New York, which implies that the firearm is temporarily in New York state territory during transit. This suggests that under certain circumstances, a handgun could be brought into New York for transit purposes, even if it cannot be carried within the state.
462
+ 2. The passage does not explicitly state that non-residents are prohibited from bringing handguns into New York. Instead, it only specifies that they cannot possess or carry handguns within the state. This ambiguity leaves room for interpretation that a handgun could be brought into New York for transit purposes as long as it is not carried within the state.
463
+
464
+ ACTOR: [Wrong] Answer: Yes, the passage states that in the Northern Atlantic Ocean, the hurricane season starts as early as June 1 and ends on November 30, with the peak of activity occurring from late August through September. However, the passage also mentions an exception where the hurricane season started as early as January 3 in 1938. Therefore, while the typical start of the hurricane season is June 1, it is possible for the season to begin earlier, as indicated by the passage.
465
+
466
+ UNTRAINED CRITIC: Your response is correct and provides a good justification based on the passage. However, you could add that the passage mentions that May is the least active month for tropical cyclone activity worldwide, but May is still within the Atlantic hurricane season. Therefore, while the peak of the Atlantic hurricane season is from late August through September, the season technically starts earlier.
467
+
468
+ TRAINED CRITIC: While your response is correct in stating that the typical hurricane season in the Northern Atlantic Ocean starts on June 1 and ends on November 30, with the peak of activity occurring from late August through September, the passage also emphasizes that each particular basin has its own seasonal patterns. The passage mentions the exception of the 1938 hurricane season starting as early as January 3, which is outside the typical seasonal window. This detail directly refutes your response by emphasizing that while the typical season starts in June, it is not unheard of for the season to begin earlier, as demonstrated by historical records.
469
+
470
+ Another detail from the passage that supports the fact that the answer is No, is that the passage states that "worldwide, tropical cyclone activity peaks in late summer," which is contradictory to your response that the hurricane season starts in spring. The passage explicitly states that the peak of tropical cyclone activity occurs in late summer, which is later than spring.
471
+
472
+ ACTOR: [Wrong] Final Answer: (A) rated the group as less interesting and enjoyable.
473
+
474
+ Justification: Aronson and Mills' study found that severe initiation processes negatively impact immediate attitudes towards the group due to discomfort and negative emotions. (Person 0) Additionally, severe initiation may create pressure to conform and social comparison, making the group less appealing. (Person 1)
475
+
476
+ UNTRAINED CRITIC: Your response is correct. I'd add that Aronson and Mills found that severe initiation rites can create a sense of exclusion and pain, which may negatively impact attitudes towards the group. (A)
477
+
478
+ TRAINED CRITIC: Response: While your justification supports the fact that severe initiation can negatively impact attitudes towards the group, it does not fully refute the alternative answers. Let me provide brief details to argue for the correct answer, B being "rated the group as more interesting and enjoyable":
479
+
480
+ 1. Counter-intuitive findings: Research suggests that negative experiences can sometimes lead to stronger group identification and commitment. The discomfort and negative emotions generated during severe initiation may create a sense of camaraderie and solidarity among group members, ultimately increasing their overall enjoyment of the group (Tajfel & Turner, 1986).
481
+ 2. Long-term benefits: Although severe initiation might result in negative emotions initially, the feelings of bonding and group cohesion that develop during the initiation process can endure and even enhance members' attachment to the group over time (Sue Carter & Branscombe, 1994). This increased commitment can lead to a heightened sense of enjoyment and engagement with the group.
482
+
483
+ ACTOR: [Wrong] Final Answer: (B) Catalhoyuk and Gobekli Tepe Justification: These sites have provided evidence for social complexity without clear agricultural surpluses. Catalhoyuk had centralized planning and hierarchical social organization but no conclusive evidence for agricultural surpluses. Gobekli Tepe was likely a religious or ceremonial center predating the agricultural revolution.
484
+
485
+ UNTRAINED CRITIC: Your response is correct. I'd add that Catalhoyuk had a population of up to 8,000 people and was the world's first known urban settlement, while Gobekli Tepe's T-shaped pillars date back to around 9600 BCE and suggest the existence of social and religious institutions that predate agriculture.
486
+
487
+ TRAINED CRITIC: While your justification for option (B) is correct, I would like to provide additional details for option (D) to refute potential responses in favor of other options.
488
+
489
+ Option (A): Stonehenge and Caral
490
+
491
+ Justification: Although these sites have evidence of social complexity, there is no clear evidence that they lacked agricultural surpluses. Stonehenge, for instance, is believed to have served as a ceremonial and astronomical center, but its relationship to agriculture remains unclear. Caral, on the other hand, has evidence of irrigation systems, suggesting some agricultural production, even if not a surplus.
492
+
493
+ Option (C): La Venta and Catalhoyuk
494
+
495
+ Justification: La Venta is an example of a complex civilization with evidence of agricultural surpluses. While Catalhoyuk is an example of social complexity without clear agricultural surpluses, as you mentioned, La Venta challenges the idea that social complexity always required surpluses.
496
+
497
+ Option (D):Gobekli Tepe and Poverty Point
498
+
499
+ Justification: Gobekli Tepe is believed to be a religious or ceremonial center that predates the agricultural revolution. Poverty Point, on the other hand, is a complex society that existed around 3200 BCE and had advanced engineering skills but no clear evidence of agricultural surpluses. These sites' existence undermines the hypothesis that social complexity always required agricultural surpluses, providing further evidence for option (D) being the correct answer.
500
+
501
+ ACTOR: [Wrong] Final Answer: (A) psychological reactance. This is because your friend's statement suggests that they feel frustrated or resistant when external factors prevent them from going on vacation, indicating a desire to have more freedom to make their own choices. Psychological reactance is the tendency to react negatively when our freedom is threatened.
502
+
503
+ UNTRAINED CRITIC: Your response is correct. I'd add that psychological reactance often results in an exaggerated perception of the importance or severity of the obstacles, leading your friend to
504
+
505
+ believe that there's always a crisis preventing them from vacationing when in reality, the obstacles may not be as significant as they seem.
506
+
507
+ TRAINED CRITIC: While your response is plausible and accurately describes psychological reactance, let me provide two brief reasons why the correct answer is still (C) illusory correlation:
508
+
509
+ 1. Illusory correlation refers to the perception of a relationship between variables when none or only a weak relationship actually exists. In this case, your friend might perceive a strong correlation between planning a vacation and experiencing crises at home or work, leading them to believe that their vacations are consistently disrupted, even if the actual correlation is weak or non-existent.
510
+ 2. Psychological reactance is primarily concerned with an individual's desire to react negatively when their freedom is restricted or threatened, not necessarily a misperception of causal relationships. However, the friend's statement does imply a belief that external factors are consistently preventing them from going on vacations, which could be a manifestation of illusory correlation.
511
+
512
+ ACTOR: [Wrong] Final Answer: (D) a zygote. Justification: A zygote is the first cell formed when two gametes merge, marking the beginning of organismic development.
513
+
514
+ UNTRAINED CRITIC: Your response is correct. Just to add, a zygote is formed by the fusion of an sperm (male gamete) and an egg (female gamete), resulting in a diploid cell containing genetic material from both parents. This is the initial stage of an organism's development.
515
+
516
+ TRAINED CRITIC: I agree with your response that the answer is (D) a zygote. However, to further support why (A) a gamete and (C) a nucleus are incorrect, let me provide brief details:
517
+
518
+ (A) A gamete is a reproductive cell that fuses with another gamete during fertilization to form a zygote. Therefore, it is not an organism in the earliest stages of development but rather a part of the process leading to the formation of an organism.
519
+ (C) A nucleus is an organelle within a cell that contains genetic material. It is not an organism but rather a component of a cell, and it is present in the zygote after fertilization has occurred.
520
+
521
+ So, both (A) and (C) can be ruled out as they do not represent the organism in the earliest stages of development, and the correct answer is (D) a zygote.
522
+
523
+ ACTOR: [Wrong] Final Answer: (A) metamorphic rocks and (B) any types of rocks (specifically igneous rocks when they undergo metamorphosis).
524
+
525
+ UNTRAINED CRITIC: Your response is mostly correct, but it would be more precise to clarify that options B and C also include metamorphic rocks when they mention "any types of rocks" and "igneous rocks," respectively. Metamorphic rocks form from the alteration of pre-existing rocks under heat, pressure, or chemical conditions. Thus, all three options (A, B, and C) are correct, but option A is the most explicit about the transformation process involved.
526
+
527
+ TRAINED CRITIC: Sure! Here are two brief details supporting the correct answer (A) metamorphic rocks and (B) any types of rocks (specifically igneous rocks when they undergo metamorphosis):
528
+
529
+ 1. Metamorphic rocks result from the transformation of pre-existing rocks under high pressure and temperature conditions. This process causes the minerals in the rocks to change, resulting in a new type of rock. For example, sedimentary rocks like shale or limestone can be transformed into metamorphic rocks like slate or marble, respectively.
530
+
531
+ 2. Igneous rocks can also undergo metamorphosis when they are exposed to high pressure and temperature conditions. This process results in the recrystallization of minerals and the formation of new textures. For example, granite can be transformed into gneiss or schist when subjected to metamorphic conditions. Therefore, any type of rock, including igneous rocks, can change and become new types of rocks through metamorphosis.
532
+
533
+ ACTOR: [Wrong] Final Answer: (C) white present
534
+
535
+ Justification: Gertrude initially had a yellow present, but she swapped it with Alice, who had a white present. Therefore, Gertrude now has the white present.
536
+
537
+ UNTRAINED CRITIC: Your response is correct. I'd add that after Gertrude swaps with Alice, none of the following swaps involve Gertrude giving away or receiving the white present.
538
+
539
+ TRAINED CRITIC: While your justification is correct that Gertrude initially had a yellow present and swapped it with Alice's white present, I would like to add two additional details that refute your response and support option D, brown present.
540
+
541
+ First, prior to swapping with Alice, Gertrude swapped her yellow present with Dave's brown present.
542
+
543
+ Second, in the final round of swaps, Fred and Eve swap their presents, and Gertrude's white present (which she received from Alice) is not mentioned in the swap between Fred and Eve.
544
+
545
+ Therefore, based on the information provided, Gertrude could still have the brown present that she received from Dave earlier in the game.
546
+
547
+ ACTOR: [Wrong] Final Answer: None of the given options are correct. The construction site was closed during the hours Thomas was engaged in the mentioned activities, and there is no information indicating that Thomas visited the construction site before 6am.
548
+
549
+ UNTRAINED CRITIC: Your response is correct. I would add that Thomas' activities from 6am to 8am (gym, yoga studio, and movie theater) all occurred before the construction site opened. Therefore, option (D) can be eliminated as well.
550
+
551
+ TRAINED CRITIC: While it's true that none of the given options are correct, I'd like to provide some brief additional details to support the fact that the correct answer is D (6am to 8am), if that was an option. Here's why:
552
+
553
+ 1. Thomas was seen at the gym from 8am to 11am, and at the yoga studio from 11am to 3pm, meaning he was engaged in those activities between those hours. There's no information indicating that he left those places earlier or returned later.
554
+ 2. He was seen at the movie theater from 3pm to 6pm, and at the Leaning Tower of Pisa from 6pm to 7pm. Again, there's no information indicating that he left the movie theater earlier or arrived at the Leaning Tower of Pisa later.
555
+ 3. The construction site was closed after 7pm. However, the options don't include any time before 7pm. If Thomas could have gone to the construction site before 7pm, the correct answer would be one of the options A, B, or C.
556
+ 4. Since none of the options include the time before 7pm, and Thomas was engaged in activities during the entire day, the only time left for him to visit the construction site, assuming he did, would be before 7am. However, the options don't include that time frame. Therefore, if we had the
557
+
558
+ option D (6am to 8am), it would be the correct answer, as it's the only time frame that fits the given information.
559
+
560
+ In conclusion, while the given options are not correct, if option D (6am to 8am) was an option, it would be the correct answer based on the information provided.
acccollabanactorcriticapproachtomultiagentllmcollaboration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1238d47629b5c54755c3b375386f7e8b9b46348db777e2d79d90272c131ffcf1
3
+ size 936579
acccollabanactorcriticapproachtomultiagentllmcollaboration/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8def0c637ca124291ab48385db775b045ae8520eb87df51f2568f261e41083b7
3
+ size 686508
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50e3af461626706137a56dd406cf112b50a7412b396178ef4059e48f2be1e045
3
+ size 222657
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:beaf119f865066f96eb7c842c4133a7ea2c67e60c963a425404268bbd85bfce2
3
+ size 268233
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/47e0cc3c-eff0-4705-ba54-f8fe497facdf_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14b660531411e4f78a19a02ef0f0ca180ef8a1084bb344e6872d48ce8713ebd0
3
+ size 585439
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/full.md ADDED
The diff for this file is too large to render. See raw diff
 
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a80b56615cde98bd01e639773eec36817f64a701cb5e4b3ace5655b8ba28540
3
+ size 1430536
acceleratedoverrelaxationheavyballmethodachievingglobalacceleratedconvergencewithbroadgeneralization/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f2d1af717f7d4dd68f0de5999baf252e7c1be0ea94cb179a917c844fd504d8b6
3
+ size 1373901
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:62abd555cec7689432da79c806970705c2f5e73600d9e69b4305822bbb6ead8b
3
+ size 131388
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8af81e9a5c0d2ecbecda13d89656c2f7ed2aca6a0bddf5a25f54206e3d28d315
3
+ size 153587
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/408fb72c-ed7d-466d-a23b-e67b16738ba0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6cfdd600c73ce1f7bf7ff7cd721671f97df71c0d7ccd8bc14961991bbb9a6a7d
3
+ size 2881980
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/full.md ADDED
@@ -0,0 +1,471 @@
 
 
 
 
1
+ # ACCELERATING 3D MOLECULE GENERATION VIA JOINTLY GEOMETRIC OPTIMAL TRANSPORT
2
+
3
+ Haokai Hong, Wanyu Lin*, Kay Chen Tan
4
+
5
+ Department of Data Science and Artificial Intelligence
6
+
7
+ Department of Computing
8
+
9
+ The Hong Kong Polytechnic University, Hong Kong SAR, China.
10
+
11
+ haokai.hong@connect.polyu.hk, {wan-yu.lin, kctan}@polyu.edu.hk
12
+
13
+ # ABSTRACT
14
+
15
+ This paper proposes a new 3D molecule generation framework, called GOAT, for fast and effective 3D molecule generation based on the flow-matching optimal transport objective. Specifically, we formulate a geometric transport formula for measuring the cost of mapping multi-modal features (e.g., continuous atom coordinates and categorical atom types) between a base distribution and a target data distribution. Our formula is solved within a joint, equivariant, and smooth representation space. This is achieved by transforming the multi-modal features into a continuous latent space with equivariant networks. In addition, we find that identifying optimal distributional coupling is necessary for fast and effective transport between any two distributions. We further propose a mechanism for estimating and purifying optimal coupling to train the flow model with optimal transport. By doing so, GOAT can turn arbitrary distribution couplings into new deterministic couplings, leading to an estimated optimal transport plan for fast 3D molecule generation. The purification filters out subpar molecules to ensure the final generation quality. We theoretically and empirically prove that the proposed optimal coupling estimation and purification yield a transport plan with non-increasing cost. Finally, extensive experiments show that GOAT enjoys the efficiency of solving geometric optimal transport, leading to a twofold speedup compared to the sub-optimal method while achieving the best generation quality regarding validity, uniqueness, and novelty. The code is available on GitHub.
16
+
17
+ # 1 INTRODUCTION
18
+
19
+ The problem of 3D molecule generation is essential in various scientific fields, such as material science, biology, and chemistry (Hoogeboom et al., 2022; Watson et al., 2023; Xie et al., 2022). Typically, 3D molecules can be represented as atomic geometric graphs (Hoogeboom et al., 2022; Xu et al., 2023; Song et al., 2023b), where each atom/node is embedded in the Cartesian coordinates and encompasses rich features, such as atom types and charges. There has been fruitful research progress on geometric generative modeling for facilitating the process of 3D molecule generation. Specifically, geometric generative models are proposed to estimate the distribution of complex geometries and are used for generating feature-rich geometries. The success of these generative modeling mainly comes from the advancements in using the notion of probability measurement transport for generating samples. Generative modeling aims to generate samples via mapping a simple prior distribution, e.g., Gaussian, to a desired target probability distribution. This mapping process can be termed as a distribution transport/generative problem (Peluchetti, 2023).
20
+
21
+ Recent representative models for sampling 3D molecules in silico include diffusion-based models (Hoogeboom et al., 2022; Wu et al., 2022; Xu et al., 2023) and flow matching-based models (Song et al., 2023a; Klein et al., 2023). Diffusion-based models have shown superior results on molecule generation tasks (Xu et al., 2023; Hoogeboom et al., 2022; Jung et al., 2024). They simulate a stochastic differential equation (SDE) to transport a base distribution (e.g., Gaussian) to the data distribution. However, a major drawback of diffusion-based models is their slow inference
22
+
23
+ ![](images/e3c495af33a3a200dd3715a15d99181678e360a3991b7f55e4f08df430671224.jpg)
24
+ Figure 1: The Illustration of Probability Paths based on Different Molecule Generative Models. 1. The diffusion path (Hoogeboom et al., 2022; Xu et al., 2023), which typically deviates from a straight line map, necessitates a large number of sampling steps. 2. The hybrid transport (Song et al., 2023a) ensures straight transport for atomic coordinates, but it does not guarantee the same for atom features. Furthermore, this cost does not consider the optimal distribution couplings, leading to suboptimal transport between distributions. 3. GOAT simultaneously considers the optimal transport for atom coordinates and features, providing a joint and straight path for fast sampling.
25
+
26
+ speed with the learned stochastic transport trajectory (Hoogeboom et al., 2022; Wu et al., 2022; Xu et al., 2023); they typically need approximately 1,000 sampling steps to produce valid samples. This could make large-scale inference prohibitively expensive. Accordingly, flow matching — built upon continuous normalizing flows — has emerged as a new paradigm that could potentially provide effective density estimation and fast inference (Lipman et al., 2022; Tong et al., 2023; Dao et al., 2023; Liu et al., 2022; Klein et al., 2023; Song et al., 2023a).
27
+
28
+ This paper aims to deal with the problem of fast and effective 3D molecule generation based on flow matching optimal transport principle. In particular, our objective is to obtain a distribution transport trajectory with optimal transport cost and generation quality regarding molecule validity, uniqueness, and novelty. A few recent works have been proposed to improve the sampling efficiency of the geometrical domains via flow matching principle. (Klein et al., 2023) was proposed for efficient, simulation-free training of equivariant continuous normalizing flows, which can produce samples from the equilibrium Boltzmann distribution of a molecule in Cartesian coordinates. However, it can only work for many-body molecular systems and does not consider atomic features.
29
+
30
+ In the context of molecule generation, properly characterizing the transport cost to optimize over is indispensable and challenging. There are two main challenges. Firstly, the multi-modal property of molecular space, typically consisting of continuous atom coordinates and categorical atom types, makes the transport cost measurement hard to optimize. Secondly, the optimal transport problem essentially is to search optimal distribution couplings for mapping. This process typically requires similarity computation of the two distributions. However, the various sizes of geometric graphs to transport introduce difficulties in evaluating the distribution similarity. The closest to ours is EquiFM (Song et al., 2023a), which attempts to address the multi-modality issue by using different probability paths to transport each modality separately. The proposed equivariant optimal transport (OT) for transporting atom coordinates indeed forms a straight-line trajectory for training, while the variance-preserving principle could not ensure a straight-line trajectory for atom features. Therefore, the fused flow paths might deviate strongly from the OT paths and could not ensure optimal coupling between two probability measurements, leading to high computational costs and numerical errors.
31
+
32
+ In this work, we propose a new framework for fast 3D molecule generation based on a novel and principled optimal transport flow-matching objective, dubbed Geometric OptimAl Transport (GOAT). In particular, we formulate a geometric transport cost for optimally transporting continuous atom coordinates and categorical features, which is inherently a bilevel optimization problem. To deal with the first challenge, induced by transporting multiple modalities, GOAT leverages a latent variable model equipped with equivariant networks to map the multi-modal features into a joint, equivariant, and smooth representation space. Such equivariant latent variable models have been proven flexible and expressive for modeling complex 3D molecules (Satorras et al., 2021; Xu et al., 2023). A latent flow matching then operates over the latent space, which provides distributional coupling estimation.
33
+
34
+ To tackle the second challenge, obtaining the optimal distribution couplings, we avoid directly computing distribution similarity. Specifically, we propose to rectify the flow with couplings estimated from the latent flow matching. Because the estimated distributional couplings are identified from synthesized samples, they might deviate from real-world samples in terms of quality. We therefore introduce a purification process that retains high-quality samples in terms of validity, uniqueness, and novelty. With this process, we can turn arbitrary couplings into deterministic and causal ones, leading to an optimal transport path for fast and effective generation.
35
+
36
+ We theoretically demonstrate that optimal coupling estimation and purification result in non-increasing geometric transport costs. Moreover, we empirically highlight the superiority of GOAT by conducting experiments on widely used benchmarks. The proposed method reduces the transport cost by nearly $89.65\%$ and halves the number of sampling steps compared to EquiFM. In terms of generation quality, our method achieves up to a $17.1\%$ improvement over existing algorithms.
37
+
38
+ # 2 PROBLEM SETUP
39
+
40
+ Notations. A three-dimensional (3D) molecule with $N$ atoms can be represented as a geometric graph denoted as $\mathbf{g} = \langle \mathbf{x},\mathbf{h}\rangle$ , where $\mathbf{x} = (\mathbf{x}_1,\dots ,\mathbf{x}_N)\in \mathbb{R}^{N\times 3}$ represents the atom coordinates and $\mathbf{h} = (\mathbf{h}_1,\ldots ,\mathbf{h}_N)\in \mathbb{R}^{N\times d}$ is the atom features containing atomic types, charges, etc. $d$ is the dimensionality of the atom features. A zero center-of-mass (Zero CoM) space is defined as $\mathbb{X} = \{\mathbf{x}\in \mathbb{R}^{N\times 3}: \frac{1}{N}\sum_{i = 1}^{N}\mathbf{x}^i = \mathbf{0}\}$ , indicating that the mean of the $N$ atoms' coordinates should be 0. In what follows, we will introduce some necessary concepts, including flow matching and optimal transport, to facilitate the definition of our problem.
41
+
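+ As an illustrative aside (not part of the original text), the Zero CoM constraint simply amounts to subtracting the mean coordinate from every atom; a minimal sketch is given below.
+
+ ```python
+ import numpy as np
+
+ def project_to_zero_com(x: np.ndarray) -> np.ndarray:
+     """Project atom coordinates x of shape (N, 3) onto the zero
+     center-of-mass subspace by subtracting the mean coordinate."""
+     return x - x.mean(axis=0, keepdims=True)
+
+ # The projected coordinates have (numerically) zero mean.
+ x = np.random.randn(5, 3)
+ assert np.allclose(project_to_zero_com(x).mean(axis=0), 0.0)
+ ```
+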
42
+ General Flow Matching<sup>1</sup>. Given noise $\mathbf{x}_0 \in \mathbb{R}^N \sim p_0$ and data $\mathbf{x}_1 \in \mathbb{R}^N \sim p_1$ , the general flow-based model considers a mapping $f: \mathbb{R}^N \to \mathbb{R}^N$ induced by a smooth time-varying vector field $u: [0,1] \times \mathbb{R}^N \to \mathbb{R}^N$ , which defines an ordinary differential equation (ODE): $d\mathbf{x} = u_t(\mathbf{x})dt$ . Continuous normalizing flows (CNFs) were introduced with black-box ODE solvers to approximate $u_t$ . However, CNFs are hard to train because they require numerous evaluations of the vector field.
43
+
44
+ Flow matching (Lipman et al., 2022), a simulation-free approach for training CNFs, regresses a neural network $v_{\theta}(\mathbf{x},t)$ onto a target vector field $u_{t}(\mathbf{x})$ : $\mathcal{L}_{\mathrm{FM}}(\theta) := \mathbb{E}_{t\sim \mathcal{U}(0,1),\mathbf{x}\sim p_t(\mathbf{x})}\| v_\theta (\mathbf{x},t) - u_t(\mathbf{x})\|^2$ , where $p_t(\mathbf{x})$ is the corresponding probability path, constructed by conditioning on $\mathbf{x}_1\sim p_1$ and marginalizing: $p_t(\mathbf{x}) = \int p_t(\mathbf{x}|\mathbf{x}_1)p_1(\mathbf{x}_1)d\mathbf{x}_1$ . In implementation, common probability paths include the variance exploding (VE) diffusion path (Song et al., 2020), the variance preserving (VP) diffusion path (Ho et al., 2020), and the straight transport path (Lipman et al., 2022; Liu et al., 2022).
45
+
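+ To make the objective concrete, the sketch below (illustrative only; the toy network and batch shapes are assumptions of this note, not the paper's implementation) regresses a vector field onto the straight conditional path $\mathbf{x}_t = t\mathbf{x}_1 + (1 - t)\mathbf{x}_0$ , whose target vector field is simply $\mathbf{x}_1 - \mathbf{x}_0$ .
+
+ ```python
+ import torch
+
+ def flow_matching_loss(v_theta, x0, x1):
+     """One conditional flow-matching step with a straight path.
+
+     v_theta : callable(x_t, t) -> predicted vector field, same shape as x_t
+     x0, x1  : noise / data batches of shape (B, D) drawn from p_0 and p_1
+     """
+     t = torch.rand(x0.shape[0], 1)            # t ~ U(0, 1)
+     x_t = t * x1 + (1.0 - t) * x0             # linear interpolation path
+     target = x1 - x0                          # constant target vector field
+     return ((v_theta(x_t, t) - target) ** 2).mean()
+
+ # Usage sketch with a toy MLP standing in for the vector-field network.
+ net = torch.nn.Sequential(torch.nn.Linear(9, 64), torch.nn.SiLU(), torch.nn.Linear(64, 8))
+ v_theta = lambda x, t: net(torch.cat([x, t], dim=-1))
+ loss = flow_matching_loss(v_theta, torch.randn(16, 8), torch.randn(16, 8))
+ loss.backward()
+ ```
+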
46
+ Optimal Transport (OT). A transport plan between $p_0$ and $p_1$ is also called a coupling, and we denote it as $\Gamma(p_0, p_1)$ (Ambrosio et al., 2021). OT addresses the problem of finding the optimal coupling that minimizes the transport cost, typically of the form $\mathbb{E}[c(\mathbf{x}_1 - \mathbf{x}_0)]$ , where $c: \mathbb{R}^N \to \mathbb{R}$ is a cost function, such as $c(\cdot) = \| \cdot \|^2$ . Formally, a coupling $\Gamma(p_0, p_1)$ is called optimal only if it achieves the minimum value of $\mathbb{E}[c(\mathbf{x}_1 - \mathbf{x}_0)|(\mathbf{x}_1, \mathbf{x}_0) \in \Gamma(p_0, p_1)]$ among all couplings that share the same marginals. An ideal optimal transport trajectory is a map induced by an optimal coupling.
47
+
48
+ Geometric Optimal Transport. In our task, we consider a pair of geometric probability distributions<sup>2</sup> over $\mathbb{R}^{N\times (3 + d)}$ with densities $p(\mathbf{g}_0)$ and $p(\mathbf{g}_1)$ (or denoted as $p_0$ and $p_1$ ). Geometric generative modeling considers the task of fitting a mapping $f$ from $\mathbb{R}^{N\times (3 + d)}$ to $\mathbb{R}^{N\times (3 + d)}$ that transforms $\mathbf{g}_0$ to $\mathbf{g}_1$ . More specifically, if $\mathbf{g}_0$ is distributed with density $p_0$ , then $f(\mathbf{g}_0)$ is distributed with density $p_1$ . Typically, $p_0$ is an easily sampled density, such as a Gaussian.
49
+
50
+ In our specific problem, beyond geometric distribution transport, we concentrate on fast 3D molecule generation based on the flow model with optimal transport (OT), which has been proven effective in accelerating non-geometric flow models (Liu et al., 2022; Tong et al., 2023). Therefore, our problem is defined as geometric optimal transport flow matching.
51
+
52
+ # 3 OUR METHOD: GOAT
53
+
54
+ # 3.1 FORMULATING GEOMETRIC OPTIMAL TRANSPORT PROBLEM
55
+
56
+ Our objective is to obtain an optimal transport trajectory for fast 3D molecule geometry generation based on optimal transport flow models. In this regard, we can reformulate our problem into searching for optimal coupling for geometric optimal transport. Specifically, it involves the transportation of molecules via optimal coupling, where each atom follows a straight and shortest path. In other words, each molecule is coupled with a noisy sample that incurs the minimum cost, and each atom within the target molecule is paired with its counterpart in the noise, leading to the minimum cost. To encapsulate the optimal scenario, we consider the problem of geometric optimal transport with two components: 1) optimal molecule transport (OMT) with equivariant OT for atom coordinates and invariant OT for atom features; 2) optimal distribution transport (ODT) with optimal molecule coupling.
57
+
58
+ Geometric Transport Cost. Transporting a molecule includes transforming atom coordinates and features. We can depict the molecule transport cost as below:
59
+
60
+ $$
61
+ c _ {g} \left(\mathbf {g} _ {0}, \mathbf {g} _ {1}\right) = \left\| \mathbf {x} _ {1} - \mathbf {x} _ {0} \right\| _ {2} + \left\| \mathbf {h} _ {1} - \mathbf {h} _ {0} \right\| _ {2}, \tag {1}
62
+ $$
63
+
64
+ where $\mathbf{g}_0\sim p_0$ and $\mathbf{g}_1\sim p_1$ . In addition, given a coupling $\Gamma (p_0,p_1)$ , we measure the similarity between the two distributions $p_0$ and $p_1$ based on the distribution transport cost as follows:
65
+
66
+ $$
67
+ C _ {g} = \mathbb {E} \left[ c _ {g} \left(\mathbf {g} _ {0}, \mathbf {g} _ {1}\right) \right], \left(\mathbf {g} _ {0}, \mathbf {g} _ {1}\right) \in \Gamma \left(p _ {0}, p _ {1}\right). \tag {2}
68
+ $$
69
+
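+ For illustration, a Monte Carlo estimate of Eqs. (1) and (2) from paired samples of a coupling could look as follows (a sketch assuming that coupled molecules have the same number of atoms):
+
+ ```python
+ import numpy as np
+
+ def molecule_cost(x0, h0, x1, h1):
+     """Molecule transport cost of Eq. (1): ||x1 - x0||_2 + ||h1 - h0||_2."""
+     return np.linalg.norm(x1 - x0) + np.linalg.norm(h1 - h0)
+
+ def distribution_cost(pairs):
+     """Monte Carlo estimate of Eq. (2): the expected molecule cost over
+     sampled couples (g0, g1) drawn from a coupling Gamma(p0, p1)."""
+     return float(np.mean([molecule_cost(x0, h0, x1, h1)
+                           for (x0, h0), (x1, h1) in pairs]))
+
+ # Toy usage with a single coupled pair of 5-atom molecules (3D coords, 6-dim features).
+ g0 = (np.random.randn(5, 3), np.random.randn(5, 6))
+ g1 = (np.random.randn(5, 3), np.random.randn(5, 6))
+ print(distribution_cost([(g0, g1)]))
+ ```
+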
70
+ Geometric Optimal Transport Problem. Building upon the established transport cost, we can formulate the geometric optimal transport problem for fast and effective 3D molecule generation. In particular, a molecule remains invariant under any rotation, translation, and permutation, while the transport cost is neither invariant nor equivariant to these operations. Therefore, there exists a minimum molecule transport cost with 1) optimal rotation and translation transformations such that the molecules from the data and the noise are nearest to each other at the atomic coordinate level and 2) an optimal permutation transformation such that the atomic features of the two are also nearest.
71
+
72
+ We supplement the detailed analysis of equivariance and invariance in Appendix A and present geometric optimal transport as follows:
73
+
74
+ $$
75
+ \min _ {\Gamma} \mathbb {E} [ \hat {c} _ {g} (\mathbf {g} _ {0}, \mathbf {g} _ {1}) ],
76
+ $$
77
+
78
+ s.t. $(\mathbf{g}_0,\mathbf{g}_1)\in \Gamma (p_0,p_1),$
79
+
80
+ $$
81
+ \begin{array}{l} \hat{c}_g(\mathbf{g}_0, \mathbf{g}_1) = \lambda \min_{\mathbf{R}, \mathbf{t}, \pi} \left\| \pi\left(\mathbf{R}\mathbf{x}_1^1 + \mathbf{t}, \mathbf{R}\mathbf{x}_1^2 + \mathbf{t}, \dots, \mathbf{R}\mathbf{x}_1^N + \mathbf{t}\right) - \left(\mathbf{x}_0^1, \mathbf{x}_0^2, \dots, \mathbf{x}_0^N\right) \right\|_2 \tag{3} \\ \quad + (1 - \lambda) \min_{\pi} \left\| \pi\left(\mathbf{h}_1^1, \mathbf{h}_1^2, \dots, \mathbf{h}_1^N\right) - \left(\mathbf{h}_0^1, \mathbf{h}_0^2, \dots, \mathbf{h}_0^N\right) \right\|_2, \quad \forall \pi, \mathbf{R}, \text{ and } \mathbf{t}, \end{array}
82
+ $$
83
+
84
+ where $\hat{c}_g$ denotes the optimal molecule transport cost, $\pi$ represents a permutation of $N$ elements, and $\lambda$ is a trade-off coefficient balancing the transport costs of atom coordinates and atom features when searching for the optimal permutation $(\hat{\pi})$ . Additionally, $\mathbf{R}$ and $\mathbf{t}$ denote a rotation matrix and a translation, respectively. The defined geometric optimal transport problem is a bilevel optimization problem: minimizing the molecule transport cost is nested inside the optimization of the distribution transport cost.
85
+
86
+ The Challenges of Solving the Geometric Optimal Transport Problem. First, optimal molecule transport involves searching for a unified optimal permutation for atom coordinates and features with minimum transport cost. The paths for transporting continuous coordinates and categorical features are incompatible and require sophisticated, hybrid modeling of multi-modal variables (Song et al., 2023a), leading to a sub-optimal solution. Second, a molecular distribution comprises molecules with diverse numbers of atoms, introducing difficulties in quantifying the transport cost for searching optimal coupling. As a result, the minimization of geometric transport $C_g$ within molecular distributions poses a more significant challenge compared to other domains such as computer vision (Tong et al., 2023) or many-body systems (Klein et al., 2023). Moreover, the proposed geometric optimal transport problem, which involves a nested optimization structure, presents a significant computational challenge for optimization.
87
+
88
+ # 3.2 SOLVING GEOMETRIC OPTIMAL TRANSPORT PROBLEM
89
+
90
+ In this section, we address the above challenges for fast and effective 3D molecule generation from two aspects: optimal molecule transport and optimal distribution transport.
91
+
92
+ # 3.2.1 SOLVING OPTIMAL MOLECULE TRANSPORT
93
+
94
+ As illustrated in Eq. (3), our objective for optimal molecule transport is to find $\hat{\mathbf{R}}$ , $\hat{\mathbf{t}}$ , and $\hat{\pi}$ that minimize the transport cost denoted as $c_{g}$ for two given molecule geometries represented as $\mathbf{g}_0$ and $\mathbf{g}_1$ :
95
+
96
+ $$
97
+ \hat{\mathbf{R}}, \hat{\mathbf{t}}, \hat{\pi} = \arg\min_{\mathbf{R}, \mathbf{t}, \pi} c_g(\mathbf{g}_0, \mathbf{g}_1), \quad \forall \pi, \mathbf{R}, \text{ and } \mathbf{t}. \tag{4}
98
+ $$
99
+
100
+ As per Eq. (3), the optimal molecule transport problem entails the consideration of both atom coordinates and features for comprehensive representations of 3D molecules. Though coordinates and features represent different modalities, they need to be considered in tandem. Previous research (Song et al., 2023a; Klein et al., 2023) either solely focused on equivariant optimal transport for coordinates or transported atom features via a distinct yet non-optimal path. In contrast, we propose to unify the transport of atom coordinates and features. If the different modalities can be unified within an equivariant and smooth representation space, optimal transport from a base distribution to the data distribution becomes possible, enabling fast and effective molecule generation.
101
+
102
+ Specifically, we map the atom coordinates and features from the data space into a latent space with an equivariant autoencoder (Satorras et al., 2021), which enables us to compute a unified optimal permutation $(\hat{\pi})$ .
103
+
104
+ After the mapping, we ascertain the optimal rotation $(\hat{\mathbf{R}})$ between the two atomic coordinate sets to be transported, utilizing the Kabsch algorithm (Kabsch, 1976) as done in (Song et al., 2023a; Klein et al., 2023). The computed rotation matrix ensures that the coordinates of the target molecules in the latent space are in closest proximity to the noise molecules. In addition, to achieve optimal translation $(\hat{\mathbf{t}})$ , we establish the data distribution $p_1$ and base distribution $p_0$ in the zero CoM space (Hoogeboom et al., 2022; Xu et al., 2023). Equipped with the equivariant autoencoder, the latent representation also resides in the zero CoM space, thus ensuring optimal translation in the latent space.
105
+
106
+ Implementation. Initially, the distributions $p_0$ and $p_1$ are aligned to the zero CoM space by subtracting the center of mass, which determines the translation $\hat{\mathbf{t}}$ .
107
+
108
+ Subsequently, an equivariant autoencoder is designed to project $\mathbf{g}_1\sim p_1$ into the latent space. Here, the encoder $\mathcal{E}_{\phi}$ transforms $\mathbf{g}_1$ into the latent domain $\mathbf{z}_1 = \mathcal{E}_{\phi}(\mathbf{g}_1)$ , where $\mathbf{z}_1 = \langle \mathbf{z}_{\mathbf{x},1}\in \mathbb{R}^{N\times 3},\mathbf{z}_{\mathbf{h},1}\in \mathbb{R}^{N\times k}\rangle$ and $k$ represents the latent dimensionality of the atomic features. The decoder $\mathcal{D}_{\epsilon}$ then learns to decode $\mathbf{z}_1$ back to the molecular domain, formulated as $\hat{\mathbf{g}}_1 = \mathcal{D}_{\epsilon}(\mathbf{z}_1)$ . The equivariant autoencoder can be trained by minimizing the reconstruction objective $d(\mathcal{D}(\mathcal{E}(\mathbf{g}_1)),\mathbf{g}_1)$ . With the encoded $\mathbf{z}_1$ and the sampled noise $\mathbf{z}_0 = \langle \mathbf{z}_{\mathbf{x},0}\in \mathbb{R}^{N\times 3},\mathbf{z}_{\mathbf{h},0}\in \mathbb{R}^{N\times k}\rangle$ from $p_0$ , we then formulate the atom-level cost matrix as $M_{c_g}[i,j] = \| \mathbf{z}_1^i -\mathbf{z}_0^j\| ^2 = \| \mathbf{z}_{\mathbf{x},1}^i -\mathbf{z}_{\mathbf{x},0}^j\| ^2 +\| \mathbf{z}_{\mathbf{h},1}^i -\mathbf{z}_{\mathbf{h},0}^j\| ^2$ , i.e., the squared Euclidean distance between the $i$ -th atom of $\mathbf{z}_1$ and the $j$ -th atom of $\mathbf{z}_0$ , including the latent coordinates $\mathbf{z}_{\mathbf{x}}$ and the latent features $\mathbf{z_h}$ . With $M_{c_g}$ , the optimal permutation $\hat{\pi}$ is obtained with the Hungarian algorithm (Kuhn, 1955). The coordinates of the noise molecule $\mathbf{z}_{\mathbf{x},0}$ and the latent coordinates of the target molecule $\mathbf{z}_{\mathbf{x},1}$ are then aligned through the rotation $\hat{\mathbf{R}}$ solved by the Kabsch algorithm (Kabsch, 1976). In summary, we perform the above-calculated translation, encoding, rotation, and permutation on the target molecule $\mathbf{g}_1$ to obtain $\hat{\mathbf{z}}_1$ , which attains the optimal molecule transport cost with the sampled noise $\mathbf{z}_0$ . The complete process is denoted as $\hat{\mathbf{z}}_1 = \hat{\pi}(\hat{\mathbf{R}}\,\mathcal{E}_{\phi}(\mathbf{g}_1 + \hat{\mathbf{t}}))$ .
109
+
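+ The alignment step described above can be sketched as follows (illustrative only: the latent coordinates and features are assumed to be given as zero-CoM NumPy arrays, and the encoder $\mathcal{E}_{\phi}$ is outside the scope of the snippet); the permutation is obtained with the Hungarian algorithm and the rotation with the Kabsch algorithm.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def kabsch_rotation(p, q):
+     """Rotation R minimizing ||R p_i - q_i|| for zero-CoM point sets p, q of shape (N, 3)."""
+     u, _, vt = np.linalg.svd(p.T @ q)
+     d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
+     return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
+
+ def align_latent_pair(zx1, zh1, zx0, zh0):
+     """Permute and rotate the target latent (zx1, zh1) to best match the noise (zx0, zh0)."""
+     # Atom-level cost matrix: squared distance over latent coordinates plus latent features.
+     cost = (np.linalg.norm(zx1[:, None, :] - zx0[None, :, :], axis=-1) ** 2
+             + np.linalg.norm(zh1[:, None, :] - zh0[None, :, :], axis=-1) ** 2)
+     _, cols = linear_sum_assignment(cost.T)         # noise atom j is matched to target atom cols[j]
+     zx1_p, zh1_p = zx1[cols], zh1[cols]             # apply the permutation pi_hat
+     r_hat = kabsch_rotation(zx1_p, zx0)             # optimal rotation R_hat
+     return zx1_p @ r_hat.T, zh1_p
+ ```
+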
110
+ # 3.2.2 SEARCHING OPTIMAL COUPLING FOR OPTIMAL DISTRIBUTION TRANSPORT
111
+
112
+ By solving Eq. (4), we can obtain $\hat{\mathbf{R}},\hat{\mathbf{t}}$ , and $\hat{\pi}$ yielding an optimal molecule transport trajectory - a straight one - given two data points from the base distribution and target distribution, respectively (see the gray trajectory in Figure 2). Nevertheless, ensuring a straight trajectory does not necessarily yield optimal transport for generative modeling because a straight map between two individual data points does not imply the shortest map between the two distributions. Figure 2 depicts two possible trajectories for
113
+
114
+ generative modeling. The gray one shows a straight but not the shortest map, while the red one represents the shortest map, indicating an optimal trajectory. As discussed in Sec. 3.1, an optimal trajectory can only be achieved with optimal coupling, leading to the shortest path for mapping the base distribution to the target distribution. To approximate optimal coupling and further boost the sampling speed, we introduce the second part of our framework, optimal flow estimation and purification, which is dedicated to solving optimal distribution transport.
115
+
116
+ The pathway to optimal distribution transport is to identify the optimal coupling $\hat{\Gamma} (p_0,p_1)$ that satisfies the condition formulated as:
117
+
118
+ $$
119
+ \hat{\Gamma} = \arg\min_{\Gamma} \mathbb{E}[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1)], \quad \text{s.t. } (\mathbf{z}_0, \mathbf{z}_1) \in \Gamma(p_0, p_1), \tag{5}
120
+ $$
121
+
122
+ where $\Gamma$ is an arbitrary coupling plan between $p_0$ and $p_1$ and $\hat{c}_g$ is the optimal molecule transport cost defined in Eq. (3).
123
+
124
+ However, measuring the distribution transport cost for searching for the optimal coupling is challenging due to the varying sizes of molecules. Inspired by (Liu et al., 2022), we circumvent the need to quantify transport costs by estimating the optimal coupling $\hat{\Gamma} (p_0,p_1^{\prime})$ from a flow model trained with the initial coupling $\Gamma (p_0,p_1)$ .
125
+
126
+ The estimated optimal coupling can minimize the transport cost but may introduce generation error for the following reasons. The first type of error arises from estimating the flow path between $p_0$ and $p_1$ via a
127
+
128
+ neural network $v_{\theta}$ , implying that $p_1'$ , characterized by $v_{\theta}$ , does not perfectly match the data distribution $p_1$ . The second type of error stems from the discreteness of molecular data and the continuity of the distribution. In essence, an intermediate value between two similar and valid molecules, which are closely distributed, may not be biochemically valid. To compensate for such discrepancies, we implement a purification process on the generated coupling to ensure effective generation. We present a detailed implementation below.
129
+
130
+ Implementation. First, based on Sec. 3.2.1, we can obtain a set of noise and target molecule pairs with optimal molecule transport cost. We leverage the corresponding transport path as the conditional probability path for training the flow with the loss formulated as:
131
+
132
+ $$
133
+ \mathcal {L} _ {F 1} (\theta) = \mathbb {E} _ {t, p _ {0}, p _ {1}} \| v _ {\theta} \left(\hat {\mathbf {z}} _ {t}, t\right) - \left(\hat {\mathbf {z}} _ {1} - \mathbf {z} _ {0}\right) \| ^ {2}, \tag {6}
134
+ $$
135
+
136
+ where $\hat{\mathbf{z}}_t = t\hat{\mathbf{z}}_1 + (1 - t)\mathbf{z}_0, t \in [0,1]$ . Compared with using a conditional optimal transport path and a variance-preserving path in a hybrid fashion (Song et al., 2023a), our method employs a unified linear interpolation along $\hat{\mathbf{z}}_1 - \mathbf{z}_0$ as the flow probability path. Such a straight trajectory adheres to the simple ODE $d\mathbf{z}_t = (\hat{\mathbf{z}}_1 - \mathbf{z}_0)dt$ , thereby providing a straighter flow path for fast sampling. The optimum of $\mathcal{L}_{F1}$ is achieved when $v_{\hat{\theta}}(\mathbf{z}_t,t) = \mathbb{E}[\mathbf{z}_1 - \mathbf{z}_0|\mathbf{z}_t]$ .
137
+
138
+ The proposed framework then samples data pairs $(\mathbf{z}_0, \mathbf{z}_1')$ via the trained flow model $\hat{\theta}_1$ as the estimated optimal coupling. Specifically, $\mathbf{z}_1'$ can be sampled following $d\mathbf{z}_t = v_{\hat{\theta}_1}(\mathbf{z}_t, t)dt$ starting from $\mathbf{z}_0 \sim p_0$ , and this sampling process is denoted as $\mathrm{ODE}_{v_{\hat{\theta}}}$ . Each pair $(\mathbf{z}_0, \mathbf{z}_1')$ is kept fixed, and a batch of such pairs forms the estimated optimal coupling, represented as $\hat{\Gamma}(p_0, p_1') = \{(\mathbf{z}_0, \mathbf{z}_1')\}$ , where $\mathbf{z}_1' = \mathrm{ODE}_{v_{\hat{\theta}}}(\mathbf{z}_0)$ . Finally, we estimate and purify the flow. Specifically, $\mathbf{z}_1'$ is decoded by $\mathcal{D}_{\epsilon}$ into $\mathbf{g}_1'$ and evaluated in terms of stability and validity by RdKit (Landrum et al., 2016). This provides a criterion for filtering out invalid molecules to purify the coupling. Subsequently, the optimal flow is trained using the loss in Eq. (6) with the estimated and purified coupling.
139
+
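+ A compact sketch of this estimate-and-purify loop is given below; `v_theta` , `decode` , and `is_valid` are placeholders for the trained flow, the decoder $\mathcal{D}_{\epsilon}$ , and an RdKit-based validity check, and a fixed-step Euler integrator stands in for the adaptive solver used in the paper.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def ode_sample(v_theta, z0, n_steps=50):
+     """Integrate dz/dt = v_theta(z, t) from t = 0 to t = 1 with Euler steps."""
+     z, dt = z0.clone(), 1.0 / n_steps
+     for k in range(n_steps):
+         t = torch.full((z.shape[0], 1), k * dt)
+         z = z + v_theta(z, t) * dt
+     return z
+
+ def estimate_and_purify_coupling(v_theta, decode, is_valid, z0_batch):
+     """Build the estimated coupling {(z0, z1')} and keep only the pairs whose
+     decoded molecule passes the validity check (purification)."""
+     coupling = []
+     for z0 in z0_batch:
+         z1_prime = ode_sample(v_theta, z0.unsqueeze(0)).squeeze(0)
+         if is_valid(decode(z1_prime)):              # e.g., RdKit sanitization
+             coupling.append((z0, z1_prime))
+     return coupling
+ ```
+
+ The purified pairs are then reused as fixed couplings when re-minimizing the straight-path loss in Eq. (6).
+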
140
+ **Provably Reduced Geometric Transport Cost.** The estimated optimal coupling $\hat{\Gamma}$ can boost generation only when geometric transport cost is reduced. We theoretically show that our approach can indeed reduce geometric transport costs as follows:
141
+
142
+ ![](images/45283c5434af6d89bd3454f2af45a58f6ad9e089c5c2b9804aced3b2b9e3ddf9.jpg)
143
+ Figure 2: An Illustration of the Difference Between Straight Coupling and Optimal Coupling. GOAT approximates the optimal coupling for fast generation.
144
+
145
+ Theorem 3.1. The coupling $\hat{\Gamma}$ incurs no larger geometric transport cost than the random coupling $\Gamma(p_0, p_1)$ in that $\mathbf{E}[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1')] \leq \mathbf{E}[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1)]$ , where $(\mathbf{z}_0, \mathbf{z}_1') \in \hat{\Gamma}(p_0, p_1')$ , $(\mathbf{z}_0, \mathbf{z}_1) \in \Gamma(p_0, p_1)$ , and $\hat{c}_g$ is the optimal molecule transport cost.
146
+
147
+ With this theorem, the proposed GOAT is guaranteed a Pareto descent on the geometric transport cost, leading to faster generation. A comprehensive proof is given in Appendix B, and the pseudocode for training and sampling is presented in Appendix C.
148
+
149
+ # 4 EXPERIMENTAL STUDIES
150
+
151
+ Datasets. We evaluate on benchmark datasets for 3D molecule generation, including QM9 (Ramakrishnan et al., 2014) and GEOM-DRUG (Axelrod & Gomez-Bombarelli, 2022). QM9 is a standard dataset that contains 130k 3D molecules with up to 29 atoms. GEOM-DRUG encompasses around 450K molecules, each with an average of 44 atoms and a maximum of 181 atoms. More dataset details are presented in Appendix E.
152
+
153
+ Baselines. We compare GOAT with several competitive baseline models. G-Schnet (Gebauer et al., 2019) and Equivariant Normalizing Flows (ENF) (Chen et al., 2018) are equivariant generative models utilizing the autoregressive models and continuous normalizing flow, respectively. Equivariant Graph Diffusion Model (EDM) and its variant GDM-Aug (Hoogeboom et al., 2022), EDM-Bridge (Wu et al., 2022), GeoLDM (Xu et al., 2023) are diffusion-based approaches. GeoBFN (Song et al., 2023b) leverages Bayesian flow networks for distributional parameter approximation. EquiFM (Song et al., 2023a) is the first flow-matching method for 3D molecule generation.
154
+
155
+ # 4.1 EVALUATION METRICS
156
+
157
+ Evaluating Generation Quality. Without loss of generality, we use validity, uniqueness, and novelty to evaluate the quality of generated molecules (Zhang et al., 2023). Existing experiments calculate validity, uniqueness, and novelty in a nested fashion; for example, novelty measures novel molecules only among the unique and valid ones. However, such a calculation cannot reflect the ultimate quality over all samples. We therefore further propose a new metric that captures the significance of generative models (Walters & Murcko, 2020).
158
+
159
+ Below, we provide the detailed definitions of these metrics. 1) Validity. An essential criterion for molecule generation is that the generated molecules must be chemically valid, which implies that the molecules should obey chemical bonds and valency constraints. We use RdKit (Landrum et al., 2016) to check if a molecule obeys the chemical valency rules. Validity calculates the percentage of valid molecules among all the generated molecules; 2) Uniqueness. An important indicator of a molecule generative model is whether it can continuously generate different samples, which is quantified by uniqueness. We evaluate uniqueness by measuring the fraction of unique molecules among all the generated valid ones; 3) Novelty. An ideal generative model for de novo molecule design should be able to generate novel molecular samples that do not exist in the training set. Therefore, we report novelty, which quantifies the percentage of novel samples among all the valid and unique molecules; 4) Significance. To comprehensively evaluate molecule generative models, we present a new metric, significance, which quantifies the percentage of valid, unique, and novel molecules among all generated samples.
160
+
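+ For illustration, these nested metrics and the proposed significance can be computed from canonical SMILES strings roughly as follows (a sketch; the exact sanitization and canonicalization used in the paper may differ, and `training_smiles` is assumed to be pre-canonicalized):
+
+ ```python
+ from rdkit import Chem
+
+ def evaluate_quality(generated_smiles, training_smiles):
+     """Validity, uniqueness, and novelty are nested; significance is measured over all samples."""
+     n_total = len(generated_smiles)
+     # Keep only parseable molecules and canonicalize them.
+     valid = [Chem.MolToSmiles(m) for s in generated_smiles
+              if (m := Chem.MolFromSmiles(s)) is not None]
+     unique = set(valid)
+     novel = unique - set(training_smiles)
+     return {
+         "validity": len(valid) / n_total,
+         "uniqueness": len(unique) / max(len(valid), 1),
+         "novelty": len(novel) / max(len(unique), 1),
+         "significance": len(novel) / n_total,       # valid, unique, and novel among all samples
+     }
+ ```
+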
161
+ Evaluating Generation Efficiency. 1) We report sampling steps to measure the generation speed. The time cost of each sampling step in most baselines, including EDM, EDM-Bridge, GeoBFN, GeoLDM, and EquiFM, is identical because they all apply EGNN (Satorras et al., 2021) with the same layers and parameters. Fewer steps indicate higher generation efficiency. For EquiFM and the proposed GOAT, we apply the same adaptive-stepsize ODE solver Dopri5 (Dormand & Prince, 1980) for a fair comparison. 2) Considering generation quality and efficiency simultaneously, we propose to report the mean time per significant (valid, unique, and novel) molecule, denoted as S-Time. The metric is calculated as the total time consumed in generating all molecules divided by the number of significant molecules, and it comprehensively reflects both generation quality and efficiency. 3) We measure the generation efficiency by comparing the geometric transport cost, which is calculated by Eq. (2).
162
+
163
+ # 4.2 RESULTS AND ANALYSIS
164
+
165
+ Table 1: Comparisons of generation quality regarding Atom Stability, Validity, Uniqueness, Novelty, and Significance, and of generation efficiency regarding Steps and S-Time. The best results are highlighted in bold.
166
+
167
+ <table><tr><td rowspan="2">QM9 Metrics</td><td colspan="5">Quality (↑)</td><td colspan="2">Efficiency (↓)</td></tr><tr><td>Atom Sta</td><td>Valid</td><td>Uniqueness</td><td>Novelty</td><td>Significance</td><td>Steps</td><td>S-Time</td></tr><tr><td>Data</td><td>99.0</td><td>97.7</td><td>100.0</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ENF</td><td>85.0</td><td>40.2</td><td>98.0</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>G-Schnet</td><td>95.7</td><td>85.5</td><td>93.9</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GDM-aug</td><td>97.6</td><td>90.4</td><td>99.0</td><td>74.6</td><td>66.8</td><td>1000</td><td>1.50</td></tr><tr><td>EDM</td><td>98.7</td><td>91.9</td><td>98.7</td><td>65.7</td><td>59.6</td><td>1000</td><td>1.68</td></tr><tr><td>EDM-Bridge</td><td>98.8</td><td>92.0</td><td>98.6</td><td>-</td><td>-</td><td>1000</td><td>-</td></tr><tr><td>GeoLDM</td><td>98.9</td><td>93.8</td><td>98.8</td><td>58.1</td><td>53.9</td><td>1000</td><td>1.86</td></tr><tr><td>GeoBFN</td><td>98.6</td><td>93.0</td><td>98.4</td><td>70.3</td><td>64.4</td><td>100</td><td>0.16</td></tr><tr><td>EquiFM</td><td>98.9</td><td>94.7</td><td>98.7</td><td>57.4</td><td>53.7</td><td>200</td><td>0.37</td></tr><tr><td>GOAT (Ours)</td><td>99.2</td><td>92.9</td><td>99.0</td><td>78.6</td><td>72.3</td><td>90</td><td>0.12</td></tr></table>
168
+
169
+ Table 2: Comparisons of generation quality (Atom Stability, Validity) and generation efficiency (Steps, S-Time) on GEOM-DRUG. The best results are highlighted in bold.
170
+
171
+ <table><tr><td rowspan="2">GEOM-DRUG Metrics</td><td colspan="2">Quality (↑)</td><td colspan="2">Efficiency (↓)</td></tr><tr><td>Atom Sta</td><td>Valid</td><td>Steps</td><td>S-Time</td></tr><tr><td>Data</td><td>86.5</td><td>99.9</td><td>-</td><td>-</td></tr><tr><td>GDM-aug</td><td>77.7</td><td>91.8</td><td>1000</td><td>-</td></tr><tr><td>EDM</td><td>81.3</td><td>92.6</td><td>1000</td><td>14.88</td></tr><tr><td>EDM-Bridge</td><td>82.4</td><td>92.8</td><td>1000</td><td>-</td></tr><tr><td>GeoLDM</td><td>84.4</td><td>99.3</td><td>1000</td><td>12.84</td></tr><tr><td>GeoBFN</td><td>78.9</td><td>93.1</td><td>100</td><td>1.27</td></tr><tr><td>EquiFM</td><td>84.1</td><td>98.9</td><td>200</td><td>2.02</td></tr><tr><td>GOAT (Ours)</td><td>84.8</td><td>96.2</td><td>90</td><td>0.94</td></tr></table>
172
+
173
+ ![](images/d1d4d61cd49ec19b04b521e937f0efe16f6ee5dd3d72c0ae31a0e540e6361bf0.jpg)
174
+ Figure 3: The blue histogram shows the comparison of distribution transport costs. The red line chart depicts the average transport cost per atom (best viewed in color).
175
+
176
+ In this study, we generate 10K molecular samples for each method and compute the aforementioned metrics for comparisons. The evaluation results are presented in Tables 1 and 2 with Figure 3.
177
+
178
+ Performance Comparisons with Diffusion-Based Methods. We observe that all diffusion-based generation methods indeed need 1000 sampling steps to achieve comparable generation quality. Surprisingly, with the fewest sampling steps, GOAT achieves the best atom stability, uniqueness, novelty, and significance on QM9. Specifically, it improves novelty by up to $35.2\%$ and significance by up to $34.1\%$ . Among the diffusion models, GeoLDM achieves the best validity. However, it exhibits relatively poor novelty and significance, $58.1\%$ and $53.9\%$ on QM9, respectively. These results indicate that latent diffusion models can model complex geometric 3D molecules well but introduce a serious overfitting problem, generating many molecules that are identical to the training samples. Though GDM-Aug achieves the second-best novelty among all methods, it needs 1000 sampling steps for 3D molecule generation. As for GEOM-DRUG, we directly compare validity as the ultimate significance since all compared methods achieve almost $100\%$ uniqueness (Xu et al., 2023). Table 2 shows that the proposed algorithm also achieves competitive performance while maintaining a leading edge in generation speed on such a large-scale dataset. Specifically, GOAT spends only 0.94 seconds per valid, unique, and novel molecule on average and reaches $96.2\%$ validity, while GeoLDM takes more than $10\times$ longer to reach $99.3\%$ . We believe this performance is competitive and more efficient.
179
+
180
+ Performance Comparisons with Flow-Matching-Based Methods. EquiFM and GOAT are both based on flow matching, using an ODE solver for generation. We can see that flow-matching-based methods obtain faster generation speeds than diffusion models. In particular, GOAT only needs 90 steps, while EquiFM requires 200 steps for sampling. EquiFM solely considers optimal transport for atom coordinates; therefore, its generation speed is still inferior to ours. Because the proposed GOAT solves optimal molecule transport and optimal distribution transport together, the number
181
+
182
+ Table 3: MAE for molecular property prediction. A lower number indicates a better controllable generation result. The best results are highlighted in bold.
183
+
184
+ <table><tr><td>Property</td><td>Steps</td><td>α</td><td>Δε</td><td>εHOMO</td><td>εLUMO</td><td>μ</td><td>Cv</td></tr><tr><td>Units</td><td></td><td>Bohr³</td><td>meV</td><td>meV</td><td>meV</td><td>D</td><td>cal/(mol·K)</td></tr><tr><td>QM9</td><td>-</td><td>0.100</td><td>64</td><td>39</td><td>36</td><td>0.043</td><td>0.040</td></tr><tr><td>Random</td><td>-</td><td>9.010</td><td>1470</td><td>645</td><td>1457</td><td>1.616</td><td>6.857</td></tr><tr><td>N atoms</td><td>-</td><td>3.860</td><td>866</td><td>426</td><td>813</td><td>1.053</td><td>1.971</td></tr><tr><td>EDM</td><td>1000</td><td>2.760</td><td>655</td><td>356</td><td>583</td><td>1.111</td><td>1.101</td></tr><tr><td>GeoLDM</td><td>1000</td><td>2.370</td><td>587</td><td>340</td><td>522</td><td>1.108</td><td>1.025</td></tr><tr><td>EquiFM</td><td>220</td><td>2.410</td><td>591</td><td>337</td><td>530</td><td>1.106</td><td>1.033</td></tr><tr><td>GOAT (Ours)</td><td>200</td><td>1.725</td><td>585</td><td>330</td><td>521</td><td>0.906</td><td>0.881</td></tr><tr><td>GeoBFN</td><td>100</td><td>3.875</td><td>768</td><td>426</td><td>855</td><td>1.331</td><td>1.401</td></tr><tr><td>EquiFM</td><td>100</td><td>3.006</td><td>830</td><td>392</td><td>735</td><td>1.064</td><td>1.177</td></tr><tr><td>GOAT (Ours)</td><td>100</td><td>2.740</td><td>605</td><td>350</td><td>534</td><td>1.010</td><td>0.883</td></tr></table>
185
+
186
+ ![](images/3ad78498aff3815c7b8a8a9bcae682ecaf7343beb2ae037b59f985c21e85f2c0.jpg)
187
+ Figure 4: Quality vs. Speed $(\alpha)$ . GOAT shows the optimal trade-off between generation quality and speed.
188
+
189
+ of sampling steps is further reduced by $2 \times$ compared to EquiFM with the same ODE solver. This verifies our hypothesis that a joint optimal transport path can further boost the generation efficiency.
190
+
191
+ Though EquiFM performs well in terms of molecule validity, it achieves unsatisfactory novelty and significance on QM9 compared with all other methods. More specifically, nearly half of its generated samples are identical to training samples, which is unacceptable in the context of de novo molecule design. In contrast, GOAT obtains $78.6\%$ novelty, a $37\%$ improvement, and $72.3\%$ significance, a $34.6\%$ improvement, compared to EquiFM. On GEOM-DRUG, the proposed method achieves comparable performance to EquiFM while taking only half the sampling steps. GeoBFN (Song et al., 2023b), which is neither diffusion-based nor flow-matching-based, has sampling efficiency comparable to ours. We find that its generation quality on GEOM-DRUG is around $3\%$ lower than GOAT in terms of validity, and it shows around an $8\%$ decrease in novelty at a similar sampling speed.
192
+
193
+ Geometric Transport Cost Comparisons. As EquiFM and GOAT are both flow-matching-based transport methods, we compare their transport costs and present the visualized results in Figure 3. We present the distribution transport cost $(p_0 \to p_1)$ in blue bars and the molecule transport cost averaged over the number of atoms $(\mathbf{g}_0 \to \mathbf{g}_1)$ in red lines. Compared to EquiFM, which transports with a hybrid method, the proposed method reduces the geometric transport cost with 1) unified transport (Unified), 2) optimal molecule transport (1-OMT), and 3) optimal distribution transport (1-ODT), thereby achieving a significant reduction in geometric transport cost of nearly $89.65\%$ , leading to faster generation. We further minimize molecule and distribution transport costs (2-OMT and 2-ODT) and observe that the transport cost is reduced only marginally, indicating that the proposed method is nearly optimal. The above analysis reveals that the proposed method indeed reduces the geometric transport cost by unifying transport, minimizing molecule transport cost, and estimating optimal couplings. The most intuitive manifestation of the reduction in transport cost is the boost in generation speed, which has been demonstrated in the previous section.
194
+
195
+ Controllable Molecule Generation. Without loss of generality, GOAT can be readily adapted to perform controllable molecule generation with a desired property $s$ by modeling the neural network as $v_{\theta}(\mathbf{z},t|s)$ . We evaluated the performance of GOAT on generating molecules with properties including $\alpha$ , $\Delta \varepsilon$ , $\varepsilon_{\mathrm{HOMO}}$ , $\varepsilon_{\mathrm{LUMO}}$ , $\mu$ , and $C_v$ . The quality of the generated molecules concerning their desired property was assessed using the Mean Absolute Error (MAE) between the conditioned property and the predicted property. This measure helps to determine how closely the generated molecules align with the desired property.
196
+
197
+ We use the property classifier network $\varphi$ from (Garcia Satorras et al., 2021) and split the QM9 training partition into two halves with $50\mathrm{K}$ samples each. The classifier $\varphi$ is trained on the first half, while the conditional GOAT is trained on the second half. Then, $\varphi$ is applied to evaluate samples conditionally generated by GOAT. We report the numerical results in Table 3. Random means we simply shuffle the property labels in the dataset at random and then evaluate $\varphi$ on it. $N_{\mathrm{atoms}}$ predicts the molecular properties using only the number of atoms in the molecule.
198
+
199
+ Compared to existing methods, our proposed approach demonstrates superior performance in controllable generation tasks. Specifically, it achieves the best results across all six tasks when evaluated with 100 sampling steps, outperforming other variable sampling step methods (EquiFM and GeoBFN). Furthermore, even with increased sampling steps, our method maintains outstanding per
200
+
201
+ Table 4: Ablation Studies. OMT represents optimal molecule transport, and ODT stands for optimal distribution transport. The best results are highlighted in bold, and the second-best results are highlighted with underlines.
202
+
203
+ <table><tr><td colspan="2">Metrics (QM9)</td><td colspan="5">Quality (↑)</td><td colspan="3">Efficiency (↓)</td></tr><tr><td>Components</td><td>λ</td><td>Atom Sta</td><td>Valid</td><td>Uniqueness</td><td>Novelty</td><td>Significance</td><td>Steps</td><td>Time</td><td>Cost</td></tr><tr><td>w/o EAE</td><td>-</td><td>98.8</td><td>92.8</td><td>92.2</td><td>58.4</td><td>53.9</td><td>280</td><td>0.63</td><td>30.25</td></tr><tr><td>w/o ODT</td><td>1</td><td>97.7</td><td>89.5</td><td>98.7</td><td>70.1</td><td>61.9</td><td>120</td><td>0.17</td><td>5.01</td></tr><tr><td>w/o ODT</td><td>0.75</td><td>97.9</td><td>89.7</td><td>98.9</td><td>70.2</td><td>62.3</td><td>110</td><td>0.16</td><td>4.63</td></tr><tr><td>w/o ODT</td><td>0.5</td><td>98.1</td><td>89.9</td><td>98.8</td><td>70.4</td><td>62.5</td><td>100</td><td>0.14</td><td>4.13</td></tr><tr><td>w/o ODT</td><td>0.25</td><td>97.8</td><td>89.6</td><td>98.7</td><td>70.1</td><td>62.0</td><td>120</td><td>0.17</td><td>4.87</td></tr><tr><td>w/o ODT</td><td>0</td><td>97.5</td><td>89.3</td><td>98.8</td><td>70.0</td><td>61.8</td><td>130</td><td>0.19</td><td>5.41</td></tr><tr><td>w/o OMT</td><td>0.5</td><td>96.5</td><td>85.0</td><td>98.9</td><td>69.1</td><td>58.1</td><td>170</td><td>0.31</td><td>5.32</td></tr><tr><td>GOAT</td><td>0.5</td><td>99.2</td><td>92.9</td><td>99.0</td><td>78.6</td><td>72.3</td><td>90.0</td><td>0.12</td><td>3.14</td></tr></table>
204
+
205
+ formance in generating molecules with properties $\alpha$ , $\mu$ , and $C_v$ , whereas other methods require more sampling steps to achieve comparable results. Notably, only GeoBFN, which requires more than double the sampling steps, shows a marginal advantage on the remaining properties. To better illustrate the advantages of the proposed method, we present a performance comparison in accuracy and efficiency, as measured by property $\alpha$ , in Figure 4. The figure demonstrates that the proposed method achieves a new trade-off between accuracy and efficiency in conditional molecule generation.
206
+
207
+ Parameter Analysis and Ablation Studies. In this section, we first analyze the effect of the parameter $\lambda$ , which determines the trade-off between transporting atom coordinates and atom features. Since the parameter only affects the optimization of molecule transport, we compare its influence without considering optimal distribution transport. Our results in Table 4 indicate that when $\lambda = 0.5$ , meaning the weights for optimizing the transport costs of the two modalities are equal, the proposed algorithm achieves the best transport plan. We hypothesize that this is because the transport cost calculated by Eq. (3) with $\lambda = 0.5$ accurately reflects the actual cost for the generative model in transporting noise to the data distribution.
208
+
209
+ Additionally, we conduct ablation studies on equivariant autoencoder (EAE), optimal molecule transport, and optimal distribution transport. Without considering optimal transport, the model trained solely with flow matching in the latent space (w/o OMT) shows a significant increase in training speed. This can be attributed to the reduced transport cost resulting from the unified space, although the performance remains suboptimal. When solving OMT without ODT (w/o ODT, $\lambda = 0.5$ ), both performance and speed improve, but they still do not reach the final results, which account for both molecule and distribution in geometric optimal transport.
210
+
211
+ Limitations. Addressing the optimal transport costs, particularly those involving rotation and permutation aspects, can be computationally intensive (Song et al., 2023a; Klein et al., 2023). However, these operations can be efficiently parallelized on CPUs to enhance the training speed. Besides, refining the flow may require additional time-consuming training, but such an operation boosts the generation speed and improves novelty without compromising quality. In summary, the above-mentioned operations will accelerate the generation of molecules once and for all after training, which is prioritized in this research. We leave improvements concerning training efficiency and other methods for boosting generation speed, such as distillation (Liu et al., 2022), for future work.
212
+
213
+ # 5 CONCLUSION
214
+
215
+ This paper introduces GOAT, a 3D molecular generation framework that tackles optimal transport for enhanced generation quality and efficiency in molecule design. We recognize that in silico molecule generation is a problem of probability distribution transport and that the key to accelerating it lies in minimizing the transport cost. To this end, we formulated the geometric optimal transport problem tailored to molecular distributions. This problem led us to consider the transport cost of atom coordinates, atom features, and complete molecules, which motivates the design of a joint transport that solves optimal molecule transport across different modalities and a framework that minimizes the distributional transport cost. Both theoretical and empirical validations confirm that GOAT reduces the geometric transport cost, resulting in faster and more effective molecule generation. Our method achieves state-of-the-art performance in generating valid, unique, and novel molecules, thereby enhancing the ultimate significance of in silico molecule generation.
216
+
217
+ # ACKNOWLEDGMENTS
218
+
219
+ This work was supported in part by the Research Grants Council of the Hong Kong (HK) SAR under Grant No. C5052-23G, Grant PolyU 15229824, Grant PolyU 15218622, Grant PolyU 15215623 and Grant PolyU 15208222; the National Natural Science Foundation of China (NSFC) under Grants U21A20512; NSFC Young Scientist Fund under Grant PolyU A0040473.
220
+
221
+ # REFERENCES
222
+
223
+ Luigi Ambrosio, Elia Brué, Daniele Semola, et al. Lectures on Optimal Transport, volume 130. Springer, 2021.
224
+ Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant Molecular Neural Networks. Advances in Neural Information Processing Systems, 32, 2019.
225
+ Simon Axelrod and Rafael Gómez-Bombarelli. GEOM, Energy-Annotated Molecular Conformations for Property Prediction and Molecular Generation. Scientific Data, 9(1):185, 2022. ISSN 2052-4463.
226
+ Enrico Celeghini, Riccardo Giachetti, Emanuele Sorace, and Marco Tarlini. The Three-Dimensional Euclidean Quantum Group E (3)q and Its R-Matrix. Journal of Mathematical Physics, 32(5): 1159-1165, 1991.
227
+ Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural Ordinary Differential Equations. Advances in Neural Information Processing Systems, 31, 2018.
228
+ Quan Dao, Hao Phung, Binh Nguyen, and Anh Tran. Flow Matching in Latent Space. arXiv preprint arXiv:2307.08698, 2023.
229
+ John R Dormand and Peter J Prince. A Family of Embedded Runge-Kutta Formulae. Journal of computational and applied mathematics, 6(1):19-26, 1980.
230
+ Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling. E(n) Equivariant Normalizing Flows. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 4181-4192. Curran Associates, Inc., 2021.
231
+ Niklas Gebauer, Michael Gastegger, and Kristof Schütt. Symmetry-Adapted Generation of 3D Point Sets for the Targeted Discovery of Molecules. Advances in Neural Information Processing Systems, 32, 2019.
232
+ Xu Han, Caihua Shan, Yifei Shen, Can Xu, Han Yang, Xiang Li, and Dongsheng Li. Training-free Multi-objective Diffusion Model for 3D Molecule Generation. In The Twelfth International Conference on Learning Representations, 2023.
233
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
234
+ Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant Diffusion for Molecule Generation in 3D. In International Conference on Machine Learning, pp. 8867-8887. PMLR, 2022.
235
+ Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction Tree Variational Autoencoder for Molecular Graph Generation. In International Conference on Machine Learning, pp. 2323-2332. PMLR, 2018.
236
+ Hojung Jung, Youngrok Park, Laura Schmid, Jaehyeong Jo, Dongkyu Lee, Bongsang Kim, SeYoung Yun, and Jinwoo Shin. Conditional Synthesis of 3D Molecules with Time Correction Sampler. Advances in Neural Information Processing Systems, 37:75914-75941, 2024.
237
+ Wolfgang Kabsch. A Solution for the Best Rotation to Relate Two Sets of Vectors. Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, 32 (5):922-923, 1976.
238
+
239
+ Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA, 2015.
240
+ Leon Klein, Andreas Kramer, and Frank Noé. Equivariant Flow Matching. Advances in Neural Information Processing Systems, 36, 2023.
241
+ Harold W Kuhn. The Hungarian Method for The Assignment Problem. Naval Research Logistics Quarterly, 2(1-2):83-97, 1955.
242
+ Greg Landrum et al. Rdkit: Open-Source Cheminformatics Software. Open-source Cheminformatics, 2016.
243
+ Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow Matching for Generative Modeling. In The Eleventh International Conference on Learning Representations, 2022.
244
+ Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander Gaunt. Constrained Graph Variational Autoencoders for Molecule Design. Advances in Neural Information Processing Systems, 31, 2018.
245
+ Xingchao Liu, Chengyue Gong, et al. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. In The Eleventh International Conference on Learning Representations, 2022.
246
+ Youzhi Luo and Shuiwang Ji. An Autoregressive Flow Model for 3D Molecular Geometry Generation From Scratch. In International Conference on Learning Representations (ICLR), 2022.
247
+ Laurence Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, and José Miguel Hernández-Lobato. SE (3) Equivariant Augmented Coupling Flows. Advances in Neural Information Processing Systems, 36, 2023.
248
+ Stefano Peluchetti. Diffusion Bridge Mixture Transports, Schrödinger Bridge Problems and Generative Modeling. Journal of Machine Learning Research, 24(374):1-51, 2023.
249
+ Xingang Peng, Jiaqi Guan, Qiang Liu, and Jianzhu Ma. MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule Diffusion Generation. In Proceedings of the 40th International Conference on Machine Learning, pp. 27611-27629, 2023.
250
+ Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum Chemistry Structures and Properties of 134 Kilo Molecules. Scientific data, 1(1):1-7, 2014.
251
+ Danilo Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In International Conference on Machine Learning, pp. 1530-1538. PMLR, 2015.
252
+ Lars Ruddigkeit, Ruud Van Deursen, Lorenz C Blum, and Jean-Louis Reymond. Enumeration of 166 Billion Organic Small Molecules in the Chemical Universe Database GDB-17. Journal of chemical information and modeling, 52(11):2864-2875, 2012.
253
+ Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) Equivariant Graph Neural Networks. In International Conference on Machine Learning, pp. 9323-9332. PMLR, 2021.
254
+ Jean-Pierre Serre et al. Linear Representations of Finite Groups, volume 42. Springer, 1977.
255
+ Chence Shi, Minkai Xu, Zhaocheng Zhu, Weinan Zhang, Ming Zhang, and Jian Tang. GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation. In International Conference on Learning Representations, 2019.
256
+ Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In International Conference on Learning Representations, 2020.
257
+ Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, and Wei-Ying Ma. Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation. Advances in Neural Information Processing Systems, 36, 2023a.
258
+
259
+ Yuxuan Song, Jingjing Gong, Hao Zhou, Mingyue Zheng, Jingjing Liu, and Wei-Ying Ma. Unified Generative Modeling of 3D Molecules with Bayesian Flow Networks. In The Twelfth International Conference on Learning Representations, 2023b.
260
+ Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian FATRAS, Guy Wolf, and Yoshua Bengio. Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023.
261
+ W Patrick Walters and Mark Murcko. Assessing the Impact of Generative AI on Medicinal Chemistry. Nature Biotechnology, 38(2):143-145, 2020.
262
+ Joseph L. Watson, David Juergens, Nathaniel R. Bennett, Brian L. Trippe, Jason Yim, Helen E. Eisenach, Woody Ahern, Andrew J. Borst, Robert J. Ragotte, Lukas F. Milles, Basile I. M. Wicky, Nikita Hanikel, Samuel J. Pellock, Alexis Courbet, William Sheffler, Jue Wang, Preetham Venkatesh, Isaac Sappington, Susana Vázquez Torres, Anna Lauko, Valentin De Bortoli, Emile Mathieu, Sergey Ovchinnikov, Regina Barzilay, Tommi S. Jaakkola, Frank DiMaio, Minkyung Baek, and David Baker. De Novo Design of Protein Structure and Function with RFdiffusion. Nature, 620(7976):1089-1100, 2023. ISSN 1476-4687.
263
+ Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, and Qiang Liu. Diffusion-Based Molecule Generation with Informative Prior Bridges. Advances in Neural Information Processing Systems, 35:36533-36545, 2022.
264
+ Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi S. Jaakkola. Crystal Diffusion Variational Autoencoder for Periodic Material Generation. In International Conference on Learning Representations, 2022.
265
+ Minkai Xu, Alexander S Powers, Ron O Dror, Stefano Ermon, and Jure Leskovec. Geometric Latent Diffusion Models for 3D Molecule Generation. In International Conference on Machine Learning, pp. 38592-38610. PMLR, 2023.
266
+ Zaixi Zhang, Qi Liu, Chee-Kong Lee, Chang-Yu Hsieh, and Enhong Chen. An Equivariant Generative Framework for Molecular Graph-Structure Co-Design. Chemical Science, 14(31):8380-8392, 2023.
267
+
268
+ # APPENDIX
269
+
270
+ # A EQUIVARIANCE AND INVARIANCE IN GEOMETRIC OPTIMAL TRANSPORT
271
+
272
+ Equivariance. Molecules, typically existing within a three-dimensional physical space, are subject to geometric symmetries, including translations, rotations, and potential reflections. These are collectively referred to as the Euclidean group in 3 dimensions, denoted as $\mathrm{E}(3)$ (Celeghini et al., 1991).
273
+
274
+ A function $F$ is said to be equivariant to the action of a group $G$ if $T_{g} \circ F(\mathbf{x}) = F \circ S_{g}(\mathbf{x})$ for all $g \in G$ , where $S_{g}, T_{g}$ are linear representations related to the group element $g$ (Serre et al., 1977).
+
+ Invariance. A function $F$ is said to be invariant to the action of a group $G$ if $F \circ S_{g}(\mathbf{x}) = F(\mathbf{x})$ for all $g \in G$ ; similarly, $F$ is permutation-invariant if $F \circ \pi(\mathbf{x}) = F(\mathbf{x})$ for every permutation $\pi \in S_{n}$ .
275
+
276
+ Equivariance and Invariance in Molecules. For geometric graph generation, we consider the special Euclidean group $\mathrm{SE}(3)$ , involving translations and rotations. Moreover, the transformations $S_{g}$ or $T_{g}$ can be represented by a translation $\mathbf{t}$ and an orthogonal rotation matrix $\mathbf{R}$ .
277
+
278
+ For a molecule $\mathbf{g} = \langle \mathbf{x},\mathbf{h}\rangle$ , the node features $\mathbf{h}$ are SE(3)-invariant while the coordinates $\mathbf{x}$ are SE(3)-equivariant, which can be expressed as $\mathbf{R}\mathbf{x} + \mathbf{t} = (\mathbf{R}\mathbf{x}_1 + \mathbf{t},\dots,\mathbf{R}\mathbf{x}_N + \mathbf{t})$ .
279
+
280
+ Equivariance and Invariance in Geometric Optimal Transport. For non-topological data, such as images, the transport cost between two given data points is fixed. However, this does not apply to topological graphs. For instance, when a topological graph (molecule) undergoes rotation or translation, the inherent properties of the molecule remain unchanged, but the cost of transporting coordinates may vary. Similarly, if the atom order of one of the molecules changes in silico, the molecule remains the same, but the transport cost of coordinates and features may change. Therefore, the proposed optimal molecule transport problem aims to find optimal rotation, translation, and permutation transformations for one molecule to minimize its distance, considering both coordinates and features, from the other molecule.
281
+
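+ As a small numerical illustration of this point (a toy check, not from the paper), the naive coordinate cost changes when one molecule is rotated, whereas the Kabsch-aligned cost does not; likewise, the Hungarian-matched feature cost is unchanged when the atom order is shuffled.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+ from scipy.spatial.transform import Rotation
+
+ def kabsch_cost(x1, x0):
+     """min_R ||R x1 - x0||_2 for zero-CoM point sets with fixed atom correspondence."""
+     p, q = x1 - x1.mean(0), x0 - x0.mean(0)
+     u, _, vt = np.linalg.svd(p.T @ q)
+     r = vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))]) @ u.T
+     return np.linalg.norm(p @ r.T - q)
+
+ def matched_cost(h1, h0):
+     """min_pi ||pi(h1) - h0||_2 with the permutation found by the Hungarian algorithm."""
+     d = np.linalg.norm(h1[:, None] - h0[None, :], axis=-1) ** 2
+     _, cols = linear_sum_assignment(d.T)
+     return np.linalg.norm(h1[cols] - h0)
+
+ x0, x1 = np.random.randn(6, 3), np.random.randn(6, 3)
+ x1_rot = x1 @ Rotation.random().as_matrix().T             # same geometry, rotated
+ print(np.linalg.norm(x1 - x0), np.linalg.norm(x1_rot - x0))        # naive costs differ
+ print(kabsch_cost(x1, x0), kabsch_cost(x1_rot, x0))                # aligned costs coincide
+
+ h0, h1 = np.random.randn(6, 4), np.random.randn(6, 4)
+ h1_perm = h1[np.random.permutation(6)]                    # same molecule, atoms reordered
+ print(matched_cost(h1, h0), matched_cost(h1_perm, h0))             # matched costs coincide
+ ```
+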
282
+ # B PROOF FOR THEOREM 3.1
283
+
284
+ Theorem 3.1 is reproduced here for convenience:
285
+
286
+ Theorem 3.1 The coupling $\hat{\Gamma}$ incurs no larger geometric transport cost than an arbitrary coupling $\Gamma(p_0, p_1)$ in that $\mathbf{E}[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1')] \leq \mathbf{E}[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1)]$ where $(\mathbf{z}_0, \mathbf{z}_1') \in \hat{\Gamma}(p_0, p_1')$ , $(\mathbf{z}_0, \mathbf{z}_1) \in \Gamma(p_0, p_1)$ , and $\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1) = \min \|\pi(\mathbf{R}\mathbf{z}_{\mathbf{x},1}^1 + \mathbf{t}, \mathbf{R}\mathbf{z}_{\mathbf{x},1}^2 + \mathbf{t}, \dots, \mathbf{R}\mathbf{z}_{\mathbf{x},1}^N + \mathbf{t}) - (\mathbf{z}_{\mathbf{x},0}^1, \mathbf{z}_{\mathbf{x},0}^2, \dots, \mathbf{z}_{\mathbf{x},0}^N)\|_2 + \min \|\pi(\mathbf{z}_{\mathbf{h},1}^1, \mathbf{z}_{\mathbf{h},1}^2, \dots, \mathbf{z}_{\mathbf{h},1}^N) - (\mathbf{z}_{\mathbf{h},0}^1, \mathbf{z}_{\mathbf{h},0}^2, \dots, \mathbf{z}_{\mathbf{h},0}^N)\|_2, \forall \pi, \mathbf{R}$ , and $\mathbf{t}$ .
287
+
288
+ Here, $\mathbf{z}$ denotes the geometry $\mathbf{g}$ in the latent space, which is composed of $\mathbf{z}_{\mathbf{x}} \in \mathbb{R}^{N \times 3}$ and $\mathbf{z}_{\mathbf{h}} \in \mathbb{R}^{N \times k}$ , where $k$ is the latent dimension characterized by $\mathcal{E}_{\phi}$ .
289
+
290
+ With the node-granular optimal transport transformations $\hat{\mathbf{R}}$ , $\hat{\mathbf{t}}$ , and $\hat{\pi}$ , we have:
291
+
292
+ $$
293
+ \begin{array}{rl} \mathbb{E}\left[\hat{c}_g(\mathbf{z}_0, \mathbf{z}_1')\right] = & \mathbb{E}\Big[ \min_{\mathbf{R},\mathbf{t},\pi} \left\| \pi\left(\mathbf{R}\mathbf{z}_{\mathbf{x},1}^{\prime 1} + \mathbf{t}, \mathbf{R}\mathbf{z}_{\mathbf{x},1}^{\prime 2} + \mathbf{t}, \dots, \mathbf{R}\mathbf{z}_{\mathbf{x},1}^{\prime N} + \mathbf{t}\right) - \left(\mathbf{z}_{\mathbf{x},0}^{1}, \mathbf{z}_{\mathbf{x},0}^{2}, \dots, \mathbf{z}_{\mathbf{x},0}^{N}\right) \right\|_2 \\ & + \min_{\pi} \left\| \pi\left(\mathbf{z}_{\mathbf{h},1}^{\prime 1}, \mathbf{z}_{\mathbf{h},1}^{\prime 2}, \dots, \mathbf{z}_{\mathbf{h},1}^{\prime N}\right) - \left(\mathbf{z}_{\mathbf{h},0}^{1}, \mathbf{z}_{\mathbf{h},0}^{2}, \dots, \mathbf{z}_{\mathbf{h},0}^{N}\right) \right\|_2 \Big] \\ = & \mathbb{E}\Big[ \left\| \hat{\pi}\left(\hat{\mathbf{R}}\mathbf{z}_{\mathbf{x},1}^{\prime 1} + \hat{\mathbf{t}}, \hat{\mathbf{R}}\mathbf{z}_{\mathbf{x},1}^{\prime 2} + \hat{\mathbf{t}}, \dots, \hat{\mathbf{R}}\mathbf{z}_{\mathbf{x},1}^{\prime N} + \hat{\mathbf{t}}\right) - \left(\mathbf{z}_{\mathbf{x},0}^{1}, \mathbf{z}_{\mathbf{x},0}^{2}, \dots, \mathbf{z}_{\mathbf{x},0}^{N}\right) \right\|_2 \\ & + \left\| \hat{\pi}\left(\mathbf{z}_{\mathbf{h},1}^{\prime 1}, \mathbf{z}_{\mathbf{h},1}^{\prime 2}, \dots, \mathbf{z}_{\mathbf{h},1}^{\prime N}\right) - \left(\mathbf{z}_{\mathbf{h},0}^{1}, \mathbf{z}_{\mathbf{h},0}^{2}, \dots, \mathbf{z}_{\mathbf{h},0}^{N}\right) \right\|_2 \Big] \end{array}
294
+ $$
295
+
296
+ Let $\hat{\mathbf{z}}_{\mathbf{x}} = \hat{\pi} (\hat{\mathbf{R}}\mathbf{z}_{\mathbf{x}}^{1} + \hat{\mathbf{t}},\hat{\mathbf{R}}\mathbf{z}_{\mathbf{x}}^{2} + \hat{\mathbf{t}},\dots ,\hat{\mathbf{R}}\mathbf{z}_{\mathbf{x}}^{N} + \hat{\mathbf{t}})$ , $\hat{\mathbf{z}}_{\mathbf{h}} = \hat{\pi} (\mathbf{z}_{\mathbf{h}}^{1},\mathbf{z}_{\mathbf{h}}^{2},\dots ,\mathbf{z}_{\mathbf{h}}^{N})$ , and $\hat{\mathbf{z}} = [\hat{\mathbf{z}}_x,\hat{\mathbf{z}}_h]\in$ $\mathbb{R}^{N\times (3 + k)}$ , then we have:
297
+
298
+ $$
299
+ \begin{array}{l} \mathbb{E}\big[\hat{c}_g\big(\mathbf{z}_0, \mathbf{z}_1'\big)\big] = \mathbb{E}\big[\big\|\big(\hat{\mathbf{z}}_{\mathbf{x},1}'^1, \hat{\mathbf{z}}_{\mathbf{x},1}'^2, \dots, \hat{\mathbf{z}}_{\mathbf{x},1}'^N\big) - \big(\mathbf{z}_{\mathbf{x},0}^1, \mathbf{z}_{\mathbf{x},0}^2, \dots, \mathbf{z}_{\mathbf{x},0}^N\big)\big\|_2 \\ \quad + \big\|\big(\hat{\mathbf{z}}_{\mathbf{h},1}'^1, \hat{\mathbf{z}}_{\mathbf{h},1}'^2, \dots, \hat{\mathbf{z}}_{\mathbf{h},1}'^N\big) - \big(\mathbf{z}_{\mathbf{h},0}^1, \mathbf{z}_{\mathbf{h},0}^2, \dots, \mathbf{z}_{\mathbf{h},0}^N\big)\big\|_2\big] \\ = \mathbb{E}\big[\big\|\big(\hat{\mathbf{z}}_1'^1, \hat{\mathbf{z}}_1'^2, \dots, \hat{\mathbf{z}}_1'^N\big) - \big(\mathbf{z}_0^1, \mathbf{z}_0^2, \dots, \mathbf{z}_0^N\big)\big\|_2\big] \\ = \mathbb{E}\big[\big\|\hat{\mathbf{z}}_1' - \mathbf{z}_0\big\|_2\big]. \end{array}
300
+ $$
301
+
302
+ Likewise, we have:
303
+
304
+ $$
305
+ \mathbb {E} \left[ \hat {c} _ {g} \left(\mathbf {z} _ {0}, \mathbf {z} _ {1}\right) \right] = \mathbb {E} \left[ \| \hat {\mathbf {z}} _ {1} - \mathbf {z} _ {0} \| _ {2} \right]. \tag {7}
306
+ $$
307
+
308
+ At this point, what we aim to prove is simplified to:
309
+
310
+ $$
311
+ \mathbb {E} \left[ \| \hat {\mathbf {z}} _ {1} ^ {\prime} - \mathbf {z} _ {0} \| _ {2} \right] \leq \mathbb {E} \left[ \| \hat {\mathbf {z}} _ {1} - \mathbf {z} _ {0} \| _ {2} \right] \tag {8}
312
+ $$
313
+
314
+ Proof. Given that $\mathbf{z}_1' = \mathrm{ODE}_{\hat{\theta}}(\mathbf{z}_0)$ , $d\mathbf{z}_t = v_{\hat{\theta}}(\mathbf{z}_t, t)dt$ , we have:
315
+
316
+ $$
317
+ \mathbb {E} \left[ \hat {c} _ {g} \left(\mathbf {z} _ {0}, \mathbf {z} _ {1} ^ {\prime}\right) \right] = \mathbb {E} \left[ \left\| \int_ {0} ^ {1} v _ {\hat {\theta}} \left(\mathbf {z} _ {t}, t\right) d t \right\| _ {2} \right] \tag {9}
318
+ $$
319
+
320
+ $\| \cdot \|_2 : \mathbb{R}^{N \times (3 + k)} \to \mathbb{R}_+$ denotes the Euclidean norm, which is convex; therefore, by Jensen's inequality, $\| \int_{\Omega} v \, dt \|_2 \leq \int_{\Omega} \| v \|_2 \, dt$, and we have:
321
+
322
+ $$
323
+ \mathbb {E} \left[ \hat {c} _ {g} \left(\mathbf {z} _ {0}, \mathbf {z} _ {1} ^ {\prime}\right) \right] \leq \mathbb {E} \left[ \int_ {0} ^ {1} \left\| v _ {\hat {\theta}} \left(\mathbf {z} _ {t}, t\right) \right\| _ {2} d t \right]. \tag {10}
324
+ $$
325
+
326
+ With the velocity field defined as $v_{\hat{\theta}}(\mathbf{z}_t,t) = \mathbb{E}[\mathbf{z}_1 - \mathbf{z}_0 \mid \mathbf{z}_t]$, we then have:
327
+
328
+ $$
329
+ \mathbb {E} \left[ \hat {c} _ {g} \left(\mathbf {z} _ {0}, \mathbf {z} _ {1} ^ {\prime}\right) \right] = \mathbb {E} \left[ \int_ {0} ^ {1} \| \mathbb {E} \left[ \mathbf {z} _ {1} - \mathbf {z} _ {0} \mid \mathbf {z} _ {t} \right] \| _ {2} d t \right]. \tag {11}
330
+ $$
331
+
332
+ Again, with the finite form of Jensen's inequality, we have:
333
+
334
+ $$
335
+ \begin{array}{l} \mathbb{E}\left[\hat{c}_g\left(\mathbf{z}_0, \mathbf{z}_1'\right)\right] \leq \mathbb{E}\left[\int_0^1 \mathbb{E}\left[\|\mathbf{z}_1 - \mathbf{z}_0\|_2 \mid \mathbf{z}_t\right] dt\right] \quad // \text{ Jensen's inequality} \\ = \int_0^1 \mathbb{E}\left[\mathbb{E}\left[\|\mathbf{z}_1 - \mathbf{z}_0\|_2 \mid \mathbf{z}_t\right]\right] dt \tag{12} \\ = \int_0^1 \mathbb{E}\left[\|\mathbf{z}_1 - \mathbf{z}_0\|_2\right] dt \quad // \; \mathbb{E}\left[\|\mathbf{z}_1 - \mathbf{z}_0\|_2 \mid \mathbf{z}_t\right] = \|\mathbf{z}_1 - \mathbf{z}_0\|_2 \\ = \mathbb{E}\left[\|\hat{\mathbf{z}}_1 - \hat{\mathbf{z}}_0\|_2\right] \\ = \mathbb{E}\left[\hat{c}_g\left(\mathbf{z}_0, \mathbf{z}_1\right)\right] \quad // \text{ By Eq. 7} \end{array}
336
+ $$
337
+
338
+ Combining equations 9 to 12, Eq. 8 is proved.
339
+
340
+ It is important to note that solving the geometric optimal transport problem in the latent space does not necessarily ensure that the molecule itself or its distribution also satisfies the optimal transport in the original space. However, given that the proposed flow model is trained in the latent space, it is sufficient to ensure that latent molecules and distributions are transported with optimal cost, thereby accelerating the flow model in the generation of molecules.
341
+
342
+ # C ALGORITHMS
343
+
344
+ This section contains the main algorithms of the proposed GOAT. First, we present the algorithm for solving optimal molecule transport and unified flow in Algorithm 1 and Algorithm 2, respectively. Algorithm 3 presents the pseudo-code for training the GOAT. Algorithm 4 presents the process of fast molecule generation with GOAT.
345
+
346
+ Algorithm 1 Optimal Molecule Transport
347
+ 1: Input: $\mathbf{z}_1$ and $\mathbf{z}_0$ .
348
+ 2: Output: $\hat{\mathbf{z}}_1$ and $\mathbf{z}_0$ .
349
+ 3: Optimal Molecule Transport:
350
+ 4: $M_{c_g}[i,j] \leftarrow \|\mathbf{z}_1^i - \mathbf{z}_0^j\|^2 \leftarrow \|\mathbf{z}_{\mathbf{x},1}^i - \mathbf{z}_{\mathbf{x},0}^j\|^2 + \|\mathbf{z}_{\mathbf{h},1}^i - \mathbf{z}_{\mathbf{h},0}^j\|^2 //$ Construct Atom-level Transport Cost Matrix
351
+ 5: $\hat{\pi} \gets$ Hungarian algorithm (Kuhn, 1955) // Optimal Permutation
352
+ 6: $\hat{\mathbf{R}} \gets$ Kabsch algorithm (Kabsch, 1976) // Optimal Rotation
353
+ 7: $\hat{\mathbf{z}}_1 = \hat{\pi}(\hat{\mathbf{R}}\mathbf{z}_1)$ // Optimal Molecule Transport
354
+ 8: return $\hat{\mathbf{z}}_1, \mathbf{z}_0$
355
+
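+ As a point of reference, the following is a minimal NumPy/SciPy sketch of Algorithm 1, assuming the two latent molecules are arrays of shape $(N, 3+k)$ whose first three columns hold coordinates already centered at the CoM (so no translation is needed); the function and variable names are illustrative rather than the authors' implementation.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment  # Hungarian algorithm (Kuhn, 1955)
+
+ def optimal_molecule_transport(z1, z0, coord_dim=3):
+     """Align z1 to z0 with an optimal permutation and rotation (Algorithm 1 sketch)."""
+     x1, h1 = z1[:, :coord_dim], z1[:, coord_dim:]
+     x0, h0 = z0[:, :coord_dim], z0[:, coord_dim:]
+
+     # Line 4: atom-level transport cost over coordinates and invariant features.
+     cost = ((x1[:, None, :] - x0[None, :, :]) ** 2).sum(-1) \
+          + ((h1[:, None, :] - h0[None, :, :]) ** 2).sum(-1)
+
+     # Line 5: optimal permutation pi_hat via the Hungarian algorithm.
+     row, col = linear_sum_assignment(cost)
+     perm = np.empty_like(row)
+     perm[col] = row                              # permuted z1[j] is matched to z0[j]
+     x1p, h1p = x1[perm], h1[perm]
+
+     # Line 6: optimal rotation R_hat via the Kabsch algorithm (Kabsch, 1976).
+     H = x1p.T @ x0
+     U, _, Vt = np.linalg.svd(H)
+     d = np.sign(np.linalg.det(Vt.T @ U.T))
+     D = np.diag([1.0] * (coord_dim - 1) + [d])   # guard against reflections
+     R = Vt.T @ D @ U.T
+
+     # Line 7: apply the optimal transport to obtain z1_hat.
+     z1_hat = np.concatenate([x1p @ R.T, h1p], axis=1)
+     return z1_hat, z0
+ ```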
356
+ Algorithm 2 Equivariant Autoencoder
357
+ 1: Input: geometric data point $\mathbf{g} = \langle \mathbf{x},\mathbf{h}\rangle$ , equivariant encoder $\mathcal{E}_{\phi}$
358
+ 2: Output: encoded data point $\mathbf{z}$
359
+ 3: Unified Flow:
360
+ 4: $\mathbf{x}\gets \mathbf{x} - \mathbf{G}(\mathbf{x})$ // Translate to CoM Space
361
+ 5: $\mu_{x},\mu_{h}\leftarrow \mathcal{E}_{\phi}(\mathbf{x},\mathbf{h})$ // Encode
362
+ 6: $\langle \epsilon_{\mathbf{x}},\epsilon_{\mathbf{h}}\rangle \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ // Sample noise for Equivariant Autoencoder
363
+ 7: $\epsilon_{\mathrm{x}}\gets \epsilon_{\mathrm{x}} - \mathbf{G}(\epsilon_{\mathrm{x}})$ // Translate to CoM Space
364
+ 8: $\mathbf{z}_{\mathbf{x}},\mathbf{z}_{\mathbf{h}}\gets \langle \mu_x,\mu_h\rangle +\langle \epsilon_{\mathbf{x}},\epsilon_{\mathbf{h}}\rangle \odot \sigma_0$ // Obtain Latent Representation
365
+ 9: $\mathbf{z}\gets [\mathbf{z}_x,\mathbf{z}_h]$
366
+ 10: return z
367
+
368
+ Algorithm 3 Geometric Optimal Transport
369
+ 1: Input: data distribution $p_1$ , equivariant encoder $\mathcal{E}_{\phi}$ , decoder $\mathcal{D}_{\epsilon}$ , flow network $v_{\theta}$
370
+ 2: Output: GOAT: $(\hat{v}_{\theta})$
371
+ 3: for $\mathbf{g}_1 = \langle \mathbf{x}, \mathbf{h} \rangle \sim p_1$ do
372
+ 4: $\mathbf{z}_1 \leftarrow$ Equivariant Autoencoder $(\mathbf{g}_1)$ // Algorithm2
373
+ 5: $\mathbf{z}_0 \leftarrow \langle \mathbf{z}_{\mathbf{x}, 0}, \mathbf{z}_{\mathbf{h}, 0} \rangle \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ // Sample noise from base distribution $p_0$
374
+ 6: $\hat{\mathbf{z}}_1, \mathbf{z}_0 =$ Optimal Molecule Transport $(\mathbf{z}_1, \mathbf{z}_0)$ // Algorithm 1
375
+ 7: $\mathcal{L}_{F1}(\theta) = \mathbb{E}_{t,p_0,p_1} \| v_{\theta}(\hat{\mathbf{z}}_t, t) - (\hat{\mathbf{z}}_1 - \mathbf{z}_0) \|^2$ // Loss for the flow
376
+ 8: $\hat{\theta} \leftarrow$ optimizer $(\mathcal{L}_F, \theta)$ // Optimize
377
+ 9: end for
378
+ 10: for $\mathbf{g}_1 = \langle \mathbf{x}, \mathbf{h} \rangle \sim p_1$ do
379
+ 11: $\mathbf{z}_0, \mathbf{z}_1', \mathbf{g}_1' \leftarrow$ Sampling $(\mathcal{D}_{\epsilon}, \hat{\theta})$ // Algorithm 4
380
+ 12: if $\mathbf{g}_1'$ meets quality (measured by RDKit (Landrum et al., 2016)) then
381
+ 13: $\hat{\mathbf{z}}_1', \mathbf{z}_0 =$ Optimal Molecule Transport $(\mathbf{z}_1', \mathbf{z}_0)$ // Algorithm 1
382
+ 14: $\mathcal{L}_{F1}(\theta) = \mathbb{E}_{t,p_0,p_1} \| v_{\theta}(\hat{\mathbf{z}}_t, t) - (\hat{\mathbf{z}}_1' - \mathbf{z}_0) \|^2$ // Loss for the flow
383
+ 15: $\hat{\theta} \leftarrow$ optimizer $(\mathcal{L}_F, \theta)$ // Optimize
384
+ 16: end if
385
+ 17: end for
386
+ 18: return $\hat{\theta}$
387
+
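+ For illustration, here is a schematic PyTorch-style training step for the flow-matching loss $\mathcal{L}_{F1}$ in line 7 of Algorithm 3. It assumes a straight-line interpolant $\hat{\mathbf{z}}_t = (1-t)\,\mathbf{z}_0 + t\,\hat{\mathbf{z}}_1$ as in rectified flow; `flow_net` stands in for $v_{\theta}$ and all names are illustrative.
+
+ ```python
+ import torch
+
+ def goat_training_step(flow_net, optimizer, z1_hat, z0):
+     """One step of L_F1 = E || v_theta(z_t, t) - (z1_hat - z0) ||^2.
+     z1_hat: OMT-aligned latent molecules (B, N, 3 + k); z0: Gaussian noise of the same shape."""
+     t = torch.rand(z1_hat.shape[0], 1, 1, device=z1_hat.device)  # t ~ U[0, 1] per sample
+     z_t = (1.0 - t) * z0 + t * z1_hat                            # straight-line interpolant
+     target = z1_hat - z0                                         # constant target velocity
+     loss = ((flow_net(z_t, t) - target) ** 2).mean()
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```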
388
+ Algorithm 4 Sampling
389
+ 1: Input: equivariant decoder $\mathcal{D}_{\epsilon}$ , flow network $\theta$ .
390
+ 2: Output: noise: $\mathbf{z}_0$ , generated latent sample: $\mathbf{z}_1'$ , generated molecule: $\mathbf{g}_1'$ .
391
+ 3: $\mathbf{z}_0 \gets \langle \mathbf{z}_{\mathbf{x},0}, \mathbf{z}_{\mathbf{h},0} \rangle \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ // Sample noise from base distribution $p_0$
392
+ 4: $\mathbf{z}_1' \gets \mathrm{ODE}_{v_{\hat{\theta}}}(\mathbf{z}_0)$ // Solve ODE
393
+ 5: $\mathbf{g}_1' \gets \mathcal{D}_{\epsilon}(\mathbf{z}_1')$ // Decode
394
+ 6: return $\mathbf{z}_0, \mathbf{z}_1', \mathbf{g}_1'$
395
+
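+ The sampling loop in Algorithm 4 amounts to integrating the learned ODE from noise to data. A minimal forward-Euler sketch is shown below; the number of integration steps, the `flow_net` signature, and the decoder call are illustrative assumptions.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def goat_sample(flow_net, decoder, shape, steps=90, device="cpu"):
+     """Integrate dz/dt = v_theta(z, t) from t = 0 (noise) to t = 1 (latent molecule)."""
+     z = torch.randn(shape, device=device)              # z_0 ~ N(0, I)
+     dt = 1.0 / steps
+     for i in range(steps):
+         t = torch.full((shape[0], 1, 1), i * dt, device=device)
+         z = z + flow_net(z, t) * dt                    # forward Euler step
+     return z, decoder(z)                               # z_1' and the decoded molecule g_1'
+ ```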
396
+ # D RELATED WORKS
397
+
398
+ Molecule Generation Models. Initial research in molecule generation primarily concentrated on the creation of molecules as 2D graphs (Jin et al., 2018; Liu et al., 2018; Shi et al., 2019). However, the field has seen a shift in interest towards 3D molecule generation. Techniques such as G-SchNet (Gebauer et al., 2019) and G-SphereNet (Luo & Ji, 2022) employ autoregressive methods to incrementally construct molecules by progressively linking atoms or molecular fragments. These approaches necessitate either a detailed formulation of a complex action space or an ordering of actions.
399
+
400
+ Motivated by the success of Diffusion Models (DMs) in image generation, the focus has now turned to their application in 3D molecule generation from noise (Hoogeboom et al., 2022; Xu et al., 2023; Wu et al., 2022; Han et al., 2023). To address the inconsistency of unified Gaussian diffusion across various modalities, a latent space was introduced by (Xu et al., 2023). To resolve the atom-bond inconsistency issue, (Peng et al., 2023) proposed different noise schedulers for different modalities to accommodate noise sensitivity. However, diffusion-based models consistently face the challenge of slow sampling speed, resulting in a significant computational burden for generation. To enhance the speed, recent proposals have introduced flow matching-based (Song et al., 2023a) and Bayesian flow network-based (Song et al., 2023b) models. Despite these advancements, there remains substantial potential for improvement in these frameworks regarding speed, novelty, and ultimate significance.
401
+
402
+ Flow Models. Introduced in (Chen et al., 2018), Continuous Normalizing Flows (CNFs) represent a continuous-time variant of Normalizing Flows (Rezende & Mohamed, 2015). Subsequently, flow matching (Lipman et al., 2022) and rectified flow (Liu et al., 2022) were proposed to circumvent the need for ODE simulations during forward and backward propagation in CNF, and they introduced optimal transport for faster generation. Leveraging these advanced flow models, (Garcia Satorras et al., 2021) pioneered the use of flow models for molecule generation, which was later followed by the proposal of (Song et al., 2023a), based on hybrid transport. Beyond the realm of 3D molecule generation, the concept of flow matching and optimal transport has also found applications in many-body systems (Garcia Satorras et al., 2021) and molecule simulations (Midgley et al., 2023). Despite these advancements, existing models primarily focus on atomic coordinates, leaving the challenge of geometric optimal transport unresolved.
403
+
404
+ # E DATASET
405
+
406
+ # E.1 QM9 DATASET
407
+
408
+ QM9 (Ramakrishnan et al., 2014) is a comprehensive dataset that provides geometric, energetic, electronic, and thermodynamic properties for a subset of the GDB-17 database (Ruddigkeit et al., 2012), comprising a total of 130,831 molecules. We utilize the train/validation/test partitions delineated in (Anderson et al., 2019), with 100K, 18K, and 13K samples, respectively.
409
+
410
+ # E.2 GEOM-DRUG DATASET
411
+
412
+ The GEOM-DRUG (Geometric Ensemble Of Molecules) dataset (Axelrod & Gomez-Bombarelli, 2022) encompasses around 450,000 molecules, with an average of 44.2 atoms and a maximum of 181 atoms per molecule. We build the GEOM-DRUG dataset following (Hoogeboom et al., 2022) with the provided code.
413
+
414
+ # F IMPLEMENTATION DETAILS
415
+
416
+ In this study, all the neural networks utilized for the encoder, flow network, and decoder are implemented using EGNNs (Satorras et al., 2021). The dimension of latent invariant features, denoted as $k$ , is set to 2 for QM9 and 1 for GEOM-DRUG, to map the molecule for a unified flow matching.
417
+
418
+ For the training of the flow neural network, we employ EGNNs with 9 layers and 256 hidden features on QM9, and 4 layers and 256 hidden features on GEOM-DRUG, with a batch size of 64 and 16, respectively.
419
+
420
+ In the case of equivariant autoencoders, the decoder is parameterized in the same manner as the encoder, but the encoder is implemented with a 1-layer EGNN. This shallow encoder effectively constrains the encoding capacity and aids in regularizing the latent space (Xu et al., 2023).
421
+
422
+ All models utilize SiLU activations and are trained until convergence. Across all experiments, the Adam optimizer (Kingma & Ba, 2015) with a constant learning rate of $10^{-4}$ is chosen as our default training configuration. The training process for QM9 takes approximately 3000 epochs, while for GEOM-DRUG, it takes about 20 epochs.
423
+
424
+ With the flow model trained on QM9 or GEOM-DRUG, we then generate and purify the coupling to obtain a total of 100K molecular pairs, which form the estimated couplings.
425
+
426
+ # Hardware Configuration
427
+
428
+ 1. GPU: NVIDIA GeForce RTX 3090
429
+ 2. CPU: Intel(R) Xeon(R) Platinum 8338C CPU
430
+ 3. Memory: 512 GB
431
+ 4. Time: Around 7 days for QM9 and 20 days for GEOM-DRUG.
432
+
433
+ # G MORE EXPERIMENTAL RESULTS
434
+
435
+ We present the full results in Tables 5 and 6. For the detailed experiments on QM9, we reproduced EDM, GeoLDM, and EquiFM to measure the actual generation time under the same compute configuration. As a result, the proposed method achieves the fastest sampling speed, consistent with the comparison of sampling steps. We also observe a substantial generation-speed improvement from the proposed GOAT on GEOM-DRUG.
436
+
437
+ In addition to reporting the actual generation time, we also add the molecule stability metric; all methods achieve nearly $0\%$ molecule stability on GEOM-DRUG. This is because the atom- and molecule-stability metrics rely on bond-type prediction from pair-wise atom types and distances, which introduces errors for large molecules. Therefore, we concentrate on metrics measured by RDKit.
438
+
439
+ Lastly, we report the full results of GeoBFN with sampling steps ranging from 50 to 1,000. It is worth noting that novelty and significance continue to decrease on QM9 as the number of sampling steps increases, which aligns with our conjecture in the experiments. We also observe that its validity on GEOM-DRUG decreases. Considering efficiency and quality together, we believe that our method, GOAT, achieves competitive performance compared with GeoBFN.
440
+
441
+ We present the visualization of generated molecules on QM9 and GEOM-DRUG in Figures 5 and 6.
442
+
443
+ Table 5: Comparisons of generation quality (larger is better) in terms of Atom Stability, Molecule Stability, Validity, Uniqueness, Novelty, and Significance, and of generation efficiency in terms of generation time and sampling steps for one molecule (smaller is better). The best results are highlighted in bold.
444
+
445
+ <table><tr><td colspan="9">QM9</td></tr><tr><td rowspan="2"># Metrics</td><td colspan="2">Efficiency</td><td colspan="6">Quality (%)</td></tr><tr><td>S-Time</td><td>Steps</td><td>Atom Sta</td><td>Mol Sta</td><td>Valid</td><td>Uniqueness</td><td>Novelty</td><td>Significance</td></tr><tr><td>Data</td><td>-</td><td>-</td><td>99.0</td><td>95.2</td><td>97.7</td><td>100.0</td><td>-</td><td>-</td></tr><tr><td>ENF</td><td>-</td><td>-</td><td>85.0</td><td>4.9</td><td>40.2</td><td>98.0</td><td>-</td><td>-</td></tr><tr><td>G-Schnet</td><td>-</td><td>-</td><td>95.7</td><td>68.1</td><td>85.5</td><td>93.9</td><td>-</td><td>-</td></tr><tr><td>GDM-aug</td><td>1.50</td><td>1000</td><td>97.6</td><td>71.6</td><td>90.4</td><td>99.0</td><td>66.8</td><td>73.9</td></tr><tr><td>EDM</td><td>1.68</td><td>1000</td><td>98.7</td><td>82.0</td><td>91.9</td><td>98.7</td><td>65.7</td><td>64.8</td></tr><tr><td>EDM-Bridge</td><td>-</td><td>1000</td><td>98.8</td><td>84.6</td><td>92.0</td><td>98.6</td><td>-</td><td>-</td></tr><tr><td>GeoLDM</td><td>1.86</td><td>1000</td><td>98.9</td><td>89.4</td><td>93.8</td><td>98.8</td><td>58.1</td><td>53.9</td></tr><tr><td rowspan="4">GeoBFN</td><td>-</td><td>50</td><td>98.3</td><td>85.1</td><td>92.3</td><td>98.3</td><td>72.9</td><td>66.1</td></tr><tr><td>0.16</td><td>100</td><td>98.6</td><td>87.2</td><td>93.0</td><td>98.4</td><td>70.3</td><td>64.4</td></tr><tr><td>-</td><td>500</td><td>98.8</td><td>88.4</td><td>93.4</td><td>98.3</td><td>67.7</td><td>62.1</td></tr><tr><td>-</td><td>1000</td><td>99.1</td><td>90.9</td><td>95.3</td><td>97.6</td><td>66.4</td><td>61.8</td></tr><tr><td>EquiFM</td><td>0.37</td><td>200</td><td>98.9</td><td>88.3</td><td>94.7</td><td>98.7</td><td>57.4</td><td>53.7</td></tr><tr><td>GOAT</td><td>0.12</td><td>90</td><td>98.4</td><td>84.1</td><td>90.0</td><td>99.0</td><td>78.6</td><td>72.3</td></tr></table>
446
+
447
+ Table 6: Comparisons of generation quality (larger is better) in terms of Atom Stability, Molecule Stability, Validity, Uniqueness, Novelty, and Significance, and of generation efficiency in terms of generation time and sampling steps per molecule (smaller is better). The best results are highlighted in bold.
448
+
449
+ <table><tr><td colspan="7">GEOM-DRUG</td></tr><tr><td rowspan="2"># Metrics</td><td colspan="2">Efficiency</td><td colspan="3">Quality (%)</td><td rowspan="2">Uniqueness</td></tr><tr><td>S-Time</td><td>Steps</td><td>Atom Sta</td><td>Mol Sta</td><td>Valid</td></tr><tr><td>Data</td><td>-</td><td>-</td><td>86.5</td><td>0.0</td><td>99.9</td><td>100.0</td></tr><tr><td>ENF</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>G-Schnet</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GDM-aug</td><td>-</td><td>1000</td><td>77.7</td><td>-</td><td>91.8</td><td>-</td></tr><tr><td>EDM</td><td>14.88</td><td>1000</td><td>81.3</td><td>0.0</td><td>92.6</td><td>99.9</td></tr><tr><td>EDM-Bridge</td><td>-</td><td>1000</td><td>82.4</td><td>-</td><td>92.8</td><td>-</td></tr><tr><td rowspan="4">GeoBFN</td><td>-</td><td>50</td><td>78.9</td><td>-</td><td>93.1</td><td>-</td></tr><tr><td>-</td><td>100</td><td>81.4</td><td>-</td><td>93.5</td><td>-</td></tr><tr><td>-</td><td>500</td><td>85.6</td><td>-</td><td>92.1</td><td>-</td></tr><tr><td>-</td><td>1000</td><td>86.2</td><td>-</td><td>91.7</td><td>-</td></tr><tr><td>GeoLDM</td><td>12.84</td><td>1000</td><td>84.4</td><td>0.0</td><td>99.3</td><td>99.9</td></tr><tr><td>EquiFM</td><td>-</td><td>200</td><td>84.1</td><td>-</td><td>98.9</td><td>-</td></tr><tr><td>GOAT</td><td>0.94</td><td>90</td><td>84.8</td><td>0.0</td><td>96.2</td><td>99.9</td></tr></table>
450
+
451
+ ![](images/3db6abb0fecbe912bff7a9998805aa08d1e0fa34dbb5d903994b181532c191c2.jpg)
452
+ Figure 5: Molecules Generated by GOAT trained on QM9.
453
+
454
+ ![](images/871eeaecfe01870167a09e1fe6d80740370bd9075cf5797ad3e66eb4ae73e90b.jpg)
455
+ Figure 6: Molecules Generated by GOAT trained on GEOM-DRUG.
456
+
457
+ # H DISTANCE BETWEEN NOISES AND GENERATED MOLECULES
458
+
459
+ We compare the distance between generated molecules and the initial noise for the compared methods, the proposed GOAT, and its variants (w/o ODT and w/o OMT). The experimental results are reported in Table 7:
460
+
461
+ <table><tr><td></td><td colspan="2">QM9</td><td colspan="2">GEOM-DRUG</td></tr><tr><td>Metrics</td><td>Average distance</td><td>Average distance per atom</td><td>Average distance</td><td>Average distance per atom</td></tr><tr><td>LDM</td><td>651.52</td><td>22.47</td><td>1834.22</td><td>10.13</td></tr><tr><td>LDM</td><td>185.01</td><td>6.38</td><td>1046.67</td><td>5.78</td></tr><tr><td>MFM</td><td>530.86</td><td>18.31</td><td>1543.23</td><td>8.57</td></tr><tr><td>w/o ODT</td><td>72.48</td><td>2.50</td><td>924.50</td><td>5.14</td></tr><tr><td>w/o OMT</td><td>93.36</td><td>3.22</td><td>1190.88</td><td>6.62</td></tr><tr><td>GOAT</td><td>55.10</td><td>1.90</td><td>702.89</td><td>3.88</td></tr></table>
462
+
463
+ Table 7: Distance Between Noises and Generated Molecules
464
+
465
+ The experimental results on the distance between molecules and noise confirm that the proposed method achieves the minimum transport distance from the noise, which in turn explains the superiority of our method in generation speed.
466
+
467
+ # I IMPACT STATEMENTS
468
+
469
+ This paper contributes to the advancement of generative Artificial Intelligence (AI) in scientific domains, including material science, chemistry, and biology. The insights gained will significantly enhance generative AI technologies, thereby streamlining the process of scientific knowledge discovery.
470
+
471
+ The application of machine learning to molecule generation expands the possibilities for molecule design beyond therapeutic purposes, potentially leading to the creation of illicit drugs or hazardous substances. This potential for misuse and unforeseen consequences underscores the need for stringent ethical guidelines, robust regulation, and responsible use of these technologies to safeguard individuals and society.
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca84d5b75b48876f0886015890bed9d66cac709104fec215316664b79d43db3c
3
+ size 726532
accelerating3dmoleculegenerationviajointlygeometricoptimaltransport/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:600c5de919851d85cbf5097023e90a96ee88beaead7f3e95df4912031a27894f
3
+ size 690533
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc2a64315f6c4fe24e031d8b9917b94b69fd9dace21a2a469da01da56a037497
3
+ size 143722
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdea9c8ed95ff114dafcd873cbeb8714047a75fd42338a8408476b2995b43e72
3
+ size 187108
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/c5fa3fe5-d415-4220-8a61-0115cb31e01c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:af3e3205de7c2a47e63f7577c6302c6312e214bc11304217ca69796de8b805f5
3
+ size 6424464
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/full.md ADDED
@@ -0,0 +1,644 @@
 
 
 
 
1
+ # ACCELERATING AUTO-REGRESSIVE TEXT-TO-IMAGE GENERATION WITH TRAINING-FREE SPECULATIVE JACOBI DECODING
2
+
3
+ Yao Teng $^{1}$ Han Shi $^{2}$ Xian Liu $^{3}$ Xuefei Ning $^{4}$
4
+
5
+ Guohao Dai $^{5,6}$ Yu Wang $^{4}$ Zhenguo Li $^{2}$ Xihui Liu $^{1*}$
6
+
7
+ <sup>1</sup>The University of Hong Kong <sup>2</sup>Huawei Noah's Ark Lab <sup>3</sup>CUHK
8
+
9
+ $^{4}$Tsinghua University $^{5}$Shanghai Jiao Tong University $^{6}$Infinigence AI
10
+
11
+ # ABSTRACT
12
+
13
+ The current large auto-regressive models can generate high-quality, high-resolution images, but these models require hundreds or even thousands of steps of next-token prediction during inference, resulting in substantial time consumption. In existing studies, Jacobi decoding, an iterative parallel decoding algorithm, has been used to accelerate the auto-regressive generation and can be executed without training. However, the Jacobi decoding relies on a deterministic criterion to determine the convergence of iterations. Thus, it works for greedy decoding but is incompatible with sampling-based decoding which is crucial for visual quality and diversity in the current auto-regressive text-to-image generation. In this paper, we propose a training-free probabilistic parallel decoding algorithm, Speculative Jacobi Decoding (SJD), to accelerate auto-regressive text-to-image generation. By introducing a probabilistic convergence criterion, our SJD accelerates the inference of auto-regressive text-to-image generation while maintaining the randomness in sampling-based token decoding and allowing the model to generate diverse images. Specifically, SJD facilitates the model to predict multiple tokens at each step and accepts tokens based on the probabilistic criterion, enabling the model to generate images with fewer steps than the conventional next-token-prediction paradigm. We also investigate the token initialization strategies that leverage the spatial locality of visual data to further improve the acceleration ratio under specific scenarios. We conduct experiments for our proposed SJD on multiple auto-regressive text-to-image generation models, showing the effectiveness of model acceleration without sacrificing the visual quality. The code of our work is available here: https://github.com/tyshiwo1/Accelerating-T2I-AR-with-SJD/.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Auto-regressive models enable generative tasks by performing next-token prediction, which is widely used in multiple domains such as the language (Bubeck et al., 2023), image (Yu et al., 2022), and video (Kondratyuk et al., 2023; Wang et al., 2024b) generation. Notably, auto-regressive text-to-image generation models (Ding et al., 2021; Ramesh et al., 2021; Yu et al., 2022) have shown promising results in generating high-quality images. Auto-regressive text-to-image generation models have better potential in scalability and pave the way for native multi-modal models (Team, 2024). However, the auto-regressive paradigm creates high latency during inference because it necessitates the decoding of tokens in a sequential, token-by-token manner. Therefore, the models have to sequentially go through hundreds or even thousands of forward passes to generate a single image. Unlike diffusion models of which the inference acceleration methods have been extensively investigated (Song et al., 2023; Luo et al., 2023; Yin et al., 2024b), there has been limited previous work exploring the acceleration of auto-regressive text-to-image generation models. Moreover, those auto-regressive models that are capable of text-to-image generation typically have several billions of parameters, making the common training-based generative model acceleration techniques such as self-consistency
18
+
19
+ ![](images/ce60b485b390d151383059e842397af29d76dbdd9c3a49f92e7ef2fe5b60e5a5.jpg)
20
+ Steps: 2357
21
+
22
+ ![](images/78bd9653b49197ac38e8f78b96ab1b77710036831327bb92cf27ffdc830f151e.jpg)
23
+ Steps: 1012 (2.3 × Faster)
24
+
25
+ ![](images/93a43ebaa43516ebd4211f83f2cd7d6753209aa3714e8cb85d8bf0c176088a65.jpg)
26
+ Steps: 2357
27
+
28
+ ![](images/b63470f2199c3919a7152f810b9565e37cd2746f7d68e5a5a97b0fecefc6195d.jpg)
29
+ Steps: 959 (2.5 × Faster)
30
+
31
+ ![](images/1927a0c63096af31cfda93d78af35d1305c2d07325b7f2e9da5e3fbfbd94d887.jpg)
32
+ Prompt: A hawk-man with a red head
33
+ Steps: 2357
34
+
35
+ ![](images/894cb2f2ba5a7a6fc6c113a882110a63c60e99a7b2e0c7c803de2fc7711fe0f2.jpg)
36
+ Steps: 1085 (2.2 × Faster)
37
+
38
+ ![](images/bf124ae22afdd60b5b146b428444913aceb568152a96903ef307338f2b44abdf.jpg)
39
+ Steps: 2357
40
+
41
+ ![](images/639f7fe32fabf79fb80b8c965796364fb12b2df6edd3141bcc40803e6e666e2d.jpg)
42
+ Steps: 844 (2.8 × Faster)
43
+
44
+ ![](images/3f5a72ff3c8d1a48cb5f65211f84693f9293fe08fd795e2b2b749616fd6676cd.jpg)
45
+ Prompt: Macro photography of a transparent water drop in the shape of a cat.
46
+ Steps: 2357
47
+ Prompt: most beautiful anime artwork, a most cute anime girl, double exposure, iridescent nebula galaxy, black background, ethereal glow, bloom, hdr, high-quality, 8K
48
+ Figure 1: We propose Speculative Jacobi Decoding, a training-free multi-token prediction algorithm, to accelerate auto-regressive text-to-image generation by reducing the number of model forward passes (denoted as steps) during inference. We perform our algorithm on Lumina-mGPT, and the reduced steps are marked in red. The original steps are marked in black.
49
+
50
+ ![](images/fd5120c9e47708df0bb5d9de6e7ce260763852fe7c88cd5121bb066233a79320.jpg)
51
+ Steps: 934 (2.5 × Faster)
52
+ Prompt: A cool furry black monkey meditates on the clean wet ground, in the dusk, the golden sunset is shining on the ground on one side and the other side, high-quality, 8K facial-symmetry
53
+
54
+ ![](images/2e2681328c40602d3f2bbc962d68f53720063e80e3085f33665c9588bc593d92.jpg)
55
+ Prompt: A Corgi dog in 2D logo style, simple texture, clean background, facial- and eye-symmetry.
56
+ Steps: 2357
57
+
58
+ ![](images/93186f6c3fc358f12a5a0048df2dded4350653faaf46d41945a589e0757d5a25.jpg)
59
+ Steps: 1075 $(2.2 \times$ Faster)
60
+
61
+ distillation computationally expensive (Kou et al., 2024). Therefore, our work aims to accelerate the auto-regressive text-to-image generation models in a training-free manner.
62
+
63
+ An intuitive approach is to enable the auto-regressive models to decode multiple tokens in parallel within a forward pass. In the early research on auto-regressive image generation, Jacobi decoding (Ortega & Rheinboldt, 2000) has been employed to achieve this objective (Song et al., 2021). Jacobi decoding is an iterative algorithm starting from a sequence of randomly initialized tokens, and this algorithm can be executed directly on pre-trained auto-regressive models in a training-free way. In each Jacobi iteration, the model performs a single forward pass on the input sequence with a causal mask, thus decoding tokens in parallel. The decoded tokens would converge after multiple iterations of parallel decoding. The criterion for this convergence is defined as follows: the difference between the values of decoded tokens remains within a sufficiently small threshold over two consecutive iterations. Since the number of iterations required for convergence is typically smaller than the sequence length and the parallel forward pass runs fast in GPUs, the generation can be accelerated with Jacobi Decoding.
64
+
65
+ However, Jacobi decoding faces significant challenges when applied to recent auto-regressive text-to-image generation models. We observe that the recent auto-regressive text-to-image generation models (Liu et al., 2024b; Chern et al., 2024; Sun et al., 2024a) greatly rely on sampling-based decoding with high randomness to generate diverse images. We present the generated images using top- $K$ sampling with various $K$ values, where a larger $K$ indicates higher randomness. As shown in Fig. 2, the model with high randomness in sampling generates images with diverse and high-fidelity details and structures, whereas it outputs monotonous or even incomprehensible images with greedy decoding. Unfortunately, Jacobi decoding with the deterministic criterion of convergence is
66
+
67
+ incompatible with such highly random sampling (analyzed in Sec. 5.4), i.e., it cannot accelerate inference under such sampling-based decoding.
68
+
69
+ In this work, we propose to use a probabilistic Jacobi decoding algorithm to accelerate the inference of auto-regressive text-to-image generation models and to support the sampling decoding methods for those models. We observe that the acceleration of Jacobi decoding relies on the assumption that multiple consecutive tokens can be correctly decoded in each Jacobi iteration (shown by green stepped area in Fig. 3). Similar ideas have been applied in other probabilistic algorithms for accelerating the decoding of large language models. For example, in speculative sampling (Leviathan et al., 2023; Chen et al., 2023), an additional small model is trained for rapidly generating draft sequences, and then the large language model probabilistically accepts a subset of draft tokens from left to right. Drawing from the above analysis, in this paper, we directly advance the deterministic Jacobi decoding into a probabilistic algorithm, coined as Speculative Jacobi Decoding (SJD). Our method allows the auto-regressive text-to-image generation models to decode multiple tokens within one forward pass in a training-free manner. In SJD, the model computes the conditional probability for a sequence of draft tokens with a single forward pass. Then, we define a probabilistic criterion to determine which draft tokens to accept, from left to right. The accepted tokens are appended to the fixed pre-filling sequence. The remaining tokens are concatenated with a set of newly initialized tokens, serving as the draft tokens for the next decoding iteration. Our SJD accelerates the inference of auto-regressive text-to-image generation models without requiring additional training or tuning of separate modules. Moreover, we propose the spatial locality-aware token initialization strategy to accelerate the generation process further.
70
+
71
+ We perform quantitative and qualitative experiments to demonstrate the effectiveness of our method. Results show that our method can accelerate several auto-regressive text-to-image generation models without sacrificing the quality of generated images. For example, it can accelerate Anole (Chern et al., 2024) and Lumina-mGPT (Liu et al., 2024b) by about $2 \times$ with almost no loss in visual quality. Moreover, the acceleration ratio can be beyond $3 \times$ in certain scenarios containing simple patterns.
72
+
73
+ To the best of our knowledge, SJD is the first method for accelerating the inference of auto-regressive text-to-image models that rely on sampling decoding. We summarize our contributions as follows:
74
+
75
+ - We propose a new probabilistic multi-token decoding algorithm, coined as Speculative Jacobi Decoding (SJD). By improving the previous Jacobi decoding with a probabilistic criterion for token acceptance, we can accelerate the recent auto-regressive text-to-image generation models that rely heavily on random token samplers.
76
+ - Compared with previous Speculative Decoding to accelerate language models, our approach is training-free and does not require training an extra model to predict draft tokens.
77
+ - Experiments demonstrate that our method can accelerate auto-regressive text-to-image generation by around $2 \times$ with almost no sacrifice in visual quality.
78
+
79
+ # 2 RELATED WORK
80
+
81
+ Auto-regressive image generation. Auto-regressive image generation models have two features: next-token-prediction and discrete image tokenization. Early works including PixelCNNs (Van den Oord et al., 2016; Salimans et al., 2017) and PixelSNAIL (Chen et al., 2018) use the auto-regressive strategy to model the image generation with the convolutional neural networks on the discretized pixels. These works generate pixels in the raster-scan ordering or the zigzag ordering. DALL-E (Ramesh et al., 2021) and CogView (Ding et al., 2021) pave the way for the pipeline of the auto-regressive image generation: A discrete autoencoder compresses RGB images into image tokens and a large auto-regressive model makes predictions based on these image tokens. Parti (Yu et al., 2022) uses a transformer encoder (Vaswani et al., 2017) to provide the textual features for the auto-regressive model to perform the next image token prediction, thereby achieving text-to-image generation. LlamaGen (Sun et al., 2024a) acts as a class-to-image auto-regressive baseline on ImageNet dataset (Deng et al., 2009). MARS (He et al., 2024) performs multi-modal generation with a mixture of auto-regressive models, where its image model is initialized with the pre-trained large language model and is fine-tuned to perform image generation. Chameleon (Team, 2024) aims to unify all multi-modal tasks with discrete tokens and perform the next token prediction on these tokens with a large auto-regressive model. Lumina-mGPT (Liu et al., 2024b) and Anole (Chern et al.,
82
+
83
+ 2024) fine-tune Chameleon for better text-to-image generation. In this paper, we conduct experiments mainly on Lumina-mGPT and Anole to verify the effectiveness of our method.
84
+
85
+ Acceleration of image generation models. The iterative image generation requires acceleration. For instance, the diffusion model, originally trained on a denoising trajectory with one thousand steps, has been accelerated to perform inference using just dozens or even a few steps. Given that the diffusion model has emerged as a leading approach in text-to-image generation (OpenAI, 2023; Rombach et al., 2022; Esser et al., 2024), most acceleration methods in image generation are built upon it. Many acceleration methods focus on shortening the denoising trajectory by distillation technique (Salimans & Ho, 2022; Song et al., 2023; Wang et al., 2024a; Kim et al., 2023; Xu et al., 2024; Yin et al., 2024b;a) while some other studies focus on reducing the computational complexity (Yuan et al., 2024; Zhao et al., 2024; Ma et al., 2024). In contrast to the diffusion model, acceleration methods for auto-regressive image generation have not been extensively explored, primarily due to the absence of powerful base models. Jacobi decoding is applied to PixelCNNs for inference acceleration in the early research (Song et al., 2021), yet it lacks a careful design for the random token sampling, significantly impacting its acceleration on current auto-regressive models. In this paper, we enhance Jacobi decoding to be compatible with this random sampling. Also, the inference process of each iteration in our method is similar to that of non-auto-regressive models (Chang et al., 2022; Tian et al., 2024; Li et al., 2024b). Nevertheless, unlike these models, our approach only modifies the inference schedule of pre-trained auto-regressive models instead of training a separate non-auto-regressive model, thereby preserving the performance and scalability of the auto-regressive models.
86
+
87
+ Acceleration of language models. Different from image generation, the auto-regressive paradigm prevails in language processing. A lot of works (Zhou et al., 2024; Devoto et al., 2024; Liu et al., 2024a;c; Yang et al., 2024a; DeepSeek-AI, 2024; Zhang et al., 2024; Fu et al., 2024a; Li et al., 2024a) focus on compressing the models by weight pruning, activation sparsification, quantization, factorization, but the paradigm of token-by-token prediction remains unchanged. There are also works fine-tuning the auto-regressive models to predict multiple tokens in parallel with several decoding heads (Gloockle et al., 2024). However, these works require more memory to load these additional heads in GPUs. The speculative sampling (Leviathan et al., 2023; Chen et al., 2023; Li et al., 2024c; Sun et al., 2024b) uses a small language model to assist the large language model in sequence generation. This model is trained on the same domain as the large model and is small enough for faster generation. It first generates a sequence with its own inference paradigm. Then, the large model verifies and samples only one prefix of this sequence to serve as part of the final output by executing a single forward pass. The verification phase is well-designed to guarantee that each sampled token theoretically satisfies the conditional probability parameterized by the large model. Jacobi decoding (Song et al., 2021; Santilli et al., 2023) allows the model to iteratively decode multiple tokens in fewer steps than the token counts with the deterministic greedy sampling but without auxiliary modules. CLLM (Kou et al., 2024) fine-tunes the collected Jacobi trajectories into large language models for acceleration. Lookahead Decoding (Fu et al., 2024b) adapts training-free Jacobi decoding in large language models by using a pool of $n$ -grams obtained via Jacobi iterations with greedy sampling. In this work, we directly adapt the probabilistic verification of speculative sampling into Jacobi decoding to advance it into a probabilistic algorithm without any additional auxiliary designs and training.
88
+
89
+ # 3 PRELIMINARIES
90
+
91
+ # 3.1 AUTO-REGRESSIVE TEXT-TO-IMAGE GENERATION
92
+
93
+ The auto-regressive text-to-image generation models are composed of three components: a discrete image tokenizer that encodes images into discrete tokens, an auto-regressive transformer-based generator that generates discrete image tokens with next-token-prediction conditioned on the text prompts, and an image decoder that decodes the predicted image tokens to the images in pixel space. The most time-consuming component for auto-regressive text-to-image generation is the auto-regressive transformer, and our work aims at accelerating the inference of the auto-regressive transformer to predict discrete image tokens based on text prompts.
94
+
95
+ During each inference step of the auto-regressive transformer, the model predicts the probability distribution of the next token over the entire vocabulary of the tokenizer (implemented through a softmax classifier) and then samples from this distribution to generate the token. Specifically, given a
96
+
97
+ ![](images/1098cb16829d929ca356d838b1649e532eb3c86d60fcf080edad90378b0499cc.jpg)
98
+ Figure 2: The results of the greedy decoding (no randomness), top-10, top-100, and top-2000 sampling (high randomness) of Lumina-mGPT (Liu et al., 2024b). Each row presents the images generated with the same random seeds.
99
+
100
+ ![](images/985db2d3e452f6bdc4ef9b5152940e33ead640cbf280ff77ed24186f04b93c4b.jpg)
101
+ Figure 3: The pipeline of the vanilla Jacobi decoding on an auto-regressive model. The prediction with sampling is performed in parallel at each Jacobi iteration. We use different shades of blue to indicate the differences between the tokens that have not been accepted.
102
+
103
+ sequence of pre-filled or already decoded tokens $(x_{1}, x_{2}, \dots, x_{i})$ , the auto-regressive model predicts a categorical distribution $p_{\theta}(x | \boldsymbol{x}_{1:i})$ , where we denote the input token sequence $(x_{1}, x_{2}, \dots, x_{i})$ as $\boldsymbol{x}_{1:i}$ for simplicity, $\theta$ denotes the auto-regressive model parameters, and $x$ is the random variable representing the next token (category). Then, a token is sampled according to $p_{\theta}(x | \boldsymbol{x}_{1:i})$ , treated as $x_{i+1}$ , and is subsequently appended to $(x_{1}, x_{2}, \dots, x_{i})$ for the next decoding step. In text-to-image auto-regressive generation, the above process starts with a sequence of text tokens and a special token to represent the beginning of image token prediction. To facilitate the generation of diverse images, top- $K$ sampling is commonly employed as the token sampling strategy for text-to-image generation.
104
+
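+ As a concrete illustration, the following minimal sketch shows one next-token decoding step with top-$K$ sampling as described above; the logits tensor is assumed to come from the auto-regressive transformer, and all names are illustrative.
+
+ ```python
+ import torch
+
+ def sample_next_token(logits, top_k=2000, temperature=1.0):
+     """Sample x_{i+1} from p_theta(x | x_{1:i}) restricted to the top-K logits."""
+     logits = logits / temperature
+     k = min(top_k, logits.shape[-1])
+     topk_vals, topk_idx = torch.topk(logits, k=k, dim=-1)
+     probs = torch.softmax(topk_vals, dim=-1)            # renormalize over the K candidates
+     choice = torch.multinomial(probs, num_samples=1)    # random sampling preserves diversity
+     return topk_idx.gather(-1, choice)                  # token id in the full vocabulary
+ ```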
105
+ # 3.2 JACOBI DECODING
106
+
107
+ Jacobi decoding deems the auto-regressive inference as a process of solving the fixed point of a nonlinear equation in a triangular system (Song et al., 2021). This decoding algorithm iteratively performs multi-token decoding and can be executed without fine-tuning or auxiliary modules. We show the specific process of decoding one sequence of tokens in Fig. 3. First, given the previously pre-filled or decoded tokens, we randomly initialize a sequence of candidate tokens. Then, in each iteration, we execute one forward pass of the auto-regressive model for all the candidate tokens with a causal mask. The predicted probabilities then generate the tokens via greedy sampling, and these sampled tokens are taken as the inputs of the next iteration. This process can be formulated as: $x_{i}^{(j + 1)} = \arg \max_{x}p_{\theta}(x|\boldsymbol{x}_{1:i - 1}^{(j)})$ , where $i$ denotes the token index and $j$ denotes the iteration index. The Jacobi decoding process continues iterating until the convergence is reached, as determined by a deterministic criterion where these tokens remain unchanged between consecutive iterations.
108
+
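+ A schematic of this deterministic Jacobi iteration is given below; it assumes a causal `model` that returns per-position logits for the whole window in one forward pass and exposes a `vocab_size` attribute, with convergence checked via the unchanged-token criterion under greedy argmax (all names are illustrative).
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def jacobi_decode(model, prefix, window, max_iters=100):
+     """Iteratively refine `window` candidate tokens until they stop changing."""
+     tokens = torch.randint(model.vocab_size, (window,))          # random initialization
+     for _ in range(max_iters):
+         seq = torch.cat([prefix, tokens])
+         logits = model(seq.unsqueeze(0))[0]                      # one causal forward pass
+         # Logits at position p predict token p + 1, so slice the window predictions.
+         new_tokens = logits[len(prefix) - 1 : len(seq) - 1].argmax(dim=-1)
+         if torch.equal(new_tokens, tokens):                      # deterministic fixed point
+             break
+         tokens = new_tokens
+     return tokens
+ ```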
109
+ Discussion. The acceleration of Jacobi decoding derives from an assumption that multiple tokens can be correctly decoded within one forward pass in Jacobi iteration. Fig. 3 illustrates this scenario, where the accepted tokens (green stepped area) extend beyond the dashed green triangle outline. Specifically, the model accepts two consecutive tokens after the first Jacobi iteration. Thus, it can generate at least four tokens through three forward passes. In the worst case, only three tokens can be generated through three forward passes (Song et al., 2021). Note that the number of forward passes in the worst case of Jacobi decoding is equal to that in the original auto-regressive case.
110
+
111
+ # 4 SPECULATIVE JACOBI DECODING
112
+
113
+ Analysis. The vanilla Jacobi decoding incorporates a deterministic criterion for determining the convergence, which works well with greedy sampling (no randomness) in language models (Fu et al., 2024b; Kou et al., 2024). In contrast, in auto-regressive text-to-image generation, randomness plays a crucial role in the sampling-based decoding process, i.e., higher randomness corresponds to highly diverse details and structures in the generated images. As shown in Fig. 2, we use the text prompt "a cat on a mat" to generate images with different sampling strategies including greedy decoding and
114
+
115
+ ![](images/6e3f08f784fcf7a16c13b51de28bb437e84fc100fa4f1f850c509de45b326804.jpg)
116
+ Figure 4: Overview of one iteration of our speculative Jacobi decoding (SJD). First, a sequence of draft tokens and the corresponding probabilities are taken as the inputs. Second, we perform a forward pass with the auto-regressive model on the draft tokens, obtaining the probabilities of these tokens. Third, we perform the verification according to these two types of probabilities, accepting a subset of tokens and (re-)sampling the remaining tokens. Last, the accepted tokens are appended to the pre-filling tokens and fixed, while the resampled tokens, along with newly initialized tokens, will serve as the draft tokens for the next iteration.
117
+
118
+ top- $K$ decoding with different values of $K$. We observe that the generated images contain more details and more diverse structures as $K$ increases. Greedy decoding (equivalent to $K = 1$ when there are no ties in probabilities) leads to suboptimal performance, with low quality and no diversity in the generated images. Therefore, random sampling-based decoding is important for image generation, but the original Jacobi decoding is incompatible with such randomness in sampling.
119
+
120
+ To address the aforementioned issue, we advance the deterministic Jacobi iteration into a new training-free probabilistic parallel decoding algorithm, inspired by speculative sampling (Leviathan et al., 2023). Specifically, in each iteration, we decode multiple tokens in parallel and utilize a probabilistic criterion to accept multiple decoded tokens from the outputs of the previous iteration. Moreover, to reduce the number of iterations required for inference, we propose a new image token initialization strategy incorporating spatial priors.
121
+
122
+ # 4.1 SPECULATIVE JACOBI ITERATION
123
+
124
+ After pre-filling the tokens of text prompts, we perform speculative Jacobi decoding for the image tokens. Acknowledging the computational expense of decoding all image tokens simultaneously, we decode multiple tokens per iteration in a sliding-window manner, termed the Jacobi window. Our method starts with a sequence of initialized candidate tokens, the length of which corresponds to the window size. During each decoding iteration, we predict the token probabilities for the draft token sequence within the current window. Then, a subset of these tokens is accepted based on a probabilistic criterion, and these accepted tokens are added to the fixed pre-filling tokens for the next iteration. The remaining unaccepted tokens are resampled for the next iteration. In the next iteration, the Jacobi windows slides to include the unaccepted tokens from the previous iteration plus some newly initialized tokens to maintain the Jacobi window size during decoding. The process of our iteration is illustrated in Fig. 4. Assuming that we have pre-filled or accepted $n$ tokens and the Jacobi window size is $W$ , we would decode the next $W$ tokens. The iteration can be described as follows:
125
+
126
+ Step 1: At the $j$ -th iteration, we have the input tokens that are either predicted (but not accepted) in the previous $(j - 1)$ -th iteration or newly initialized (Sec. 4.2). These tokens serve as the draft tokens in this iteration, denoted as $(x_{n}^{(j)}, x_{n + 1}^{(j)}, \dots, x_{n + W - 1}^{(j)})$ . We denote the probability corresponding to the draft token $x_{i}^{(j)}$ as $p_{\theta}(x | \mathbf{x}_{1:i - 1}^{(j - 1)})$ . This probability is set to be conditioned on the input tokens of the previous iteration (Step 3 ensures this setting).
127
+
128
+ Step 2: We execute a single forward pass of the auto-regressive model to obtain the conditional probability for the draft tokens in parallel. The probability for $x_{i}^{(j)}$ is denoted as $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$ .
129
+
130
+ Step 3: We conduct the speculative verification between the conditional probability given the draft tokens from the previous iteration, $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j-1)})$ , and the conditional probability from the current iteration, $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$ . In this verification process, we scan the draft sequence from left to right and determine the acceptance of each token based on a probabilistic threshold. We measure the ratio of the probability conditioned on the draft tokens in the current iteration to that conditioned on the tokens from the previous iteration. Intuitively, this ratio measures how well the token is decoded from the previous iteration (the draft token) and whether further decoding is necessary. The acceptance criterion of a token $x_i^{(j)}$ can be formulated as follows:
131
+
132
+ $$
133
+ x_{i}^{(j+1)} \leftarrow x_{i}^{(j)} \quad \text{if } r \sim \mathcal{U}[0, 1], \; r < \min\left(1, \frac{p_{\theta}\left(x_{i}^{(j)} \mid \boldsymbol{x}_{1:i-1}^{(j)}\right)}{p_{\theta}\left(x_{i}^{(j)} \mid \boldsymbol{x}_{1:i-1}^{(j-1)}\right)}\right), \tag{1}
134
+ $$
135
+
136
+ where $r$ is a random variable and $\mathcal{U}[0,1]$ represents a uniform distribution between 0 and 1. If a token meets the above criterion, it is accepted and appended to the pre-filling token sequence for the next iteration, i.e., $x_{i}^{(j^{\prime})} = x_{i}^{(j)}\forall j^{\prime} > j$ . After accepting one token, we continue this scan until a token is rejected. If the token $x_{i}^{(j)}$ is rejected, i.e., the inequality in Equ. (1) is not true, we resample a new token by a calibrated distribution:
137
+
138
+ $$
139
+ x_{i}^{(j+1)} \sim \frac{\max\left(0, \, p_{\theta}\left(x \mid \boldsymbol{x}_{1:i-1}^{(j)}\right) - p_{\theta}\left(x \mid \boldsymbol{x}_{1:i-1}^{(j-1)}\right)\right)}{\sum_{x} \max\left(0, \, p_{\theta}\left(x \mid \boldsymbol{x}_{1:i-1}^{(j)}\right) - p_{\theta}\left(x \mid \boldsymbol{x}_{1:i-1}^{(j-1)}\right)\right)}. \tag{2}
140
+ $$
141
+
142
+ Then, unlike the vanilla speculative sampling, we do not end this scan, but sample the tokens at the remaining indexes with the conditional probability calculated in this iteration. This sampling process is consistent with the original Jacobi iteration, and the specific process is as follows:
143
+
144
+ $$
145
+ x_{i'}^{(j+1)} \sim p_{\theta}(x \mid \boldsymbol{x}_{1:i'-1}^{(j)}), \quad \forall i' > i. \tag{3}
146
+ $$
147
+
148
+ It can be proven that all the accepted and sampled tokens in the Jacobi iteration satisfy $x_{i}^{(j + 1)} \sim p_{\theta}(x|\pmb{x}_{1:i - 1}^{(j)})$ (the proof is in the appendix). This conditional probability $p_{\theta}(x|\pmb{x}_{1:i - 1}^{(j)})$ is exactly the probability predicted by the parallel forward pass on the input draft tokens, and is passed to the next iteration together with the sampled tokens.
149
+
150
+ Step 4: We append the unaccepted tokens with newly initialized candidate tokens, forming a new Jacobi window of $W$ tokens, as the draft tokens for the next iteration. We use this fixed window size instead of the whole sequence to save memory and accelerate inference.
151
+
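+ The verification in Steps 3 and 4 can be sketched as follows, operating on the per-token categorical distributions from the previous and current iterations (each row of shape `(V,)` over the vocabulary); the left-to-right scan implements Equ. (1), the calibrated residual distribution implements Equ. (2), and the parallel resampling of later positions implements Equ. (3). All tensor names are illustrative.
+
+ ```python
+ import torch
+
+ def speculative_jacobi_verify(draft_tokens, p_prev, p_curr):
+     """draft_tokens: (W,) tokens from the previous iteration.
+     p_prev[i], p_curr[i]: p(x | x_{1:i-1}^{(j-1)}) and p(x | x_{1:i-1}^{(j)}), each of shape (V,)."""
+     W = draft_tokens.shape[0]
+     accepted = []
+     for i in range(W):
+         tok = draft_tokens[i]
+         ratio = p_curr[i, tok] / p_prev[i, tok].clamp_min(1e-12)
+         if torch.rand(()) < ratio.clamp(max=1.0):                 # Equ. (1): accept the draft token
+             accepted.append(tok)
+             continue
+         # Equ. (2): resample the rejected position from the calibrated residual distribution.
+         residual = (p_curr[i] - p_prev[i]).clamp_min(0.0)
+         first_new = torch.multinomial(residual / residual.sum(), 1).squeeze(0)
+         # Equ. (3): resample all remaining positions from the current-iteration probabilities.
+         rest = [torch.multinomial(p_curr[k], 1).squeeze(0) for k in range(i + 1, W)]
+         new_draft = torch.stack([first_new] + rest)
+         accepted = torch.stack(accepted) if accepted else draft_tokens[:0]
+         return accepted, new_draft
+     return torch.stack(accepted), draft_tokens[:0]                # every draft token was accepted
+ ```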
152
+ # 4.2 TOKEN INITIALIZATION WITH SPATIAL PRIOR
153
+
154
+ Vanilla Jacobi decoding methods sample the initial candidate tokens from a uniform distribution. However, 2D images exhibit unique characteristics of spatial locality, i.e., spatially adjacent tokens tend to share similar semantics and textures. Leveraging these characteristics for token initialization may enable faster convergence. Considering that auto-regressive models generate image tokens in a raster scan order (from the top-left to the bottom-right in 2D space), we propose the following strategies for initializing new tokens: (a) repeating the previously generated left adjacent token; (b) repeating the previously generated above adjacent token; (c) resampling from the predicted probability from the left adjacent token; (d) resampling from the predicted probability from the above adjacent token. Experimental results demonstrate that these strategies provide greater acceleration than random initialization under certain scenarios.
155
+
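+ A small sketch of strategies (a) and (b) above is given below: each newly initialized position in the raster-scan token grid copies its left or above neighbor among the already-decoded tokens (the grid width and indexing are illustrative assumptions).
+
+ ```python
+ import torch
+
+ def init_new_tokens(generated, num_new, grid_width, mode="left"):
+     """Initialize `num_new` draft tokens by repeating a spatially adjacent decoded token.
+     generated: 1D tensor of already-decoded image tokens in raster-scan order."""
+     new_tokens = []
+     for offset in range(num_new):
+         pos = generated.numel() + offset              # absolute raster-scan position
+         if mode == "left" and pos % grid_width != 0:
+             src = pos - 1                             # token to the left
+         else:
+             src = max(pos - grid_width, 0)            # token directly above (or fallback)
+         src = min(src, generated.numel() - 1)         # only copy from decoded tokens
+         new_tokens.append(generated[src])
+     return torch.stack(new_tokens)
+ ```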
156
+ # 5 EXPERIMENTS
157
+
158
+ # 5.1 IMPLEMENTATION DETAILS
159
+
160
+ We experiment with two recent and representative auto-regressive text-to-image generation models, Lumina-mGPT (Liu et al., 2024b) and Anole (Chern et al., 2024). For Lumina-mGPT (Liu et al.,
161
+
162
+ Table 1: The evaluation on the validation set of MSCOCO2017 with A100. JD: Jacobi decoding. ISP: initialization with spatial prior. SJD: Speculative Jacobi decoding.
163
+
164
+ <table><tr><td rowspan="2" colspan="2">Configuration</td><td rowspan="2">Average Latency (↓)</td><td colspan="2">Acceleration (↑)</td><td rowspan="2">FID (↓)</td><td rowspan="2">CLIP-Score (↑)</td></tr><tr><td>Latency</td><td>Step</td></tr><tr><td>A</td><td>Lumina-mGPT (Liu et al., 2024b)</td><td>87.23s</td><td>1.00×</td><td>1.00×</td><td>30.76</td><td>31.29</td></tr><tr><td>B</td><td>w. JD (Song et al., 2021)</td><td>85.20s</td><td>1.02×</td><td>1.04×</td><td>30.66</td><td>31.38</td></tr><tr><td>C</td><td>w. SJD</td><td>42.73s</td><td>2.04×</td><td>2.22×</td><td>30.85</td><td>31.35</td></tr><tr><td>D</td><td>w. SJD (ISP)</td><td>42.49s</td><td>2.05×</td><td>2.23×</td><td>31.13</td><td>31.33</td></tr><tr><td>E</td><td>Anole (Chern et al., 2024)</td><td>48.96s</td><td>1.00×</td><td>1.00×</td><td>28.87</td><td>30.59</td></tr><tr><td>F</td><td>w. SJD (ISP)</td><td>26.18s</td><td>1.87×</td><td>1.97×</td><td>29.14</td><td>30.61</td></tr></table>
165
+
166
+ Table 2: The evaluation on the validation set of Parti-prompt with RTX4090. JD: Jacobi decoding. ISP: initialization with spatial prior. SJD: Speculative Jacobi decoding.
167
+
168
+ <table><tr><td rowspan="2" colspan="2">Configuration</td><td rowspan="2">Average Latency (↓)</td><td colspan="2">Acceleration (↑)</td><td rowspan="2">CLIP-Score (↑)</td></tr><tr><td>Latency</td><td>Step</td></tr><tr><td>A</td><td>Lumina-mGPT (Liu et al., 2024b)</td><td>100.69s</td><td>1.00×</td><td>1.00×</td><td>32.13</td></tr><tr><td>B</td><td>w. JD (Song et al., 2021)</td><td>100.00s</td><td>1.01×</td><td>1.04×</td><td>32.17</td></tr><tr><td>C</td><td>w. SJD</td><td>47.52s</td><td>2.12×</td><td>2.26×</td><td>32.13</td></tr><tr><td>D</td><td>w. SJD (ISP)</td><td>47.35s</td><td>2.13×</td><td>2.28×</td><td>32.06</td></tr><tr><td>E</td><td>Anole (Chern et al., 2024)</td><td>48.24s</td><td>1.00×</td><td>1.00×</td><td>30.46</td></tr><tr><td>F</td><td>w. SJD (ISP)</td><td>25.12s</td><td>1.92×</td><td>2.11×</td><td>30.48</td></tr></table>
169
+
170
+ 2024b), by default, we use its 7B version to generate $768 \times 768$ images for evaluation, and we measure the sampling randomness by the value $K$ of its top- $K$ logit sampler. Following the basic setting of Lumina-mGPT, $K$ is set to 2000 and the classifier-free guidance weight is set to 3.0. Anole (Chern et al., 2024) is another 7B auto-regressive generation model finetuned from Chameleon (Team, 2024) that can generate $512 \times 512$ images.
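For reference, a top-$K$ logit sampler with classifier-free guidance of this kind can be sketched as below; the guidance formula and the function signature are our assumptions and may differ from the exact implementations in Lumina-mGPT and Anole.

```python
import torch

def sample_topk_cfg(cond_logits, uncond_logits, k=2000, cfg_weight=3.0):
    """Top-K sampling with classifier-free guidance (illustrative sketch).

    cond_logits / uncond_logits: (V,) logits with and without the text condition.
    """
    # A common CFG form: push logits away from the unconditional prediction.
    logits = uncond_logits + cfg_weight * (cond_logits - uncond_logits)
    topk_vals, topk_idx = torch.topk(logits, k)
    probs = torch.softmax(topk_vals, dim=-1)
    return int(topk_idx[torch.multinomial(probs, 1)])
```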
171
+
172
+ Metrics. For visual quality, we use FID (Heusel et al., 2017) and CLIP-Score (Radford et al., 2021) as the evaluation metrics. We use the step compression ratio (Fu et al., 2024b): $S = \frac{\# \text{generated tokens}}{\# \text{decoding steps}}$ to show the theoretical acceleration ratio. For each benchmark, we report the average step compression ratio over all generated images, and we also attach this ratio to each image sample in the qualitative comparison of our method with other approaches. Moreover, we report the latency acceleration of the model forward passes on a single GPU to measure the actual speedup.
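As a concrete illustration of this metric, the snippet below computes $S$ for one image; the example numbers are taken from Fig. 5 (2,357 image tokens decoded in 1,012 steps).

```python
def step_compression_ratio(num_generated_tokens: int, num_decoding_steps: int) -> float:
    # S = (# generated tokens) / (# decoding steps); vanilla AR decoding gives S = 1.
    return num_generated_tokens / num_decoding_steps

# e.g., 2357 image tokens decoded in 1012 steps -> S is roughly 2.33x
print(step_compression_ratio(2357, 1012))
```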
173
+
174
+ Benchmark. The Parti-prompts (Yu et al., 2022) and the validation set of MS-COCO 2017 (Lin et al., 2014) are used as the image generation benchmarks. On Parti-prompts, we report the CLIP-Score and the latency and step acceleration, but not FID, because this benchmark only provides prompts without ground-truth images.
175
+
176
+ # 5.2 QUANTITATIVE RESULTS
177
+
178
+ As shown in Tab. 1 and Tab. 2, our speculative Jacobi decoding accelerates auto-regressive text-to-image generation with almost no loss in visual quality. When comparing our SJD (configs C and D) with the vanilla Jacobi decoding (config B) on Lumina-mGPT, we observe that our probabilistic method greatly accelerates the generation by more than $2 \times$ while the Jacobi decoding cannot. Moreover, our method provides a step compression of about $2 \times$ for Anole. We observe that the token initialization with spatial priors has a marginal influence on the speed of general image generation. We further analyze the specific scenarios of this modification in our ablation studies.
179
+
180
+ # 5.3 QUALITATIVE RESULTS
181
+
182
+ As shown in Fig. 5, we present the images generated with different configurations. For comparison, we set the same random seed for each image sample. According to our observation, the visual qualities of the images generated by different methods are similar, illustrating that our method preserves visual quality across multiple image styles. More importantly, our speculative Jacobi decoding, with or without our spatial initialization, greatly reduces the number of inference steps by more than $2 \times$ for each case, and thus accelerates the inference process.
183
+
184
+ ![](images/51b0089f6b2d75033d63fd124dcf8bfdc8131fa5f9ac79ef67475d3a65a78c45.jpg)
185
+ Original
186
+ Steps: 2357
187
+
188
+ ![](images/dc1143853d8c17f46bda8bf8f3878edcbbaf8f48cd78afe3ae96bfe3ffc8697a.jpg)
189
+ Jacobi
190
+ Decoding
191
+ Steps: 2257 (1.0 × Faster)
192
+
193
+ ![](images/9feef08824f1d5f252fd1b15605d5f7341b05519304259afcea4c210cb49e875.jpg)
194
+ Speculative Jacobi
195
+ Decoding
196
+ Steps: 1012 (2.3 × Faster)
197
+
198
+ ![](images/e59ee93e6077a4527322099fb116ce7ed86703625944c0bdec6cda57b12b4436.jpg)
199
+ Speculative Jacobi
200
+ Decoding + Spatial Init
201
+ Steps: 1001 (2.4 × Faster)
202
+ Prompt: a cool man with a beautiful face wearing a yellow suit stands in the Mountain, the most Professional high-quality 8K photograph
203
+
204
+ ![](images/24bd84b3e2b21f81924e438724dd921a18b8737cd6097e1b02c6052b1534254c.jpg)
205
+ Steps: 2357
206
+ Prompt: Most beautiful girl (with deep red iris, short white hair, and a shy smile) wearing purple clothes and a hood. eye-symmetry, facial symmetry, 8K, high quality, realistic
207
+
208
+ ![](images/7daaa7dd31a64989cd00f3a14edb04a252ed4b0a9ed4e5c077fb25a68af451a7.jpg)
209
+ Steps: 2267 (1.0 × Faster)
210
+
211
+ ![](images/43d43ad6727074ec2f8916dba09ec96a824b1185ac9fac549e4509feb762812d.jpg)
212
+ Steps: 1077 (2.2 × Faster)
213
+
214
+ ![](images/59ccf39e4d7e7655984f083a8ad1b3710df07b42941dd8e20cf343574d1f8399.jpg)
215
+ Steps: 1049 (2.2 × Faster)
216
+
217
+ ![](images/c8d4566d70d4d83cc79ba4cf5d621bf69fb003cb445eb5589ca5970350707ade.jpg)
218
+ Steps: 2357
219
+ Prompt: a giant golden flying saucer firing lasers from the bottom, scorching the ground, the most Fantasy high-quality photos
220
+
221
+ ![](images/afd70e386838e7c17b961796e45669b2f078e3facc83908a23d4596a88e844e2.jpg)
222
+ Steps: 2293 $(1.0\times$ Faster)
223
+
224
+ ![](images/710e45f63909cdfdb3b37bdd68580c93dfd4408c35c26d2b6d9cd6aac36725c4.jpg)
225
+ Steps: 1059 $(2.2\times$ Faster)
226
+
227
+ ![](images/391acd80573da35f9c41106bb2116d7a15d9faac4618c62580e2cfb0c8342011.jpg)
228
+ Steps: 1029 (2.3 × Faster)
229
+
230
+ ![](images/a3f450d168db15658bad2bfb133837ed43b4320a10cd0af048a7ce34f9fcaef5.jpg)
231
+ Steps: 2357
232
+ Prompt: one typical zebra's upper body in the grassland, not occluded by grass, the zebra has only two ears, clear striped pattern, the most
233
+ Professional high-quality 8K photograph, clear texture, sharp-focus, Exquisite details
234
+ Figure 5: The images generated by Lumina-mGPT with different acceleration methods.
235
+
236
+ ![](images/07a325322b43018e34962b0b890614a36805523253c526d544faeaf11a2e1ccf.jpg)
237
+ Steps: 2253 (1.0 × Faster)
238
+
239
+ ![](images/1f8e578577acf9a079b1411e94cc6ace53139f24f8d4aaf9d85d510c35bd92ea.jpg)
240
+ Steps: 1035 (2.3 × Faster)
241
+
242
+ ![](images/0eab56ef1a83694707c08d43c28d42f8bd12767490991abb46ad2d5b4c148913.jpg)
243
+ Steps: 1032 (2.3 × Faster)
244
+
245
+ # 5.4 ABLATION STUDIES
246
+
247
+ We perform ablation studies on Lumina-mGPT 7B. Except for the experiments involving multiple resolutions, we use this model to generate $768 \times 768$ images for evaluation.
248
+
249
+ The correlation between the sampling strategy in decoding and the acceleration ratio. We compare the deterministic Jacobi decoding to our method under various levels of logit sampling randomness. In Fig. 6, we show the correlation between the acceleration ratio and the randomness of logit sampling. According to this figure, our method is stable across different levels of randomness and achieves more than a $2 \times$ step compression ratio. In contrast, Jacobi decoding only accelerates greedy sampling (top-1 sampling), which is not useful for image generation.
250
+
251
+ The relationship between the image resolution and the acceleration ratio. We employ the 7B Lumina-mGPT to generate images with the resolutions $512 \times 512$ (about 1,000 tokens), $768 \times 768$ (about 2,300 tokens), and $1024 \times 1024$ (about 4,100 tokens). We calculate the average step compression ratio for each resolution given the same set of text prompts. Then, we present these
252
+
253
+ ![](images/6450b065044c2ccbefe2c1748360bc64c5d26cee6b1cf2b0a89ce133b4591d9a.jpg)
254
+ Figure 6: Our method beats the vanilla Jacobi decoding under various sampling randomness.
255
+
256
+ ![](images/c5fd024ba1ca3ba788bd591436bf239854b4506012d8b03d71b8dcdb6ae2aef1.jpg)
257
+ Figure 7: Higher image resolution can result in a slightly larger acceleration in our method.
258
+
259
+ ![](images/2ea94d030ebdda3c028dba76b2e6821d8ebaaa38306997019e73b98835ffa721.jpg)
260
+ Figure 8: The acceleration ratio is the largest when the Jacobi window size is at least 16.
261
+
262
+ ![](images/9f8ed33f4d833b55691c90203fff4992cc2a73851569226783dd789431c4236d.jpg)
263
+ Figure 9: The token initialization strategy impacts the acceleration ratio for images containing simple and repetitive patterns (examples of generated images on the right side).
264
+
265
+ ratios in Fig. 7. The results demonstrate that our method is stable across multiple resolutions, i.e., the acceleration on each resolution is larger than $2 \times$ . Moreover, with higher resolutions, the acceleration can be slightly better. For example, SJD achieves $2.43 \times$ acceleration for $1024 \times 1024$ images.
266
+
267
+ Studies on the window size of each iteration. As mentioned in Sec. 4.1, we append newly initialized tokens onto the unaccepted tokens in each iteration, keeping the Jacobi window size fixed. Accordingly, we perform ablation studies on the window size and report the acceleration ratios under various window sizes in Fig. 8. The results show that our acceleration ratio almost reaches its maximum when the window contains at least 16 tokens.
268
+
269
+ Studies on the initialization of candidate tokens. The acceleration ratio of our speculative Jacobi decoding also depends on the application scenario. For example, when generating images composed of many simple and repeating patterns, a token initialization correlated with the already sampled tokens can provide a more precise guess than random initialization. As shown in Fig. 9, we adopt an extreme case, the textual prompt "2D logo of a pure white box in a pure black background", for evaluation. We run the accelerated forward passes ten times with different random seeds for each initialization. The results show that the average step compression with the spatial-prior-aware initialization is much greater than that with random initialization. Also, Fig. 1 shows that generating 2D logos requires fewer steps than generating images containing exquisite details.
270
+
271
+ # 6 CONCLUSION
272
+
273
+ This paper proposes a new training-free probabilistic parallel decoding algorithm, called Speculative Jacobi Decoding (SJD), to accelerate auto-regressive text-to-image generation. Sampling-based decoding is critical for image generation models, and it prevents the naive Jacobi decoding from accelerating auto-regressive text-to-image generation. By introducing a probabilistic convergence criterion, our SJD lets the model iteratively predict and then sample multiple tokens per step, so that the number of decoding steps falls below the number of generated tokens even with sampling-based decoding rather than greedy decoding. We also propose the spatial-aware token initialization to reduce the number of iterations in specific scenarios. We conduct experiments to verify the effectiveness of SJD on multiple auto-regressive text-to-image generation models, and it accelerates the models without sacrificing visual quality.
274
+
275
+ # ACKNOWLEDGEMENTS
276
+
277
+ This work is supported by the National Natural Science Foundation of China (No. 62402406) and the HKU IDS Research Seed Fund. This work is partly supported by the HKU Shanghai Intelligent Computing Research Center.
278
+
279
+ # REFERENCES
280
+
281
+ Emu3 Team BAAI. Emu3: Next-token prediction is all you need, 2024. URL https://emu.baai.ac.cn/.
282
+ Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. Advances in neural information processing systems, 32, 2019.
283
+ Shaojie Bai, Zhengyang Geng, Yash Savani, and J Zico Kolter. Deep equilibrium optical flow estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 620-630, 2022.
284
+ Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023.
285
+ Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11315-11325, 2022.
286
+ Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023.
287
+ Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. Pixelsnail: An improved autoregressive generative model. In International conference on machine learning, pp. 864-872. PMLR, 2018.
288
+ Ethan Chern, Jiadi Su, Yan Ma, and Pengfei Liu. Anole: An open, autoregressive, native large multimodal models for interleaved image-text generation. arXiv preprint arXiv:2407.06135, 2024.
289
+ DeepSeek-AI. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model, 2024.
290
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
291
+ Alessio Devoto, Yu Zhao, Simone Scardapane, and Pasquale Minervini. A simple and effective $l_2$ norm-based strategy for KV cache compression. arXiv preprint arXiv:2406.11430, 2024.
292
+ Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in neural information processing systems, 34:19822-19835, 2021.
293
+ Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024.
294
+ Tianyu Fu, Haofeng Huang, Xuefei Ning, Genghan Zhang, Boju Chen, Tianqi Wu, Hongyi Wang, Zixiao Huang, Shiyao Li, Shengen Yan, et al. Moa: Mixture of sparse attention for automatic large language model compression. arXiv preprint arXiv:2406.14909, 2024a.
295
+ Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. Break the sequential dependency of llm inference using lookahead decoding. arXiv preprint arXiv:2402.02057, 2024b.
296
+
297
+ Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. arXiv preprint arXiv:2404.19737, 2024.
298
+ Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
299
+ Wanggui He, Siming Fu, Mushui Liu, Xierui Wang, Wenyi Xiao, Fangxun Shu, Yi Wang, Lei Zhang, Zhelun Yu, Haoyuan Li, et al. Mars: Mixture of auto-regressive models for fine-grained text-to-image synthesis. arXiv preprint arXiv:2407.07614, 2024.
300
+ Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
301
+ Anthony Hu, Lloyd Russell, Hudson Yeo, Zak Murez, George Fedoseev, Alex Kendall, Jamie Shotton, and Gianluca Corrado. Gaia-1: A generative world model for autonomous driving. arXiv preprint arXiv:2309.17080, 2023.
302
+ Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. arXiv preprint arXiv:2310.02279, 2023.
303
+ Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Rachel Hornung, Hartwig Adam, Hassan Akbari, Yair Alon, Vighnesh Birodkar, et al. Videopoet: A large language model for zero-shot video generation. arXiv preprint arXiv:2312.14125, 2023.
304
+ Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, and Hao Zhang. Cllms: Consistency large language models. arXiv preprint arXiv:2403.00835, 2024.
305
+ LAION. Laion-coco 600m, 2022. URL https://laion.ai/blog/laion-coco.
306
+ Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pp. 19274-19286. PMLR, 2023.
307
+ Shiyao Li, Xuefei Ning, Luning Wang, Tengxuan Liu, Xiangsheng Shi, Shengen Yan, Guohao Dai, Huazhong Yang, and Yu Wang. Evaluating quantized large language models. International Conference on Machine Learning, 2024a.
308
+ Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. arXiv preprint arXiv:2406.11838, 2024b.
309
+ Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle: Speculative sampling requires rethinking feature uncertainty. arXiv preprint arXiv:2401.15077, 2024c.
310
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
311
+ Akide Liu, Jing Liu, Zizheng Pan, Yefei He, Gholamreza Haffari, and Bohan Zhuang. Minicache: Kv cache compression in depth dimension for large language models. arXiv preprint arXiv:2405.14366, 2024a.
312
+ Dongyang Liu, Shitian Zhao, Le Zhuo, Weifeng Lin, Yu Qiao, Hongsheng Li, and Peng Gao. Lumina-mGPT: Illuminate flexible photorealistic text-to-image generation with multimodal generative pretraining. arXiv preprint arXiv:2408.02657, 2024b.
313
+ Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, and Anshumali Shrivastava. Scissorhands: Exploiting the persistence of importance hypothesis for llm kv cache compression at test time. Advances in Neural Information Processing Systems, 36, 2024c.
314
+ Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023.
315
+
316
+ Xinyin Ma, Gongfan Fang, and Xinchao Wang. Deepcache: Accelerating diffusion models for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15762-15772, 2024.
317
+ OpenAI. Dalle-2, 2023. URL https://openai.com/dall-e-2.
318
+ James M Ortega and Werner C Rheinboldt. Iterative solution of nonlinear equations in several variables. SIAM, 2000.
319
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, volume 139 of Proceedings of Machine Learning Research, pp. 8748-8763. PMLR, 2021.
320
+ Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International conference on machine learning, pp. 8821-8831. PMLR, 2021.
321
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pp. 10674-10685. IEEE, 2022.
322
+ Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512, 2022.
323
+ Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
324
+ Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, and Emanuele Rodola. Accelerating transformer inference for translation via parallel decoding. arXiv preprint arXiv:2305.10427, 2023.
325
+ Yang Song, Chenlin Meng, Renjie Liao, and Stefano Ermon. Accelerating feedforward computation via parallel nonlinear equation solving. In International Conference on Machine Learning, pp. 9791-9800. PMLR, 2021.
326
+ Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023.
327
+ Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024a.
328
+ Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. Spectr: Fast speculative decoding via optimal transport. Advances in Neural Information Processing Systems, 36, 2024b.
329
+ Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024.
330
+ Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. arXiv preprint arXiv:2404.02905, 2024.
331
+ Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. Advances in neural information processing systems, 29, 2016.
332
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
333
+ Fu-Yun Wang, Zhaoyang Huang, Alexander William Bergman, Dazhong Shen, Peng Gao, Michael Lingelbach, Keqiang Sun, Weikang Bian, Guanglu Song, Yu Liu, et al. Phased consistency model. arXiv preprint arXiv:2405.18407, 2024a.
334
+
335
+ Shuai Wang, Yao Teng, and Limin Wang. Deep equilibrium object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6296-6306, 2023.
336
+ Yuqing Wang, Tianwei Xiong, Daquan Zhou, Zhijie Lin, Yang Zhao, Bingyi Kang, Jiashi Feng, and Xihui Liu. Loong: Generating minute-level long videos with autoregressive language models. arXiv preprint arXiv:2410.02757, 2024b.
337
+ Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023.
338
+ Chen Xu, Tianhui Song, Weixin Feng, Xubin Li, Tiezheng Ge, Bo Zheng, and Limin Wang. Accelerating image generation with sub-path linear approximation model. arXiv preprint arXiv:2404.13903, 2024.
339
+ Dongjie Yang, XiaoDong Han, Yan Gao, Yao Hu, Shilin Zhang, and Hai Zhao. Pyramidinfer: Pyramid kv cache compression for high-throughput llm inference. arXiv preprint arXiv:2405.12532, 2024a.
340
+ Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, and Yoon Kim. Parallelizing linear transformers with the delta rule over sequence length. arXiv preprint arXiv:2406.06484, 2024b.
341
+ Tianwei Yin, Michael Gharbi, Taesung Park, Richard Zhang, Eli Shechtman, Fredo Durand, and William T Freeman. Improved distribution matching distillation for fast image synthesis. arXiv preprint arXiv:2405.14867, 2024a.
342
+ Tianwei Yin, Michael Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6613-6623, 2024b.
343
+ Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.
344
+ Zhihang Yuan, Pu Lu, Hanling Zhang, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, and Yu Wang. Ditfastattn: Attention compression for diffusion transformer models. Advances in Neural Information Processing Systems, 2024.
345
+ Yichi Zhang, Bofei Gao, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, Baobao Chang, Junjie Hu, Wen Xiao, et al. Pyramidkv: Dynamic kv cache compression based on pyramidal information tunneling. arXiv preprint arXiv:2406.02069, 2024.
346
+ Tianchen Zhao, Xuefei Ning, Tongcheng Fang, Enshu Liu, Guyue Huang, Zinan Lin, Shengen Yan, Guohao Dai, and Yu Wang. Mixdq: Memory-efficient few-step text-to-image diffusion models with metric-decoupled mixed precision quantization. ECCV, 2024.
347
+ Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294, 2024.
348
+
349
+ # APPENDIX
350
+
351
+ # A PROOFS
352
+
353
+ Theorem 1 (The correctness of speculative Jacobi decoding) The token sampled in each speculative Jacobi iteration satisfies $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$ , where $x$ denotes a token, $j$ denotes the index of iteration, $i$ denotes the token index, and $\theta$ denotes the auto-regressive model parameters.
354
+
355
+ Proof. The main process of speculative Jacobi iteration is decomposed into two cases: (a) obtaining the token sampled in the previous iteration and then accepting it according to an acceptance probability; (b) rejecting the sampled token and resampling a new token according to a calibrated probability. Thus, like the proof of the vanilla speculative sampling (Leviathan et al., 2023), to prove the correctness of speculative Jacobi decoding, we verify that the conditional probability of a token sampled following the above two cases, alongside the manually designed acceptance and resampling probability, remains $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$ .
356
+
357
+ For simplicity, by default, we omit the token index $i$ and denote the token category of $\pmb{x}_i^{(j)}$ as $x$. We denote the condition of token $\pmb{x}_i^{(j)}$ at the $j$-th Jacobi iteration (i.e., the tokens $\pmb{x}_{1:i-1}^{(j)}$ and the model weights $\theta$) as $\mathcal{J}_j$; likewise, the condition at the $(j-1)$-th Jacobi iteration is denoted as $\mathcal{J}_{j-1}$. Thus, we can denote the probability $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$ as $p(x|\mathcal{J}_j)$, and $p_{\theta}(x|\pmb{x}_{1:i-1}^{(j-1)})$ as $p(x|\mathcal{J}_{j-1})$. We use a random boolean variable $r$ to represent acceptance. With these notations, the proof is as follows:
358
+
359
+ First, the acceptance probability on the token category $x$ is manually set as follows:
360
+
361
+ $$
362
+ p(r \text{ is true} \mid x, \mathcal{J}_{j}, \mathcal{J}_{j-1}) = \min \left\{ 1, \frac{p(x \mid \mathcal{J}_{j})}{p(x \mid \mathcal{J}_{j-1})} \right\}, \tag{4}
363
+ $$
364
+
365
+ and the calibrated resampling probability subsequent to the rejection is set as follows:
366
+
367
+ $$
368
+ p(x \mid r \text{ is false}, \mathcal{J}_{j}, \mathcal{J}_{j-1}) = \frac{\max \{0, p(x \mid \mathcal{J}_{j}) - p(x \mid \mathcal{J}_{j-1})\}}{\sum_{x^{\prime}} \max \{0, p(x^{\prime} \mid \mathcal{J}_{j}) - p(x^{\prime} \mid \mathcal{J}_{j-1})\}}. \tag{5}
369
+ $$
370
+
371
+ Next, we make an assumption that $\mathcal{J}_j$ and $x$ are conditionally independent given $\mathcal{J}_{j - 1}$ :
372
+
373
+ $$
374
+ p(\mathcal{J}_{j} \mid x, \mathcal{J}_{j-1}) = p(\mathcal{J}_{j} \mid \mathcal{J}_{j-1}) \tag{6}
375
+ $$
376
+
377
+ This assumption is reasonable due to the properties of the Jacobi iteration and the auto-regressive paradigm, i.e., given the observation of the sequence $\pmb{x}_{1:i-1}^{(j-1)}$, each token in $\pmb{x}_{1:i-1}^{(j)}$ (denoted as $\pmb{x}_k^{(j)}$) can be determined by $\pmb{x}_k^{(j)} = f(\pmb{x}_{1:k-1}^{(j-1)}, \theta)$ ($k < i$), where the function $f$ indicates the prediction-then-sampling of auto-regressive models. Hence, the variable $\pmb{x}_i^{(j)}$ (i.e., $x$) is redundant as one of the conditions in the probability $p(\mathcal{J}_j|x,\mathcal{J}_{j-1})$. Thus, Equ. (6) is reasonable.
378
+
379
+ Then, with Bayes rule, Equ. (6) has the following equivalence:
380
+
381
+ $$
382
+ p(\mathcal{J}_{j} \mid x, \mathcal{J}_{j-1}) = p(\mathcal{J}_{j} \mid \mathcal{J}_{j-1}) \Leftrightarrow p(x \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) = p(x \mid \mathcal{J}_{j-1}) \tag{7}
383
+ $$
384
+
385
+ Hence, according to Equ. (4) and Equ. (7), the probability that a token category $x$ is sampled in the previous iteration and subsequently accepted can be computed as:
386
+
387
+ $$
388
+ \begin{aligned} p(r \text{ is true}, x \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) &= p(x \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) \cdot p(r \text{ is true} \mid x, \mathcal{J}_{j}, \mathcal{J}_{j-1}) \\ &= p(x \mid \mathcal{J}_{j-1}) \cdot \min \left\{ 1, \frac{p(x \mid \mathcal{J}_{j})}{p(x \mid \mathcal{J}_{j-1})} \right\} \\ &= \min \left\{ p(x \mid \mathcal{J}_{j}), p(x \mid \mathcal{J}_{j-1}) \right\} \end{aligned} \tag{8}
389
+ $$
390
+
391
+ ![](images/9507fc00b83a8ffca7e056f62e8f369285ba2f7127fabf5a49578afb983c7d70.jpg)
392
+ Steps: $8193\rightarrow 3515$ (2.3 $\times$ Faster) Steps: $8193\rightarrow 3581$ (2.3 $\times$ Faster) Steps: $8193\rightarrow 3472$ (2.4 $\times$ Faster)
393
+ Figure 10: The images generated by Emu3 (BAAI, 2024) with our acceleration method.
394
+
395
+ With Equ. (8), we can calculate the probability of rejection with the law of total probability on the token categories:
396
+
397
+ $$
398
+ \begin{aligned} p(r \text{ is false} \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) &= 1 - p(r \text{ is true} \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) \\ &= 1 - \sum_{x^{\prime}} p(r \text{ is true}, x^{\prime} \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) \\ &= \sum_{x^{\prime}} \left( p(x^{\prime} \mid \mathcal{J}_{j}) - \min \left\{ p(x^{\prime} \mid \mathcal{J}_{j}), p(x^{\prime} \mid \mathcal{J}_{j-1}) \right\} \right) \\ &= \sum_{x^{\prime}} \max \{0, p(x^{\prime} \mid \mathcal{J}_{j}) - p(x^{\prime} \mid \mathcal{J}_{j-1})\}. \end{aligned} \tag{9}
399
+ $$
400
+
401
+ Then, with Equ. (5) and Equ. (9), we get the following equation:
402
+
403
+ $$
404
+ \begin{aligned} & p(x \mid r \text{ is false}, \mathcal{J}_{j}, \mathcal{J}_{j-1}) \cdot p(r \text{ is false} \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) \\ &= \frac{\max \left\{ 0, p(x \mid \mathcal{J}_{j}) - p(x \mid \mathcal{J}_{j-1}) \right\}}{\sum_{x^{\prime}} \max \left\{ 0, p(x^{\prime} \mid \mathcal{J}_{j}) - p(x^{\prime} \mid \mathcal{J}_{j-1}) \right\}} \cdot \sum_{x^{\prime}} \max \left\{ 0, p(x^{\prime} \mid \mathcal{J}_{j}) - p(x^{\prime} \mid \mathcal{J}_{j-1}) \right\} \\ &= \max \{0, p(x \mid \mathcal{J}_{j}) - p(x \mid \mathcal{J}_{j-1})\}. \end{aligned} \tag{10}
405
+ $$
406
+
407
+ Since
408
+
409
+ $$
410
+ \forall a \in \mathbb {R}, b \in \mathbb {R}, a = \min \{a, b \} + \max \{0, a - b \}, \tag {11}
411
+ $$
412
+
413
+ we can decompose $p(x|\mathcal{J}_j)$ as follows:
414
+
415
+ $$
416
+ p (x | \mathcal {J} _ {j}) = \min \left\{p (x | \mathcal {J} _ {j}), p (x | \mathcal {J} _ {j - 1}) \right\} + \max \left\{0, p (x | \mathcal {J} _ {j}) - p (x | \mathcal {J} _ {j - 1}) \right\}. \tag {12}
417
+ $$
418
+
419
+ With Equ. (8), Equ. (10) and Equ. (12), we can compute:
420
+
421
+ $$
422
+ \begin{aligned} p(x \mid \mathcal{J}_{j}) &= \min \left\{ p(x \mid \mathcal{J}_{j}), p(x \mid \mathcal{J}_{j-1}) \right\} + \max \left\{ 0, p(x \mid \mathcal{J}_{j}) - p(x \mid \mathcal{J}_{j-1}) \right\} \\ &= p(x \mid \mathcal{J}_{j-1}) \cdot p(r \text{ is true} \mid x, \mathcal{J}_{j}, \mathcal{J}_{j-1}) \\ &\quad + p(r \text{ is false} \mid \mathcal{J}_{j}, \mathcal{J}_{j-1}) \cdot p(x \mid r \text{ is false}, \mathcal{J}_{j}, \mathcal{J}_{j-1}). \end{aligned} \tag{13}
423
+ $$
424
+
425
+ According to Equ. (13), the conditional distribution $p(x|\mathcal{J}_j)$ exactly corresponds to the two-case procedure: (a) obtaining the token sampled in the previous iteration and then accepting it according to the acceptance probability; (b) rejecting the sampled token and resampling a new token according to the calibrated probability. In conclusion, the token sampled in each speculative Jacobi iteration satisfies $x \sim p_{\theta}(x|\pmb{x}_{1:i-1}^{(j)})$.
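The result above can also be checked numerically. The following illustrative script (ours, not part of the original paper) draws a token from an arbitrary draft distribution $p(x \mid \mathcal{J}_{j-1})$, applies the acceptance rule of Equ. (4) and the calibrated resampling of Equ. (5), and verifies that the resulting empirical distribution matches the target $p(x \mid \mathcal{J}_j)$:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5
p_prev = rng.dirichlet(np.ones(V))   # p(x | J_{j-1}): distribution of the draft token
p_curr = rng.dirichlet(np.ones(V))   # p(x | J_j): target distribution of this iteration

def accept_or_resample():
    x = rng.choice(V, p=p_prev)                          # token drafted in the previous iteration
    if rng.random() < min(1.0, p_curr[x] / p_prev[x]):   # Equ. (4): acceptance
        return x
    residual = np.maximum(0.0, p_curr - p_prev)          # Equ. (5): calibrated resampling
    return rng.choice(V, p=residual / residual.sum())

samples = np.array([accept_or_resample() for _ in range(200_000)])
empirical = np.bincount(samples, minlength=V) / len(samples)
print(np.abs(empirical - p_curr).max())  # small (Monte Carlo error), as Theorem 1 predicts
```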
426
+
427
+ # B MORE QUALITATIVE RESULTS
428
+
429
+ In Fig. 11, we showcase more generated images with Lumina-mGPT accelerated by our method. These results illustrate that our method functions well on the image contents including humans, animals, and landscapes. Recently, a new powerful auto-regressive model, Emu3 (BAAI, 2024), has been released. We also explore our method on Emu3 for text-to-image generation, and we find it
430
+
431
+ ![](images/4bad8764d17baec72922b7960043fe66ccdf1cdc0741fc95f296947d2cd94e5d.jpg)
432
+ Figure 11: The images generated by Lumina-mGPT (Liu et al., 2024b) with our acceleration method.
433
+
434
+ still leads to a large step compression, as shown in Fig. 10. We leave the quantitative evaluation of Emu3 for future work.
435
+
436
+ We have included additional qualitative results for Lumina-mGPT and Anole in the supplementary material of the revised paper, specifically in Fig. 15 and Fig. 16, and we report both the steps and latency. According to the reported latency and step compression in these figures, our SJD outperforms other decoding methods while maintaining visual quality. Furthermore, spatial token initialization can further enhance the acceleration of our SJD. Additionally, we observe that Anole exhibits significantly higher image diversity compared to Lumina-mGPT. Despite the fixed random seed, it remains challenging for Anole to generate similar images due to the differences among the decoding methods.
437
+
438
+ # C INFERENCE LATENCY
439
+
440
+ In addition to reporting the step compression ratio, we also report the practical latency of SJD on servers. We set the batch size as 1 for testing, and report the latency of the accelerated Lumina-mGPT 7B excluding the pre- and post-processing operations. For $768 \times 768$ image generation (the number of generated tokens is at least 2357), we perform the experiments on one RTX 4090 GPU. For $1024 \times 1024$ image generation (the number of generated tokens is at least 4165), we perform the experiments on one A100 GPU. In these settings, the latency of Lumina-mGPT with and without our method is presented in Fig. 12. Our method significantly accelerates the auto-regressive image generation.
441
+
442
+ ![](images/13ebda30a717214cd7adac25fef631912723f7228f60e29a9150a9230196ae19.jpg)
443
+ Figure 12: The latency of Lumina-mGPT on generating $768 \times 768$ and $1024 \times 1024$ images without or with our method.
444
+
445
+ Table 3: The evaluation of LlamaGen (Sun et al., 2024a) with or without our method on MSCOCO2017 (Lin et al., 2014) and Parti-prompt (Yu et al., 2022).
446
+
447
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Configuration</td><td colspan="2">Acceleration (↑)</td><td rowspan="2">FID (↓)</td><td rowspan="2">CLIP-Score (↑)</td></tr><tr><td>Latency</td><td>Step</td></tr><tr><td rowspan="4">COCO</td><td>LlamaGen-stage1</td><td>1.00×</td><td>1.00×</td><td>28.54</td><td>30.87</td></tr><tr><td>LlamaGen-stage1 + Ours</td><td>1.56×</td><td>1.63×</td><td>29.00</td><td>30.82</td></tr><tr><td>LlamaGen-stage2</td><td>1.00×</td><td>1.00×</td><td>56.21</td><td>28.26</td></tr><tr><td>LlamaGen-stage2 + Ours</td><td>1.54×</td><td>1.63×</td><td>57.02</td><td>28.33</td></tr><tr><td rowspan="4">Parti</td><td>LlamaGen-stage1</td><td>1.00×</td><td>1.00×</td><td>-</td><td>30.22</td></tr><tr><td>LlamaGen-stage1 + Ours</td><td>1.57×</td><td>1.73×</td><td>-</td><td>30.29</td></tr><tr><td>LlamaGen-stage2</td><td>1.00×</td><td>1.00×</td><td>-</td><td>28.14</td></tr><tr><td>LlamaGen-stage2 + Ours</td><td>1.62×</td><td>1.69×</td><td>-</td><td>28.16</td></tr></table>
448
+
449
+ # D MORE QUANTITATIVE RESULTS
450
+
451
+ More Results. We further compare SJD to other decoding methods on Anole (Chern et al., 2024). As shown in Tab. 4 and Tab. 5, consistent with the results on Lumina-mGPT, SJD with spatial token initialization achieves larger acceleration ratios than the other decoding methods on Anole, at a small cost in visual quality. In addition to Anole (Chern et al., 2024) and Lumina-mGPT (Liu et al., 2024b), we evaluate our method on the text-to-image LlamaGen (Sun et al., 2024a). This model adopts a two-stage training strategy: (a) stage 1: LlamaGen is first trained on a subset of LAION-COCO (LAION, 2022) (50M $256 \times 256$ images); (b) stage 2: it is then fine-tuned on 10M high-aesthetic-quality internal data with a resolution of $512 \times 512$. In Tab. 3, we evaluate our method on the two versions of LlamaGen. The results show that our method can still accelerate this model without sacrificing visual quality. However, compared with the experiments conducted on Lumina-mGPT and Anole, the acceleration ratios on LlamaGen are lower. We hypothesize that this discrepancy is attributed to the model size, as some existing works on multi-token prediction demonstrate that the model size has a great influence on the effectiveness of acceleration (Gloeckle et al., 2024). We leave this investigation to future work.
452
+
453
+ More results about visual quality. We take the CLIP-Score and the human preference score (HPSv2) (Wu et al., 2023) as the metrics for evaluating visual quality in our ablation studies (the step compression ratios are reported in Sec. 5.4). We present the results in Tab. 7, Tab. 8, Tab. 9, and Tab. 10. From Tab. 7, for any $K$ value in the top-$K$ sampling strategies, the human preference scores differ little among the original auto-regressive decoding, the original Jacobi decoding, and our SJD.
454
+
455
+ Perplexity. We also compare the perplexities between SJD and other decoding methods on Lumina-mGPT, as detailed in Tab. 6. Since the perplexities are influenced by the sampling strategies (Hu et al., 2023), we report the perplexities under various $K$ values. Given an identical $K$ value, the perplexities of our method and the other decoding methods are close. Furthermore, we note that $K = 2000$ results in a perplexity higher than that of large language models (Gu & Dao, 2023; Yang et al., 2024b) on language processing tasks. Despite this high value, the text-to-image auto-regressive model can still generate high-quality images. This indicates that image generation can tolerate a wide range of candidate image tokens.
456
+
457
+ Statistics of model outputs. We compute the statistics of the logarithm of the token probability for both auto-regressive decoding and our method. The average and standard deviation of all image tokens are presented in Tab. 11. The results demonstrate that the image tokens accepted by our method exhibit similar statistics to those accepted by the original auto-regressive decoding. Consequently, our method generally does not mistakenly accept tokens with lower probabilities.
458
+
459
+ # E VISUALIZATION OF ACCELERATION IN 2D SPACE
460
+
461
+ We visualize the impact of multi-token prediction in a 2D space. As illustrated in Fig. 13, the color of each long strip area represents the length of accepted tokens from that area, with darker colors indicating longer sequences of accepted tokens, i.e., higher acceleration. We observe that
462
+
463
+ ![](images/9e876582b47d846b198588fb51163047f281520c9dcc20f0a6755c6948d85261.jpg)
464
+ Figure 13: The visualization of the accelerated tokens on 2D space.
465
+
466
+ ![](images/81a197cb58ac169715c38a509cd0b49eeffe291cda23fac6059830cb7c972111.jpg)
467
+
468
+ ![](images/0d13a4608498e5a268d282d7f99033717dcb3b91d2c00496ccd74aae16877848.jpg)
469
+
470
+ high acceleration tends to occur in the background, particularly on the left and right sides of images. Additionally, while some high acceleration is observed on foreground objects, it is relatively sparse in 2D space.
471
+
472
+ # F ANALYSIS ON THE EFFECTIVENESS OF OUR METHOD
473
+
474
+ This section analyzes the acceleration mechanism of our speculative Jacobi decoding in image generation. We empirically find that this acceleration stems from the resampling of unaccepted tokens. Specifically, some tokens are continuously resampled (i.e., their positions within the entire sequence are reused for multiple forward passes) according to Equ. (3) over iterations until they are accepted. For clarity and simplicity, we refer to this process of a token being continuously resampled by Equ. (3) (except the possible rejection resampling) as refinement, following the terminology in fixed-point iteration (Bai et al., 2019; 2022; Wang et al., 2023). Consequently, Equ. (3) is the main operation of every refinement step. In the following paragraphs, we explore the influences of this refinement.
475
+
476
+ The acceleration originates from the refinement of unaccepted tokens. In our verification phase, there are three treatments for the tokens: acceptance, rejection, and refinement, corresponding to Equ. (1), Equ. (2), and Equ. (3), respectively. We empirically find that the first two treatments alone are insufficient to support acceleration. We conduct the following experiment to demonstrate that our method can hardly achieve acceleration without refinement: when we deactivate the refinement (i.e., using newly initialized tokens to replace the unaccepted tokens as the draft tokens in the next iterations), the model requires over two thousand forward passes to generate images rather than about one thousand. Although our token initializations with spatial prior (e.g., horizontal repeat) are slightly better than the random token initialization in replacing the unaccepted tokens, their performance is still much worse than directly refining the unaccepted tokens. Examples of the images generated under this setting are shown in Fig. 14. This phenomenon illustrates that the acceleration of our method originates from refining the unaccepted tokens.
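In terms of implementation, the two settings differ only in how the next draft window is constructed; a minimal sketch (with hypothetical names, consistent with the earlier sketch in Sec. 4.1) is:

```python
def build_next_draft(refined_unaccepted, num_accepted, window_size, init_fn, use_refinement=True):
    """Construct the draft window for the next iteration (illustrative sketch)."""
    if use_refinement:
        # SJD: keep the refined (resampled) unaccepted tokens and only refill the freed slots.
        return refined_unaccepted + [init_fn() for _ in range(num_accepted)]
    # Ablation: discard the unaccepted tokens and restart from fresh initializations.
    return [init_fn() for _ in range(window_size)]
```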
477
+
478
+ # G QUALITATIVE ANALYSIS OF IMAGE RANDOMNESS ON OUR METHOD
479
+
480
+ Like Fig. 2, we also examine the image randomness with both the auto-regressive decoding and our speculative Jacobi decoding. As shown in Fig. 17, first, we find that SJD does introduce some randomness into image generation (the random variable $r$ in Equ. (1)), so the images generated with auto-regressive decoding cannot exactly align with those generated with SJD, even when the random seed is fixed. Therefore, in Fig. 17, given a column, two images with the same $K$ value are not exactly identical. However, in general, the diversity of the set of images is not influenced much. In Fig. 17, we present the images generated based on three textual prompts. Given the same prompt and $K$ value for top-$K$ sampling, the model with different decoding methods generates images with many similarities. For example, when $K = 2000$, for the first prompt "an apple of a strange color", the images in the identical columns show apples with similar color patterns and styles. Also, for the third prompt "pumpkin on the table", the frequency of faces carved on the pumpkins is similar for the two decoding methods.
481
+
482
+ Moreover, the $K$ value in top- $K$ sampling still dominates the image randomness in terms of texture, color, and local structure details. With larger $K$ , the image details about textures, colors, and local structures increase. Such image randomness still largely comes from the random token sampling.
483
+
484
+ ![](images/081a903ee98aac23bf876ae0bb5a1a9ecc428f21f42e441eb2d9168c62abfd4f.jpg)
485
+ Examples of Images Generated (Without Refinement; With random initialization)
486
+ Figure 14: Ablation studies on the acceleration mechanism: examples of images generated by our SJD without or with refining unaccepted tokens. When the refinement defined by Equ. (3) is NOT applied (i.e., newly initialized tokens replace the unaccepted tokens as the draft tokens in the next iterations), there is almost no acceleration (though one of our token initializations with spatial prior, horizontal repeat, can slightly reduce the steps in these images). This illustrates that refining unaccepted tokens is essential to the acceleration mechanism of SJD.
487
+
488
+ # H ANALYSIS ON FAILURE CASES
489
+
490
+ As shown in Fig. 18, when generating images with exquisite details, although auto-regressive decoding can also produce artifacts, SJD tends to generate contiguous runs of tokens that cause artifacts, as highlighted by the red boxes in this figure. The pre-trained auto-regressive model is not sufficiently robust to handle such complex images. Consequently, it may mistakenly accept a sequence of draft tokens that contain artifacts.
491
+
492
+ # I LIMITATION AND FUTURE WORK
493
+
494
+ Since our speculative Jacobi decoding is training-free, the accelerated model itself is still not specialized for multi-token prediction. Therefore, the acceleration ratio has the potential to be further improved. In the future, we believe that fine-tuning the auto-regressive models for fast image generation is a promising direction. Also, acceleration is important for long-sequence generation, like video generation. Since videos contain more redundancy than images, the initialization of candidate tokens should be carefully designed if applying our speculative Jacobi decoding to video generation.
495
+
496
+ Table 4: The evaluation of Anole on the validation set of MSCOCO2017. JD: Jacobi decoding. ISP: initialization with spatial prior. SJD: Speculative Jacobi decoding.
497
+
498
+ <table><tr><td rowspan="2" colspan="2">Configuration</td><td rowspan="2">Average Latency (↓)</td><td colspan="2">Acceleration (↑)</td><td rowspan="2">FID (↓)</td><td rowspan="2">CLIP-Score (↑)</td></tr><tr><td>Latency</td><td>Step</td></tr><tr><td>A</td><td>Anole (Chern et al., 2024)</td><td>48.96s</td><td>1.00×</td><td>1.00×</td><td>28.87</td><td>30.59</td></tr><tr><td>B</td><td>w. JD (Song et al., 2021)</td><td>47.60s</td><td>1.03×</td><td>1.06×</td><td>29.34</td><td>30.64</td></tr><tr><td>C</td><td>w. SJD</td><td>27.08s</td><td>1.81×</td><td>1.94×</td><td>29.04</td><td>30.54</td></tr><tr><td>D</td><td>w. SJD (ISP)</td><td>26.18s</td><td>1.87×</td><td>1.97×</td><td>29.14</td><td>30.61</td></tr></table>
499
+
500
+ Table 5: The evaluation of Anole on the validation set of Parti-prompt. JD: Jacobi decoding. ISP: initialization with spatial prior. SJD: Speculative Jacobi decoding.
501
+
502
+ <table><tr><td rowspan="2" colspan="2">Configuration</td><td rowspan="2">Average Latency (↓)</td><td colspan="2">Acceleration (↑)</td><td rowspan="2">CLIP-Score (↑)</td></tr><tr><td>Latency</td><td>Step</td></tr><tr><td>A</td><td>Anole (Chern et al., 2024)</td><td>48.24s</td><td>1.00×</td><td>1.00×</td><td>30.46</td></tr><tr><td>B</td><td>w. JD (Song et al., 2021)</td><td>44.65s</td><td>1.08×</td><td>1.14×</td><td>30.57</td></tr><tr><td>C</td><td>w. SJD</td><td>26.77s</td><td>1.80×</td><td>2.00×</td><td>30.55</td></tr><tr><td>D</td><td>w. SJD (ISP)</td><td>25.12s</td><td>1.92×</td><td>2.11×</td><td>30.48</td></tr></table>
503
+
504
+ # J BROADER IMPACTS
505
+
506
+ Image generation offers extensive utility in helping users, designers, and artists produce fantastic content. Nonetheless, these models could be exploited to create deceptive content. Thus, it is crucial for users, including researchers and developers, to acknowledge the potential negative social impact of image generation models.
507
+
508
+ Table 6: The comparison of perplexity on Lumina-mGPT.
509
+
510
+ <table><tr><td rowspan="2" colspan="2">Configuration</td><td colspan="3">Perplexity with Top-K sampling</td></tr><tr><td>K=10</td><td>K=100</td><td>K=2000</td></tr><tr><td>A</td><td>Lumina-mGPT (Liu et al., 2024b)</td><td>7.31</td><td>43.37</td><td>204.06</td></tr><tr><td>B</td><td>w.JD (Song et al., 2021)</td><td>7.20</td><td>43.85</td><td>197.64</td></tr><tr><td>C</td><td>w.SJD</td><td>7.34</td><td>43.87</td><td>217.96</td></tr><tr><td>D</td><td>w.SJD (ISP)</td><td>7.26</td><td>44.03</td><td>199.70</td></tr></table>
511
+
512
+ Table 7: CLIP-Score of various decoding methods on Lumina-mGPT with different top- $K$ values. The image qualities for Jacobi Decoding and our method correspond to Fig. 6. The image qualities for Auto-regression are only for the comparison in this table. Note that the image quality score with greedy sampling is extremely poor, as this setting leads to meaningless images for a lot of prompts (analyzed in Fig. 2).
513
+
514
+ <table><tr><td>Decoding Methods</td><td>Sampling</td><td>CLIP-Score</td><td>HPSv2</td></tr><tr><td>Auto-regression</td><td>Top-1 Sampling</td><td>26.40</td><td>0.1976</td></tr><tr><td>Auto-regression</td><td>Top-10 Sampling</td><td>32.83</td><td>0.2950</td></tr><tr><td>Auto-regression</td><td>Top-100 Sampling</td><td>32.41</td><td>0.3020</td></tr><tr><td>Auto-regression</td><td>Top-2000 Sampling</td><td>32.00</td><td>0.2965</td></tr><tr><td>Jacobi Decoding</td><td>Top-1 Sampling</td><td>26.34</td><td>0.1413</td></tr><tr><td>Jacobi Decoding</td><td>Top-10 Sampling</td><td>32.75</td><td>0.2960</td></tr><tr><td>Jacobi Decoding</td><td>Top-100 Sampling</td><td>32.46</td><td>0.3089</td></tr><tr><td>Jacobi Decoding</td><td>Top-2000 Sampling</td><td>31.68</td><td>0.3103</td></tr><tr><td>Ours</td><td>Top-1 Sampling</td><td>26.16</td><td>0.1695</td></tr><tr><td>Ours</td><td>Top-10 Sampling</td><td>32.27</td><td>0.2942</td></tr><tr><td>Ours</td><td>Top-100 Sampling</td><td>32.65</td><td>0.2977</td></tr><tr><td>Ours</td><td>Top-2000 Sampling</td><td>31.83</td><td>0.3020</td></tr></table>
515
+
516
+ Table 8: CLIP-Scores on Lumina-mGPT with various resolutions. The image qualities of our method under different settings correspond to Fig. 7. The image qualities for Auto-regression are only for the comparison in this table.
517
+
518
+ <table><tr><td>Decoding Methods</td><td>Resolutions</td><td>CLIP-Score</td><td>HPSv2</td></tr><tr><td>Auto-regression</td><td>512</td><td>29.49</td><td>0.2503</td></tr><tr><td>Auto-regression</td><td>768</td><td>32.00</td><td>0.2965</td></tr><tr><td>Auto-regression</td><td>1024</td><td>31.41</td><td>0.2961</td></tr><tr><td>Ours</td><td>512</td><td>29.69</td><td>0.2558</td></tr><tr><td>Ours</td><td>768</td><td>31.83</td><td>0.3020</td></tr><tr><td>Ours</td><td>1024</td><td>31.11</td><td>0.2935</td></tr></table>
519
+
520
+ Table 9: CLIP-Score of our method on Lumina-mGPT with various Jacobi window sizes. The image qualities correspond to Fig. 8.
521
+
522
+ <table><tr><td>Window Size</td><td>CLIP-Score</td><td>HPSv2</td></tr><tr><td>1</td><td>32.00</td><td>0.2965</td></tr><tr><td>4</td><td>31.91</td><td>0.3046</td></tr><tr><td>16</td><td>31.83</td><td>0.3020</td></tr><tr><td>32</td><td>31.55</td><td>0.3045</td></tr></table>
523
+
524
+ Table 10: CLIP-Score of our method on Lumina-mGPT with various token initializations when generating images with simple patterns. The image qualities correspond to Fig. 9.
525
+
526
+ <table><tr><td>Token Initialization</td><td>CLIP-Score</td><td>HPSv2</td></tr><tr><td>Horizontal Sample</td><td>31.52</td><td>0.2567</td></tr><tr><td>Vertical Sample</td><td>30.91</td><td>0.2622</td></tr><tr><td>Horizontal Repeat</td><td>31.17</td><td>0.2616</td></tr><tr><td>Vertical Repeat</td><td>31.15</td><td>0.2651</td></tr><tr><td>Random</td><td>31.37</td><td>0.2681</td></tr></table>
527
+
528
+ ![](images/387540095cf3d8bd49aa7c1cea1dc95c9ad577c47691b315e6bf235d2961c216.jpg)
529
+ Original
530
+ Latency: 91.58s
531
+ Steps: 2357
532
+ Jacobi Decoding
533
+ Latency: 91.15s (1.0 $\times$ Faster) Steps: 2293 (1.0 $\times$ Fewer)
534
+ Speculative Jacobi Decoding
535
+ Latency: 47.13s (1.9 × Faster)
536
+ Steps: 1115 (2.1 × Fewer)
537
+ Speculative Jacobi Decoding + Spatial Init
538
+ Latency: 45.97s (2.0 × Faster)
539
+ Steps: 1094 (2.2 × Fewer)
540
+ Prompt: A giant golden-haired lion with an indigo face roars at the gate of heaven
541
+
542
+ ![](images/dac27a4de3dc6bff0bebae3b51537142fee46d2bd6ef7b4fe414098f00dddc49.jpg)
543
+ Latency: 90.05s Steps:2357
544
+ Latency: 88.31s $(1.0\times$ Faster) Steps:2240 $(1.1\times$ Fewer)
545
+ Latency: 43.09s (2.1 × Faster)
546
+ Steps: 1031 (2.3 × Fewer)
547
+ Latency: 41.75s $(2.2\times$ Faster) Steps:996 $(2.4\times$ Fewer
548
+ Prompt: Portrait of the most beautiful Asian woman, Wearing a dress and headdress decorated with peacock feathers
549
+
550
+ ![](images/396803e0ec96a2eab6b2209cf8ea3d872b02f01a1e0b5e2374e2c5ef74f85684.jpg)
551
+ Latency: 90.60s Steps:2357
552
+ Latency: 89.74s (1.0 × Faster)
553
+ Steps: 2270 (1.0 × Fewer)
554
+ Latency: 43.26s (2.1 × Faster)
555
+ Steps: 1009 (2.3 × Fewer)
556
+ Latency: 40.39s (2.2 × Faster)
557
+ Steps: 962 (2.5 × Fewer)
558
+ Prompt: An oil painting of a lady
559
+
560
+ ![](images/e5030f711c834aaf64f077a4eeee6c2a1541a775ba9851a5d4793bc1efe85999.jpg)
561
+ Latency: 92.47s Steps: 2357
562
+ Latency: 89.84s $(1.0\times$ Faster) Steps:2293 $(1.0\times$ Fewer
563
+ Latency: 42.315 $(2.3\times$ Faster) Steps: $1003(2.3\times$ Fewer
564
+ Latency: 41.95s (2.3 × Faster)
565
+ Steps: 969 (2.4 × Fewer)
566
+ Prompt: One lynx in the forest is illuminated by a gloomy strong light, the most Professional high-quality 8K photograph
567
+
568
+ ![](images/45d2f28b9826d85ef9a174dbe579e5e5db1f3c58c5d4fd87b74e387db8df356f.jpg)
569
+ Latency: 93.63s Steps:2357
570
+ Latency: 92.00s $(1.0\times$ Faster) Steps: 2304 $(1.0\times$ Fewer
571
+ Latency: 42.14s $(2.3\times$ Faster) Steps: $1015(2.3\times$ Fewer
572
+ Latency: 41.23s (2.3 × Faster)
573
+ Steps: 947 (2.5 × Fewer)
574
+ Prompt: Atlantis, the most Fantasy high-quality photos
575
+ Figure 15: The qualitative comparison of different decoding methods on Lumina-mGPT (Liu et al., 2024b).
576
+
577
+ ![](images/94c116b72fae887223e6f9ba3c5deb9fb194b2aa1fdbe29a8eab9034109e1f90.jpg)
578
+ Latency: 47.90s
579
+ Steps: 1026
580
+
581
+ ![](images/710590bbfa915736fbaaa3cce0ce35a99f2adf1e6d68a991d9d96b363cd875e5.jpg)
582
+ Original
583
+ Jacobi Decoding
584
+ Latency: 46.36s (1.0 × Faster)
585
+ Steps: 955 (1.1 × Fewer)
586
+ Prompt: Generate an image of a cute Beagle dog.
587
+
588
+ ![](images/72d8aaa854cc5053ccb2d517f937f573bda413bb4d8e243ca80f241244e1522b.jpg)
589
+ Speculative Jacobi Decoding
590
+ Latency: 24.14s (2.0 × Faster)
591
+ Steps: 483 (2.1 × Fewer)
592
+
593
+ ![](images/e477d9539be25f6b31d37bfc7b6cb564eae2c40bed5aa961006af0438eaed9f7.jpg)
594
+ Speculative Jacobi Decoding + Spatial Init
595
+ Latency: 23.55s (2.0 × Faster)
596
+ Steps: 463 (2.2 × Fewer)
597
+
598
+ ![](images/979f4fe824ab40e2358f2d47be8d599d7f4af4f5437c3964f00cac87b35ed761.jpg)
599
+ Latency: 47.54s
600
+ Steps: 1026
601
+
602
+ ![](images/515e739c15dfcb26c901e193004c5fd817e570df49aad2cb6ed0e5dfa586ecf3.jpg)
603
+ Latency: 46.63s (1.0 × Faster)
604
+ Steps: 969 (1.1 × Fewer)
605
+ Latency: 26.71s (1.8 × Faster)
606
+ Steps: 548 (1.9 × Fewer)
607
+ Prompt: Generate an image of a square box of cookies.
608
+
609
+ ![](images/fafa734e8892d2625c200cc99006030a324041601997b920a048b14297f0b81f.jpg)
610
+
611
+ ![](images/72fbed3f46b9629b9b72e6bce21bc976a411e59f53e6537ebd57dba9b6c94bbb.jpg)
612
+ Latency: 21.38s (2.2 × Faster)
613
+ Steps: 428 (2.4 × Fewer)
614
+
615
+ ![](images/41e979060ad4f31583024b0cdddd8fcf662c397cd49e54fa37d68bf412b54c89.jpg)
616
+ Latency: 47.40s
617
+ Steps: 1026
618
+ Prompt: Generate an image of a blue Porsche 356 parked in front of a yellow brick wall.
619
+ Figure 16: The qualitative comparison of different decoding methods on Anole (Chern et al., 2024). Considering the high image diversity of Anole, although the random seed is fixed, it is still hard for Anole to generate similar images with different decoding methods.
620
+
621
+ ![](images/a43396724f4b044ee44c563af4074b93432d2924f88d32e77f1212eb3366e9ed.jpg)
622
+ Latency: 46.21s (1.0 × Faster)
623
+ Steps: 966 (1.1 × Fewer)
624
+
625
+ ![](images/ec20da3c92fc36b6fe2ed65d22d2299c9c7e2156f964d42838022add11b14f5a.jpg)
626
+ Latency: 25.91s (1.8 × Faster)
627
+ Steps: 526 (2.0 × Fewer)
628
+
629
+ ![](images/c18cce598307529f0826840a2411c94f8678b2845d53e4dc9ed3d994806aa90f.jpg)
630
+ Latency: 25.05s (1.9 × Faster)
631
+ Steps: 505 (2.0 × Fewer)
632
+
633
+ Table 11: The comparison of token statistics on Lumina-mGPT.
634
+
635
+ <table><tr><td rowspan="2">Decoding Methods</td><td colspan="2">Logarithm of Token Probability</td></tr><tr><td>Average</td><td>Standard Deviation</td></tr><tr><td>Auto-regression</td><td>-4.8950</td><td>2.3457</td></tr><tr><td>Ours</td><td>-4.9007</td><td>2.3275</td></tr></table>
636
+
637
+ ![](images/c0a0d825ad6b2a05d4d16553bee1277b1f45967d4f1f96462b131b1a2a225192.jpg)
638
+ Figure 17: Comparing our method to the original auto-regressive decoding in terms of image randomness. First, because SJD involves an additional random variable, two images in the same column with the same $K$ value cannot be exactly identical. Second, switching the decoding method from auto-regression to SJD has little influence on the image diversity for each prompt (e.g., with $K = 2000$ for each decoding method, the color patterns and styles of the generated apples are similar, and the frequency of carved faces on the pumpkins is also similar). Third, top-$K$ sampling still dominates the image randomness in texture, color, and local structure details. The images in each column share a single random seed.
639
+
640
+ Prompt: Image of a bustling downtown street in Tokyo at night, with neon signs, crowded sidewalks, and tall skyscrapers.
641
+
642
+ ![](images/9c57ad680dda8dabedac13ddbca2f9ecc9b25b5addd3b921f551122d3b7c0c11.jpg)
643
+ Speculative Jacobi Decoding
644
+ Figure 18: Failure Cases. In complex image scenarios, our method generates some contiguous tokens that result in artifacts, as highlighted by the red boxes. The pre-trained model inaccurately accepts a long sequence of tokens, which causes these artifacts.
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8921e698cc329d18fbcef23d43bed5b43cae32f9432cd9fb8e0b74d2b4761515
3
+ size 2230856
acceleratingautoregressivetexttoimagegenerationwithtrainingfreespeculativejacobidecoding/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97b0d8f0b93c252094b7740978984d7b6a5762a57daafa1a21afdb0469f41ece
3
+ size 827328
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6286f49bcd3e26e650e03e5a4bfab58d58d0e1d477911fd2e598c0800117545d
3
+ size 130006
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5338428d20ab3e8dc3707cd556802069c377979f01655b741e818d167ff503db
3
+ size 151341
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/1d42a978-2ece-486e-8135-a5ec0a04b970_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:266827001d3f06b8d34008340d2c3d8504ea2f2cdf8bb4256ea36f18f675b986
3
+ size 4358193
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/full.md ADDED
@@ -0,0 +1,443 @@
 
 
 
 
1
+ # ACCELERATING DIFFUSION TRANSFORMERS WITHTOKEN-WISE FEATURE CACHING
2
+
3
+ Chang Zou $^{1,2*}$ Xuyang Liu $^{3*}$ Ting Liu $^{4}$ Siteng Huang $^{5}$ Linfeng Zhang $^{1\dagger}$
4
+
5
+ $^{1}$ Shanghai Jiao Tong University $^{2}$ University of Electronic Science & Technology of China
6
+
7
+ $^{3}$ Sichuan University $^{4}$ National University of Defense Technology $^{5}$ Zhejiang University
8
+
9
+ Code: https://github.com/Shenyi-Z/ToCa
10
+
11
+ # ABSTRACT
12
+
13
+ Diffusion transformers have shown significant effectiveness in both image and video synthesis at the expense of huge computation costs. To address this problem, feature caching methods have been introduced to accelerate diffusion transformers by caching the features in previous timesteps and reusing them in the following timesteps. However, previous caching methods ignore that different tokens exhibit different sensitivities to feature caching, and feature caching on some tokens may lead to $10 \times$ more destruction to the overall generation quality compared with other tokens. In this paper, we introduce token-wise feature caching, allowing us to adaptively select the most suitable tokens for caching, and further enable us to apply different caching ratios to neural layers in different types and depths. Extensive experiments on PixArt- $\alpha$ , OpenSora, DiT and FLUX demonstrate our effectiveness in both image and video generation with no requirements for training. For instance, $2.36 \times$ and $1.93 \times$ acceleration are achieved on OpenSora and PixArt- $\alpha$ with almost no drop in generation quality.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Diffusion models (DMs) have demonstrated impressive performance across a wide range of generative tasks such as image generation (Rombach et al., 2022) and video generation (Blattmann et al., 2023). Recently, the popularity of diffusion transformers further extends the boundary of visual generation by scaling up the parameters and computations (Peebles & Xie, 2023). However, a significant challenge for diffusion transformers lies in their high computational costs, leading to slow inference speeds, which hinder their practical application in real-time scenarios. To address this, a series of acceleration methods have been proposed, focusing on reducing the sampling steps (Song et al., 2021) and accelerating the denoising networks (Bolya & Hoffman, 2023; Fang et al., 2023).
18
+
19
+ Among these, cache-based methods (Ma et al., 2024b; Wimbauer et al., 2024), which accelerate the sampling process by reusing similar features across adjacent timesteps (e.g., reusing the features cached at timestep $t$ in timestep $t - 1$), have attracted considerable attention in the industrial community thanks to their plug-and-play property. As the pioneering works in this line, DeepCache (Ma et al., 2024b) and Block Caching (Wimbauer et al., 2024) were proposed to reuse the cached features in certain layers of U-Net-like diffusion models by leveraging the skip connections in the U-Net. However, the dependency on the U-Net architecture also makes them unsuitable for diffusion transformers, which have gradually become the most powerful models in visual generation. Most recently, FORA (Selvaraju et al., 2024) and $\Delta$-DiT (Chen et al., 2024b) have been proposed as direct applications of previous cache methods to diffusion transformers, though they have not yet fully analyzed and exploited the properties of the transformer architecture. To tackle this challenge, this paper begins by studying how feature caching influences diffusion transformers at the token level.
20
+
21
+ Difference in Temporal Redundancy: Figure 1 shows the distribution of the feature distance between adjacent timesteps for different tokens, where a higher value indicates that a token exhibits lower similarity across adjacent timesteps. It is observed that some tokens show a relatively low distance (in light blue) while others show an extremely high distance (in dark blue), almost $2.5 \times$ larger than the mean distance, indicating that caching such tokens can introduce an excessively large error. This observation indicates that different tokens have different redundancy across the timestep dimension (i.e., different temporal redundancy).
22
+
23
+ ![](images/78ac9cb41d9cc194ca37396d01fbeaaa14e670cc950058d010098d885d839eb1.jpg)
24
+ Figure 1: Temporal Redundancy: Distribution of the distance between the feature of tokens in the previous and the current timestep.
25
+
26
+ ![](images/87ee43b6eb3bf4bff531fd1abbb51f60a1eca84e32e329a00b8df9381275e532.jpg)
27
+ Figure 2: Error Propagation: Distribution of the error in the final layer output when the same noise is applied to each token in the first layer.
28
+
29
+ Difference in Error Propagation: Figure 2 introduces another interesting perspective: error propagation in diffusion transformers. Specifically, self-attention and cross-attention layers are widely utilized in diffusion transformers to model the dependency between different tokens. As a result, the error in one token may propagate to other tokens through self-attention and eventually introduce errors in all of the tokens. To understand error propagation among the tokens of diffusion transformers, we apply Gaussian noise with the same intensity to each token and compute the resulting error in all the tokens at the output of the final layer. Surprisingly, Figure 2 shows that the same noise in different tokens leads to significantly different propagation errors, with the largest propagation error being more than $10 \times$ the smallest one. In the context of feature caching, this indicates that the same error introduced by feature caching can result in vastly different errors in the final generation result due to error propagation.
30
+
31
+ Moreover, we have also investigated the behavior of tokens in layers of different depths and types, which also exhibits significant differences, as introduced in the following sections. In summary, different tokens exhibit significant differences in their sensitivities to feature caching, indicating that they deserve different priorities during the caching process. This motivates us to study a token-wise feature caching strategy, which aims to cache as many tokens as possible to maximize the acceleration ratio while selecting the tokens whose caching introduces the least error.
32
+
33
+ To tackle this challenge, this paper introduces Token-wise feature Caching (ToCa) for training-free acceleration of diffusion transformers, which provides a fine-grained caching strategy for different tokens in the same layer as well as for tokens in different layers. The core challenge of ToCa is to accurately select the tokens that are suitable for feature caching with the cheapest possible operations. Consistent with the two previous analyses, we mainly study this problem from the perspectives of temporal redundancy and error propagation by defining four scores for token selection. Specifically, for temporal redundancy, we try to select the tokens with the highest similarity (i.e., the lowest difference) to their values in the previous timesteps, considering how frequently they have been cached as well as their distribution in the spatial dimension of the images. For error propagation, we attempt to cache the tokens that have the least influence on other tokens, based on their attention weights in the self-attention and cross-attention layers. Besides, all of these scores can be obtained without any additional computation costs.
34
+
35
+ Extensive experiments on text-to-image, text-to-video and class-to-image generation demonstrate the effectiveness of ToCa on PixArt-$\alpha$, OpenSora, and DiT over previous feature caching methods. For instance, a $2.36 \times$ acceleration can be achieved on OpenSora without any training, outperforming directly halving the number of timesteps by 1.56 on VBench. On PartiPrompts, ToCa even leads to a 1.13 improvement in CLIP Score, indicating higher consistency with the text conditions.
36
+
37
+ In summary, the contributions of this paper are as follows:
38
+
39
+ 1. We propose Token-wise Caching (ToCa) as a fine-grained feature caching strategy tailored to the acceleration of diffusion transformers. To the best of our knowledge, ToCa is the first to introduce the perspective of error propagation into feature caching methods.
40
+ 2. We introduce four scores to select the most suitable tokens for feature caching in each layer with no additional computation costs. Besides, ToCa enables us to apply different caching ratios to layers of different depths and types, and also brings a set of practical techniques for feature caching.
41
+ 3. Abundant experiments on PixArt-$\alpha$, OpenSora, and DiT have been conducted, demonstrating that ToCa achieves a high acceleration ratio while maintaining nearly lossless generation quality. Our code has been released for further exploration in this domain.
42
+
43
+ # 2 RELATED WORK
44
+
45
+ # Transformers in Diffusion Models
46
+
47
+ Diffusion models (DMs) (Ho et al., 2020; Sohl-Dickstein et al., 2015), which iteratively denoise an initial noise input through a series of diffusion steps, have achieved remarkable success across various generation applications (Rombach et al., 2022; Balaji et al., 2022). Early DMs (Ho et al., 2020; Rombach et al., 2022) are based on the U-Net architecture (Ronneberger et al., 2015), consistently achieving satisfactory generation results. Recently, Diffusion Transformer (DiT) (Peebles & Xie, 2023) has emerged as a major advancement by replacing the U-Net backbone with a Transformer architecture. This transition enhances the scalability and efficiency of DMs across various generative tasks (Chen et al., 2024a; Brooks et al., 2024). For example, PixArt- $\alpha$ (Chen et al., 2024a) utilizes DiT as a scalable foundational model, adapting it for text-to-image generation, while Sora (Brooks et al., 2024) demonstrates DiT's potential in high-fidelity video generation, inspiring a series of related open-source projects (Zheng et al., 2024; Lab & etc., 2024). Despite their success, the iterative denoising process of these DMs is significantly time-consuming, making them less feasible for practical applications.
48
+
49
+ # Acceleration of Diffusion Models
50
+
51
+ To improve the generation efficiency of DMs, numerous diffusion acceleration methods have been proposed, falling broadly into two categories: (1) reducing the number of sampling timesteps, and (2) accelerating the denoising networks. The first category aims to achieve high-quality generation results with fewer sampling steps. DDIM (Song et al., 2021) introduces a deterministic sampling process that reduces the number of denoising steps while preserving generation quality. DPM-Solver (Lu et al., 2022a) and DPM-Solver++ (Lu et al., 2022b) propose adaptive high-order solvers for a faster generation without compromising on generation results. Rectified flow (Liu et al., 2023) optimizes distribution transport in ODE models to facilitate efficient and high-quality generation, enabling sampling with fewer timesteps. Step-distillation (Salimans & Ho, 2022; Meng et al., 2023) minimizes the number of timesteps with knowledge distillation from multiple timesteps to fewer ones. Consistency models (Song et al., 2023) accelerate generative modeling by mapping noise directly to data and enforcing self-consistency across steps. In the second category, various efforts have been paid to token reduction (Bolya & Hoffman, 2023; Zhang et al., 2025; 2024a), knowledge distillation (Li et al., 2024), and weight quantization (Li et al., 2023b; Shang et al., 2023) and pruning (Fang et al., 2023) on the denoising networks. Additionally, recent cache-based methods reduce redundant computations to accelerate inference for DMs. These cache-based methods have obtained abundant attention since they have no requirements for additional training. DeepCache (Ma et al., 2024b) eliminates redundant computations in Stable Diffusion (Rombach et al., 2022) by reusing intermediate features of low-resolution layers in the U-Net. Faster Diffusion (Li et al., 2023a) accelerates the sampling process of DMs by caching U-Net encoder features across timesteps, skipping encoder computations at certain steps. Unfortunately, DeepCache and Faster Diffusion are designed specifically for U-Net-based denoisers and can not be applied to DiT (Chen et al., 2024b). Recently, FORA (Selvaraju et al., 2024) and $\Delta$ -DiT (Chen et al., 2024b) have been proposed to cache the features and the residual of features for DiT. Learning-to-Cache (Ma et al., 2024a) learns an optimal cache strategy, which achieves a slightly higher acceleration ratio but introduces the requirements of training. However, these methods apply the identical cache solution to all the tokens and even all the layers, which leads to a significant performance degradation in generation quality.
52
+
53
+ # 3 METHODOLOGY
54
+
55
+ In this section, we briefly introduce diffusion models and feature caching for accelerating diffusion transformers, followed by the ToCa workflow and the selection of important tokens. The importance of a token $x_{i}$ is determined by: $s_1$, its interaction strength with other tokens; $s_2$, its association with the global textual information; $s_3$, its accumulated caching error, which should be controlled since an excessive error can cause the image to collapse; and $s_4$, the spatial uniformity of the selected tokens.
56
+
57
+ # 3.1 PRELIMINARY
58
+
59
+ Diffusion Models Diffusion models are formulated to contain two processes, including a forward process which adds Gaussian noise to a clean image, and a reverse process which gradually denoises a standard Gaussian noise to a real image. By denoting $t$ as the timestep and $\beta_{t}$ as the noise variance
60
+
61
+ ![](images/8a81d127ac640d4a8a632fa4f935f9191a8ef5407dfa4110b89bb38baef26b9a.jpg)
62
+ Figure 3: The overview of $\mathrm{ToCa}$ on the example of the first layer with caching ratio $R = 40\%$ . (a) In the first timestep of the cache period, $\mathrm{ToCa}$ computes all the tokens and stores them in the cache for initialization. Then, in the next timestep, $\mathrm{ToCa}$ first computes the caching score of each token and selects the tokens for cache based on them. Then, $\mathrm{ToCa}$ fetches the features of cached tokens from the cache while performing real computations in the other tokens. Then, the features of tokens that have been computed are utilized to update their value in the cache. (b) $\mathrm{ToCa}$ applies a higher cache ratio in the deep layer and a relatively lower cache ratio in the shallow layers.
63
+
64
+ schedule, then the conditional probability in the reverse (denoise) process can be modeled as
65
+
66
+ $$
67
+ p _ {\theta} \left(x _ {t - 1} \mid x _ {t}\right) = \mathcal {N} \left(x _ {t - 1}; \frac {1}{\sqrt {\alpha_ {t}}} \left(x _ {t} - \frac {1 - \alpha_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon_ {\theta} \left(x _ {t}, t\right)\right), \beta_ {t} \mathbf {I}\right), \tag {1}
68
+ $$
69
+
70
+ where $\alpha_{t} = 1 - \beta_{t}$, $\bar{\alpha}_{t} = \prod_{i=1}^{t} \alpha_{i}$, and $T$ denotes the number of timesteps. Importantly, $\epsilon_{\theta}$ denotes a denoising network with parameters $\theta$ that takes $x_{t}$ and $t$ as input and predicts the corresponding noise for denoising. For image generation with $T$ timesteps, $\epsilon_{\theta}$ must be evaluated $T$ times, which accounts for most of the computation cost of diffusion models. Recently, a growing body of work demonstrates that formulating $\epsilon_{\theta}$ as a transformer usually leads to better generation quality.
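+
+ For concreteness, a minimal PyTorch-style sketch of the reverse update in Eq. 1 is given below. The function and variable names (e.g., `ddpm_reverse_step`, `eps_theta`) are our own illustration rather than part of any released implementation.
+
+ ```python
+ import torch
+
+ def ddpm_reverse_step(x_t, t, eps_theta, alphas, alphas_bar, betas):
+     """One reverse (denoising) step of Eq. 1: sample x_{t-1} given x_t."""
+     eps = eps_theta(x_t, t)  # predicted noise epsilon_theta(x_t, t)
+     mean = (x_t - (1.0 - alphas[t]) / torch.sqrt(1.0 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
+     if t == 0:
+         return mean  # no noise is added at the final step
+     return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
+
+ # Example schedule (tensors): betas = torch.linspace(1e-4, 0.02, T); alphas = 1 - betas;
+ # alphas_bar = torch.cumprod(alphas, dim=0).
+ ```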
71
+
72
+ Diffusion Transformer Diffusion transformer models are usually composed of stacked groups of self-attention layers $f_{\mathrm{SA}}$, multilayer perceptrons $f_{\mathrm{MLP}}$, and cross-attention layers $f_{\mathrm{CA}}$ (for conditional generation). The model can be roughly formulated as $g^1 \circ g^2 \circ \ldots \circ g^L$, where $g^i = \{f_{\mathrm{SA}}^i, f_{\mathrm{CA}}^i, f_{\mathrm{MLP}}^i\}$. The superscript denotes the index of the layer group and $L$ denotes the number of groups. We omit the other components such as layer norm and residual connections here for simplicity. For diffusion transformers, the input data $\mathbf{x}_t$ is a sequence of tokens corresponding to different patches of the generated image, which can be formulated as $\mathbf{x}_t = \{x_i\}_{i=1}^{H \times W}$, where $H$ and $W$ denote the height and width of the image or of its latent code, respectively.
73
+
74
+ # 3.2 NAIVE FEATURE CACHING FOR DIFFUSION TRANSFORMERS
75
+
76
+ We follow the naive scheme for feature caching adopted by most previous caching methods (Ma et al., 2024b) for diffusion denoisers. We denote the cache as $\mathcal{C}$ and the feature of the input $\mathbf{x}_t$ in the $l_{th}$ layer as $\mathcal{F}(\mathbf{x}_t^l)$. Given a set of $\mathcal{N}$ adjacent timesteps $\{t, t + 1, \dots, t + \mathcal{N} - 1\}$, naive feature caching performs the complete computation at the first timestep $t$ and stores the intermediate features of all the layers, which can be formulated as $\mathcal{C}(\mathbf{x}_t^l) \coloneqq \mathcal{F}(\mathbf{x}_t^l)$ for all $l \in [0, L]$, where ":=" indicates the operation of assigning a value. Then, in the next $\mathcal{N} - 1$ timesteps, feature caching avoids the computation of the self-attention, cross-attention, and MLP layers by reusing the features cached at timestep $t$: for all $l \in [1, L]$, the naive feature caching can be formulated as
77
+
78
+ $$
79
+ \mathcal {F} \left(\mathbf {x} _ {t + 1} ^ {l - 1}\right) = \mathcal {F} \left(\mathbf {x} _ {t + 2} ^ {l - 1}\right) = \dots = \mathcal {F} \left(\mathbf {x} _ {t + \mathcal {N}} ^ {l - 1}\right) := \mathcal {C} \left(\mathbf {x} _ {t} ^ {l}\right). \tag {2}
80
+ $$
81
+
82
+ In these $\mathcal{N}$ timesteps, naive feature caching avoids almost all the computation in $\mathcal{N} - 1$ timesteps, leading to around $\mathcal{N} - 1$ times acceleration. After these $\mathcal{N}$ timesteps, feature caching starts a new period by re-initializing the cache as described above. The effectiveness of feature caching can be explained by the extremely small difference between tokens in adjacent timesteps. However, as $\mathcal{N}$ increases, the difference between the cached feature values and their correct values can grow exponentially, leading to degradation in generation quality, which motivates us to study more fine-grained feature caching methods.
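+
+ As a hedged illustration of this naive, layer-level scheme, the sketch below (our own code, not the paper's) runs one cache period: the first timestep computes every layer and initializes the cache, and the next $\mathcal{N} - 1$ timesteps reuse the cached outputs as in Eq. 2.
+
+ ```python
+ import torch
+
+ def naive_cache_period(layers, x, num_steps):
+     """Naive feature caching over one period of `num_steps` timesteps.
+
+     `layers` is a list of callables (e.g., transformer blocks). The first
+     timestep performs the full forward pass and fills the cache; the remaining
+     num_steps - 1 timesteps reuse the cached outputs instead of recomputing.
+     """
+     cache = {}
+     h = x
+     for l, layer in enumerate(layers):   # full computation at timestep t
+         h = layer(h)
+         cache[l] = h
+     outputs = [h]
+     for _ in range(num_steps - 1):       # timesteps t+1, ..., t+N-1: reuse the cache
+         outputs.append(cache[len(layers) - 1])
+     return outputs, cache
+ ```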
83
+
84
+ # 3.3 TOKEN-WISE FEATURE CACHING
85
+
86
+ The naive feature caching scheme caches all the tokens of the diffusion transformer with the same strategy. However, as demonstrated in Figures 1, 2 and 5, feature caching introduces significantly
87
+
88
+ ![](images/2da03c6e42b170997a442d7c76f25ad6c0d8b42cb720586efb1b150513a9bf6c.jpg)
89
+ (I) Influence to Other Tokens
90
+
91
+ ![](images/afa4f9ef0b11c79bbb317aab1b52da6f7c368b778dad74de0846b274dc612255.jpg)
92
+ (II) Control Ability
93
+
94
+ ![](images/f917d5a8d4ca32918eb5f1bd02cc37d23d8cb76220d14c006d503c624fb680c9.jpg)
95
+ (III) Cache Frequency
96
+ Figure 4: The computation of caching scores in ToCa, where a token with a lower cache score is encouraged to be cached: (I) Self-attention weights are utilized to measure the influence of each token on the other tokens, where a token with higher influence is considered not suitable for caching. (II) Cross-attention weights are utilized to measure the influence of each image token on the text (condition) tokens, where an image token with higher entropy is considered not suitable for caching. (III) Tokens that have been cached multiple times are encouraged to not be cached in the following layers. (IV) We increase the cache score for the token with the largest cache score in its neighboring pixels to make the cached tokens distributed uniformly in the spatial dimension.
97
+
98
+ ![](images/05de82a37dbab942e03ffcada7971a5565838df58e457ff98bdce9f9c6bc96b5.jpg)
99
+ (IV) Uniform Spatial Distribution
100
+
101
+ different influences on different tokens, motivating us to design a more fine-grained caching method at the token level. In this section, we begin with the overall framework of ToCa and then introduce our strategies for token selection and caching ratios.
102
+
103
+ # 3.4 OVERALL FRAMEWORK
104
+
105
+ Cache Initialization Similar to previous caching methods, given a set of adjacent timesteps $\{t,t + 1,\dots ,t + \mathcal{N} - 1\}$ , our method begins with computing all the tokens at the first timestep $t$ , and storing the computation result (intermediate features) of each self-attention, cross-attention, and MLP layer in a cache, denoted by $\mathcal{C}$ , as shown in the left part in Figure 3. This can be considered as the initialization of $\mathcal{C}$ , which has no difference compared with previous caching methods.
106
+
107
+ Computing with the Cache In the following timesteps, we can skip the computation of some unimportant tokens by reusing their values in the cache $\mathcal{C}$. We first pre-define the caching ratio $R$ of tokens in each layer, which indicates that the computation of a fraction $R$ of the tokens in this layer is skipped by reusing their values in the cache, while the remaining fraction $(1 - R)$ of the tokens is still computed. To achieve this, a caching score function $\mathcal{S}$ is introduced to decide whether a token should be cached, which will be detailed in the next section. Then, with $\mathcal{S}$, we can select a set of cached tokens $\mathcal{I}_{\mathrm{Cache}}$ and the set of tokens for real computation $\mathcal{I}_{\mathrm{Compute}} = \{1, 2, \dots, N\} \setminus \mathcal{I}_{\mathrm{Cache}}$. Then, the computation of the layer $f$ for the $i_{th}$ token $x_i$ can be formulated as $\mathcal{F}(x_i) = \gamma_i f(x_i) + (1 - \gamma_i)\mathcal{C}(x_i)$, where $\gamma_i = 0$ for $i \in \mathcal{I}_{\mathrm{Cache}}$ and $\gamma_i = 1$ for $i \in \mathcal{I}_{\mathrm{Compute}}$. $\mathcal{C}(x_i)$ denotes fetching the cached value of $x_i$ from $\mathcal{C}$, which has no computation cost and hence leads to an overall acceleration of $f$.
108
+
109
+ Cache Updating As a significant difference between traditional cache methods and ToCa, traditional cache methods only update the feature in the cache at the first timestep for each caching period while ToCa can update the feature in the cache at all the timesteps, which helps to reduce the error introduced by feature reusing. For the tokens $x_{i} \in \mathcal{I}_{\mathrm{Compute}}$ which are computed, we update their corresponding value in the cache $\mathcal{C}$ , which can be formulated as $\mathcal{C}(x_i) \coloneqq \mathcal{F}(x_i)$ for $i \in \mathcal{I}_{\mathrm{Compute}}$ .
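+
+ The steps above (token selection, computing with the cache, and cache updating) can be summarized by the following hedged sketch of one ToCa pass through a token-wise layer such as an MLP; the caching scores are assumed to be given, and all names are our own illustration rather than the released implementation.
+
+ ```python
+ import torch
+
+ def toca_layer_forward(f, x, cache, scores, cache_ratio):
+     """One ToCa pass through a token-wise layer f (e.g., an MLP).
+
+     x           : (N, d) input tokens of this layer
+     cache       : (N, d) features of this layer stored at earlier timesteps
+     scores      : (N,) caching scores S(x_i); lower scores are cached first
+     cache_ratio : fraction R of tokens whose computation is skipped
+     """
+     num_cached = int(cache_ratio * x.shape[0])
+     cached_idx = torch.topk(scores, num_cached, largest=False).indices
+     compute = torch.ones(x.shape[0], dtype=torch.bool, device=x.device)
+     compute[cached_idx] = False          # gamma_i = 0 -> fetch from the cache
+
+     out = cache.clone()                  # cached tokens: reuse stored features
+     out[compute] = f(x[compute])         # the other tokens: real computation
+     cache[compute] = out[compute]        # cache updating for the computed tokens
+     return out, cache
+ ```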
110
+
111
+ # 3.5 TOKEN SELECTION
112
+
113
+ Given a sequence of tokens $\mathbf{x}_t = \{x_i\}_{i=1}^N$, token selection aims to select the tokens that are suitable for caching. To this end, we define a caching score function $S(x_i)$ to decide whether the $i_{th}$ token $x_i$ should be cached, where a token with a higher score has a lower priority for caching and a higher priority to be actually computed. The score $S(x_i)$ is composed of four sub-scores $\{s_1, s_2, s_3, s_4\}$, corresponding to the following four principles.
114
+
115
+ (I) Influence to Other Tokens: If a token has a significant contribution to the value of other tokens, then the error caused by token caching on this token can easily propagate to the other tokens, ultimately leading to discrepancies between all tokens and their correct values. Consequently, we consider the contribution of each token to other tokens as one of the criteria for defining whether it should be cached, estimated with an attention score in self-attention. Recall that the self-attention can be formulated as $\mathbf{O} = \mathbf{A}\mathbf{V}$ , where $\mathbf{A} = \operatorname{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\mathrm{T}}}{\sqrt{\mathbf{d}}}\right) \in \mathbb{R}^{N \times N}$ denotes the normalized
116
+
117
+ ![](images/5f083c8ee4ba41aa58f24dee53fccf60d3e64793e612367e1be4ac7177f5eb52.jpg)
118
+
119
+ ![](images/05eb1c28cc9d133da755159c13aa020c693a220e7b730ce3d19698bc4cdc2ac9.jpg)
120
+ Figure 5: (a) The distance between features at the last timestep and the current timestep for features in different layer depths. (b) The distribution of errors in the output of the final layer when the same Gaussian noise is applied to tokens in different layer depths. (c) The distribution of errors in the output of the final layer when the same Gaussian noise is applied to tokens in different layer types.
121
+
122
+ ![](images/40a717723d0346fba3804cd30f31f094156bb53c53905ab27fc7814b48154575.jpg)
123
+
124
+ attention map. $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ and $\mathbf{O} \in \mathbb{R}^{N \times d}$ are the query, key, value and output tokens, respectively; $d$ is the hidden size of each token and $N$ is the total number of tokens. More specifically, the $i_{th}$ output token is obtained through $o_i = \sum_{j = 1}^N \alpha_{ij} v_j$, where $\alpha_{ij}$ is the $(i,j)$ element of the attention map $\mathbf{A}$, denoting the contribution of the value token $v_{j}$ to the output token $o_i$. With these notations, we define $s_1$ to measure the contribution of $x_{i}$ to the other tokens as $s_1(x_i) = \sum_{j = 1}^N \alpha_{ji}$, i.e., the sum of the $i_{th}$ column of $\mathbf{A}$, as shown in Figure 4(I).
125
+
126
+ (II) Influence to Control Ability: The control ability of diffusion models in text-to-image generation is usually achieved with a cross-attention layer that injects the control signal (e.g., text) into the image tokens. Hence, the cross-attention map reflects how each image token is influenced by the control signal. In this paper, we regard the image tokens that are influenced by more tokens in the control signal as unsuitable for caching, since caching errors on these tokens do more harm to the control ability. Specifically, let $c_{ij}$ denote the $(i,j)$ element of the cross-attention map $\mathbf{C} = \mathrm{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}_{\mathrm{text}}^{\mathrm{T}}}{\sqrt{d}}\right)$, where $\mathbf{K}_{\mathrm{text}}$ denotes the keys of the text (control) tokens. Then, as shown in Figure 4(II), we employ the entropy $H(x_i)$ of the cross-attention weights of each image token $x_i$ as its influence on the control ability of the diffusion model, which can be formulated as $s_2(x_i) = H(x_i) = -\sum_{j} c_{ij} \log(c_{ij})$, where the sum runs over the text tokens.
127
+
128
+ (III) Cache Frequency: We observe that when a token is cached across multiple adjacent layers, the error introduced by feature caching in this token can accumulate quickly, and the difference between it and its correct value can be exponentially amplified, which significantly degrades the overall image quality. Hence, we regard recently cached tokens as unsuitable for caching in the following layers and timesteps. Conversely, tokens that have not been cached for multiple layers and timesteps are encouraged to be cached. As shown in Figure 4(III), this selection rule is implemented by recording how many times each token has been cached since its last real computation, which can be formulated as $s_3(x_i) = \frac{n_i}{\mathcal{N}}$, where $n_i$ represents the number of times that $x_i$ has been cached since its last real computation and $\mathcal{N}$ is the number of timesteps in each feature caching cycle.
129
+
130
+ (IV) Uniform Spatial Distribution: Neighboring patches of an image usually contain similar information. As discussed in previous works, aggressively pruning the information in a local spatial region may result in significant quality degradation of the whole image (Bolya & Hoffman, 2023). Hence, to guarantee that the errors introduced by caching are not densely concentrated in the same spatial region, we define the following scoring function: $s_4(x_i) = \mathcal{I}(x_i) \cdot (\lambda_1 \cdot s_1(x_i) + \lambda_2 \cdot s_2(x_i) + \lambda_3 \cdot s_3(x_i))$, where $\mathcal{I}(x_i)$ is an indicator function that equals 1 if $x_i$ has the highest value of $\lambda_1 \cdot s_1(x_i) + \lambda_2 \cdot s_2(x_i) + \lambda_3 \cdot s_3(x_i)$ among its neighboring $k \times k$ pixels and 0 otherwise, and $\lambda_j$ are hyper-parameters that balance the scores.
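+
+ One possible implementation of this spatial term is sketched below, assuming the image tokens form an $H \times W$ grid; the max-pooling trick for locating local maxima is our own choice and not necessarily how the released code computes it.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def spatial_uniformity_score(base_score, H, W, k=3):
+     """Sketch of s4: up-weight tokens that are local maxima of the base score.
+
+     base_score : (H*W,) per-token value of lambda1*s1 + lambda2*s2 + lambda3*s3.
+     A token receives a non-zero s4 only if it holds the largest base score in
+     its k x k neighborhood (k should be odd), which spreads the computed
+     (non-cached) tokens more uniformly over the image.
+     """
+     grid = base_score.view(1, 1, H, W)
+     local_max = F.max_pool2d(grid, kernel_size=k, stride=1, padding=k // 2)
+     indicator = (grid == local_max).float()   # 1 only at local maxima
+     return (indicator * grid).view(-1)
+ ```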
131
+
132
+ In summary, the overall caching score of $x_{i}$ can be formulated as $S(x_{i}) = \sum_{j=1}^{4} \lambda_{j} \cdot s_{j}(x_{i})$ , where $\lambda_{j}$ are hyper-parameters to balance each score. Then, with the cache ratio $R$ , the index set for the cached tokens is obtained in the following form:
133
+
134
+ $$
135
+ \mathcal{I}_{\mathrm{Cache}} = \underset{\mathcal{I} \subseteq \{1, 2, \dots, N\},\; |\mathcal{I}| = R \times N}{\arg\min} \; \sum_{i \in \mathcal{I}} \mathcal{S}\left(x_{i}\right). \tag{3}
136
+ $$
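+
+ Putting the scores together, the following minimal sketch ($s_4$ omitted for brevity; names and the top-k-based selection are ours) computes $S(x_i)$ from the attention maps and cache counts, and then selects the lowest-scoring tokens as in Eq. 3.
+
+ ```python
+ import torch
+
+ def caching_scores(self_attn, cross_attn, cache_counts, cycle_len,
+                    lambdas=(1.0, 1.0, 1.0)):
+     """Sketch of the caching score S(x_i) built from s1, s2, and s3.
+
+     self_attn    : (N, N) self-attention map A; the i-th column sum gives s1(x_i)
+     cross_attn   : (N, M) image-to-text cross-attention map C; row entropy gives s2(x_i)
+     cache_counts : (N,) times each token was cached since its last real computation
+     cycle_len    : number of timesteps in one caching cycle
+     """
+     s1 = self_attn.sum(dim=0)                                     # influence on other tokens
+     s2 = -(cross_attn * torch.log(cross_attn + 1e-9)).sum(dim=1)  # entropy over text tokens
+     s3 = cache_counts.float() / cycle_len                         # cache frequency
+     l1, l2, l3 = lambdas
+     return l1 * s1 + l2 * s2 + l3 * s3
+
+ def select_cached_tokens(scores, cache_ratio):
+     """Eq. 3: cache the fraction R of tokens with the lowest caching scores."""
+     num_cached = int(cache_ratio * scores.numel())
+     return torch.topk(scores, num_cached, largest=False).indices
+ ```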
137
+
138
+ # 3.6 DIFFERENT CACHE RATIOS IN DIFFERENT LAYERS
139
+
140
+ Figure 5 shows the difference in feature caching of different layers, where (a) shows that the output features of different layers have different distances compared with their value in the last step (i.e. different temporal redundancy). (b) and (c) show that when errors in the same density are applied to
141
+
142
+ ![](images/d375c56f5a6600fe794e583f2a10b3860242ba243684da3fe1d74789193ad9cc.jpg)
143
+ Figure 6: Visualization examples for different acceleration methods on PixArt- $\alpha$ .
144
+
145
+ layers at different depths and of different types, the resulting error in the final layer exhibits extremely different magnitudes. Specifically, in the three studies, the disparity between the maximum and minimum error can be several orders of magnitude. Fortunately, ToCa enables us to apply different caching ratios to layers at various depths and of various types. By denoting the overall caching ratio for all the layers and timesteps as $R$, two factors $r_{l}$ and $r_{\mathrm{type}}$ are introduced to adjust the caching ratios. Then, the final caching ratio of a layer at depth $l$ and of a given type can be written as $R_{\mathrm{type}}^{l} = R \times r_{l} \times r_{\mathrm{type}}$.
146
+
147
+ $r_l$ : As introduced in Figure 5(a) and (b), although the features of the shallow layers tend to exhibit lower differences than those of the deeper layers, the error introduced by cached tokens in the shallow layers can be propagated to the other tokens and amplified during the computation in all the following layers, resulting in a much larger caching error. Based on this observation, we set larger and smaller cache ratios for deeper and shallower layers, respectively, by setting $r_l = 0.5 + \lambda_l(l / L - 0.5)$, where the constant 0.5 centers the ratio at the middle layer, $L$ denotes the maximal depth, and $\lambda_l$ controls the slope.
148
+
149
+ $r_{\mathrm{type}}$ : As shown in Figure 5(c), layers of different types have different sensitivities to feature caching. Especially, the error on the token in self-attention layers can quickly propagate to other tokens, due to the property that each token in self-attention layers can attend to all the tokens. A naive solution is to set lower cache ratios for self-attention layers. However, we observe that even if only a smaller ratio of tokens is cached, the error introduced by these tokens still quickly propagates to all other tokens, and has almost the same negative influence as caching all the tokens. Based on this fact, we propose to cache all tokens in self-attention layers. For MLP and cross-attention layers, $r_{\mathrm{type}}$ is set to the ratio of their computation costs over the overall computation costs. This strategy encourages layers with more computation costs to have a higher cache ratio.
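+
+ The two factors can be combined into a small helper, as sketched below; the depth rule follows $r_l = 0.5 + \lambda_l(l/L - 0.5)$, while the type factors given here are illustrative placeholders rather than the paper's settings.
+
+ ```python
+ def layer_cache_ratio(base_ratio, depth, num_layers, layer_type,
+                       lambda_depth=0.5, type_factors=None):
+     """Sketch of the per-layer caching ratio R^l_type = R * r_l * r_type.
+
+     r_l grows linearly with depth, so deeper layers cache more tokens;
+     r_type rescales the ratio per layer type (placeholder values below).
+     """
+     r_l = 0.5 + lambda_depth * (depth / num_layers - 0.5)
+     if type_factors is None:
+         type_factors = {"mlp": 1.0, "cross_attn": 1.0, "self_attn": 1.0}
+     return base_ratio * r_l * type_factors[layer_type]
+ ```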
150
+
151
+ # 4 EXPERIMENT
152
+
153
+ # 4.1 EXPERIMENT SETTINGS
154
+
155
+ Model Configurations We conduct experiments on three commonly-used DiT-based models across different generation tasks, including PixArt- $\alpha$ (Chen et al., 2024a) for text-to-image generation, OpenSora (Zheng et al., 2024) for text-to-video generation, and DiT-XL/2 (Peebles & Xie, 2023) for class-conditional image generation with NVIDIA A800 80GB GPUs. Each model utilizes its default sampling method: DPM-Solver++ (Lu et al., 2022b) with 20 steps for PixArt- $\alpha$ , rflow (Liu et al., 2023) with 30 steps for OpenSora and DDPM (Ho et al., 2020) with 250 steps for DiT-XL/2. For each model, we configure different average forced activation cycles $\mathcal{N}$ and average caching ratios $R$ for ToCa as follows: PixArt- $\alpha$ : $\mathcal{N} = 3$ and $R = 70\%$ , OpenSora: $\mathcal{N} = 3$ for temporal
156
+
157
+ Table 1: Quantitative comparison of text-to-image generation on MS-COCO2017 and PartiPrompts with PixArt-$\alpha$ and 20 DPM++ sampling steps by default.
158
+
159
+ <table><tr><td>Method</td><td>Latency(s) ↓</td><td>FLOPs ↓</td><td>Speed ↑</td><td>MS-COCO2017 FID-30k ↓</td><td>MS-COCO2017 CLIP ↑</td><td>PartiPrompts CLIP ↑</td></tr><tr><td>PixArt-α (Chen et al., 2024a)</td><td>0.682</td><td>11.18</td><td>1.00×</td><td>28.09</td><td>16.32</td><td>16.70</td></tr><tr><td>50% steps</td><td>0.391</td><td>5.59</td><td>2.00×</td><td>37.46</td><td>15.85</td><td>16.37</td></tr><tr><td>FORA(N = 2) (Selvaraju et al., 2024)</td><td>0.416</td><td>5.66</td><td>1.98×</td><td>29.67</td><td>16.40</td><td>17.19</td></tr><tr><td>FORA(N = 3) (Selvaraju et al., 2024)</td><td>0.342</td><td>4.01</td><td>2.79×</td><td>29.88</td><td>16.42</td><td>17.15</td></tr><tr><td>ToCa (N = 3, R = 60%)</td><td>0.410</td><td>6.33</td><td>1.77×</td><td>28.02</td><td>16.45</td><td>17.15</td></tr><tr><td>ToCa (N = 3, R = 70%)</td><td>0.390</td><td>5.78</td><td>1.93×</td><td>28.33</td><td>16.44</td><td>17.75</td></tr><tr><td>ToCa (N = 3, R = 80%)</td><td>0.370</td><td>5.05</td><td>2.21×</td><td>28.82</td><td>16.44</td><td>17.83</td></tr><tr><td>ToCa (N = 3, R = 90%)</td><td>0.347</td><td>4.26</td><td>2.62×</td><td>29.73</td><td>16.45</td><td>17.82</td></tr></table>
162
+
163
+ Table 2: Quantitative comparison in text-to-video generation on VBench. *Results are from PAB (Zhao et al., 2024). PAB $^{1-3}$ indicate PAB with different hyper-parameters.
164
+
165
+ <table><tr><td>Method</td><td>Latency(s) ↓</td><td>FLOPs(T) ↓</td><td>Speed ↑</td><td>VBenchmark(%) ↑</td></tr><tr><td>OpenSora (Zheng et al., 2024)</td><td>81.18</td><td>3283.20</td><td>1.00×</td><td>79.13</td></tr><tr><td>Δ-DiT* (Chen et al., 2024b)</td><td>79.14</td><td>3166.47</td><td>1.04×</td><td>78.21</td></tr><tr><td>T-GATE* (Zhang et al., 2024b)</td><td>67.98</td><td>2818.40</td><td>1.16×</td><td>77.61</td></tr><tr><td>PAB1* (Zhao et al., 2024)</td><td>60.78</td><td>2657.70</td><td>1.24×</td><td>78.51</td></tr><tr><td>PAB2* (Zhao et al., 2024)</td><td>59.16</td><td>2615.15</td><td>1.26×</td><td>77.64</td></tr><tr><td>PAB3* (Zhao et al., 2024)</td><td>56.64</td><td>2558.25</td><td>1.28×</td><td>76.95</td></tr><tr><td>50% steps</td><td>42.72</td><td>1641.60</td><td>2.00×</td><td>76.78</td></tr><tr><td>FORA(Selvaraju et al., 2024)</td><td>49.26</td><td>1751.32</td><td>1.87×</td><td>76.91</td></tr><tr><td>ToCa(R = 80%)</td><td>43.52</td><td>1439.70</td><td>2.28×</td><td>78.59</td></tr><tr><td>ToCa(R = 85%)</td><td>43.08</td><td>1394.03</td><td>2.36×</td><td>78.34</td></tr></table>
166
+
167
+ attention, spatial attention, MLP, and $\mathcal{N} = 6$ for cross-attention, with $R = 85\%$ exclusively for MLP, and DiT: $\mathcal{N} = 4$ and $R = 93\%$ . Please refer to the appendix for more implementation details.
168
+
169
+ Evaluation and Metrics For text-to-image generation, we utilize 30,000 captions randomly selected from COCO-2017 (Lin et al., 2014) to generate an equivalent number of images. FID-30k is computed to assess image quality, while the CLIP Score (Hessel et al., 2021) is used to evaluate the alignment between image content and captions. In the case of text-to-video generation, we leverage the VBenchmark framework (Huang et al., 2024), generating 5 videos for each of the 950 benchmark prompts under different random seeds, resulting in a total of 4,750 videos. The generated videos are comprehensively evaluated across 16 aspects proposed in VBenchmark. For class-conditional image generation, we uniformly sample from 1,000 classes in ImageNet (Deng et al., 2009) to produce 50,000 images at a resolution of $256 \times 256$ , evaluating performance using FID-50k (Heusel et al., 2017). Additionally, we employ sFID, Precision, and Recall as supplementary metrics.
170
+
171
+ # 4.2 RESULTS ON TEXT-TO-IMAGE GENERATION
172
+
173
+ In Table 1, we compare ToCa configured with parameters to achieve an acceleration ratio close to 2.0, against two other training-free acceleration approaches: FORA (Selvaraju et al., 2024), a recent cache-based high-acceleration method, and the 10-step DPM-Solver++ sampling (Lu et al., 2022b). In terms of generation quality, the quantitative results demonstrate that ToCa achieves the lowest FID among the compared acceleration methods while maintaining a high acceleration ratio. Figure 6 also illustrates that our generated results most closely resemble those of the original PixArt-α. Regarding generation consistency, Table 1 demonstrates that ToCa achieves the highest CLIP score on both MS-COCO2017(Lin et al., 2014) and the PartiPrompts(Yu et al., 2022). Figure 6 shows that ToCa generates images that align more closely with the text descriptions compared to other methods. This is particularly evident in the fourth case, where only ToCa successfully generates an image matching "a small rowboat tied to a wooden dock", while other methods fail to generate the content of "a wooden dock". This may be caused by cross-attention score $s_2$ in ToCa that ensures the frequent refreshing of tokens that are semantically relevant to the text descriptions, resulting in generated images with enhanced semantic consistency to the text prompts.
174
+
175
+ ![](images/c6c2b90c36028a9f3f13f40aadec3f9a13a3dffbdcc24f03181179c34af4f519.jpg)
176
+ Figure 7: Visualization examples for different acceleration methods on OpenSora. Please kindly refer to the supplementary material or our web page for viewing these videos.
177
+
178
+ Notably, Table 1 shows that under similar acceleration ratios, ToCa exhibits only a marginal decrease in generation quality. In contrast, directly halving the number of timesteps leads to a 9.37 increase in FID, indicating a significant performance drop. This observation indicates that when the number of sampling steps is already relatively low (e.g., 20 steps), further reducing the number of sampling steps may severely compromise the generation quality. In contrast, ToCa remains effective, demonstrating its distinct advantage in the low-sampling-step regime.
179
+
180
+ # 4.3 RESULTS ON TEXT-TO-VIDEO GENERATION
181
+
182
+ We compare ToCa with adjusted rflow sampling steps from 20 to 10, alongside other acceleration methods including FORA, PAB (Zhao et al., 2024), $\Delta$ -DiT (Chen et al., 2024b), and T-GATE (Zhang et al., 2024b) using OpenSora (Zheng et al., 2024) for text-to-video generation. As presented in Table 2, the experimental results show that ToCa achieves an impressive VBench score offering the lowest computational cost and highest inference speed among all methods tested. The VBench score of the $2.36\times$ accelerated ToCa scheme drops by only 0.79 compared to the non-accelerated scheme, while FORA's score decreases by 2.22, resulting in a $64.4\%$ reduction in quality loss. Additionally, more VBench metrics results are presented in Figure 8, which illustrate that ToCa significantly speeds up
183
+
184
+ the original OpenSora with only slight performance degradation on a few metrics. Notably, ToCa stands out as the sole acceleration method whose performance is nearly consistent with the original OpenSora across the board, clearly outperforming FORA, another cache-based acceleration method. This again highlights the effectiveness of our proposed cross-attention-based token selection strategy, which ensures that the generated videos are highly aligned with the text descriptions. We further present some video generation results in Figure 7, where we observe that the visual fidelity and overall consistency of ToCa are closest to the original OpenSora. Please kindly refer to the video demos in the supplementary material or on our web page.
185
+
186
+ ![](images/9d5da4647a55191d47796ed4aa10d77f7567ce82a9c424d5c36118a490e3611a.jpg)
187
+ Figure 8: VBench metrics and acceleration ratio of proposed ToCa and other methods
188
+
189
+ # 4.4 RESULTS ON CLASS-CONDITIONAL IMAGE GENERATION
190
+
191
+ Quantitative comparison between ToCa with other training-free DiT acceleration methods is shown in Table 3, which demonstrates that ToCa outperforms other methods in terms of both FID and sFID by a clear margin under the similar acceleration ratio. For instance, ToCa leads to 0.39 and 0.22 lower values in sFID and FID compared with FORA with a similar acceleration ratio, respectively.
192
+
193
+ Table 3: Quantitative comparison on class-to-image generation on ImageNet with DiT-XL/2.
194
+
195
+ <table><tr><td>Method</td><td>Latency(s) ↓</td><td>FLOPs(T) ↓</td><td>Speed ↑</td><td>sFID ↓</td><td>FID ↓</td><td>Precision ↑</td><td>Recall ↑</td></tr><tr><td>DiT-XL/2-G (cfg = 1.50)</td><td>2.012</td><td>118.68</td><td>1.00×</td><td>4.98</td><td>2.31</td><td>0.82</td><td>0.58</td></tr><tr><td>33% steps</td><td>0.681</td><td>39.40</td><td>3.01×</td><td>6.31</td><td>2.76</td><td>0.81</td><td>0.57</td></tr><tr><td>37% steps</td><td>0.748</td><td>44.15</td><td>2.69×</td><td>6.04</td><td>2.64</td><td>0.81</td><td>0.58</td></tr><tr><td>FORA(N = 3)</td><td>0.807</td><td>39.95</td><td>2.97×</td><td>6.21</td><td>2.80</td><td>0.80</td><td>0.59</td></tr><tr><td>FORA(N = 2.8)</td><td>0.815</td><td>43.36</td><td>2.74×</td><td>6.13</td><td>2.80</td><td>0.80</td><td>0.59</td></tr><tr><td>ToCa (N = 4, R = 93%)</td><td>0.820</td><td>43.22</td><td>2.75×</td><td>5.74</td><td>2.58</td><td>0.81</td><td>0.59</td></tr></table>
196
+
197
+ Table 4: Ablation studies with DiT-XL/2-G (cfg = 1.50). $s_1$ is used in all experiments. $s_2$ is not used since DiT does not have cross-attention layers.
198
+
199
+ <table><tr><td colspan="2">R Schedule</td><td rowspan="2">Uniform Spatial Distribution s4</td><td rowspan="2">Cache Frequency s3</td><td rowspan="2">ImageNet FID-5k ↓</td></tr><tr><td>rl</td><td>rttype</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>9.32</td></tr><tr><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>9.60</td></tr><tr><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>9.67</td></tr><tr><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>9.35</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>9.65</td></tr></table>
200
+
201
+ Table 5: Ablation studies of token selection based on different attention scores ($s_1$ and $s_2$) with PixArt-$\alpha$. "Random" indicates replacing attention scores with random values. $s_3$, $s_4$, $r_l$, and $r_{\mathrm{type}}$ are used in all three settings.
202
+
203
+ <table><tr><td>Token Selection Methods</td><td>MS-COCO2017FID-30k ↓</td><td>PartiPromptsCLIP ↑</td></tr><tr><td>Cross-Attention s2</td><td>28.33</td><td>17.75</td></tr><tr><td>Self-Attention s1</td><td>28.21</td><td>17.13</td></tr><tr><td>Random</td><td>28.46</td><td>17.08</td></tr></table>
204
+
205
+ Ablation Study Table 4 presents the effect of the two factors for adjusting the caching ratio in different layers, where applying different caching ratios to layers at different depths ($r_l$) and of different types ($r_{\mathrm{type}}$) leads to 0.28 and 0.35 FID reductions, respectively. Besides, using the uniform spatial distribution score ($s_4$) and the cache frequency score ($s_3$) reduces FID by 0.02 and 0.33, respectively. Table 5 compares the influence of selecting tokens with the self-attention weights ($s_1$) and the cross-attention weights ($s_2$). The other ToCa modules, including $s_3, s_4, r_l, r_{\mathrm{type}}$, are utilized in these experiments. It is observed that $s_1$ tends to achieve a lower FID while $s_2$ tends to reach a higher CLIP score, which is reasonable since self-attention is mainly utilized for generating the overall image while cross-attention is utilized to inject the conditional signals. In summary, these results demonstrate that all the cache scores in ToCa have their own benefits in different dimensions.
206
+
207
+ ![](images/d86dee1d9945d6789b073e2648e56d32b6239f2142b4a552a6fd8b7e83627d77.jpg)
208
+ (a) ToCa w/o Cache Frequency Score $s_3$
209
+
210
+ ![](images/51ebbf55f7cc65fa4cc70b8aeb3a1781102793b7840fe27c939e78c66992d6a7.jpg)
211
+
212
+ Frequency Score $s_3$
213
+
214
+ ![](images/7609c3f001bbb0bda21734ee52bd94748e1242644ba9e3e74fd75bd25e777d10.jpg)
215
+ Figure 9: Visualization of cached tokens selected with and without $s_3$ (cache frequency). The pixel with a darker color indicates the corresponding tokens are more frequently cached. (c) The distribution of the number of being cached for each token w/ and w/o $s_3$ .
216
+
217
+ ![](images/8942690a6d48ec75817c9d0487004334a4f44061d41a8cef027a94b56f5dafce.jpg)
218
+ (b) ToCa with Cache Frequency Score $s_3$
219
+
220
+ ![](images/753d6dabf46519bd032dcd90c2d264b3c83c24f03214740e95f40c7bd6e7dde2.jpg)
221
+ (c) Distributions of the cache times of each token (with and w/o $s_3$ )
222
+
223
+ Visualization of the Cached Tokens Figure 9 (a-b) shows the number of times each token is cached during generation, where darker colors indicate more frequent caching. It is observed that both schemes perform more caching on the unimportant background tokens while performing more real computation on the tokens of the Arctic fox. However, the image generated without $s_3$ has poor background quality since the background tokens have been cached too many times. In contrast, applying the cache frequency score $s_3$, which stops caching tokens that have already been cached in previous layers, reduces the gap between the important and unimportant tokens and prevents the background tokens from being cached excessively often. This observation is also verified in Figure 9(c): $s_3$ reduces the number of tokens cached by more than 4.5k times.
224
+
225
+ # 5 CONCLUSION
226
+
227
+ Motivated by the observation that different tokens exhibit different temporal redundancy and different error propagation, this paper introduces ToCa, a token-wise feature caching method, which adaptively skips the computation of some tokens by reusing their features from previous timesteps. By leveraging the differences between tokens, ToCa outperforms previous caching methods by a clear margin in both image and video generation, providing insights for token-wise optimization in diffusion transformers.
228
+
229
+ # REFERENCES
230
+
231
+ Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022.
232
+ Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.
233
+ Daniel Bolya and Judy Hoffman. Token merging for fast stable diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4599-4603, 2023.
234
+ Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. URL https://openai.com/research/video-generation-models-as-world-simulators.
235
+ Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In International Conference on Learning Representations, 2024a.
236
+ Pengtao Chen, Mingzhu Shen, Peng Ye, Jianjian Cao, Chongjun Tu, Christos-Savvas Bouganis, Yiren Zhao, and Tao Chen. $\delta$ -dit: A training-free acceleration method tailored for diffusion transformers. arXiv preprint arXiv:2406.01125, 2024b.
237
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
238
+ Gongfan Fang, Xinyin Ma, and Xinchao Wang. Structural pruning for diffusion models. arXiv preprint arXiv:2305.10924, 2023.
239
+ Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021.
240
+ Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
241
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
242
+ Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21807-21818, 2024.
243
+ PKU-Yuan Lab and Tuzhan AI etc. Open-sora-plan, April 2024. URL https://doi.org/10.5281/zenodo.10948109.
244
+ Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024.
245
+ Senmao Li, Taihang Hu, Fahad Shahbaz Khan, Linxuan Li, Shiqi Yang, Yaxing Wang, Ming-Ming Cheng, and Jian Yang. Faster diffusion: Rethinking the role of unet encoder in diffusion models. arXiv preprint arXiv:2312.09608, 2023a.
246
+ Xiuyu Li, Yijiang Liu, Long Lian, Huanrui Yang, Zhen Dong, Daniel Kang, Shanghang Zhang, and Kurt Keutzer. Q-diffusion: Quantizing diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 17535-17545, 2023b.
247
+ Yanyu Li, Huan Wang, Qing Jin, Ju Hu, Pavlo Chemerys, Yun Fu, Yanzhi Wang, Sergey Tulyakov, and Jian Ren. Snapfusion: Text-to-image diffusion model on mobile devices within two seconds. Advances in Neural Information Processing Systems, 36, 2024.
248
+
249
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740-755. Springer, 2014.
250
+ Xingchao Liu, Chengyue Gong, et al. Flow straight and fast: Learning to generate and transfer data with rectified flow. In The Eleventh International Conference on Learning Representations, 2023.
251
+ Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35:5775-5787, 2022a.
252
+ Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022b.
253
+ Xinyin Ma, Gongfan Fang, Michael Bi Mi, and Xinchao Wang. Learning-to-cache: Accelerating diffusion transformer via layer caching. arXiv preprint arXiv:2406.01733, 2024a.
254
+ Xinyin Ma, Gongfan Fang, and Xinchao Wang. Deepcache: Accelerating diffusion models for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15762-15772, 2024b.
255
+ Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14297-14306, 2023.
256
+ William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.
257
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022.
258
+ Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pp. 234-241. Springer, 2015.
259
+ Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512, 2022.
260
+ Pratheba Selvaraju, Tianyu Ding, Tianyi Chen, Ilya Zharkov, and Luming Liang. Fora: Fast-forward caching in diffusion transformer acceleration. arXiv preprint arXiv:2407.01425, 2024.
261
+ Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, and Yan Yan. Post-training quantization on diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1972-1981, 2023.
262
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pp. 2256-2265. PMLR, 2015.
263
+ Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
264
+ Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In International Conference on Machine Learning, pp. 32211-32252. PMLR, 2023.
265
+ Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, et al. Cache me if you can: Accelerating diffusion models through block caching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6211-6220, 2024.
266
+
267
+ Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022.
268
+ Evelyn Zhang, Bang Xiao, Jiayi Tang, Qianli Ma, Chang Zou, Xuefei Ning, Xuming Hu, and Linfeng Zhang. Token pruning for caching better: 9 times acceleration on stable diffusion for free. arXiv preprint arXiv:2501.00375, 2024a. URL https://api.semanticscholar.org/CorpusID:275211828.
269
+ Evelyn Zhang, Jiayi Tang, Xuefei Ning, and Linfeng Zhang. Training-free and hardware-friendly acceleration for diffusion models via similarity-based token pruning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2025.
270
+ Wentian Zhang, Haozhe Liu, Jinheng Xie, Francesco Faccio, Mike Zheng Shou, and Jurgen Schmidhuber. Cross-attention makes inference cumbersome in text-to-image diffusion models. arXiv preprint arXiv:2404.02747v1, 2024b.
271
+ Xuanlei Zhao, Xiaolong Jin, Kai Wang, and Yang You. Real-time video generation with pyramid attention broadcast. arXiv preprint arXiv:2408.12588, 2024.
272
+ Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, March 2024. URL https://github.com/hpcaitech/Open-Sora.
273
+
274
+ # A APPENDIX
275
+
276
+ # A.1 ENGINEERING DETAILS
277
+
278
+ This section introduces some engineering techniques in our work.
279
+
280
+ # A.1.1 STEP-WISE CACHING SCHEDULING
281
+
282
+ In section 3.6, we propose a method for dynamically adjusting the caching ratio $R$ based on the time redundancy and noise diffusion speed across different depths and types of layers, which constitutes a key part of our contributions. In the following section, we further explore the dynamic adjustment of $R$ along the timestep dimension, as well as strategies for dynamically adjusting the forced activation cycle $\mathcal{N}$ .
283
+
284
+ At the initial stages of image generation, the model primarily focuses on generating contours, while in the later stages it pays more attention to details. In the early contour-generation phase, fewer tokens need to be computed at full precision. By multiplying $R$ by a term $r_t$ , we achieve dynamic adjustment of $R$ along the timestep dimension, where $r_t = 0.5 + \lambda_t(0.5 - t / T)$ , $\lambda_t$ is a positive parameter controlling the slope, $t$ is the number of timesteps already processed, and $T$ is the total number of timesteps. By adjusting $R$ in this way, we shift some of the computational load from earlier timesteps to later ones, improving the quality of the generated images. Finally, the caching ratio is determined as $R_{\mathrm{type}}^{l,t} = R \times r_l \times r_{\mathrm{type}} \times r_t$ .
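For illustration, the sketch below evaluates the step-wise factor $r_t$ and the combined caching ratio exactly as written above; the function names, the clamping to 1.0, and the example factor values are our own assumptions, not the released ToCa implementation.

```python
# Minimal sketch of the step-wise caching-ratio schedule described above.
# R, lambda_t, r_l, r_type are assumed to be given; only r_t follows the formula in the text.

def step_ratio(t: int, T: int, lambda_t: float) -> float:
    """r_t = 0.5 + lambda_t * (0.5 - t / T): more caching early, more computation late."""
    return 0.5 + lambda_t * (0.5 - t / T)

def caching_ratio(R: float, r_l: float, r_type: float, t: int, T: int, lambda_t: float) -> float:
    """Combine the base ratio R with layer, module-type, and timestep factors (clamped to 1.0)."""
    return min(1.0, R * r_l * r_type * step_ratio(t, T, lambda_t))

if __name__ == "__main__":
    # Example: base ratio 70%, neutral layer/type factors, 50 timesteps (illustrative values).
    for t in (0, 25, 49):
        print(t, round(caching_ratio(0.70, 1.0, 1.0, t, T=50, lambda_t=0.4), 3))
```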
285
+
286
+ Similarly, we set a larger forced activation cycle $\mathcal{N}$ during the earlier stages, while a smaller $\mathcal{N}$ is used during the later detail generation phase to enhance the quality of the details. To ensure that the adjustment of $\mathcal{N}$ has minimal impact on the theoretical speedup, we define it as follows: $\mathcal{N}_t = \mathcal{N}_0 / (0.5 + w_t(t / T - 0.5))$ , where $\mathcal{N}_0$ corresponds to the expected theoretical speedup induced by $\mathcal{N}$ , and $w_{t}$ is a hyperparameter controlling the slope.
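The timestep-dependent forced activation cycle can be sketched in the same spirit; this is an illustrative reading of the formula above rather than the authors' code, and rounding the cycle to the nearest integer is an assumption made here for demonstration.

```python
# Illustrative sketch of the timestep-dependent forced activation cycle N_t.

def forced_activation_cycle(t: int, T: int, n0: float, w_t: float) -> int:
    """N_t = N_0 / (0.5 + w_t * (t / T - 0.5)): larger cycles early, smaller cycles late.
    Rounding to an integer number of steps is an assumption made for illustration."""
    return max(1, round(n0 / (0.5 + w_t * (t / T - 0.5))))

if __name__ == "__main__":
    # Example with N_0 = 3 and a relatively steep slope w_t = 0.4 (illustrative values).
    print([forced_activation_cycle(t, T=50, n0=3, w_t=0.4) for t in (0, 25, 49)])
```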
287
+
288
+ # A.1.2 PARTIAL COMPUTATION ON SELF-ATTENTION
289
+
290
+ In the previous section, we mentioned that partial computation in the Self-Attention module can lead to rapid propagation and accumulation of errors. Therefore, we considered avoiding partial computation in the Self-Attention module, meaning that during non-forced-activation steps the Self-Attention module has $r_{type} = 0$ . In the subsequent Sensitivity Study, we explored a trade-off scheme between the Self-Attention and MLP modules, with the corresponding allocation formulas being $r_{type} = 1 - 0.4\lambda_{type}$ for the Self-Attention module and $r_{type} = 1 + 0.6\lambda_{type}$ for the MLP module. The factors 0.6 and 0.4 are derived from the approximate computational ratio between these two modules in the DiT model.
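A minimal sketch of this module-type allocation follows; the function name and the clamping of the factor at zero are our own assumptions for illustration, not part of the original formulation.

```python
# Illustrative module-type factors for splitting partial computation between
# Self-Attention and MLP, following the allocation formulas above.

def module_type_factor(module: str, lambda_type: float) -> float:
    if module == "self_attention":
        r = 1.0 - 0.4 * lambda_type   # shift partial computation away from attention
    elif module == "mlp":
        r = 1.0 + 0.6 * lambda_type   # and towards the MLP
    else:
        r = 1.0
    return max(0.0, r)  # clamping at zero is an assumption made for illustration

if __name__ == "__main__":
    for lam in (0.0, 1.0, 2.5):
        print(lam, module_type_factor("self_attention", lam), module_type_factor("mlp", lam))
```

Note that at $\lambda_{type} = 2.5$ the Self-Attention factor reaches zero, which matches the setting reported in the Sensitivity Study where no partial computation is performed in attention layers.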
293
+
294
+ # A.1.3 TOKEN SELECTION FOR CFG AND NON-CFG
295
+
296
+ In DiT-based models, the tensors for CFG (classifier-free guidance) and non-CFG are concatenated along the batch dimension. A pertinent question in token selection is whether the same token selection strategy should be applied to both the CFG and non-CFG parts of the same image (i.e., if a token is cached in the CFG part, it should also be cached in the corresponding non-CFG part). We have observed significant sensitivity differences among models with different types of conditioning regarding whether the same selection strategy is used. For instance, in text-to-image and text-to-video models such as PixArt- $\alpha$ and OpenSora, applying independent selection schemes to the CFG and non-CFG parts degrades model performance substantially. Thus, it is necessary to enforce a consistent token selection scheme between the CFG and non-CFG parts.
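To make the shared-selection constraint concrete, the sketch below derives one set of token indices per image and reuses it for both halves of the batch; averaging the scores of the two halves is an illustrative assumption, not necessarily how ToCa merges them.

```python
import torch

def shared_topk_indices(scores: torch.Tensor, k: int) -> torch.Tensor:
    """scores: (2*B, N) token scores with the CFG and non-CFG halves concatenated
    along the batch dimension. Returns (B, k) indices shared by both halves."""
    B = scores.shape[0] // 2
    cfg, non_cfg = scores[:B], scores[B:]
    joint = 0.5 * (cfg + non_cfg)          # one score per image (averaging is an assumption)
    return joint.topk(k, dim=-1).indices   # the same indices are reused for both halves

if __name__ == "__main__":
    torch.manual_seed(0)
    scores = torch.rand(4, 16)             # batch of 2 images, 16 tokens each
    idx = shared_topk_indices(scores, k=4)
    print(idx.shape)                        # torch.Size([2, 4])
```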
297
+
298
+ However, in the class-to-image DiT model, this sensitivity issue is considerably reduced: using independent or identical schemes for the CFG and non-CFG parts results in only minor differences. This can be attributed to the fact that, in text-conditional models, the cross-attention module injects the conditioning information into the CFG and non-CFG parts unevenly, leading to a significant disparity in attention distribution between the two. Conversely, in class-conditional models, the influence on both parts is relatively uniform, so the token attention distributions show no noticeable differences.
299
+
300
+ # A.2 MORE IMPLEMENTATION DETAILS ON EXPERIMENTAL SETTINGS
301
+
302
+ For the DiT-XL/2 model, we uniformly sampled from 1,000 classes in ImageNet (Deng et al., 2009) and generated 50,000 images with a resolution of $256 \times 256$ . We explored the optimal solution for DiT-XL/2 using FID-5k (Heusel et al., 2017) and evaluated its performance with FID-50k. Additionally, sFID, Inception Score, and Precision and Recall were used as secondary metrics. For the PixArt- $\alpha$ model, we used 30,000 captions randomly selected from COCO-2017 (Lin et al., 2014) to generate 30,000 images. We computed FID-30k to assess image quality and used the CLIP Score between the images and prompts to evaluate the alignment between image content and the prompts. For the OpenSora model, we used the VBench framework (Huang et al., 2024), generating 5 videos for each of the 950 VBench benchmark prompts under different random seeds, resulting in a total of 4,750 videos. These videos have a resolution of $480\mathrm{p}$ , an aspect ratio of 16:9, a duration of 2 seconds, and consist of 51 frames saved at a frame rate of 24 frames per second. The model was comprehensively evaluated across 16 aspects: subject consistency, imaging quality, background consistency, motion smoothness, overall consistency, human action, multiple objects, spatial relationships, object class, color, aesthetic quality, appearance style, temporal flickering, scene, temporal style, and dynamic degree.
303
+
304
+ PixArt- $\alpha$ : We set the average forced activation cycle of $\mathrm{ToCa}$ to $\mathcal{N} = 2$ , supplemented with a dynamic adjustment parameter $w_{t} = 0.1$ . The parameter $\lambda_{t} = 0.4$ adjusts $R$ at different time steps, and the average caching ratio is $R = 70\%$ . The parameter $r_{l} = 0.3$ adjusts $R$ at different depth layers. The module preference weight $r_{type} = 1.0$ shifts part of the computation from cross-attention layers to MLP layers.
305
+
306
+ OpenSora: For OpenSora, we fixed the forced activation cycle for temporal attention, spatial attention, and MLP at 3, and set the forced activation cycle for cross-attention to 6. The ToCa strategy ensures that a portion of token computations is conducted solely in the MLP, with $R_{mlp}$ fixed at $85\%$ .
307
+
308
+ DiT: We set the average forced activation cycle of ToCa to $\mathcal{N} = 3$ , supplemented with a dynamic adjustment parameter $w_{t} = 0.03$ to gradually increase the density of forced activations as the sampling steps progress. The parameter $\lambda_{t} = 0.03$ adjusts $R$ at different time steps. Additionally, during the sampling steps in the interval $t \in [50,100]$ , the forced activation cycle is fixed at $\mathcal{N} = 2$ to promote more thorough computation in sensitive regions. The average caching ratio is $R = 93\%$ and the parameter $\lambda_{l} = 0.06$ adjusts $R$ at different depth layers. The module preference weight $r_{type} = 0.8$ means that during steps outside the forced activation ones, no extra computations are performed in attention layers, but additional computations are performed in the MLP layers.
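For reference, the per-model hyper-parameters listed above can be collected into a single configuration table; the dictionary key names below are ours and the structure is only an illustrative summary of the values stated in this section.

```python
# Summary of the ToCa settings reported above (key names are ours, values from the text).
TOCA_CONFIGS = {
    "pixart_alpha": {"N": 2, "w_t": 0.1, "lambda_t": 0.4, "R": 0.70, "r_l": 0.3, "r_type": 1.0},
    "opensora": {"N_temporal": 3, "N_spatial": 3, "N_mlp": 3, "N_cross_attn": 6, "R_mlp": 0.85},
    "dit_xl_2": {"N": 3, "w_t": 0.03, "lambda_t": 0.03, "lambda_l": 0.06, "R": 0.93,
                 "r_type": 0.8, "forced_N_interval": {"steps": (50, 100), "N": 2}},
}

if __name__ == "__main__":
    for model, cfg in TOCA_CONFIGS.items():
        print(model, cfg)
```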
309
+
310
+ ![](images/a0de2745d93f43b8a1baf3129137ab8e4e559c7e3ee7875e9221084584d0b995.jpg)
311
+ Figure 10: Visualization examples for different acceleration methods on DiT.
312
+
313
+ All of our experiments were conducted on 6 A800 GPUs, each with 80GB of memory, running CUDA version 12.1. The DiT model was executed in Python 3.12 with PyTorch version 2.4.0, while PixArt- $\alpha$ and OpenSora were run in Python 3.9. The PyTorch version for PixArt- $\alpha$ was 2.4.0, and for OpenSora it was 2.2.2. The CPUs used across all experiments were 84 vCPUs from an Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz.
314
+
315
+ # A.3 SENSITIVITY STUDY
316
+
317
+ We explored the optimal parameter configuration of the ToCa acceleration scheme on DiT and analyzed the sensitivity of each parameter, using FID-5k and sFID-5k as evaluation metrics. Figure 11 (a) to (f) respectively investigates the effects of the caching ratio weights $\lambda_{l}$ , $\lambda_{type}$ , $\lambda_{t}$ , the Cache Frequency score weight $\lambda_{3}$ , the Uniform Spatial Distribution weight $\lambda_{4}$ , and the dynamic adjustment weight for forced activation $w_{t}$ . We observe the following:
+
+ - (a) The optimal parameter is $\lambda_{l} = 0.06$ , at which the cache ratio varies by approximately $6\%$ between the first and last layers.
+ - (b) The optimal parameter is $\lambda_{type} = 2.5$ , at which point the Self-Attention layer performs no partial computation and the entire partial-computation load is shifted to the MLP layer. As the computation load decreases in the Self-Attention layer and increases in the MLP layer, the generation quality improves steadily.
+ - (c) The optimal parameter is $\lambda_{t} = 0.03$ , and the gap between the best and worst settings is small, suggesting that the model is not particularly sensitive to adjusting the cache ratio along the timesteps.
+ - (d) The optimal weight for the Cache Frequency score is $\lambda_{3} = 0.25$ . As $\lambda_{3}$ increases, generation quality first improves noticeably, but beyond 0.25 the fluctuation is minimal, indicating that the Cache Frequency has reached a relatively uniform state and a dynamic balance in caching among different tokens is achieved.
+ - (e) We searched the Uniform Spatial Distribution score with grid sizes of 2 and 4; generation quality with a grid size of 2 is generally better than with a grid size of 4, suggesting that finer-grained spatial uniformity indeed improves generation quality.
+ - (f) We explored the impact of dynamically adjusting the forced activation cycle and, in addition, fixed the forced activation cycle at 2 for the relatively more sensitive timesteps 50-100. Enforcing this fixed cycle in the 50-100 timestep range significantly improves generation quality, and the optimal configuration is $w_{t} = 0.4$ .
318
+
319
+ In summary, these observations indicate that our method is not sensitive to the choice of hyper-parameters. In fact, our experimental results show that performance remains stable when hyper-parameters are transferred directly between models of the same family, such as DiT models of different sizes.
320
+
321
+ ![](images/5ed6f72ce6e1fc829c1bb8c8475e7868f7fb10c0e1b9bb23760e9a5a2ae3b6ce.jpg)
322
+
323
+ ![](images/8daa688bdb4b77b546d0fa3450014d8debf98473d353912bc88632b15b0b9b66.jpg)
324
+
325
+ ![](images/ab3b0180a2c1fde9d293c287bfe78ef19b301d0272694edd59d3226bf904f3a9.jpg)
326
+
327
+ ![](images/4e0deee1b7bfa1e37d5814bdcd956acc5dfc3d14c02a83f126532a1966d16824.jpg)
328
+ Figure 11: Sensitivity study on different weights. From (a) to (f), the caching ratio weights $\lambda_{l}$ , $\lambda_{type}$ , $\lambda_{t}$ , the Cache Frequency score weight $\lambda_{3}$ , the Uniform Spatial Distribution weight $\lambda_{4}$ , and the dynamic schedule weight for forced activation $w_{t}$ are presented.
329
+
330
+ ![](images/d354a3aa94b98e80deda47540ecccd660bc12f406b721dce97901d68ef0bb4ae.jpg)
331
+
332
+ ![](images/172277b90535315d914826b4172d7bc6b604204c35e794359e9a1a0bcfdebb66.jpg)
333
+
334
+ # A.4 COMPUTATION COMPLEXITY ANALYSIS
335
+
336
+ # A.4.1 MAIN COMPUTATIONS
337
+
338
+ Complexity of Attention Layer. In the Attention layer, tokens are first processed through a linear layer to generate queries, keys, and values. Next, the queries and keys are multiplied using a dot product, passed through a softmax function, and then multiplied with the values. Finally, the result is passed through another linear layer to produce the output. The computational cost of the Self-Attention layer can be expressed as:
339
+
340
+ $$
341
+ \begin{aligned} \mathrm{FLOPs}_{SA} &\approx N \times D \times 3 \times D \times 2 + N^{2} \times D \times 2 + N^{2} \times H \times 5 + N^{2} \times D \times 2 + N \times D \times D \times 2 \\ &= 8 N D^{2} + 4 N^{2} D + 5 N^{2} H \approx O\left(N D^{2}\right) + O\left(N^{2} D\right), \end{aligned} \tag{4}
342
+ $$
343
+
344
+ where $N$ is the number of tokens, $D$ is the hidden dimension of each token, and $H$ is the number of heads $(H\ll D)$ . The computational cost of a Cross-Attention layer can be expressed as:
345
+
346
+ $$
347
+ \begin{aligned} \mathrm{FLOPs}_{CA} &\approx N_{1} \times D \times D \times 2 + N_{2} \times D \times 2 \times D \times 2 + N_{1} \times N_{2} \times D \times 2 \\ &\quad + N_{1} \times N_{2} \times H \times 5 + N_{1} \times N_{2} \times D \times 2 + N_{1} \times D \times D \times 2 \\ &= 4 \left(N_{1} + N_{2}\right) D^{2} + 4 N_{1} N_{2} D + 5 N_{1} N_{2} H \\ &\approx O\left(\left(N_{1} + N_{2}\right) D^{2}\right) + O\left(N_{1} N_{2} D\right) = O\left(\left(N + N_{2}\right) D^{2}\right) + O\left(N N_{2} D\right), \end{aligned} \tag{5}
348
+ $$
349
+
350
+ where $N_{1} = N$ and $N_{2}$ are the numbers of image and text tokens, respectively, $D$ is the hidden dimension of each token, and $H$ is the number of heads $(H \ll D)$ . In the above computations, the softmax operation is approximated as involving 5 floating-point operations per element.
351
+
352
+ Complexity of MLP Layer. The computational cost of MLP layer can be written as:
353
+
354
+ $$
355
+ \begin{aligned} \mathrm{FLOPs}_{MLP} &\approx N \times D_{1} \times D_{2} \times 2 + N \times D_{2} \times 6 + N \times D_{1} \times D_{2} \times 2 \\ &= 4 N D_{1} D_{2} + 6 N D_{2} \\ &= 16 N D^{2} + 24 N D \approx O(N D^{2}), \end{aligned} \tag{6}
356
+ $$
357
+
358
+ where $N$ is the number of tokens, and $D_{1} = D$ and $D_{2} = 4D_{1}$ are the hidden and intermediate dimensions of the MLP, respectively. The activation function of the MLP is approximated as involving 6 floating-point operations per element.
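The sketch below simply evaluates Equations (4)-(6) numerically; it is a direct transcription of the counts above rather than a profiler, and the example sizes in the usage block are illustrative assumptions.

```python
# FLOP estimates following Eqs. (4)-(6): N image tokens, hidden size D, H heads,
# and N2 text tokens for cross-attention (the MLP uses D2 = 4 * D).

def flops_self_attention(N: int, D: int, H: int) -> int:
    return 8 * N * D**2 + 4 * N**2 * D + 5 * N**2 * H

def flops_cross_attention(N1: int, N2: int, D: int, H: int) -> int:
    return 4 * (N1 + N2) * D**2 + 4 * N1 * N2 * D + 5 * N1 * N2 * H

def flops_mlp(N: int, D: int) -> int:
    return 16 * N * D**2 + 24 * N * D

if __name__ == "__main__":
    N, D, H, N2 = 1024, 1152, 16, 120   # illustrative PixArt-like sizes (assumption)
    print(f"SA : {flops_self_attention(N, D, H) / 1e9:.2f} GFLOPs")
    print(f"CA : {flops_cross_attention(N, N2, D, H) / 1e9:.2f} GFLOPs")
    print(f"MLP: {flops_mlp(N, D) / 1e9:.2f} GFLOPs")
```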
359
+
360
+ # A.4.2 COMPUTATION COSTS FROM TOKEN SELECTION
361
+
362
+ Self-Attention score $s_1$ . As mentioned in Section 3.5, the Self-Attention score $s_1$ is computed as $s_1(x_i) = \sum_{j=1}^{N} \alpha_{ji}$ , where $x_i$ is the $i$ -th token and $\alpha_{ji}$ is the element of the self-attention map, i.e., the attention paid by token $x_j$ to token $x_i$ .
363
+
364
+ Table 6: Quantitative comparison on class-to-image generation on ImageNet with a 50-step DDIM sampler as the baseline on DiT-XL/2.
365
+
366
+ <table><tr><td>Method</td><td>Latency(s) ↓</td><td>FLOPs(T) ↓</td><td>Speed ↑</td><td>sFID ↓</td><td>FID ↓</td><td>Precision ↑</td><td>Recall ↑</td></tr><tr><td>DiT-XL/2-G (cfg = 1.50)</td><td>0.455</td><td>23.74</td><td>1.00×</td><td>4.40</td><td>2.43</td><td>0.80</td><td>0.59</td></tr><tr><td>50% steps</td><td>0.238</td><td>11.86</td><td>2.00×</td><td>4.74</td><td>3.18</td><td>0.79</td><td>0.58</td></tr><tr><td>40% steps</td><td>0.197</td><td>9.50</td><td>2.50×</td><td>5.15</td><td>3.81</td><td>0.78</td><td>0.57</td></tr><tr><td>34% steps</td><td>0.173</td><td>8.08</td><td>2.94×</td><td>5.76</td><td>4.58</td><td>0.77</td><td>0.56</td></tr><tr><td>FORA(N = 2.5)</td><td>0.219</td><td>10.48</td><td>2.27×</td><td>6.59</td><td>3.83</td><td>0.79</td><td>0.55</td></tr><tr><td>FORA(N = 3)</td><td>0.211</td><td>8.58</td><td>2.77×</td><td>6.43</td><td>3.88</td><td>0.79</td><td>0.54</td></tr><tr><td>ToCa (N = 3, R = 93%)</td><td>0.227</td><td>10.23</td><td>2.32×</td><td>4.74</td><td>3.04</td><td>0.80</td><td>0.57</td></tr><tr><td>ToCa (N = 4, R = 93%)</td><td>0.209</td><td>8.73</td><td>2.72×</td><td>5.11</td><td>3.60</td><td>0.79</td><td>0.56</td></tr></table>
367
+
368
+ Therefore, the computational complexity of the self-attention score is only $N \approx O(N)$ . In a practical case achieving about $2.3 \times$ acceleration, the computation cost of the Self-Attention score accounts for approximately $0.28\%$ of the main components.
369
+
370
+ Cross-Attention score $s_2$ . As mentioned in section 3.5, the Cross-Attention score $s_2$ is computed as $s_2(x_i) = -\sum_{j=1}^{N} c_{ij} \log(c_{ij})$ , where the $c_{ij}$ is the element in the cross-attention map. Therefore, the computational complexity of the cross-attention score is only $2N \approx O(N)$ . In a practical case achieving about $2.3 \times$ acceleration, the computation cost of the Cross-Attention score accounts for approximately $0.35\%$ of the main components.
371
+
372
+ Cache Frequency score $s_3$ and Uniform Spatial Distribution score $s_4$ . The Cache Frequency score $s_3$ is updated at each step, so its update cost per timestep is $N$ . When the Cache Frequency score is called for summation in practical applications, the computation cost is $2N$ . Thus, the total cost for one layer is $3N \approx O(N)$ . The Uniform Spatial Distribution score $s_4$ is computed by sorting within each grid of size $G \times G$ and weighting the top-scoring tokens. The computation cost is given by $\frac{N}{G^2} \times G^2 \log (G^2) + 2 \times \frac{N}{G^2}$ , where $G$ is the grid size, which is usually small. Therefore the computational complexity of $s_4$ is $O(N)$ . In a practical case achieving about $2.3 \times$ acceleration, the computation cost of the Cache Frequency score $s_3$ accounts for approximately $0.044\%$ and the Uniform Spatial Distribution score $s_4$ accounts for $0.15\%$ of the main computation components. In addition, the computational cost for sorting $N$ tokens is $O(N \log N)$ , which accounts for approximately $0.18\%$ of the main computational cost.
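To illustrate how lightweight these scores are relative to the main layers, the sketch below computes $s_1$ and $s_2$ from given attention maps and combines them with a Cache Frequency count; the per-score normalization and the example weights are assumptions made for demonstration, not the exact ToCa weighting.

```python
import torch

def self_attention_score(attn: torch.Tensor) -> torch.Tensor:
    """attn: (N, N) softmax map, attn[j, i] = attention paid by token j to token i.
    s1(x_i) = sum_j attn[j, i], i.e. the total attention each token receives."""
    return attn.sum(dim=0)

def cross_attention_score(cross_attn: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """cross_attn: (N, N2) image-to-text map. s2(x_i) is the entropy of row i."""
    return -(cross_attn * (cross_attn + eps).log()).sum(dim=-1)

def cache_score(s1, s2, cache_freq, lambdas=(1.0, 1.0, 0.25)):
    # Weighted sum of the individual scores; normalizing each score to zero mean
    # and unit scale before combining is an assumption made for illustration.
    scores = [s1, s2, cache_freq.float()]
    scores = [(s - s.mean()) / (s.std() + 1e-8) for s in scores]
    return sum(w * s for w, s in zip(lambdas, scores))

if __name__ == "__main__":
    torch.manual_seed(0)
    N, N2 = 16, 8
    attn = torch.softmax(torch.rand(N, N), dim=-1)
    cross = torch.softmax(torch.rand(N, N2), dim=-1)
    freq = torch.randint(0, 5, (N,))   # how often each token has been cached so far
    print(cache_score(self_attention_score(attn), cross_attention_score(cross), freq).shape)
```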
373
+
374
+ In summary, although ToCa introduces additional computations, its computational complexity of $O(N)$ is negligible compared to the main computational modules with complexities of $O(N^2 D)$ or $O(ND^2)$ . In practical tests, the time taken for token selection is minimal, typically less than $1\%$ of the main computational cost. At cache steps, taking a caching ratio of $R = 90\%$ as an example, the computational cost of terms with a complexity of $O(ND^2)$ is reduced to $10\%$ of the original, while the computational cost of terms with a complexity of $O(N^2 D)$ is reduced to $1\%$ . (However, as mentioned earlier, in practice, it is more efficient to shift all computations at cache steps to the MLP. Therefore, all terms with a complexity of $O(N^2 D)$ at cache steps are ignored.)
375
+
376
+ # A.5 IMPLEMENTED RESULTS ON CLASS-CONDITIONAL IMAGE GENERATION
377
+
378
+ In addition to the series of experiments conducted using the DDPM (Ho et al., 2020) sampling method on DiT, which are included in the main paper, we also performed validation with the more practically relevant DDIM (Song et al., 2021) sampling method to further demonstrate the effectiveness of ToCa, as shown in Table 6.
379
+
380
+ For instance, $\mathrm{ToCa}$ achieves 1.32 and 0.28 lower sFID and FID values, respectively, than FORA at a similar acceleration ratio of approximately $2.7\times$ , and a 0.21 lower FID than directly reducing the number of sampling steps at an acceleration ratio of approximately $2.5\times$ . As a trade-off between acceleration and performance, we select the scheme $\mathcal{N} = 3$ , $R = 93\%$ as the final recommended configuration for the DDIM sampler.
381
+
382
+ ![](images/7e44fdba7af5cbe7b89c8dc79dbf1da14a38bd1c4e50ead9ac060f694e39bea4.jpg)
383
+
384
+ ![](images/b738ec840c1248ed15e7f73ed0e2e233f97c5a746284f0238397ca2d7caf7cc1.jpg)
385
+
386
+ ![](images/1fc2ed1cf010437211154e8d72aea30996b6b758aaa95ca0e8d2469a078b7d17.jpg)
387
+
388
+ ![](images/645f43e3066308a7b0419e37f0f088cd14045ebf0973b4684018920b00ea49f3.jpg)
389
+ Figure 12: A more detailed workflow for the proposed ToCa. The cache-and-reuse procedure is conducted on the model at all layers and timesteps.
390
+
391
+ Table 7: Quantitative comparison in text-to-image generation for FLUX on Image Reward.
392
+
393
+ <table><tr><td>Method</td><td>Latency(s) ↓</td><td>FLOPs(T) ↓</td><td>Speed ↑</td><td>Image Reward ↑</td></tr><tr><td>FLUX.1-dev (Labs, 2024)</td><td>33.85</td><td>3719.50</td><td>1.00×</td><td>1.202</td></tr><tr><td>68% steps</td><td>23.02</td><td>2529.26</td><td>1.47×</td><td>1.200</td></tr><tr><td>FORA (Selvaraju et al., 2024)</td><td>20.82</td><td>2483.32</td><td>1.51×</td><td>1.196</td></tr><tr><td>ToCa(N=2,R=90%)</td><td>19.88</td><td>2458.06</td><td>1.51×</td><td>1.202</td></tr><tr><td>FLUX.1-schnell (Labs, 2024)</td><td>2.882</td><td>277.88</td><td>1.00×</td><td>1.133</td></tr><tr><td>75% steps</td><td>2.162</td><td>208.41</td><td>1.33×</td><td>1.139</td></tr><tr><td>FORA1(Selvaraju et al., 2024)</td><td>2.365</td><td>225.60</td><td>1.23×</td><td>1.129</td></tr><tr><td>FORA2(Selvaraju et al., 2024)</td><td>2.365</td><td>225.60</td><td>1.23×</td><td>1.124</td></tr><tr><td>FORA3(Selvaraju et al., 2024)</td><td>2.365</td><td>225.60</td><td>1.23×</td><td>1.123</td></tr><tr><td>ToCa(N=2,R=90%)</td><td>1.890</td><td>181.30</td><td>1.53×</td><td>1.134</td></tr></table>
394
+
395
+ # A.6 RESULTS ON HIGHER-RESOLUTION AND MORE ADVANCED TEXT-TO-IMAGE MODELS
396
+
397
+ As shown in Table 7, we compared the performance of FORA and ToCa in generating high-resolution images $(1024\times 1024)$ using the more advanced text-conditional image generation models FLUX.1-dev and FLUX.1-schnell (Labs, 2024). The former uses 50 sampling steps, while the latter, as a more efficient model, uses only 4 sampling steps. The evaluation of generation quality was conducted using Image Reward, a metric better suited to measuring human preference. The generated images were based on 1,632 prompts from the PartiPrompts (Yu et al., 2022) dataset to comprehensively evaluate the generation quality of the acceleration methods on both FLUX.1-dev and FLUX.1-schnell.
398
+
399
+ In this comparison, FORA (Selvaraju et al., 2024) represents acceleration on the FLUX.1-dev model with 50 sampling steps, where caching is performed every other step (i.e., $\mathcal{N} = 2$ ). FORA$^1$, FORA$^2$, and FORA$^3$ correspond to skipping the 2nd, 3rd, and 4th steps, respectively, during the 4-step generation process. ToCa demonstrated nearly lossless performance under a $1.5\times$ acceleration, with Image Reward scores almost identical to the non-accelerated scenario in both the 50-step FLUX.1-dev and the 4-step FLUX.1-schnell models. In contrast, all configurations of FORA showed noticeable quality degradation, both at the $1.5\times$ acceleration setting on the 50-step FLUX.1-dev model and at the $1.2\times$ acceleration setting on the 4-step FLUX.1-schnell model. The corresponding visual results for FLUX.1-schnell are presented in Figure 13, demonstrating the lossless acceleration capability of ToCa.
400
+
401
+ # A.7 DETAILS FOR DISTRIBUTION FIGURES
402
+
403
+ In this section, we provide detailed explanations of the various distribution plots mentioned in the main text as supplementary information.
404
+
405
+ Figure 1 illustrates the distribution of the Frobenius norm of the differences between the feature maps $x_{t}^{L}$ and $x_{t + 1}^{L}$ , which are the outputs of the last DiT Block at each timestep $t$ and the previous timestep $t + 1$ , respectively. The figure also presents the corresponding statistical frequency density of these Frobenius norm values for each token, based on 500 randomly generated samples produced by DiT. This analysis reveals the conclusion that different tokens exhibit varying levels of temporal redundancy.
406
+
407
+ Figure 2 illustrates the varying rates of error accumulation and propagation across different tokens. Specifically, Gaussian noise with an intensity of 0.5 was independently added to the $i_{th}$ token of the first layer at each step. The Frobenius norm was then computed between the output features of all tokens at the last layer of the same step and the corresponding features from the noise-free output. This process was repeated for all steps and all layers. Given that each noise propagation required re-running the inference process, a random subset of 100 samples from the DiT model was selected for this case study, and the noise propagation results were recorded for each iteration. This analysis led to the conclusion that different tokens exhibit varying rates of error accumulation and propagation.
408
+
409
+ ![](images/8f0aec510b5848160252097e9aaa84f44e8f4e6aee4d1e9e59e2bdaf51fd8ff5.jpg)
410
+ Figure 13: Visualization examples for the original FLUX.1-schnell (Labs, 2024) and the proposed ToCa with almost lossless acceleration.
411
+
412
+ ![](images/28a61c820ff1b80b4ddff1405686753f1435c10c7a8237e4b240635a13d6872b.jpg)
413
+ (a)
414
+
415
+ ![](images/7aae30055eac07ad1c483c18f7fead44ea7331145360afe28167dde7d5757c3a.jpg)
416
+ (b)
417
+ Figure 14: Computation ratio for different types of computation layers on different timesteps and layers on DiT. The dark blue lines correspond to fully computed fresh steps. (a) The computation ratio distribution of the Self-Attention layers. As mentioned earlier, performing partial computation on attention layers is less cost-effective compared to MLP layers. Therefore, we do not implement partial computation on attention layers; apart from fresh steps, all other steps directly reuse the corresponding cached features. (b) The computation ratio distribution of the MLP layers. As shown, the computation ratio increases with deeper layers and as the number of inferred timesteps during the inference phase grows.
418
+
419
+ Figure 5(a) illustrates the varying temporal redundancy across layers of different depths. For each timestep, the Frobenius norm of the differences between the features of the current timestep and the corresponding features of the previous timestep at a specific layer depth was computed for each token. The resulting Frobenius norm values were then used to plot the distribution alongside their corresponding statistical frequency densities. To clearly demonstrate the trends, we selected layers 1, 15, and 28 for visualization. The samples used were randomly chosen from 200 DiT samples.
420
+
421
+ Figure 5(b) shows the variation in the offset distribution of the output values from the last layer of a timestep when Gaussian noise is added to a single token at different layer depths within the same timestep. This is measured by adding Gaussian noise with an intensity of 0.5 to a single token in a specific layer at one timestep and comparing the deviation in the output features of the last layer at that timestep with the noise-free scenario. For clarity, this operation was performed on layers 1, 15, and 25 across all timesteps, using 200 randomly selected samples from the DiT model to generate the examples. It is worth noting that Figure 5(b) may appear at first glance to violate the normalization condition for frequency density distributions. This is due to the large variations in Frobenius norm values, which necessitated the use of a logarithmic scale on the horizontal axis. Figure 5(a) and (b) demonstrate two key conclusions: deeper layers exhibit poorer temporal redundancy, but errors introduced in deeper layers have a smaller impact on the output at the same timestep.
422
+
423
+ Figure 5(c) illustrates the results on the PixArt- $\alpha$ model by adding Gaussian noise with an intensity of $0.5 \times \|x_k\|_F$ to a single token $x_k$ in the 10th layer (approximately the middle layer) at each timestep. This process was performed for three different types of layers (self-attention, cross-attention, and MLP). The Frobenius norm of the error induced by the noise was measured on the output of the last layer and normalized by the average Frobenius norm $\|x_k\|_F$ of tokens of the same type. The resulting distribution was plotted using 200 prompts randomly selected from the MS-COCO2017 dataset. It is important to note that the additional normalization step, based on the norm values of the tokens, was necessary because the norm values of tokens in self-attention, cross-attention, and MLP layers typically vary significantly. Normalization ensures a fair comparison across these layer types. Additionally, the distribution for the MLP layer in Figure 5(c) appears more dispersed. This is due to the generally larger variations in MLP output values across different prompts and timesteps. In practice, increasing the number of samples can make the distribution visually denser. However, given that each token requires a separate inference for the error propagation experiments, using 200 prompts already incurs a significant computational cost while remaining sufficient to reveal the trends.
424
+
425
+ Algorithm 1 ToCa
426
+ Input: current timestep $t$ , current layer id $l$ .
427
+ 1: if current timestep $t$ is a fresh step then
428
+ 2: Fully compute $\mathcal{F}^l(x)$ .
429
+ 3: $\mathcal{C}^l(x) := \mathcal{F}^l(x)$ ; # Update the cache.
430
+ 4: else
431
+ 5: $\mathcal{S}(x_i) = \sum_{j=1}^{4} \lambda_j \cdot s_j$ ; # Compute the cache score for each token.
432
+ 6: $\mathcal{I}_{\text{Compute}} := \text{TopK}(\mathcal{S}(x_i), R\%)$ ; # Fetch the index of computed tokens.
433
+ 7: for all tokens $x_i$ do
434
+ 8: if $i \in \mathcal{I}_{\text{Compute}}$ then
435
+ 9: Compute $\mathcal{F}^l(x_i)$ through the neural layer.
436
+ 10: $\mathcal{C}^l(x_i) := \mathcal{F}^l(x_i)$ ; # Update the cache.
437
+ 11: end if
438
+ 12: end for
439
+ 13: end if
440
+ 14: return $\mathcal{F}^l(x)$ . # return features for both cached and computed tokens for the next layer.
441
+
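A compact PyTorch reading of Algorithm 1 for a single layer is sketched below; the score function, cache container, and the fraction of tokens selected for fresh computation are simplified stand-ins chosen for illustration, not the authors' implementation.

```python
import torch

def toca_layer(x, layer, cache, scores, is_fresh_step, compute_fraction=0.1):
    """One ToCa cache-and-reuse step for a single layer (simplified sketch).

    x:      (N, D) token features entering the layer
    layer:  the layer's forward function (e.g. an MLP block)
    cache:  (N, D) features stored from the last step this layer was fully computed
    scores: (N,) cache scores S(x_i); higher means more important to recompute
    """
    if is_fresh_step:
        out = layer(x)                 # fully compute every token
        cache.copy_(out)               # and refresh the cache
        return out

    k = max(1, int(compute_fraction * x.shape[0]))
    idx = scores.topk(k).indices       # tokens selected for fresh computation
    out = cache.clone()                # all other tokens reuse cached features
    out[idx] = layer(x[idx])           # recompute only the selected tokens
    cache[idx] = out[idx]              # update the cache for those tokens
    return out

if __name__ == "__main__":
    torch.manual_seed(0)
    N, D = 64, 32
    mlp = torch.nn.Sequential(torch.nn.Linear(D, 4 * D), torch.nn.GELU(), torch.nn.Linear(4 * D, D))
    x, cache = torch.randn(N, D), torch.zeros(N, D)
    with torch.no_grad():
        y_fresh = toca_layer(x, mlp, cache, torch.rand(N), is_fresh_step=True)
        y_cache = toca_layer(x, mlp, cache, torch.rand(N), is_fresh_step=False)
    print(y_fresh.shape, y_cache.shape)
```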
442
+ ![](images/e5b9177ddf5eedf9ff4cb60f4a787f4d026746af5fdedef53e20afb870689607.jpg)
443
+ Figure 15: Pareto curve with FLOPs-FID to better evaluate the performance of $\mathrm{ToCa}$ on DiT with 50 ddim sampling steps as baseline.
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:be1717796ff37871f7a68ee6123262c065a87d5c0d70ffc9d7221e59fad9b4ac
3
+ size 1592207
acceleratingdiffusiontransformerswithtokenwisefeaturecaching/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:26a4aa48a833c89719135470276e0c043753c22a42a18970f92d8c8897b8b908
3
+ size 785427
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f2d3d8c47abb9208bac9d1ab5f5c87b4a7c34484e7f20d548a1e45ae872f8f0c
3
+ size 97607
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a768afa9e3955452ad09eb08cc7c926a8474652b1c880745e06a6866b8ca219a
3
+ size 115099
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/ad761159-4432-4ee7-b97b-d14149889864_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:581aeb065191f0fc73e190e054db909639684d1c7015fc5e411069152b2bbba6
3
+ size 399988
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/full.md ADDED
@@ -0,0 +1,333 @@
1
+ # ACCELERATING INFERENCE OF RETRIEVAL-AUGMENTED GENERATION VIA SPARSE CONTEXT SELECTION
2
+
3
+ Yun Zhu $^{1}$ , Jia-Chen Gu $^{3}$ , Caitlin Sikora $^{2}$ , Ho Ko $^{2}$ , Yinxiao Liu $^{1}$ , Chu-Cheng Lin $^{2}$ , Lei Shu $^{1}$ , Liangchen Luo $^{1}$ , Lei Meng $^{1}$ , Bang Liu $^{4}$ , Jindong Chen $^{1}$
4
+
5
+ $^{1}$ Google DeepMind
6
+
7
+ 2Google
8
+
9
+ <sup>3</sup>University of California, Los Angeles
10
+
11
+ 4Université de Montréal & Mila
12
+
13
+ {yunzhu,csikora,hoko,canoee,kitsing}@google.com {leishu,luolc,leimeng,jdchen}@google.com
14
+
15
+ gujc@ucla.edu,bang.liu@umontreal.ca
16
+
17
+ # ABSTRACT
18
+
19
+ Large language models (LLMs) augmented with retrieval exhibit robust performance and extensive versatility by incorporating external contexts. However, the input length grows linearly in the number of retrieved documents, causing a dramatic increase in latency. In this paper, we propose a novel paradigm named Sparse RAG, which seeks to cut computation costs through sparsity. Specifically, Sparse RAG encodes retrieved documents in parallel, which eliminates latency introduced by long-range attention of retrieved documents. Then, LLMs selectively decode the output by only attending to highly relevant caches auto-regressively, which are chosen via prompting LLMs with special control tokens. It is notable that Sparse RAG combines the assessment of each individual document and the generation of the response into a single process. The designed sparse mechanism in a RAG system can facilitate the reduction of the number of documents loaded during decoding for accelerating the inference of the RAG system. Additionally, filtering out undesirable contexts enhances the model's focus on relevant context, inherently improving its generation quality. Evaluation results on four datasets show that Sparse RAG can be used to strike an optimal balance between generation quality and computational efficiency, demonstrating its generalizability across tasks.
20
+
21
+ # 1 INTRODUCTION
22
+
23
+ Large language models (LLMs) have attracted increasing attention and exhibited impressive abilities to understand instructions and generate fluent outputs in natural language (Brown et al., 2020; Ouyang et al., 2022; Touvron et al., 2023; Team et al., 2023). Nevertheless, LLMs inevitably manifest hallucinations (Ji et al., 2023) due to their struggle with factual errors and inability to secure the accuracy of generated text solely by the parametric knowledge they encapsulate (Zhang et al., 2023; Muhlgay et al., 2024). Feeding the source of truth to LLMs in the format of retrieved context segments (Reid et al., 2024) alleviates this problem. The technique is widely known as Retrieval-Augmented Generation (RAG) (Lewis et al., 2020b; Li et al., 2022; Guu et al., 2020a).
24
+
25
+ Although the RAG framework is empirically shown to be effective, it can be expensive to scale up. This is because it requires prepending relevant documents retrieved from an external knowledge corpus to the queries (Guu et al., 2020a). As a result, the input length grows linearly in the number of documents, causing a dramatic increase in latency when using a standard Transformer whose latency scales quadratically with the input length. Some prior works such as Fusion-in-Decoder (FiD) (Izacard & Grave, 2021) and Parallel Context Windows (PCW) (Ratner et al., 2023) have proposed to alleviate this issue. Yet these methods fail to strike an optimal balance between generation quality and computational efficiency. FiD was originally designed for the encoder-decoder architecture, and thus is not compatible with currently prevalent decoder-only architectures without significant changes.
26
+
27
+ ![](images/f5aa3715cbb14d0d5e8e34c621af213e8500ef32e7b1078f4cd226979ec011ef.jpg)
28
+ Figure 1: An overview of Sparse RAG at inference. Each of the retrieved documents $d_{i}$ is assessed for relevance by the LLM and irrelevant documents are dropped. Then, the KV caches $c_{i}$ for the remaining documents are used for generation.
29
+
30
+ While PCW can be applied to decoder-only LLMs, it only speeds up the model pre-filling and still incurs high latency since the whole context window cache is still being attended to when decoding each token. Moreover, the heavy reliance of generation on the retrieved knowledge raises significant concerns about the model's behavior and performance in scenarios where retrieval may fail or return inaccurate results (Shi et al., 2023). A typical approach for mitigating this issue is to rely on an external classifier to rank or filter the documents before prepending them to the input (Yan et al., 2024), but this process requires extra model calls which adds new complexity to inference.
31
+
32
+ In light of the issues above, we propose a novel paradigm called Sparse RAG. It operates through massive pre-filling, where the key-value cache is generated by a single forward pass of the input tokens, and selective decoding, where the output is generated by attending to only highly relevant tokens auto-regressively. Previous works, in which the length of the retrieved contexts during pre-filling is equal to that during decoding, are called dense-RAG in this paper. Sparse RAG, on the other hand, causes the decoding context to be significantly shorter than the pre-filled context, where retrieved documents that are not highly relevant to the input query have been dynamically dropped. Furthermore, Sparse RAG combines the assessment of each individual context and the generation of the response into a single process, in which special control tokens are used to prompt the LLM to assess the relevance of each retrieved context, and then only the key-value caches of the most relevant contexts are loaded for decoding using another control token.
33
+
34
+ The design of Sparse RAG has two additional unique advantages. First, by reducing the number of key-value cache loads during the decoding process, the LLM can achieve lower latency where it is typically constrained by memory usage. Second, filtering out undesirable contexts enhances the model's focus on relevant contexts, inherently improving the quality of the generated output. To demonstrate the effectiveness and efficiency of the proposed method, we evaluate on four datasets: PopQA (Mallen et al., 2023), QMSum (Min et al., 2023), TriviaQA (Joshi et al., 2017), and HotpotQA (Yang et al., 2018). Experimental results show that Sparse RAG can achieve similar or better quality and much better latency compared with standard dense-RAG or PCW-RAG approaches. Moreover, the choice of the four datasets, which include short- and long-form generation, question answering, summarization, and multi-hop reasoning, demonstrates the generalizability of the Sparse RAG approach.
35
+
36
+ # 2 RELATED WORK
37
+
38
+ Retrieval-Augmented Generation RAG is a family of techniques for generating output while using retrieved nearest-neighbor context data as a reference. It typically involves two stages: retrieval and generation. Retrieval finds most similar contexts based on BM25 or learned embeddings, where
39
+
40
+ Table 1: Comparisons with existing RAG-related works.
41
+
42
+ <table><tr><td>Approach</td><td>Corrective</td><td>No extra model</td><td>Prefill efficiency</td><td>Decode efficiency</td></tr><tr><td>RAG (Lewis et al., 2020b)</td><td>No</td><td>Yes</td><td>No</td><td>No</td></tr><tr><td>Corrective RAG (Yan et al., 2024)</td><td>Yes</td><td>No</td><td>No</td><td>No</td></tr><tr><td>PCW RAG (Ratner et al., 2023)</td><td>No</td><td>Yes</td><td>Yes</td><td>No</td></tr><tr><td>Sparse RAG (Ours)</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr></table>
43
+
44
+ the context can be represented as token embeddings (Khandelwal et al., 2020; Yogatama et al., 2021), dense embeddings (de Jong et al., 2022) or raw text (Guu et al., 2020b; Izacard & Grave, 2021; Lewis et al., 2020b). Once those contexts are retrieved, different architectures are leveraged to incorporate them into the model. Popular approaches include concatenation (Izacard & Grave, 2021; Lewis et al., 2020b) and cross-attention (Borgeaud et al., 2022; Lewis et al., 2020a).
45
+
46
+ In recent years, LLM architectures have evolved towards decoder-only models with significantly larger sizes. To this end, concatenation of raw text (Lewis et al., 2020b) is becoming popular for its simplicity and practicality, and many advanced approaches have been developed on top of it. Yoran et al. (2024) designed an NLI model to identify irrelevant contexts and improve robustness. Jiang et al. (2023b) actively anticipate future content and decide when and what to retrieve in long-form generation. Self-RAG (Asai et al., 2024) is proposed to selectively retrieve knowledge on an as-needed basis, by introducing a separate critic model. The critic model generates "reflection" tokens to indicate whether to retrieve information. It runs inference on each document once and uses additional "reflection" tokens to select excerpts from the documents to use for generating the response. In contrast, we unify the generation of the special control tokens and regular vocabulary tokens with one single model, eliminating the additional model and computational overhead. CRAG (Yan et al., 2024) explores and designs corrective strategies for RAG to improve its robustness of generation. Specifically, an external T5 model is trained and used to determine the usefulness of the retrieved context. Generally, these approaches explore retrieval as a useful tool to augment generation and whether retrieval is necessary.
47
+
48
+ Efficiency in RAG The efficiency of LLM inference is a widely explored research area, where different categories of approaches have been studied, often targeting LLM inference in general rather than RAG specifically. Some works focus on architecture-level acceleration; examples include efficient attention (Shazeer, 2019), Mixture of Experts (Fedus et al., 2022), Transformer-alternative architectures (Gu & Dao, 2024), etc. Other works explore algorithm-level acceleration like quantization (Lin et al., 2024) or speculative decoding (Leviathan et al., 2023).
49
+
50
+ Recently, RAG-specific methods have been explored. RAG Cache (Jin et al., 2024), for example, was proposed as a multilevel dynamic caching system tailored for RAG from the system perspective. Another approach used in FiD (Izacard & Grave, 2021) and PCW (Ratner et al., 2023) parallelizes processing of individual documents and eliminates cross-document attention computations. FiD encodes each retrieved passage independently from other passages and decodes by attending over the concatenation of the resulting representations of all the retrieved passages. PCW carves a long context into chunks ("windows"), restricting the attention mechanism to apply only within each window, and re-uses the positional encodings across the windows.
51
+
52
+ Comparison with previous works that are the most relevant to our work is illustrated in Table 1. This work aims to strike an optimal balance between generation quality and computational efficiency. It is notable that the extra classifier in CRAG requires maintaining an extra model with more complex serving infrastructure; when there are $N$ contexts retrieved, there are $N + 1$ model runs in total. Our work also relies on classification to refine the retrieved documents, but it is handled by an "internal" classification process that is aligned with the generation process, so the total number of model runs in our case is 1.
53
+
54
+ # 3 SPARSE RAG
55
+
56
+ Sparse RAG is designed for the decoder-only model architecture, which is the typical architecture of most popular LLMs. Figure 1 presents an overview of Sparse RAG inference, in which document relevance assessment is used to improve the robustness of generation. The key hypothesis of our approach is that the RAG task and per context assessment are similar tasks and the model can handle both in one shot using simple and effective training and inference techniques.
57
+
58
+ # 3.1 TRAINING PROCESS
59
+
60
+ Our work assumes that a certain amount of RAG training data on the order of thousands of examples is accessible, which allows us to effectively tailor and adapt existing LLMs to our specific needs. In the training phase, we integrate an additional Per Context Assessment (PCA) task into the training mixture. By incorporating the PCA task, we aim to enhance the model's ability to assess the relevance of retrieved documents and respond accurately in different RAG scenarios.
61
+
62
+ Data Augmentation with LLMs For typical RAG data, one question-answer pair can be mapped to multiple retrieved contexts using either BM25 or an existing stand-alone retriever. However, there are cases where no golden labels indicating the quality of each retrieved context are available.
63
+
64
+ To collect these missing labels, we leveraged two off-the-shelf LLMs, PALM2 (Anil et al., 2023) and Gemini (Team et al., 2023), to assess each context. We observe empirically that a second round of prompting for critique, especially using a different model from the initial round, ensures the best quality labels. We provide our prompts in Table 11 in the Appendix. We compare different model combinations for labeling to human ground truth labels in Section 4.
65
+
66
+ Multitasking Data Format The LLM is trained on a mixture of two types of tasks: Per Context Assessment (relevance rating) and answer generation. Specifically, we format the inputs and outputs of the two task types as
67
+
68
+ - Per Context Assessment: $\{\text{Question}\} \{\text{Context}\} \{\text{Control\_Assessment}\} \{\text{Rating}\}$
69
+ - Generation: $\{\text{Question}\} \{\text{Context}_1\} \ldots \{\text{Context}_N\} \{\text{Control}_{\text{Generation}}\} \{\text{Answer}\}$
70
+
71
+ where $\{\text{Rating}\}$ ("yes" for relevant or "no" for irrelevant) and $\{\text{Answer}\}$ are the targets for the generative tasks and all tokens before them are inputs. $\{\text{Control\_Assessment}\}$ and $\{\text{Control\_Generation}\}$ are special control tokens to ensure the LLM can differentiate the two tasks.
72
+
73
+ Parallel Contexts Since each context is rated independently in the PCA task, in which each example contains only one context, we introduce independence in the primary RAG generation training task as well so that the two tasks can reuse the KV cache at inference. Thus for the generation task, we enforce no cross-attention between different retrieved contexts as in Parallel Context Windows (Ratner et al., 2023).
74
+
75
+ Specifically, we modify two things in the standard LM training process. First, we change the attention masks to be block-wise, and restrict Context $_i$ and Context $_j$ from attending to one another. $\{Question\}$ , $\{Context_i\}$ and $\{Control\_Generation\}$ use the default causal attention mechanism, in which the latter tokens attend to all previous ones. Second, we use "parallel incremental" positional encodings to mimic the situation in which all retrieved contexts directly follow the query while maintaining the typical position ID of the $\{Control\_Generation\}$ and $\{Answer\}$ tokens as shown below.
76
+
77
+ ![](images/3a2eb8b4c19d1e70b59e2e55149b532e61968bb4c468b18b190b1b39c03f0d90.jpg)
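A minimal sketch of these two modifications (block-wise attention masking between contexts and parallel incremental position IDs) is given below; the exact layout of question, context, and control/answer tokens, and the helper name, are assumptions made for illustration rather than the paper's implementation.

```python
import torch

def sparse_rag_mask_and_positions(q_len, ctx_lens, tail_len):
    """Build a causal attention mask in which contexts cannot attend to each other,
    plus 'parallel incremental' position ids where every context restarts right
    after the question, while the tail (control + answer) keeps normal positions."""
    total = q_len + sum(ctx_lens) + tail_len
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))   # causal base

    # Block cross-context attention between Context_i and Context_j (i != j).
    starts, s = [], q_len
    for L in ctx_lens:
        starts.append(s)
        s += L
    for i, (si, Li) in enumerate(zip(starts, ctx_lens)):
        for j, (sj, Lj) in enumerate(zip(starts, ctx_lens)):
            if i != j:
                mask[si:si + Li, sj:sj + Lj] = False

    # Position ids: question 0..q_len-1, each context q_len..q_len+L-1 in parallel,
    # and the tail continues after the longest context.
    pos = list(range(q_len))
    for L in ctx_lens:
        pos += list(range(q_len, q_len + L))
    tail_start = q_len + max(ctx_lens)
    pos += list(range(tail_start, tail_start + tail_len))
    return mask, torch.tensor(pos)

if __name__ == "__main__":
    mask, pos = sparse_rag_mask_and_positions(q_len=4, ctx_lens=[5, 5, 3], tail_len=6)
    print(mask.shape, pos.tolist())
```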
78
+
79
+ # 3.2 INFERENCE PROCESS
80
+
81
+ Given the question and retrieved contexts, Sparse RAG handles the assessment task and generation task in one single pass.
82
+
83
+ Per Context Assessment Similar to the training process, when pre-filling the KV cache, each retrieved context is treated independently by masking cross-document attention. The KV cache is used to score each context by concatenating the $\{Control\_Assessment\}$ token. The relevance score is the probability of "yes" (indicating relevance) being the next token. The position encoding allows this to happen in parallel.
84
+
85
+ Generation The generation uses a filtered KV cache, where only $K$ out of $N$ cached values are loaded. We use a simple threshold-based filtering approach: we drop a context when its score is less than a threshold $\sigma$ . Once the cached KV vectors are loaded, the $\{Control\_Generation\}$ token prompts the model to generate the answer.
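A small sketch of the threshold-based filtering step follows; normalizing over just the "yes"/"no" rating tokens (rather than the full vocabulary softmax) and the helper name are simplifications made here for illustration.

```python
import torch

def select_contexts(yes_logits, no_logits, sigma=0.5):
    """yes_logits / no_logits: (num_contexts,) logits of the 'yes'/'no' rating tokens
    produced after the Control_Assessment token. Returns indices of kept contexts.
    Restricting the softmax to the two rating tokens is a simplifying assumption."""
    probs = torch.softmax(torch.stack([yes_logits, no_logits], dim=-1), dim=-1)[..., 0]
    keep = (probs >= sigma).nonzero(as_tuple=True)[0]
    return keep, probs

if __name__ == "__main__":
    yes = torch.tensor([2.0, -1.0, 0.3])
    no = torch.tensor([0.0, 1.5, 0.2])
    keep, probs = select_contexts(yes, no, sigma=0.5)
    print(keep.tolist(), [round(p, 2) for p in probs.tolist()])
```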
86
+
87
+ Table 2: Auto-rater comparison to ground truth.
88
+
89
+ <table><tr><td colspan="2">Auto-labeling method</td><td rowspan="2">Average F1</td><td rowspan="2">F1 Label 0</td><td rowspan="2">F1 Label 1</td></tr><tr><td>Rater model</td><td>Critic model</td></tr><tr><td>PALM2 XL</td><td>n/a</td><td>0.729</td><td>0.765</td><td>0.694</td></tr><tr><td>PALM2 XL</td><td>PALM2 XL</td><td>0.781</td><td>0.820</td><td>0.741</td></tr><tr><td>Gemini Ultra</td><td>n/a</td><td>0.761</td><td>0.807</td><td>0.716</td></tr><tr><td>Gemini Ultra</td><td>Gemini Ultra</td><td>0.704</td><td>0.747</td><td>0.660</td></tr><tr><td>PALM2 XL</td><td>Gemini Ultra</td><td>0.728</td><td>0.776</td><td>0.680</td></tr><tr><td>Gemini Ultra</td><td>PALM2 XL</td><td>0.821</td><td>0.861</td><td>0.781</td></tr></table>
90
+
91
+ # 4 EVALUATION OF PER CONTEXT ASSESSMENT
92
+
93
+ # 4.1 NEW ANNOTATIONS: NATURAL QUESTIONS PER CONTEXT ASSESSMENT
94
+
95
+ We isolated a subset of 50 questions, each with 10 retrieved contexts, from the Natural Questions dataset. We assigned 3 raters to each question-context pair from a pool of 7 raters and provided the instructions in Section A.2.
96
+
97
+ We aggregated responses for all 3 raters for each context, selecting the majority decision 0 or 1 for each context. We found that raters unanimously agreed on 351 out of 500 contexts, with $30\%$ of the documents considered relevant. For questions where raters did not unanimously agree, a specialist rater was assigned to investigate more carefully and set the best label, correcting mistakes of the other raters. This resulted in 6 additional documents considered relevant out of the entire dataset, boosting the portion of relevant documents to $31\%$ and slightly increasing alignment with the auto-rater approaches (average F-score increase of $1.4\%$ across auto-rater methods using these corrections as the ground truth).
98
+
99
+ # 4.2 LLM RATER COMPARISONS
100
+
101
+ We tested several different LLM-based automatic labeling methods (different combinations of models and prompts) for creating training data for the classifier in Sparse RAG. We compared several of these auto-rater approaches by creating a ground-truth relevance dataset using human labeling. The auto-rater comparison using the revised human labels as the ground-truth is shown in Table 2. We find that combining two different models in two rounds (initial prompting and critique) provides the labels that are most closely aligned with the human labels. We hypothesize that the different representations learned by two different models are able to capture the most nuance in the input sequences, leading to better relevance judgements. We also observe that Gemini Ultra appears slightly less effective at critiquing model outputs than PALM2 XL.
102
+
103
+ # 5 EVALUATION OF SPARSE RAG
104
+
105
+ # 5.1 BENCHMARKS AND METRICS
106
+
107
+ PopQA is a large-scale open-domain question answering (QA) dataset, consisting of 14k entity-centric QA pairs. Each question is created by converting a knowledge tuple retrieved from Wikidata using a template. We follow the setup from (Yan et al., 2024) and use Contriever (Izacard et al., 2022) to retrieve the related contexts. Since PopQA does not include per-context assessment relevance labels, we adopted the "Gemini + PALM2" combination to create training labels. We split the dataset into training, validation and test sets with 8:1:1 ratio. Since the answer is usually short, we report Exact Match (EM) and F1 scores.
108
+
109
+ QMSum (Zhong et al., 2021) is a human-annotated benchmark for a query-based multi-domain meeting summarization task, which consists of 1,808 query-summary pairs over 232 meetings in multiple domains. To adapt it to the RAG domain, we divide each conversation into different contexts where each turn in the conversation is a context and the average context contains 300 words. Note that this dataset has human labeled per-context assessments that we leverage during training. We use 250 training examples (one per meeting), 70 validation examples and 77 test examples. The targets for this dataset are longer and we report RougeLSum and F1 scores.
110
+
111
+ TriviaQA (Joshi et al., 2017) is a realistic text-based question answering dataset that includes 950K question-answer pairs from 662K documents collected from Wikipedia and the web. Similar to PopQA, we used the "Gemini + PALM2" combination to create relevance training labels. We randomly selected 8k training examples and 500 validation and test examples each. We report Exact Match (EM) and F1 scores.
112
+
113
+ HotpotQA (Yang et al., 2018) is a question answering dataset containing about 113K crowdsourced questions that are constructed to require the introduction paragraphs of at least two Wikipedia articles to answer, thus requiring multi-hop reasoning. We sample 6k training examples, 600 validation examples, and 600 test examples. We report Exact Match (EM) and F1 scores.
114
+
115
+ These datasets were selected to demonstrate generalizability across question answering tasks requiring single- or multi-hop reasoning, diverse context and output lengths, and summarization capabilities.
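+ 
+ Several of these benchmarks report Exact Match (EM) and token-level F1 over short answers; for reference, a minimal SQuAD-style sketch of these metrics is shown below. The exact answer normalization used in the paper is not specified, so treat the details as assumptions.
+ 
+ ```python
+ import re
+ import string
+ from collections import Counter
+ 
+ def normalize(text):
+     """Lowercase, strip punctuation and articles, and collapse whitespace."""
+     text = text.lower()
+     text = "".join(ch for ch in text if ch not in set(string.punctuation))
+     text = re.sub(r"\b(a|an|the)\b", " ", text)
+     return " ".join(text.split())
+ 
+ def exact_match(prediction, gold_answers):
+     """1.0 if the normalized prediction matches any accepted answer, else 0.0."""
+     return float(any(normalize(prediction) == normalize(g) for g in gold_answers))
+ 
+ def f1_score(prediction, gold):
+     """Token-level F1 between a prediction and a single gold answer."""
+     pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
+     common = Counter(pred_tokens) & Counter(gold_tokens)
+     overlap = sum(common.values())
+     if overlap == 0:
+         return 0.0
+     precision = overlap / len(pred_tokens)
+     recall = overlap / len(gold_tokens)
+     return 2 * precision * recall / (precision + recall)
+ ```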
116
+
117
+ # 5.2 BASELINES
118
+
119
+ RAG We evaluated the performance of standard concatenation-based RAG, where an LLM generates output given the query prepended with all the top-ranked documents, using the same retriever as the Sparse RAG system. RAG is finetuned with the training data.
120
+
121
+ Off-the-shelf We report a variant of concatenation-based RAG where the model is not finetuned with training data.
122
+
123
+ LLMLingua In this approach, an external LLM is called to compress the prompt (Jiang et al., 2023a). In our comparison, we set the compression ratio to match that of Sparse RAG for fairness.
124
+
125
+ PCW-RAG We applied Parallel Context Windows (Ratner et al., 2023) to the RAG process, where no cross-attention is applied between documents. The model is finetuned with the training data.
126
+
127
+ Corrective RAG We evaluate CRAG using an external T5-XXL classifier trained using heuristic labels (Yan et al., 2024). This classifier is used to process all the documents and determine their ranking. Note that, to facilitate a fair comparison, we did not adopt the "web search" feature of that work.
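+ 
+ The parallel context encoding shared by PCW-RAG and Sparse RAG amounts to a block-structured attention mask: each context attends to the (cached) query and to itself, but never to another context. The following is a minimal sketch of that mask structure, not the authors' implementation; causal masking within each block is omitted for brevity.
+ 
+ ```python
+ import numpy as np
+ 
+ def parallel_context_mask(query_len, doc_lens):
+     """Boolean attention mask (True = may attend) for parallel context encoding.
+ 
+     Token layout: [query tokens | doc_1 tokens | doc_2 tokens | ...].
+     Every token may attend to the query prefix and to tokens of its own
+     document; cross-document attention is masked out.
+     """
+     total = query_len + sum(doc_lens)
+     mask = np.zeros((total, total), dtype=bool)
+     mask[:, :query_len] = True              # all tokens see the query prefix
+     start = query_len
+     for length in doc_lens:
+         end = start + length
+         mask[start:end, start:end] = True   # within-document attention only
+         start = end
+     return mask
+ ```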
128
+
129
+ # 5.3 EXPERIMENTAL CONFIGURATION
130
+
131
+ The base LLMs used in this paper are from the Gemini family (Team et al., 2023). Although our approach could be applied at different training stages of the model, we apply LoRA tuning (Hu et al., 2022) to enforce alignment on top of the foundation LLMs due to its low resource requirements and wide usage. Note that the same LoRA tuning on the training data is applied to Sparse RAG and all baselines. In all our experiments, we apply LoRA in self-attention and use a default rank of 4. By default, we use the XXS size of Gemini, which can run on-device.
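+ 
+ Since Gemini tuning is internal, an open-source analog of this setting (an assumption, with a stand-in checkpoint name) would look roughly like the following with the Hugging Face `peft` library.
+ 
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+ 
+ # Stand-in for the on-device Gemini XXS model used in the paper.
+ base = AutoModelForCausalLM.from_pretrained("your-small-decoder-checkpoint")
+ 
+ lora_config = LoraConfig(
+     r=4,                              # default rank used in the paper
+     lora_alpha=8,                     # illustrative; not reported in the paper
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # self-attention projections
+     lora_dropout=0.05,                # mirrors the training dropout in Section 5.3
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_config)
+ ```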
132
+
133
+ During training, we use 64 Tensor Processing Unit (TPU) v3 chips for PopQA and 128 chips for the other datasets. The batch size is 64. We use the Adafactor optimizer (Shazeer & Stern, 2018) with a learning rate of 0.003. The training dropout rate is 0.05. We use the validation-set metrics to pick the best checkpoint. During inference, the temperature is set to 0.5. Unless specifically noted, we use sampling-based decoding with a single sample for our experiments.
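+ 
+ The corresponding optimizer setup is short; a sketch with the Hugging Face Adafactor implementation is shown below (the paper does not name its training framework, so this mapping is an assumption).
+ 
+ ```python
+ from transformers.optimization import Adafactor
+ 
+ # Adafactor with the fixed learning rate from Section 5.3; the Hugging Face
+ # implementation needs relative-step updates disabled to honor a fixed lr.
+ optimizer = Adafactor(
+     model.parameters(),        # `model` is the LoRA-wrapped model from the sketch above
+     lr=3e-3,
+     relative_step=False,
+     scale_parameter=False,
+     warmup_init=False,
+ )
+ ```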
134
+
135
+ # 5.4 INFERENCE SETUP AND METRICS
136
+
137
+ Evaluation of Sparse RAG was conducted on a Samsung S21 Ultra, utilizing the device's CPU to assess real-world performance on a relatively mid-tier smartphone rather than the latest flagship models. The inference configuration used fixed token lengths for queries, contexts, and generated responses. This setup allows us to evaluate the system's efficiency and effectiveness under resource constraints typical of mobile devices, providing insight into its practical applicability for on-device question answering. Specifically, the overall inference process comprises two stages.
138
+
139
+ Prefill stage For the baseline RAG model, we measure the total time taken to process all input tokens (question and all contexts). For PCW RAG and Sparse RAG models, we take advantage of these models' ability to cache the question KV vectors. We first measure the time to process the
140
+
141
+ Table 3: Quality & efficiency tradeoff for both short-form and long-form generation tasks; Sparse RAG achieves both higher quality and efficiency compared to "dense" RAG approaches.
142
+
143
+ <table><tr><td>Dataset</td><td>Metrics</td><td>Off-the-shelf</td><td>LLMLingua</td><td>RAG</td><td>PCW-RAG</td><td>CRAG</td><td>Sparse RAG</td></tr><tr><td>-</td><td>ES</td><td>56.28</td><td>-</td><td>56.28</td><td>147.58</td><td>-</td><td>147.58</td></tr><tr><td rowspan="4">PopQA</td><td>EM</td><td>0.33</td><td>1.96</td><td>65.43</td><td>65.04</td><td>66.52</td><td>67.71</td></tr><tr><td>F1</td><td>12.76</td><td>12.15</td><td>69.99</td><td>69.54</td><td>70.99</td><td>71.16</td></tr><tr><td>K</td><td>20.00</td><td>7.84</td><td>20.00</td><td>20.00</td><td>8.9</td><td>7.84</td></tr><tr><td>DS</td><td>6.65</td><td>-</td><td>6.65</td><td>6.65</td><td>-</td><td>12.28</td></tr><tr><td rowspan="4">QMSum</td><td>F1</td><td>20.37</td><td>22.28</td><td>21.43</td><td>20.18</td><td>-</td><td>23.96</td></tr><tr><td>RougeSum</td><td>12.67</td><td>18.37</td><td>18.20</td><td>16.95</td><td>-</td><td>20.10</td></tr><tr><td>K</td><td>20.00</td><td>4.45</td><td>20.00</td><td>20.00</td><td>-</td><td>4.45</td></tr><tr><td>DS</td><td>6.65</td><td>-</td><td>6.65</td><td>6.65</td><td>-</td><td>16.05</td></tr><tr><td rowspan="4">TriviaQA</td><td>EM</td><td>0.00</td><td>2.6</td><td>46.20</td><td>46.00</td><td>-</td><td>47.50</td></tr><tr><td>F1</td><td>12.21</td><td>16.30</td><td>53.03</td><td>53.20</td><td>-</td><td>55.10</td></tr><tr><td>K</td><td>20.00</td><td>9.90</td><td>20.00</td><td>20.00</td><td>-</td><td>9.90</td></tr><tr><td>DS</td><td>6.65</td><td>-</td><td>6.65</td><td>6.65</td><td>-</td><td>10.18</td></tr><tr><td rowspan="4">HotpotQA</td><td>EM</td><td>0.00</td><td>1.33</td><td>43</td><td>38.83</td><td>-</td><td>43.50</td></tr><tr><td>F1</td><td>12.21</td><td>14.67</td><td>55.85</td><td>50.03</td><td>-</td><td>55.36</td></tr><tr><td>K</td><td>10.00</td><td>6.50</td><td>10.00</td><td>10.00</td><td>-</td><td>6.50</td></tr><tr><td>DS</td><td>10.31</td><td>-</td><td>10.31</td><td>10.31</td><td>-</td><td>13.00</td></tr></table>
144
+
145
+ question alone. Then, we measure the time to process each context given the pre-processed, KV-cached question. We use Encoding Speed (ES), measured in tokens per second (t/s), to quantify the efficiency of the prefill stage.
146
+
147
+ Decoding stage To assess decoding speed comprehensively, we generate output sequences for the same question while varying the amount of relevant context considered, ranging from the single most relevant document up to the top $K$. For each context size, we produce output sequences of different lengths. This systematic approach allows us to evaluate the impact of both context size and response length on decoding speed. We use Decoding Speed (DS), measured in tokens per second (t/s), to quantify the efficiency of the decoding stage.
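+ 
+ A sketch of how the two throughput metrics could be measured is shown below; `model.prefill` and `model.decode` are placeholder method names for the on-device runtime, not a real API.
+ 
+ ```python
+ import time
+ 
+ def encoding_speed(model, question_tokens, context_token_lists):
+     """Tokens prefilled per second, reusing the cached question KV vectors."""
+     start = time.perf_counter()
+     question_kv = model.prefill(question_tokens)            # processed once
+     total_tokens = len(question_tokens)
+     for context_tokens in context_token_lists:
+         model.prefill(context_tokens, past_kv=question_kv)  # per-context prefill
+         total_tokens += len(context_tokens)
+     return total_tokens / (time.perf_counter() - start)
+ 
+ def decoding_speed(model, combined_kv, max_new_tokens):
+     """Tokens generated per second given the (possibly filtered) KV cache."""
+     start = time.perf_counter()
+     output_tokens = model.decode(combined_kv, max_new_tokens=max_new_tokens)
+     return len(output_tokens) / (time.perf_counter() - start)
+ ```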
148
+
149
+ # 5.5 MAIN RESULTS
150
+
151
+ We report both quality and latency metrics in Table 3. "K" is the number of chosen contexts. Note that the CRAG approach relies on its classifier, which is exclusively trained on the PopQA dataset. Thus we only compare its performance on PopQA. Additionally, both LLMLingua and CRAG leveraged external classifiers, for which we cannot effectively measure ES and DS. We discuss the end-to-end latency of the external classifiers in Table 6.
152
+
153
+ Notably, our proposed approach achieves the best quality while being the most efficient during inference compared to other approaches. Sparse RAG shares the same prefilling efficiency as PCW-RAG due to parallel context encoding, but it achieves significantly better quality than PCW-RAG and better decoding efficiency than standard RAG and PCW-RAG. To illustrate, out of 20 retrieved contexts, Sparse RAG selects an average of 7.84 contexts for PopQA and 4.45 for QMSum. This leads to nearly double or even triple the decoding speed. Meanwhile, Sparse RAG achieves higher quality metrics than its dense counterparts, demonstrating that it effectively filters noisy and irrelevant contexts.
154
+
155
+ HotpotQA is the only dataset where Sparse RAG does not beat RAG's F1 score. PCW-RAG has a particularly large quality gap on HotpotQA, suggesting that masking cross-document attention may hinder multi-hop reasoning capabilities. However, Sparse RAG recovers quality to a level similar to RAG while maintaining lower latency, demonstrating the power of our context selection process.
156
+
157
+ We also observe that Sparse RAG outperforms CRAG on quality, suggesting that our "in-place" classifier may be outperforming CRAG's external T5 XXL classifier trained on the same dataset.
158
+
159
+ # 5.6 ANALYSIS
160
+
161
+ Impact of Confidence Threshold Table 4 illustrates how our metrics vary with different confidence thresholds for Sparse RAG. As the threshold gradually increases, the system filters out more contexts,
162
+
163
+ reducing the number of contexts $\mathrm{K}$ and consequently the latency during inference. The response quality metrics increase with increasing threshold up to a certain point, showing the effectiveness of filtering out irrelevant contexts. Then, the performance is stable and eventually drops slightly, possibly because some relevant contexts are accidentally filtered out.
164
+
165
+ Table 4: Sampling various confidence threshold values. A higher threshold means fewer contexts.
166
+
167
+ <table><tr><td rowspan="2">Threshold</td><td colspan="4">PopQA</td><td colspan="4">QMSum</td></tr><tr><td>EM</td><td>F1</td><td>K</td><td>DS</td><td>F1</td><td>RougeLSum</td><td>K</td><td>DS</td></tr><tr><td>0.05</td><td>66.95</td><td>70.97</td><td>9.75</td><td>10.61</td><td>22.85</td><td>19.49</td><td>7.92</td><td>11.92</td></tr><tr><td>0.1</td><td>66.84</td><td>70.66</td><td>8.72</td><td>11.70</td><td>23.78</td><td>19.98</td><td>6.68</td><td>12.89</td></tr><tr><td>0.15</td><td>67.17</td><td>71.16</td><td>7.84</td><td>12.28</td><td>23.43</td><td>19.66</td><td>5.77</td><td>13.01</td></tr><tr><td>0.2</td><td>66.77</td><td>70.54</td><td>7.13</td><td>12.88</td><td>23.2</td><td>19.79</td><td>5.05</td><td>14.54</td></tr><tr><td>0.25</td><td>65.75</td><td>69.64</td><td>6.56</td><td>13.00</td><td>23.96</td><td>20.1</td><td>4.45</td><td>16.05</td></tr><tr><td>0.3</td><td>63.86</td><td>68.2</td><td>5.98</td><td>13.08</td><td>23.84</td><td>19.99</td><td>3.93</td><td>16.38</td></tr></table>
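+ 
+ The sweep in Table 4 amounts to a simple filter: a context is kept only if its predicted relevance probability clears the threshold, and the number of survivors is the effective K used for decoding. A minimal sketch follows; the scoring step is a placeholder for the in-place per-context assessment.
+ 
+ ```python
+ def select_contexts(context_scores, threshold):
+     """Keep contexts whose relevance confidence exceeds the threshold.
+ 
+     context_scores: list of (context_id, relevance_probability) pairs
+     produced by the per-context assessment step.
+     Returns the selected ids; len(result) is the effective K for decoding.
+     """
+     return [cid for cid, prob in context_scores if prob >= threshold]
+ 
+ # Sweeping thresholds trades quality for speed, as in Table 4:
+ # for t in (0.05, 0.1, 0.15, 0.2, 0.25, 0.3):
+ #     selected = select_contexts(scores, t)   # fewer contexts as t grows
+ ```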
168
+
169
+ Number of Prefilled Documents To assess whether Sparse RAG's quality improvements are merely the result of "massive" prefilling, which may not be practical in real scenarios, we compare different numbers of prefilled documents (10 and 20) for PopQA. Results are shown in Table 5. Even with fewer documents prefilled, the quality of Sparse RAG remains better than RAG.
170
+
171
+ The gap between RAG and Sparse RAG is relatively small at 10 prefilled documents compared to 20 because there are fewer documents to be filtered. Moreover, with fewer documents, the cross-document masking is less "sparse" than with a larger number of prefilled documents. Meanwhile, using only the top 5, 3, or 1 documents yields significantly lower EM and F1 scores because it is difficult to guarantee high-quality retrieval in the first step. This further motivates our design of widening the range of retrieved documents and then performing context selection.
172
+
173
+ Table 5: Ablation on different numbers of prefilled documents for PopQA.
174
+
175
+ <table><tr><td>Approach</td><td>Prefill Documents</td><td>EM</td><td>F1</td><td>K</td><td>ES</td><td>DS</td></tr><tr><td>RAG</td><td>1</td><td>46.01</td><td>50.12</td><td>1.00</td><td>146.10</td><td>22.74</td></tr><tr><td>RAG</td><td>3</td><td>55.28</td><td>59.32</td><td>3.00</td><td>120.36</td><td>18.96</td></tr><tr><td>RAG</td><td>5</td><td>58.66</td><td>63.49</td><td>5.00</td><td>102.51</td><td>15.98</td></tr><tr><td>RAG</td><td>10</td><td>64.66</td><td>68.67</td><td>10</td><td>80.74</td><td>10.31</td></tr><tr><td>PCW RAG</td><td>10</td><td>63.9</td><td>68.58</td><td>10</td><td>147.48</td><td>10.31</td></tr><tr><td>Sparse RAG</td><td>10</td><td>65.86</td><td>70.2</td><td>7.79</td><td>147.48</td><td>12.33</td></tr><tr><td>RAG</td><td>20</td><td>65.43</td><td>69.99</td><td>20</td><td>56.28</td><td>6.65</td></tr><tr><td>PCW RAG</td><td>20</td><td>65.04</td><td>69.95</td><td>20</td><td>147.58</td><td>6.65</td></tr><tr><td>Sparse RAG</td><td>20</td><td>67.17</td><td>71.16</td><td>7.84</td><td>147.58</td><td>12.28</td></tr></table>
176
+
177
+ Inference Efficiency Ablations Table 6 demonstrates the computational advantages of Sparse RAG at each stage of the retrieval and generation process compared to other methods. Sparse RAG, like PCW-RAG, reduces encoding latency to nearly 1/3 that of RAG. While CRAG's encoding latency appears lower than Sparse RAG's, it has an additional classification step, which is
178
+
179
+ Table 6: Latency Decomposition.
180
+
181
+ <table><tr><td>End-To-End</td><td>K</td><td>External Classifier (ms)</td><td>Init Time (ms)</td><td>Encoding (ms)</td><td>Copy (ms)</td><td>Decoding (ms)</td><td>Total (ms)</td><td>Init + Copy Percentage</td></tr><tr><td>RAG</td><td>20.00</td><td>0</td><td>120</td><td>90962</td><td>0</td><td>4811</td><td>95893</td><td>0.13%</td></tr><tr><td>PCW-RAG</td><td>20.00</td><td>0</td><td>17</td><td>34716</td><td>151</td><td>4811</td><td>39697</td><td>0.43%</td></tr><tr><td>CRAG</td><td>8.9</td><td>40200</td><td>56</td><td>27362</td><td>0</td><td>2878</td><td>70497</td><td>0.08%</td></tr><tr><td>Sparse RAG</td><td>7.84</td><td>0</td><td>17</td><td>34716</td><td>56</td><td>2605</td><td>37396</td><td>0.20%</td></tr></table>
182
+
183
+ slow in our experiments because we used an older T5 model with more attention layers and fewer modern optimizations such as kernel fusion and flash attention. This drives CRAG's total latency above that of Sparse RAG. A different classifier could possibly reduce this cost, but it would still duplicate the work of encoding the contexts during the prefill stage. Sparse RAG also reduces decoding latency to nearly half that of RAG and PCW-RAG via context filtering. CRAG has similar decoding latency to Sparse RAG because it also performs context filtering, but overall Sparse RAG is still much faster.
184
+
185
+ We also explore trends in decoding latency by varying the number of retrieved contexts (i.e., top-K documents) and the length of the generated responses. As illustrated in Fig 2b, we observe that RAG requires over $50\%$ more time to generate outputs of varying lengths compared to the Sparse RAG approach. As shown in Fig 2a, this heightened demand for computational resources results in a notable slowdown in decoding speed. This underscores the efficiency advantages offered by Sparse RAG, especially in scenarios requiring a larger number of contexts during decoding.
186
+
187
+ Figure 2: Inference Efficiency Comparison.
188
+ ![](images/fee890fd7e28cbfe3825406e5c1409e376d702cf5732189a6c53a7df46f13ad6.jpg)
189
+ (a) E2E latency for decoding different number of tokens. (b) Decoding speed with different number of contexts.
190
+
191
+ ![](images/d86a500a8b727adb93f5249c9058b1fb219ce1e2cb56d588da475fc63defcde7.jpg)
192
+
193
+ Ablation on Foundation Model Size We applied Sparse RAG to different sizes of LLMs by testing it on Gemini XS and Gemini XXS. The results of these experiments are presented in Table 7. The findings demonstrate that Sparse RAG is compatible with various foundation models, effectively adapting to different model sizes. Notably, with a reduced decoding cache, Sparse RAG still achieves the highest quality results. This indicates that Sparse RAG maintains its efficiency and effectiveness across different foundation models, making it a versatile approach for various LLM configurations.
194
+
195
+ Table 7: Ablation on different model sizes.
196
+
197
+ <table><tr><td>Approach</td><td>Model Size</td><td>EM</td><td>F1</td><td>K</td></tr><tr><td>RAG</td><td>XS</td><td>66.52</td><td>70.87</td><td>20</td></tr><tr><td>PCW RAG</td><td>XS</td><td>65.75</td><td>70.37</td><td>20</td></tr><tr><td>Sparse RAG</td><td>XS</td><td>68.26</td><td>72.26</td><td>6.27</td></tr><tr><td>RAG</td><td>XXS</td><td>65.43</td><td>69.99</td><td>20</td></tr><tr><td>PCW RAG</td><td>XXS</td><td>65.04</td><td>69.95</td><td>20</td></tr><tr><td>Sparse RAG</td><td>XXS</td><td>67.17</td><td>71.16</td><td>7.84</td></tr></table>
198
+
199
+ Sparse RAG with Full Attention During Generation To isolate the effects of our document assessment and filtering method and to evaluate the potential quality regressions caused by omitting cross-document attention during generation, we perform the Sparse RAG per-context assessment step followed by a generation step with full attention. As shown in Table 8, using full attention during generation provides a slight quality improvement, but Sparse RAG's quality is very close while being much more efficient.
200
+
201
+ Table 8: Comparing Sparse RAG with full attention to Sparse RAG and RAG on PopQA.
202
+
203
+ <table><tr><td>Approach</td><td>EM</td><td>F1</td></tr><tr><td>RAG</td><td>65.43</td><td>69.99</td></tr><tr><td>Sparse RAG</td><td>67.71</td><td>71.16</td></tr><tr><td>Sparse RAG w/ full attention</td><td>67.94</td><td>71.24</td></tr></table>
204
+
205
+ Silver Labels vs LLM Labels In Corrective RAG, the T5 model was trained with silver labels that come from title matching (Yan et al., 2024). We use the same silver labels to replace the LLM labels and train the Sparse RAG model with this new dataset. We also train the CRAG model on our LLM labels for comparison. From the results shown in Table 9, we observe that the quality of the labels generated by the LLMs is slightly higher than that of the silver labels from Yan et al. (2024), leading to higher accuracy and lower K values. We also observe that, when trained with exactly the same labels, Sparse RAG still outperforms CRAG on quality. We hypothesize that the superior quality of the LLM-generated labels stems from our two-round process of soliciting responses from two different LLMs. By engaging two distinct models, we likely enhanced the robustness and accuracy of the labels through a form of cross-validation, mitigating potential biases or errors that might arise from relying on a single LLM.
206
+
207
+ Table 9: Comparing CRAG silver labels to our LLM labels on PopQA.
208
+
209
+ <table><tr><td>Approach</td><td>EM</td><td>F1</td><td>K</td><td>DS</td></tr><tr><td>Sparse RAG w/ silver labels</td><td>66.97</td><td>71.05</td><td>8.26</td><td>11.99</td></tr><tr><td>Sparse RAG w/ LLM labels</td><td>67.71</td><td>71.16</td><td>7.84</td><td>12.28</td></tr><tr><td>CRAG w/ silver labels</td><td>66.52</td><td>70.99</td><td>8.9</td><td>-</td></tr><tr><td>CRAG w/ LLM labels</td><td>67.03</td><td>71.02</td><td>-</td><td>-</td></tr></table>
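+ 
+ For intuition, a title-matching silver label of the kind referenced above can be sketched as follows; the precise heuristic of Yan et al. (2024) is not reproduced here, so this is only an illustrative approximation.
+ 
+ ```python
+ def silver_label(question_entity, document_title):
+     """Heuristic silver label: relevant if the question's entity appears in the title.
+ 
+     Illustrative approximation of title-matching labeling; the exact rule used
+     by Yan et al. (2024) may differ.
+     """
+     return int(question_entity.lower() in document_title.lower())
+ ```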
210
+
211
+ Using Golden Context Labels During Inference Since QMSum provides golden per-context labels, we leverage these labels during inference to evaluate the upper bound performance of the Sparse RAG approach under the condition of perfect per-context assessment. The results of this experiment are presented in Table 10, demonstrating the full potential of the Sparse RAG method pending additional context assessment quality improvements.
212
+
213
+ Table 10: Trying golden labels on QMSum.
214
+
215
+ <table><tr><td>Approach</td><td>F1</td><td>RougeLSum</td><td>K</td><td>DS</td></tr><tr><td>Sparse RAG</td><td>23.96</td><td>20.1</td><td>4.45</td><td>16.05</td></tr><tr><td>+ golden label</td><td>26.76</td><td>21.93</td><td>1.13</td><td>21.16</td></tr></table>
216
+
217
+ # 6 CONCLUSION
218
+
219
+ This paper presents Sparse RAG to address the challenges of increased input length and latency. Through a novel approach of massive pre-filling and selective decoding, Sparse RAG efficiently manages the key-value cache of retrieved documents, allowing the LLMs to focus on highly relevant tokens. This selective attention mechanism not only reduces the computational burden during inference but also enhances the generation quality by filtering out irrelevant contexts. Evaluation on four diverse datasets validates Sparse RAG's ability to achieve a balanced trade-off between high-quality generation and computational efficiency, proving its versatility and effectiveness for both short- and long-form content generation tasks. This innovative paradigm showcases the potential for improving LLM performance in various applications by optimizing context management and inference processes.
220
+
221
+ Future research will explore Sparse RAG in multimodal contexts, investigating how Sparse RAG can handle and integrate information from multiple types of data to improve its performance and applicability across diverse scenarios.
222
+
223
+ # ACKNOWLEDGMENTS
224
+
225
+ We would like to thank Yu-hui Chen, Yuqi Li, Qifei Wang, Zonglin Li, Zhong Meng, and Alec Go for their suggestions.
226
+
227
+ # REFERENCES
228
+
229
+ Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, et al. PaLM 2 technical report. CoRR, abs/2305.10403, 2023. doi: 10.48550/ARXIV.2305.10403. URL https://doi.org/10.48550/arXiv.2305.10403.
230
+ Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. Self-rag: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=hSyW5go0v8.
231
+ Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pp. 2206-2240. PMLR, 2022.
232
+ Tom B Brown, Benjamin Mann, Nick Ryder, et al. Language models are few-shot learners. In Advances in neural information processing systems, pp. 1877-1901, 2020.
233
+ Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William W. Cohen. Mention memory: incorporating textual knowledge into transformers through entity mention attention. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=OY1A8ejQgEX.
234
+ William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1-39, 2022.
235
+ Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. In First Conference on Language Modeling, 2024.
236
+ Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 3929-3938. PMLR, 2020a. URL http://proceedings.mlr.press/v119/guu20a.html.
237
+ Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pp. 3929-3938. PMLR, 2020b.
238
+ Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.
239
+ Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. In Paola Merlo, Jörg Tiedemann, and Reut Tsarfaty (eds.), Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pp. 874-880. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.EACL-MAIN.74. URL https://doi.org/10.18653/v1/2021.eacl-main.74.
240
+
241
+ Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. Trans. Mach. Learn. Res., 2022, 2022. URL https://openreview.net/forum?id= jKN1pXi7b0.
242
+ Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12):248:1-248:38, 2023. doi: 10.1145/3571730. URL https://doi.org/10.1145/3571730.
243
+ Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. LLMLingua: Compressing prompts for accelerated inference of large language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 13358-13376. Association for Computational Linguistics, 2023a. doi: 10.18653/V1/2023.EMNLP-MAIN.825. URL https://doi.org/10.18653/v1/2023.emnlp-main.825.
244
+ Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. Active retrieval augmented generation. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 7969-7992. Association for Computational Linguistics, 2023b. URL https://aclanthology.org/2023.emnlp-main.495.
245
+ Chao Jin, Zili Zhang, Xuanlin Jiang, Fangyue Liu, Xin Liu, Xuanzhe Liu, and Xin Jin. Ragcache: Efficient knowledge caching for retrieval-augmented generation. arXiv preprint arXiv:2404.12457, 2024.
246
+ Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan (eds.), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pp. 1601-1611. Association for Computational Linguistics, 2017. doi: 10.18653/V1/P17-1147. URL https://doi.org/10.18653/v1/P17-1147.
247
+ Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=Hk1BjCEKvH.
248
+ Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pp. 19274-19286. PMLR, 2023.
249
+ Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. Pre-training via paraphrasing. Advances in Neural Information Processing Systems, 33:18470-18481, 2020a.
250
+ Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020b. URL https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html.
251
+ Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. A survey on retrieval-augmented text generation. CoRR, abs/2202.01110, 2022. URL https://arxiv.org/abs/2202.01110.
252
+ Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Guangxuan Xiao, and Song Han. AWQ: activation-aware weight quantization for on-device LLM compression and acceleration. GetMobile Mob. Comput. Commun., 28(4):12-17, 2024. doi: 10.1145/3714983.3714987. URL https://doi.org/10.1145/3714983.3714987.
253
+
254
+ Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 9802-9822. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.ACL-LONG.546. URL https://doi.org/10.18653/v1/2023.acl-long.546.
255
+ Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 12076-12100. Association for Computational Linguistics, 2023. URL https://aclanthology.org/2023.emnlp-main.741.
256
+ Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, and Yoav Shoham. Generating benchmarks for factuality evaluation of language models. In Yvette Graham and Matthew Purver (eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 - Volume 1: Long Papers, St. Julian's, Malta, March 17-22, 2024, pp. 49-66. Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.eacl-long.4.
257
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.
258
+ Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. Parallel context windows for large language models. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 6383-6402. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.ACL-LONG.352. URL https://doi.org/10.18653/v1/2023.acl-long.352.
259
+ Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
260
+ Noam Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019.
261
+ Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pp. 4596-4604. PMLR, 2018.
262
+ Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Scharli, and Denny Zhou. Large language models can be easily distracted by irrelevant context. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 31210-31227. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/shi23a.html.
263
+ Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
264
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language
265
+
266
+ models. CoRR, abs/2302.13971, 2023. doi: 10.48550/ARXIV.2302.13971. URL https://doi.org/10.48550/arXiv.2302.13971.
267
+ Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, and Zhen-Hua Ling. Corrective retrieval augmented generation. arXiv preprint arXiv:2401.15884, 2024.
268
+ Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 2369-2380. Association for Computational Linguistics, 2018. doi: 10.18653/V1/D18-1259. URL https://doi.org/10.18653/v1/d18-1259.
269
+ Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362-373, 2021.
270
+ Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. Making retrieval-augmented language models robust to irrelevant context. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=ZS4m74kZpH.
271
+ Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219, 2023. doi: 10.48550/ARXIV.2309.01219. URL https://doi.org/10.48550/arXiv.2309.01219.
272
+ Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir R. Radev. Qmsum: A new benchmark for query-based multi-domain meeting summarization. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pp. 5905-5921. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.NAACL-MAIN.472. URL https://doi.org/10.18653/v1/2021.naacl-main.472.
273
+
274
+ # A PROMPTS AND INSTRUCTIONS
275
+
276
+ # A.1 PROMPTS USED FOR LLMS
277
+
278
+ We share the prompt used for calling LLMs to get per context assessment in Table 11.
279
+
280
+ Table 11: The zero-shot prompts for LLM labeling and critique.
281
+
282
+ <table><tr><td>Round 1 prompt</td></tr><tr><td>You are now doing a reading comprehension task. It is important that you be as thorough, detail-oriented, and accurate as possible in your response.
283
+ You are given a question, a set of accepted answers, a document and its title. The document does not necessarily contain the right answer to the question.
284
+ You should read the title and the document and then check if they provide one of the correct answers to the question.
285
+ If the title and document together contain the correct answer to the question, output a score of 1.0, otherwise output a score of 0.0.
286
+ question: ?question;
287
+ accepted answers: ?answers;
288
+ title: ?title;
289
+ document: ?document;
290
+ output:</td></tr><tr><td>Round 2 prompt</td></tr><tr><td>Your job is to correct another model&#x27;s performance on a reading comprehension task.
291
+ The model was given a question, a set of accepted answers, a document and its title. The document and title do not necessarily contain the right answer. The model was instructed to output a score of 1.0 if the document contains the answer, and a score of 0.0 otherwise.
292
+ You will be given the same information as the other model along with its output. You should read the title and document and then check if they provide one of the correct answers to the question.
293
+ Then check if you agree with the previous model&#x27;s output.
294
+ If you agree, output the same score unchanged.
295
+ If you disagree, output the corrected score.
296
+ Your output should be as accurate as possible.
297
+ question: ?question;
298
+ accepted answers: ?answers;
299
+ title: ?title;
300
+ document: ?document;
301
+ previous model&#x27;s score: ?score;
302
+ output:</td></tr></table>
303
+
304
+ # A.2 RATER GUIDELINES
305
+
306
+ We share the instructions provided to the human labelers in Table 12.
307
+
308
+ Table 12: Instructions for raters creating ground-truth relevance dataset.
309
+
310
+ <table><tr><td>Human Rater Instructions</td></tr><tr><td>Please read the question, the answer and the context. Please answer if the context can help answer the question. If it can, select 1. Otherwise select 0.</td></tr><tr><td>1: good
311
+ 0: bad</td></tr><tr><td>Please use the answers as a hint. However, do not use “is the answer in the context?” as a heuristic for making the decision.</td></tr></table>
312
+
313
+ # B DATASET ANALYSIS
314
+
315
+ During the human labeling process, several raters flagged documents and questions that were difficult to label. In total, 23 out of 500 documents were flagged and 15 out of 50 questions were flagged.
316
+
317
+ We explored several ways of filtering our human-labeled subsample of Natural Questions to determine how they impacted context assessment F-scores overall and for each auto-rater. We provide two additional filtered versions of the human-labeled RAG relevance dataset as alternatives. See Table 13 for the auto-rater F-scores for each filtering method. Both statistical filtering approaches (e.g. removing contexts with non-unanimous labels) and targeted filtering approaches (e.g. removing questions or contexts flagged by human raters) lead to some improvement in F-scores for relevance labels, but in all cases, using Gemini Ultra as the rater and PALM2 XL as the critic model provides the highest Average F1 score.
318
+
319
+ Table 13: Evaluation of Labeling Methods w/ Filtering
320
+
321
+ <table><tr><td>Dataset Filters</td><td>#Docs</td><td>% Relevant</td><td>Rater Model</td><td>Critic Model</td><td>Average F1</td></tr><tr><td rowspan="6">Specialized rater corrections</td><td rowspan="6">500</td><td rowspan="6">31</td><td>PALM2 XL</td><td>n/a</td><td>0.729</td></tr><tr><td>PALM2 XL</td><td>PALM2 XL</td><td>0.781</td></tr><tr><td>Gemini Ultra</td><td>n/a</td><td>0.761</td></tr><tr><td>Gemini Ultra</td><td>Gemini Ultra</td><td>0.704</td></tr><tr><td>PALM2 XL</td><td>Gemini Ultra</td><td>0.728</td></tr><tr><td>Gemini Ultra</td><td>PALM2 XL 340B</td><td>0.821</td></tr><tr><td rowspan="6">Filter non-unanimous docs</td><td rowspan="6">351</td><td rowspan="6">23</td><td>PALM2 XL</td><td>n/a</td><td>0.741</td></tr><tr><td>PALM2 XL</td><td>PALM2 XL</td><td>0.811</td></tr><tr><td>Gemini Ultra</td><td>n/a</td><td>0.792</td></tr><tr><td>Gemini Ultra</td><td>Gemini Ultra</td><td>0.739</td></tr><tr><td>PALM2 XL</td><td>Gemini Ultra</td><td>0.763</td></tr><tr><td>Gemini Ultra</td><td>PALM2 XL</td><td>0.856</td></tr><tr><td rowspan="6">Filter flagged docs and questions</td><td rowspan="6">330</td><td rowspan="6">29</td><td>PALM2 XL</td><td>n/a</td><td>0.750</td></tr><tr><td>PALM2 XL</td><td>PALM2 XL</td><td>0.797</td></tr><tr><td>Gemini Ultra</td><td>n/a</td><td>0.782</td></tr><tr><td>Gemini Ultra</td><td>Gemini Ultra</td><td>0.741</td></tr><tr><td>PALM2 XL</td><td>Gemini Ultra</td><td>0.753</td></tr><tr><td>Gemini Ultra</td><td>PALM2 XL</td><td>0.833</td></tr></table>
322
+
323
+ We observed trends in questions and contexts, shared in Table 14, that raised concerns about whether and how a human would be able to assess the relevance of the context. These concerns extend to expectations of how well LLMs would do at the task.
324
+
325
+ Most of these concerns involve the absence of sufficient context to correctly answer the question in widely used public datasets. For consistency with the literature, we did not modify the queries or the retrieved contexts in this paper, but we do expect that the ambiguity is impacting our results. In many cases, retrieved contexts would be assessed differently for relevance depending on the true intended meaning of the question, and different answers would be expected.
326
+
327
+ It would be an interesting future exploration to augment the datasets to resolve such ambiguity, as doing so would likely improve the accuracy of our relevance labels and improve performance overall. For examples with missing time or location context, you could augment the datasets by simply adding new contexts with the missing information or by appending the context to the query. For example, to handle the question "Who is the current president of the United States?", you could add a context "Today is November 18, 2024." or modify the query to "Who is the current president of the United States on November 18, 2024?". In the paper, we showed that Sparse RAG performs comparably to RAG on the HotpotQA multi-hop reasoning dataset, so we expect that Sparse RAG could effectively leverage information from new contexts in the generation step. However, we expect that appending such temporally specific context to the query itself would likely yield the best quality, as all contexts attend to the query.
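+ 
+ A tiny sketch of this augmentation is shown below; the date string is just the example used above, and the helper name is hypothetical.
+ 
+ ```python
+ def add_temporal_context(query, contexts, today="November 18, 2024"):
+     """Disambiguate time-dependent questions by supplying the missing date."""
+     augmented_contexts = contexts + [f"Today is {today}."]   # extra-context option
+     augmented_query = f"{query} (as of {today})"             # query-rewrite option
+     return augmented_query, augmented_contexts
+ ```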
328
+
329
+ In real world cases where the question is ambiguous and sufficient context cannot be retrieved to confidently answer the question, we believe that the most desirable behavior would be to request clarification before further action. Future work should explore simulating such a scenario in order to assess RAG question-answering approaches.
330
+
331
+ Table 14: Overview of trends, datasets, and examples with associated comments.
332
+
333
+ <table><tr><td>Trend</td><td>Dataset</td><td>Example question</td><td>Comments</td></tr><tr><td rowspan="6">Questions with time-dependent answers</td><td>NQ</td><td>who is the president of usa right now</td><td>Depends on when the question is asked.</td></tr><tr><td>NQ</td><td>who is the current director of the us mint</td><td>Depends on when the question is asked.</td></tr><tr><td>NQ</td><td>when is the next dead-pool movie being released</td><td>Depends on when the question is asked.</td></tr><tr><td>NQ</td><td>total number of death row inmates in the us</td><td>Fluctuates over time.</td></tr><tr><td>PopQA</td><td>What is Prague the capital of?</td><td>The borders in this region and the name of the country have changed several times in the 20th century.</td></tr><tr><td>PopQA</td><td>What is Dennis Rod-man&#x27;s occupation?</td><td>The accepted answers are &quot;actor, actress, actors, actresses&quot;. He was an actor later in his career, but he rose to prominence as a professional basketball player.</td></tr><tr><td rowspan="2">Missing synonyms, abbreviations, and aliases, combined with unclear granularity</td><td>NQ</td><td>in which regions are most of Africa petroleum and natural gas found</td><td>&quot;Region&quot; can refer to different levels of granularity (e.g. Sub-Saharan Africa vs. Ethiopia), but the only accepted answer is &quot;Nigeria&quot;.</td></tr><tr><td>NQ</td><td>what type of car is a jeep</td><td>The accepted answers are only &quot;off-road vehicles&quot;, &quot;light utility vehicles&quot;, &quot;sport utility vehicles&quot;, but &quot;SUV&quot; is clearly a correct answer as well.</td></tr><tr><td>Non-exhaustive list of answers</td><td>NQ</td><td>cast of law &amp; order special victim unit</td><td>The accepted answers include 16 cast members, but the show went on for 25 seasons with many cast changes and guest stars not included in the list.</td></tr><tr><td>Oddly phrased question</td><td>NQ</td><td>right to property according to the constitution of India is a</td><td>The only correct answer is &quot;constitutional right&quot;, but that is included in the question. It&#x27;s not clear what type of answer would be appropriate here.</td></tr><tr><td>Overly specific answers expected</td><td>NQ</td><td>where does the story the great gatsby take place</td><td>The only accepted answer here is &quot;Long Island of 1922&quot;, but the place is Long Island and the question does not ask about when the story is set.</td></tr><tr><td rowspan="3">Question refers to an entity with a common name without disambiguation</td><td>PopQA</td><td>What genre is Frances?</td><td>There is a musician and a film called &quot;Frances&quot;, and both of those could arguably have a genre associated with them.</td></tr><tr><td>PopQA</td><td>Who was the producer of Hurt?</td><td>The question is referring to a song performed by Christina Aguilera but there are many other songs, movies, and other entities that share the name and also have a producer.</td></tr><tr><td>PopQA</td><td>What is the capital of Cherokee County?</td><td>There are many different Cherokee Counties in different states in the USA.</td></tr></table>
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:191bb5c4e63362c8dd8531a4d20bda4dc1a839354dce1c6ed58ad77aaaea298e
3
+ size 967597
acceleratinginferenceofretrievalaugmentedgenerationviasparsecontextselection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc5caa1ed167eeb9f46bcace5fc4a638e04c481db412b144ff6f988b8badcec4
3
+ size 371409
acceleratingneuralnetworktrainingananalysisofthealgoperfcompetition/bf14fb76-b040-4a88-a8d9-fdd96771aa65_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a195b8973b087513f8b006df7ba8686d477ff4f79670e9c26e275ddd815bf2e
3
+ size 119776
acceleratingneuralnetworktrainingananalysisofthealgoperfcompetition/bf14fb76-b040-4a88-a8d9-fdd96771aa65_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5ebcc75fe94a467c6976a3616814d3841a3ebd34669201a583c5f2578eced6e
3
+ size 138723