SlowGuess committed on
Commit 32ec3d3 · verified · 1 Parent(s): fef67c5

Add Batch 2d229c79-e997-4718-b34f-5a6a908dc52e
genomegenerativeneurosymbolicvisualreasoningbygrowingandreusingmodules/6de2a914-0d0b-4629-9163-a148ca966fc0_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaa818473fb72def3fdece8ec1ce6509b0760f5e150a3e44edf197b9dcf59bdd
+ size 146112
genomegenerativeneurosymbolicvisualreasoningbygrowingandreusingmodules/6de2a914-0d0b-4629-9163-a148ca966fc0_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1402a0412883a9c642118162ee04aca6525ffd1291ae1d42540ff89fc398e89a
+ size 173871
genomegenerativeneurosymbolicvisualreasoningbygrowingandreusingmodules/6de2a914-0d0b-4629-9163-a148ca966fc0_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74633835560e49a38c473435cfa1f3f6f60a522cec99d3f93e25595895c09746
+ size 6097166
genomegenerativeneurosymbolicvisualreasoningbygrowingandreusingmodules/full.md ADDED
@@ -0,0 +1,746 @@
# GENOME: GENERATIVE NEURO-SYMBOLIC VISUAL REASONING BY GROWING AND REUSING MODULES

Zhenfang Chen* (MIT-IBM Watson AI Lab)
Rui Sun* (Columbia University)
Wenjun Liu* (Tsinghua University)
Yining Hong (University of California, Los Angeles)
Chuang Gan (MIT-IBM Watson AI Lab and UMass Amherst)

# ABSTRACT

Recent works have shown that Large Language Models (LLMs) can empower traditional neuro-symbolic models via programming capabilities that translate language into module descriptions, achieving strong visual reasoning results while maintaining the model's transparency and efficiency. However, these models usually exhaustively generate the entire code snippet for each new instance of a task, which is extremely inefficient. By contrast, human beings gradually acquire knowledge, starting from infancy, that can be reused and grown into deeper skills for fast generalization to new tasks. Inspired by this, we propose generative neuro-symbolic visual reasoning by growing and reusing modules. Specifically, our model consists of three unique stages: module initialization, module generation, and module execution. First, given a vision-language task, we adopt LLMs to examine whether we can reuse and grow established modules to handle this new task. If not, we initialize the new module needed by the task and specify its inputs and outputs. After that, the new module is created by querying LLMs to generate corresponding code snippets that match the requirements. To get a better sense of the new module's ability, we treat few-shot training examples as test cases to see whether the new module can pass them. If it can, the new module is added to the module library for future reuse. Finally, we evaluate the performance of our model on the testing set by executing the parsed programs with the newly made visual modules. We find that the proposed model possesses several advantages. First, it performs competitively on standard tasks like visual question answering and referring expression comprehension; second, the modules learned from one task can be seamlessly transferred to new tasks; last but not least, it is able to adapt to new visual reasoning tasks by observing a few training examples and reusing modules<sup>1</sup>.
# 1 INTRODUCTION

Neuro-symbolic visual reasoning models (Andreas et al., 2016b; Mao et al., 2019b) refer to the algorithm family that combines deep neural networks (LeCun et al., 1998; Hochreiter & Schmidhuber, 1997) for learning correlations among the training data with symbolic methods (Yi et al., 2018; Andreas et al., 2016a) that perform explicit and transparent multi-step reasoning. In contrast to pure neural network-based models (Hudson & Manning, 2018; Li et al., 2023), neuro-symbolic approaches achieve strong performance on visual reasoning tasks while offering superior model transparency and data efficiency.

Nevertheless, such models suffer from several inherent limitations. First, their language parsers (Yi et al., 2018; Andreas et al., 2016b), employed to convert natural language into symbolic programs, typically demand extensive domain-specific language-program pairs to train on and struggle to generalize to unconstrained natural language instructions. Additionally, these models necessitate a custom design for every module, rendering the process labor-intensive and hard to scale.

![](images/b93d56aef03105f4b8bc0ab7b81cea107bd7a93326d99a1a8a0373fba3389354.jpg)
Figure 1: The motivation of GENOME. Compared with VisProg and ViperGPT, which exhaustively generate a code snippet for each input case, our GENOME is able to generate new modules and reuse old modules to handle the query. First, a module generated by GENOME can be used to handle other instances of the task for better performance. Second, the generated module can be transferred to different tasks like image editing. Finally, it can learn to handle new tasks like Raven (Burke, 1985; Zhang et al., 2019a) by learning modules from only a few training samples. The edited region and the correct answer for the Raven task are labeled with red boxes for better visualization.

Recent advancements in large language models (LLMs) (Brown et al., 2020; Ouyang et al., 2022) have ushered in a new era, with remarkable performance across various applications, including chatbots (Shuster et al., 2022), virtual assistants (Dong et al., 2023), and programming assistants (Chen et al., 2021a). Riding this unprecedented wave, researchers have reformulated the old wisdom by incorporating LLMs into neuro-symbolic reasoning, bypassing the inflexibility and ineffectiveness of domain-specific language-to-program parsers. Specifically, VisProg (Gupta & Kembhavi, 2022) pre-defines a set of visual modules and uses LLMs to transform language instructions into symbolic programs composed of these pre-defined visual modules. Taking a step forward, ViperGPT (Suris et al., 2023) relieves the burden of manually defining visual modules by introducing a code generator that produces a code snippet for each input instance of a new task.

Promising as these LLM-based neuro-symbolic models are, they inevitably bear several weaknesses compared to the learning and reasoning processes of human beings. First, both VisProg and ViperGPT exhaustively produce one code snippet for each new instance of a task, which is extremely inefficient. This is in stark contrast with the human learning process: from an early age, we organically accumulate knowledge from particular experiences. Such knowledge acquired from specific cases can be reused and reconfigured, enabling us to quickly adapt to new tasks and new demands (Harlow, 1949; Mitchell et al., 1986; Lake et al., 2016; Ellis et al., 2023). The knowledge blocks grow progressively over time, gradually forming a library with extraordinary richness and flexibility for fast generalization to any unseen task - the knowledge library that these models fall short of. Second, neither model verifies or examines the code it generates. When these models generate a bad code snippet that cannot solve the input case, they just "let it go" without taking another stab for a larger chance of success, and when they encounter similar cases again, they keep "stepping on the same rake". Human beings, on the other hand, verify and examine acquired knowledge by proposing a set of test scenarios before storing it in the library (Brulé & Blount, 1989). It is crucial that a neuro-symbolic reasoning model be equipped with the same abilities: to verify the code it produces, store it in a library if satisfactory, and make another attempt when the code fails.

To this end, we introduce a novel Generative Neuro-symbolic Visual Reasoning Model (GENOME), proficient in assimilating new neural modules from a limited set of training examples. This model excels at standard visual reasoning tasks such as visual question answering. Additionally, it demonstrates outstanding module transfer capabilities for tasks like image editing, and exhibits an exceptional ability to generalize to new reasoning tasks with limited training examples. As illustrated in Figure 2, GENOME comprises three stages: 1) module initialization, 2) module generation, and 3) module execution. From the initial training examples, an LLM discerns whether a new module is needed to tackle the task and, if so, produces its input and output specification. In the second stage, LLMs implement and refine the new module, ensuring seamless integration with existing modules and accurate responses to the training queries. During testing, the LLM first converts language instructions into executable high-level programs like COMPARE_ATTRIBUTE(IMAGE,BOX0,BOX1,ATTR) for comparing attributes of different bounding boxes. The program is then run with the new module set, producing the desired outputs.

We assessed the performance of GENOME across six visual reasoning tasks, spanning from visual question answering (Hudson & Manning, 2019) to Raven's Progressive Matrices (Zhang et al., 2019a). The experimental findings reveal that GENOME delivers competitive results on standard benchmarks while ensuring both transparency and interpretability. Notably, modules honed on these standard tasks can be adeptly adapted to diverse domains, including image editing and knowledge tagging (Gupta & Kembhavi, 2022). Additionally, with minimal training examples, GENOME demonstrates the capability to manage new visual reasoning tasks (Burke, 1985; Jiang et al., 2023a) by repurposing modules.
# 2 RELATED WORK

Visual Reasoning. Our work aims to handle visual reasoning tasks, which require a model to draw new inferences based on the acquired visual cues in images or videos (Hudson & Manning, 2019; Kazemzadeh et al., 2014; Goyal et al., 2017; Zhang et al., 2019a; Jiang et al., 2023a). Typical visual reasoning tasks include visual question answering (Goyal et al., 2017; Hudson & Manning, 2019), visual grounding (Kazemzadeh et al., 2014; Yu et al., 2016; Chen et al., 2020), and Raven's Progressive Matrices (Burke, 1985; Zhang et al., 2019a). Various models (Hudson & Manning, 2018; Yu et al., 2018; Zhang et al., 2021; Ding et al., 2023) have been developed to handle these tasks, but most of them are ad hoc and carefully designed for a specific task, leaving it an open research question how to build a general model that can handle different kinds of visual reasoning problems from only a few examples.

Neuro-symbolic Visual Reasoning. Our work is also closely related to neuro-symbolic visual reasoning models (Andreas et al., 2016b; Mao et al., 2019a; Chen et al., 2021c; 2022), which decompose the query of a visual reasoning task into a series of reasoning steps and represent each reasoning step with a neural module (i.e., a code snippet for achieving a specific function such as localizing objects or recognizing object categories). While these models offer better interpretability and data efficiency than previous connectionist models (Hudson & Manning, 2018; Anderson et al., 2018), they are limited in representing natural language instructions in the wild with their restricted set of pre-defined reasoning steps (Yang et al., 2020; Chen et al., 2021b). Moreover, each neural module must be manually defined and implemented one by one, making it hard to scale up and handle multiple tasks within a single model.

Foundation Models for Reasoning. Recently, large language models (LLMs) (Brown et al., 2020; Ouyang et al., 2022) have been widely used in language understanding (Hendrycks et al., 2020) and reasoning (Cobbe et al., 2021; Amini et al., 2019). Schick et al. (2023) develop Toolformer to show that LLMs can use external tools to better handle language tasks. Cai et al. (2023) show that LLMs can make simple tools for natural language tasks by writing code snippets. LLMs have also been used in vision-language tasks. Most of these works (Li et al., 2023; Alayrac et al., 2022) connect LLMs with additional vision encoders and fine-tune them with massive vision-language pairs. As evaluated by Xu et al. (2023b), while these models show great performance on in-domain tasks, they perform poorly on tasks outside their training domains. They are also extremely computationally expensive; for example, training Flamingo (Alayrac et al., 2022) takes 15 days on 1,536 TPUv4 devices.

LLMs for Programming. Some works use LLMs to write code to handle tasks. Pereira & Hartmann used LLMs to progressively enhance and specify system subcomponents, empowering users to develop versatile programs through a systematic iterative disambiguation method. Jiang et al. (2023b) learned to generate code with LLMs through a planning phase for outlining solution steps and an implementation phase for generating code. Besides the dense engagement with visual modalities as input, our GENOME differs from these works in modularizing code snippets for better module expansion and reuse. These differences give GENOME new capabilities, such as growing new modules to handle visual reasoning tasks and transferring modules to new domains. Some research works (Vendrow et al., 2023; Gao et al., 2023) also used LLMs and few-shot examples to improve AI models' performance, but they focused only on improving pure neural network models and automatically discovering data groups to use for model design.

![](images/3cd28f931e4a4f0d656071180691bdb9ec8a44e628322ce2994411b52e57f02a.jpg)
(1) Module Initialization

![](images/beef44b5c8d15042c08ada949389ea7d1d6dc1833537c47108ee5d20a5669802.jpg)
(2) Module Generation

![](images/27e91ac2296cc52d973e66314c4b62b755708bd32af3faff383d7bedb9390dcd.jpg)
(3) Module Execution

Figure 2: The framework of our GENOME, which contains three stages: module initialization, module generation, and module execution. In stage 1, we feed the questions and the signatures of the existing modules to the LLM and ask it to identify whether the query can be handled with the operations of the existing modules. If not, we ask the LLM to generate the signature of the new module (i.e., its input and output) and predict the reasoning steps needed to handle the query. In stage 2, we feed the module signature and the test cases to the LLM, ask it to implement the module, and test its pass rate on the training examples. We only accept modules that successfully handle the query. In stage 3, we first use the LLM to parse the query into symbolic operations and then execute these operations on the test images with the help of the scalable module library. We take VQA as an example; the framework also extends to other tasks like referring expression comprehension and Raven.

Visual Programming by LLMs. Another line of research combines vision models (Li* et al., 2022; Kirillov et al., 2023; Radford et al., 2021) with LLMs in an off-the-shelf manner. Early models (Yang et al., 2022; Chen et al., 2023b) transformed images into captions and appended the captions to the LLM's prompt to handle vision-language tasks. While simple, these methods perform worse and lack transparency. Recently, VisProg (Gupta & Kembhavi, 2022) used LLMs to transform language instructions into pre-defined modular operations for step-by-step reasoning; however, it still requires manually implementing each module one by one. Later, ViperGPT (Suris et al., 2023) showed that LLMs can write a code snippet for each query instance independently to handle vision tasks. However, the code it writes is not examined or tested against any training examples, so there is no guarantee of performance or code safety. Instead, we propose GENOME, which asks LLMs to create new neural modules (i.e., general code snippets that achieve specific functions) and handles the given tasks from only a few training examples. GENOME relies on a few training examples to learn new modules, but such newly generated modules can cooperate with each other and be reused across tasks for better performance. There is also research like Rahaman et al. (2021), which adopts a pure neural network architecture to abstract the reasoning problem by generating a script and dynamically executing it. Differently, our GENOME is a neuro-symbolic method that provides better model transparency through explicit Python scripts and is able to make use of existing large pre-trained models to make and reuse new modules.
# 3 METHOD

# 3.1 OVERALL

In this section, we present a novel framework named the Generative Neuro-symbolic Visual Reasoning Model (GENOME) for acquiring neural modules and solving visual reasoning tasks with only a limited set of training examples. GENOME comprises several pre-defined operators that serve as the initial building blocks. Each neural operator corresponds to a neural module and is implemented as a Python code snippet, enabling a specific function such as object localization within an image. Nevertheless, it is not possible to pre-define all the necessary neural modules before addressing the visual reasoning tasks. Consequently, new modules must be generated from a limited number of visual reasoning task examples.
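To make this concrete, here is a minimal, hypothetical sketch of what one such neural operator might look like as a Python module. The `make_loc_module` factory, the `object_detector` callable, and the box format are illustrative assumptions rather than the paper's actual implementation.

```python
# A minimal, hypothetical sketch of a neural module: a plain Python function
# that hides a pre-trained vision model behind a fixed, reusable signature.
# The detector callable and the (x1, y1, x2, y2) box format are assumptions.
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates

def make_loc_module(object_detector: Callable[[object, str], List[Box]]):
    """Wrap any (image, phrase) -> boxes detector as a LOC module."""
    def LOC(image, object_name: str) -> List[Box]:
        # Delegate localization to the underlying pre-trained detector.
        return object_detector(image, object_name)
    return LOC
```

Because every module shares such a calling convention, parsed programs can chain modules freely, and a newly generated module is indistinguishable from a pre-defined one at execution time.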
Figure 2 illustrates that GENOME consists of three distinct stages: 1) module initialization, 2) module generation, and 3) module execution. In the module initialization stage, when provided with a small set of training examples from the visual reasoning dataset, the primary objective is to determine whether the existing neural modules are sufficient to address the query examples. If they are inadequate, GENOME identifies the requirements for creating new modules, including their input and output specifications. During the module generation stage, GENOME leverages the LLM to implement each neural module based on the provided training examples and the specified input and output format (i.e., the function signature), and adds the module to the module library only when it passes the test cases. Once the new module is successfully implemented, module execution orchestrates the transformation of input queries into sequences of neural operations, which are then applied to the neural modules to obtain the correct output. All three stages are powered by the LLM's code generation capabilities and in-context learning. Prompts for each stage can be found in Figures 15-17.
# 3.2 MODEL DETAILS

Utilizing a limited number of examples from the training set of visual reasoning tasks, the GENOME framework proceeds through its three stages: module initialization, module generation, and module execution.
Module Initialization. The first stage of our GENOME framework is module initialization, dedicated to determining the set of new modules required to address the visual reasoning task. As depicted in Figure 2-1, we employ an LLM to assess whether the training instances can be handled with the existing neural modules. If not, we task the LLM with specifying the necessary new modules (e.g., COMPARE_ATTRIBUTE in Figure 2) for an accurate response to the query. The outcome of this stage comprises function signatures detailing the input and output formats of the new modules. This stage also transforms the input query into a sequence of reasoning steps, which serve as test cases to validate the correctness of the programs generated during module generation. The prompt for the LLM in this stage is shown in Figure 15.
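As a concrete illustration, the output of this stage for a question such as "Is the apple bigger than the orange?" might look as follows. The field names and the program syntax are assumptions for illustration, not the paper's actual prompt format.

```python
# A hypothetical example of the two artifacts module initialization produces:
# a function signature for the proposed module, and a reasoning-step program
# that doubles as a test case during module generation. The field names and
# program syntax are illustrative assumptions.
new_module_spec = {
    "name": "COMPARE_ATTRIBUTE",
    "inputs": ["image", "box0", "box1", "attribute"],  # signature only, no body yet
    "output": "str",                                   # e.g. "yes" / "no"
    "description": "Compare an attribute (e.g. size) of two localized regions.",
}

test_case_program = [
    "BOX0 = LOC(image=IMAGE, object='apple')",
    "BOX1 = LOC(image=IMAGE, object='orange')",
    "ANSWER = COMPARE_ATTRIBUTE(image=IMAGE, box0=BOX0, box1=BOX1, attribute='size')",
]
```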
Module Generation. The second stage of our GENOME framework is module generation, which implements the new modules proposed during module initialization. Specifically, after receiving the signature of a new module, we incorporate all the corresponding test cases that call the new module into the prompt and employ in-context learning to generate multiple program candidates; note that a new module is usually paired with multiple test cases. These program candidates are then executed on the provided training examples. If a program encounters errors during execution, we incorporate the error information into the LLM's prompt and instruct it to rectify the issues. We only accept program candidates whose pass rate surpasses a predefined threshold ($\eta$). This procedure bears resemblance to the code translation with LLMs discussed in Chen et al. (2023a), but we extend it to accommodate more intricate multi-modal input types and instructions from natural language and raw images. Module generation offers two principal advantages for visual reasoning tasks. First, it upholds the transparency and interpretability of neuro-symbolic models while preserving competitive performance. Second, it is generative and scalable, since GENOME can autonomously generate new modules tailored to specific tasks.
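The acceptance logic can be summarized by the following sketch. Here `llm_implement` and `run_program` are hypothetical helpers standing in for the LLM query and the program executor, and the retry-with-error-feedback loop reflects the rectification step described above.

```python
# A minimal sketch of module generation, assuming two hypothetical helpers:
# llm_implement() asks the LLM for a candidate implementation (optionally with
# previous error messages appended to the prompt), and run_program() executes
# one test-case program against the module library plus the candidate module.
def generate_module(spec, test_cases, library, num_candidates=5, eta=0.8):
    error_feedback = None
    for _ in range(num_candidates):
        candidate = llm_implement(spec, test_cases, error_feedback)
        passed, failures = 0, []
        for case in test_cases:
            try:
                if run_program(case.program, library, candidate) == case.expected:
                    passed += 1
            except Exception as err:
                failures.append(str(err))  # feed runtime errors back to the LLM
        if passed / len(test_cases) >= eta:    # accept only above the pass-rate threshold
            library[spec["name"]] = candidate  # grow the module library for reuse
            return candidate
        error_feedback = "\n".join(failures) or "wrong answers on test cases"
    return None  # no candidate met the threshold; the module is rejected
```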
Module Execution. With the newly generated modules integrated alongside the existing neural modules, the GENOME framework parses queries from the testing dataset and transforms them into executable operations through in-context learning. An illustrative prompt for this stage is depicted in Figure 17. Notably, although different visual reasoning tasks may have distinct inputs and outputs, they can repurpose intermediate modules designed for other tasks to enhance overall performance. This represents a unique capability of code generation at the module level, an aspect unexplored by prior methods (Surís et al., 2023; Gupta & Kembhavi, 2022).
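A deliberately simplified interpreter along these lines is sketched below; the real executor is more involved, and the `eval`-based dispatch is an assumption made to keep the example short.

```python
# A simplified sketch of module execution. The LLM has already parsed the
# query into lines such as "BOX0 = LOC(image=IMAGE, object='apple')"; each
# line is evaluated against a namespace holding the module library and the
# intermediate variables. This eval-based dispatch is an illustrative
# simplification, not the actual executor.
def execute_program(program_lines, modules, image):
    state = {"IMAGE": image}
    for line in program_lines:
        target, _, expression = line.partition("=")
        # Module names resolve via `modules`; earlier variables via `state`.
        state[target.strip()] = eval(expression.strip(), dict(modules), state)
    return state.get("ANSWER")
```

Because a learned module such as COMPARE_ATTRIBUTE lives in the same namespace as a pre-defined module such as LOC, a parsed program runs unchanged once the new module has been accepted into the library.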
# 4 EXPERIMENTS

In this section, we present a comprehensive series of experiments to evaluate the performance of our models. Initially, we demonstrate our models' effectiveness in learning neural modules on two established benchmarks: GQA (Hudson & Manning, 2019), focusing on compositional visual question answering, and RefCOCO (Kazemzadeh et al., 2014), which assesses referring expression comprehension. Subsequently, we illustrate how the modules acquired from these two datasets can be successfully applied to novel tasks such as image editing and knowledge tagging. Moreover, we highlight the adaptability of our framework to novel visual reasoning tasks (Raven (Zhang et al., 2019a) and MEWL (Jiang et al., 2023a)), even with limited training examples. Before delving into these experiments, we provide an overview of the experimental settings.
Experimental Details. The success of our GENOME relies on a set of pre-defined modules and APIs as a starting point. We utilize the handcrafted modules from VisProg (Gupta & Kembhavi, 2022) as our initial components and additionally incorporate several new APIs from ViperGPT (Surís et al., 2023) for making new modules. In Section 4.4, we also include results parsed by the open-source LLM from WizardLM (Xu et al., 2023a) to investigate the influence of different LLM models. A comprehensive list of the pre-trained modules employed in our approach can be found in Section A.1 of the Appendix. To acquire new modules, we extracted 300 training examples from GQA, 100 from RefCOCO, 10 from Raven, and 10 from MEWL.
Datasets and Evaluation Metric. We evaluate GENOME on the standard vision-language benchmarks GQA (Hudson & Manning, 2019) and RefCOCO (Kazemzadeh et al., 2014). GQA is a popular compositional visual reasoning dataset with synthetic multi-hop questions, making it suitable for multi-step reasoning. RefCOCO is a typical visual grounding dataset, evaluating a model's ability to localize objects and understand fine-grained spatial and semantic relationships. Following ViperGPT, we evaluate GQA on the test-dev split and RefCOCO on the testA split. We then demonstrate GENOME's abilities on the transferred tasks of image editing and knowledge tagging and compare it with VisProg. Since the image editing and knowledge tagging datasets from VisProg are not publicly available, we built two new datasets for evaluation: the new editing dataset contains 50 image-instruction pairs, and the new knowledge tagging dataset contains 50 images with 50 referring expressions. We provide more details about the datasets in Appendix A.4; they will be released for research purposes. Finally, we show that GENOME can learn to handle new visual reasoning tasks like Raven (Zhang et al., 2019a) and MEWL (Jiang et al., 2023a) by observing a few training examples and learning modules. Raven is a task for relational and analogical visual reasoning over image sets and is widely used in non-verbal intelligence tests. MEWL is a recent benchmark proposed to assess how machines learn word meaning in grounded visual scenes. Examples of these tasks can be found in Figure 5 and Figure 6.
# 4.1 COMPARISON WITH BASELINES ON VISUAL REASONING

We conducted a comparative analysis between our model and several baseline models on the GQA and RefCOCO datasets. Due to the deprecation of the original Codex API (code-davinci-002), we replaced it with the currently available API (gpt-3.5-turbo-instruct) and ran both our model and the baseline models with it to ensure a fair comparison. We did not carry out experiments with GPT-4 due to its prohibitive cost.

The results, presented in Table 1, demonstrate that our model achieves competitive performance on both visual question answering and referring expression comprehension, confirming its effectiveness. Furthermore, we provide an illustrative module created by our model in Figure 11; this new module is able to use the various available APIs to select attributes from images. The step-by-step reasoning process of our model is detailed in Figure 3, offering greater transparency than end-to-end models.
| Methods | GQA | RefCOCO |
| --- | --- | --- |
| BLIP-2 | 44.7 | - |
| KOSMOS-2 | - | 57.4 |
| ViperGPT-CodeX | 48.1 | 72.0 |
| VisProg-Instruct | 45.4 | - |
| ViperGPT-Instruct | 38.2 | 62.4 |
| Ours-Instruct | 45.6 | 69.2 |

Table 1: Evaluation on standard visual reasoning benchmarks, GQA and RefCOCO.
# 4.2 GENOME FOR TRANSFER LEARNING

In this section, we demonstrate our model's robust capabilities in transfer learning. We augment the module library with modules created from GQA and RefCOCO, employing in-context examples to guide the LLM in generating step-by-step instructions for task execution. Qualitative results are depicted in Figure 4: our model excels at generating semantically accurate images using the newly added modules, whereas the baseline VisProg struggles to capture the required relationships with its fixed, pre-defined module library. To evaluate image editing more comprehensively, we enlist annotators to manually assess the correctness of the generated images. The models' performance is compared in Table 2, where our model outperforms the baseline. For knowledge tagging, we task annotators with marking the image regions referenced by the expressions, employ the same metrics as RefCOCO to evaluate the accuracy of the bounding boxes, and use the BERT score to assess the correctness of the labeled names. Our model demonstrates superior performance in both image editing and knowledge tagging. A typical example in Figure 7 of the Appendix shows how our GENOME makes use of new modules to produce better knowledge tagging results than the baseline.

![](images/1d968bdfee8a0a03755baa159fc4308d7e5accc8e7a04901ad303a63ff2a7fd1.jpg)
Figure 3: Qualitative examples from GENOME on GQA and RefCOCO. The query images, language instructions, and parsed programs are shown on the left. The corresponding new modules and the values of important variables are shown on the right.

![](images/515157b3a212870dcb4958328b8ead15f60bef6e525e3e8cf73d9f1a7d5abf1e.jpg)
Tag the second actor who played James Bond from the left

![](images/6af2daa2cafadf09798e959580f96f1d9d0b846118322b3980bf30a62522453b.jpg)
Tag the Grammy-winning musician in the middle

![](images/d0e66596173c60cfeaf15df94c60021abe8c0be72d5a422e4090646fbc9b19bb.jpg)
Tag the social media platform logo in the same color as Facebook

![](images/d7144d226d818954592249341dcac69210e7d5838ad3fe0672fd22bfc6c829bd.jpg)
Select the superhero with the same color as the Hulk and create a color pop

![](images/f4f3211eca8a1cbb4896996e02ffac73f0a7c84e1a93f2d5a3c15abddc9a4723.jpg)

![](images/e8f34b9090d5896ae151a4a8887c05e2280236d22319c9bdfb1b098d8afc3909.jpg)

![](images/ba6f64f791057e00045156d3f07578ba37514a54a0053ee1975a74de3a5a231b.jpg)

![](images/c3be8434ad8f083337d517ec68588bcc82ee659c812026788acf4472238747dd.jpg)

![](images/4672b60cb4712d6c868b07a7fbb3701c11cacbfe97be2881e543477b264bd9e5.jpg)

![](images/e8ff1890a89bbf1ca7f362dcc97983da6b2551e35fa5606aa940a634f5f7c2fe.jpg)

![](images/22402e50592bdcc7299bb1dba4ec3ee26acdbcbd0d1b59c3ec85c90658d163ca.jpg)

![](images/5548c9ea95a316bf7a1618eff10b98c0cf232c76b3fb8cb8c660076fad273e6d.jpg)

![](images/b9a50736b1700ca5ca9cd9ff6d88db46339fa6d5fab2005ce4ca93d5b7fbeb77.jpg)

![](images/5d48e71f3039b3ffb92722885a3751f14ce5f39ce3208dfde68e9dc27a638719.jpg)
the second gas planet from the left into a solar system

Figure 4: Qualitative examples from GENOME on the image editing and knowledge tagging tasks. The language instructions are shown with the original images, while the modified results are shown below them. The key parts of the instructions, which require our new modules to handle, are highlighted in red, and the key regions of the output images are bounded in green.

| Methods | Editing Accuracy | Tagging Precision | Tagging Recall | Tagging F1 | Localization Precision | Localization Recall | Localization F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VisProg | 16.7 | 18.4 | 21.7 | 19.9 | 32.8 | 35.3 | 34.0 |
| GENOME | 55.3 | 67.1 | 52.3 | 58.8 | 76.9 | 57.9 | 66.0 |

Table 2: Evaluation of GENOME on transfer learning with the image editing and knowledge tagging tasks. Our GENOME shows much better performance on all criteria, demonstrating the effectiveness of the transferred modules. A qualitative comparison can be seen in Figure 7 in the Appendix.

# 4.3 GENOME ON FEW-SHOT TASK LEARNING

As a general module learning framework, our model is not only able to learn new modules to handle existing tasks but can also learn to handle new visual reasoning tasks from a few training examples. We evaluate these abilities on two new tasks, Raven (Zhang et al., 2019a) and MEWL (Jiang et al., 2023a). Specifically, we first prompt the LLM to learn pattern recognition modules for visual understanding and then ask the LLM to generate a solver module to handle the task. Examples of our model's predictions are shown in Figure 5 and Figure 6. Note that Raven-style visual reasoning is widely used in human intelligence testing, which underscores our model's strong capabilities and potential. We report the performance of our model and the baselines in Table 3 and Table 4. Our model is significantly better than previous fully-supervised methods like ResNet+DRT (Zhang et al., 2019a) and Aloe (Ding et al., 2021), showing its effectiveness. Note that ResNet+DRT, ALANS-V (Zhang et al., 2022), Aloe (Ding et al., 2021), and Flamingo (Alayrac et al., 2022) are all models fully fine-tuned on in-domain data, while our GENOME is a general few-shot framework that learns modules for problem-solving. Moreover, we can observe new compositionality and module reuse in Figure 8 of the Appendix: although the SOLVER module was originally learned from center-type problems, it transfers naturally to other types like left-right and up-down.

| Methods | Center | L-R | U-D |
| --- | --- | --- | --- |
| ResNet+DRT | 58.1 | 65.8 | 67.1 |
| ALANS-V | 98.4 | 97.3 | 96.4 |
| GENOME | 80.1 | 67.6 | 69.1 |
| Human | 95.5 | 86.4 | 81.8 |

Table 3: Evaluation of GENOME on Raven (Zhang et al., 2019a). Compared with methods trained with massive in-domain data, our model performs competitively.

| Methods | shape | color | material |
| --- | --- | --- | --- |
| Aloe | 34.2 | 33.2 | 31.0 |
| Flamingo-1.1B | 49.3 | 35.3 | 48.5 |
| GENOME | 43.7 | 45.3 | 41.0 |
| Human | 92.4 | 87.2 | 72.7 |

Table 4: Evaluation of GENOME on MEWL (Jiang et al., 2023a). Compared with approaches trained on extensive in-domain data, our model shows competitive performance.

![](images/d2082d85bc029b669ff1da4610792933b028fc3ba01e1df023d7ee06c151ebd8.jpg)
Figure 5: A qualitative example from the Raven dataset (Zhang et al., 2019a). This task involves a set of images with varying visual attributes, such as colors, shapes, and locations. Models are tasked with identifying the image that best matches the missing item in the Problem Matrix. GENOME is able to compose modules (i.e., DETECT_SHAPE and SOLVER) to detect these attribute rules and construct a solver module to address the task. The correct answer is indicated by a green box.

![](images/88d41aa52ee78d39be5bd0b7e02becaf56f5ece378e93679969efba771237926.jpg)
Figure 6: A qualitative illustration from the MEWL dataset (Jiang et al., 2023a). This task entails a set of images featuring diverse visual attributes, such as material and shape, and requires models to determine the word that corresponds to the query image. GENOME is able to generate modules that identify these attribute rules and to compose a solver module to address the task. The correct answer is indicated by a green box.

# 4.4 ABLATIONS

To gauge the efficacy of our model, we conducted a series of ablation studies addressing the following key questions: Q1 How effective is module learning? Q2 What impact does the quantity of training examples have on model performance? Q3 How crucial is the LLM's capability for optimal performance? In our experiments, GENOME w/o ML represents a configuration without any new module learning that relies only on the modules defined by ViperGPT and VisProg, directing the LLM to pinpoint a region matching the referring expression. GENOME-WLM replaces the gpt-3.5-turbo-instruct API with WizardCoder-Python-34B-V1.0 from WizardLM (Xu et al., 2023a). The designations GENOME (10)/(50)/(100) indicate models trained with 10, 50, and 100 examples, respectively. Due to resource constraints, we limited our experimentation to 800 RefCOCO samples.

| Methods | RefCOCO |
| --- | --- |
| GENOME w/o ML | 62.3 |
| GENOME-WLM | 64.4 |
| GENOME (10) | 49.4 |
| GENOME (50) | 67.0 |
| GENOME (100) | 67.1 |

Table 5: Ablation study of GENOME on RefCOCO.

Table 5 presents the outcomes, leading to these insights: module learning, given sufficient test instances, can bolster task performance (addressing Q1). A paucity of training examples, such as 10 for RefCOCO, might induce overfitting, but this diminishes with more training data (50 examples), improving overall performance (addressing Q2). Finally, model performance appears intrinsically tied to the LLM's capacity, with superior LLMs delivering better results (addressing Q3).
# 5 CONCLUSION

In this study, we introduce GENOME, which is designed to tackle visual reasoning tasks when confronted with limited training data. The approach uses language models to parse natural language into executable operations and to create specialized visual modules tailored to the given task. Our model exhibits competitive performance on conventional tasks, effortless transfer of acquired modules to novel tasks, and the capability to adapt to new tasks even with limited training data. GENOME also opens numerous avenues for future research. First, it still necessitates task-specific prompts for each distinct reasoning task, and it would be intriguing to explore the use of a universal prompt for all tasks. Second, the framework can be extended to a broader range of multi-modal reasoning tasks, incorporating diverse inputs such as audio, video, and tactile information.
# 6 ACKNOWLEDGEMENT

This work was supported by DSO grant DSOCO21072. We would also like to thank AiMOS, a server cluster of the IBM Research AI Hardware Center, for its computational support.
# REFERENCES

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv, 2022.
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319, 2019.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6077-6086, 2018.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705, 2016a.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In CVPR, pp. 39-48, 2016b.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
James F. Brulé and Alexander Blount. Knowledge acquisition. 1989. URL https://api.semanticscholar.org/CorpusID:18663796.
Jannis Bulian, Christian Buck, Wojciech Gajewski, Benjamin Boerschinger, and Tal Schuster. Tomayto, tomato. Beyond token-level answer equivalence for question answering evaluation, 2022.
Henry R Burke. Raven's progressive matrices (1938): More on norms, reliability, and validity. Journal of Clinical Psychology, 41(2):231-235, 1985.
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv, 2023.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021a.
Wenhu Chen, Zhe Gan, Linjie Li, Yu Cheng, William Wang, and Jingjing Liu. Meta module network for compositional visual reasoning. In Proceedings of WACV, 2021b.
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023a.
Zhenfang Chen, Peng Wang, Lin Ma, Kwan-Yee K Wong, and Qi Wu. Cops-ref: A new dataset and task on compositional referring expression comprehension. In CVPR, 2020.
Zhenfang Chen, Jiayuan Mao, Jiajun Wu, Kwan-Yee Kenneth Wong, Joshua B Tenenbaum, and Chuang Gan. Grounding physical concepts of objects and events through dynamic visual reasoning. In ICLR, 2021c.
Zhenfang Chen, Kexin Yi, Yunzhu Li, Mingyu Ding, Antonio Torralba, Joshua B Tenenbaum, and Chuang Gan. ComPhy: Compositional physical reasoning of objects and events from videos. arXiv, 2022.
Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, and Chuang Gan. See, think, confirm: Interactive prompting between vision and language models for knowledge-based visual reasoning. arXiv, 2023b.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
David Ding, Felix Hill, Adam Santoro, Malcolm Reynolds, and Matt Botvinick. Attention over learned object embeddings enables complex visual reasoning. Advances in neural information processing systems, 34:9112-9124, 2021.
Mingyu Ding, Yan Xu, Zhenfang Chen, David Daniel Cox, Ping Luo, Joshua B Tenenbaum, and Chuang Gan. Embodied concept learner: Self-supervised learning of concepts and mapping through instruction following. In CoRL, 2023.
Xin Luna Dong, Seungwhan Moon, Yifan Ethan Xu, Kshitiz Malik, and Zhou Yu. Towards next-generation intelligent assistants leveraging LLM techniques. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 5792-5793, 2023.
Kevin Ellis, Lionel Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lore Anaya Pozo, Luke Hewitt, Armando Solar-Lezama, and Joshua B Tenenbaum. DreamCoder: growing generalizable, interpretable knowledge with wake-sleep bayesian program learning. Philosophical Transactions of the Royal Society A, 381(2251):20220050, 2023.
Irena Gao, Gabriel Ilharco, Scott Lundberg, and Marco Tulio Ribeiro. Adaptive testing of computer vision models. In CVPR, 2023.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. ArXiv, abs/2211.11559, 2022.
Harry Frederick Harlow. The formation of learning sets. Psychological Review, 56(1):51-65, 1949. URL https://api.semanticscholar.org/CorpusID:22804426.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Sheng Hu, Yuqing Ma, Xianglong Liu, Yanlu Wei, and Shihao Bai. Stratified rule-aware network for abstract visual reasoning. In AAAI, 2021.
Drew A Hudson and Christopher D Manning. Compositional attention networks for machine reasoning. arXiv preprint arXiv:1803.03067, 2018.
Drew A Hudson and Christopher D Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
Guangyuan Jiang, Manjie Xu, Shiji Xin, Wei Liang, Yujia Peng, Chi Zhang, and Yixin Zhu. MEWL: Few-shot multimodal word learning with referential uncertainty. In ICML, 2023a.
Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, and Ge Li. Self-planning code generation with large language model. arXiv, 2023b.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 787-798, 2014.
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people, 2016.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv, 2023.
Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. Grounded language-image pre-training. In CVPR, 2022.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR, 2019a.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR, 2019b.
Tom Michael Mitchell, Richard M. Keller, and Smadar T. Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1:47-80, 1986. URL https://api.semanticscholar.org/CorpusID:117264.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.
Zamfirescu JD Pereira and Bjoern Hartmann. Iterative disambiguation: Towards LLM-supported programming and system design.
Alec Radford, Jong Wook Kim, Chris Hallacy, A. Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021.
Nasim Rahaman, Muhammad Waleed Gondal, Shruti Joshi, Peter Gehler, Yoshua Bengio, Francesco Locatello, and Bernhard Schölkopf. Dynamic inference with neural interpreters. In NeurIPS, 2021.
René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, 2021.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv, 2023.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv, 2022.
Steven Spratley, Krista Ehinger, and Tim Miller. A closer look at generalisation in raven. In ECCV, 2020.
Dídac Surís, Sachit Menon, and Carl Vondrick. ViperGPT: Visual inference via python execution for reasoning. arXiv, 2023.
Joshua Vendrow, Saachi Jain, Logan Engstrom, and Aleksander Madry. Dataset interfaces: Diagnosing model failures using controllable counterfactual generation. arXiv, 2023.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023a.
Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models. arXiv, 2023b.
Jianwei Yang, Jiayuan Mao, Jiajun Wu, Devi Parikh, David D Cox, Joshua B Tenenbaum, and Chuang Gan. Object-centric diagnosis of visual reasoning. arXiv preprint arXiv:2012.11587, 2020.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of GPT-3 for few-shot knowledge-based VQA. In AAAI, 2022.
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In NeurIPS, 2018.
Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, pp. 69-85. Springer, 2016.
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. MAttNet: Modular attention network for referring expression comprehension. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1307-1315, 2018.
Yan Zeng, Xinsong Zhang, and Hang Li. Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv preprint arXiv:2111.08276, 2021.
Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5317-5327, 2019a.
Chi Zhang, Baoxiong Jia, Feng Gao, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. Learning perceptual inference by contrasting. In NeurIPS, 2019b.
Chi Zhang, Baoxiong Jia, Song-Chun Zhu, and Yixin Zhu. Abstract spatial-temporal reasoning via probabilistic abduction and execution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Chi Zhang, Sirui Xie, Baoxiong Jia, Ying Nian Wu, Song-Chun Zhu, and Yixin Zhu. Learning algebraic representation for systematic generalization in abstract reasoning. In European Conference on Computer Vision, pp. 692-709. Springer, 2022.
Kecheng Zheng, Zheng-Jun Zha, and Wei Wei. Abstract reasoning with distracting features. In NeurIPS, 2019.
# A APPENDIX

In this section, we substantiate the claims in the paper by providing additional implementation details (Section A.1), more experimental analysis (Section A.2), exemplar prompts for each stage (Section A.3), details on dataset collection (Section A.4), and qualitative examples of newly learned modules (Section A.5).
# A.1 IMPLEMENTATION DETAILS

Pre-defined Modules and API Models. The success of our model still requires a set of pre-defined APIs. Following the modules in VisProg and ViperGPT, we adopt the following APIs: GLIP (Li* et al., 2022) for object localization; gpt-3.5-turbo-instruct from OpenAI and WizardCoder-Python-34B-V1.0 from WizardLM for code generation; BLIP (Li et al., 2023) for answering simple questions about images; CLIP (Radford et al., 2021) and X-VLM (Zeng et al., 2021) for image-text classification; MiDaS (Ranftl et al., 2021) for estimating depth in images; and Stable Diffusion (Rombach et al., 2022) for modifying image patches. Based on these APIs, we construct a set of pre-defined modules following VisProg. These pre-defined modules cooperate with the newly learned modules for visual reasoning.
| Descriptions | Modules |
| --- | --- |
| Image Understanding | Loc for object localization, FaceDet for face detection, Select and Filter_Property for image-text classification, and Filter_Spatial for selecting image regions. |
| Image Manipulation | Replace for image editing, ColorPop for changing image colors, BgBlur for blurring the background, Tag for annotating box regions, Emoji for face tagging, and Crop and its variants for cropping patches from images. |
| Others | List for retrieving factual knowledge, Count for counting objects, and Eval, Result, BOX2MASK, and MASK2BOX for formatting outputs. |

Table 6: Pre-defined modules used in GENOME.
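To illustrate how such a library might be wired together, the sketch below maps module names to API wrappers. The stub functions stand in for the real GLIP, BLIP, CLIP, MiDaS, and Stable Diffusion calls, so their names and signatures are assumptions for illustration only.

```python
# A hypothetical sketch of assembling the module library from the APIs above.
# The stub functions stand in for the real model wrappers (GLIP, BLIP, CLIP,
# MiDaS, Stable Diffusion); their names and signatures are assumptions.
def glip_detect(image, phrase): ...       # GLIP: open-vocabulary localization
def blip_vqa(image, question): ...        # BLIP: simple questions about images
def clip_classify(image, labels): ...     # CLIP / X-VLM: image-text classification
def midas_depth(image): ...               # MiDaS: monocular depth estimation
def sd_replace(image, box, prompt): ...   # Stable Diffusion: edit an image patch

PREDEFINED_MODULES = {
    "LOC": glip_detect,
    "VQA": blip_vqa,
    "CLASSIFY": clip_classify,
    "DEPTH": midas_depth,
    "REPLACE": sd_replace,
    "COUNT": lambda boxes: len(boxes),  # counting localized objects
}

# Newly generated modules are merged into the same namespace, so parsed
# programs can mix pre-defined and learned operations transparently.
MODULES = dict(PREDEFINED_MODULES)  # grows as module generation accepts new code
```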
Details on Raven and MEWL. Note that a visual reasoning task does not necessarily use language as input. All we need is to prompt the LLM to generate modules that recognize the patterns and solve the problem. In RAVEN, by prompting the LLM we obtain DETECT_COLOR, DETECT_SHAPE, and DETECT_SIZE. The image is fed into these modules, and their outputs are the color, shape, and size of the image; in this way, the input image is converted into a (color, shape, size) triplet. We provide the LLM with ten examples from the RAVEN train split to demonstrate how to deduce the pattern of these triplets. By observing the few-shot demonstrations, the LLM generates the SOLVER() module, which detects the pattern of the input triplets from the Problem Matrix and chooses the most appropriate answer from the Answer Set. The internals of the SOLVER() module are thus primarily judgment-based, identifying the patterns of the input triplets in the Problem Matrix to find the answer in the Answer Set. The workflow on RAVEN is shown in Figure 5. We employ a similar approach to handle MEWL; one example is provided in Figure 6. Since MEWL and RAVEN have different patterns, the SOLVER() module is not shared between them and utilizes distinct logic for each task.
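The sketch below illustrates the SOLVER() logic described above under a deliberately simplified rule set. Upstream, the generated DETECT_COLOR, DETECT_SHAPE, and DETECT_SIZE modules are assumed to reduce each panel to an integer-coded (color, shape, size) triplet, and the "constant or even progression" check is a toy stand-in for the rule types RAVEN actually uses.

```python
# An illustrative, simplified SOLVER() for RAVEN. Each panel has already been
# reduced to an integer-coded (color, shape, size) triplet by the generated
# DETECT_* modules; the "constant or even progression" rule below is a toy
# stand-in for the rule types RAVEN actually uses.
from typing import List, Tuple

Triplet = Tuple[int, int, int]  # (color, shape, size), integer-coded

def follows_rule(row: List[Triplet]) -> bool:
    # Each attribute across the row must be constant or evenly progressing.
    return all(
        len(set(vals)) == 1 or (vals[2] - vals[1]) == (vals[1] - vals[0])
        for vals in zip(*row)
    )

def SOLVER(matrix: List[Triplet], answers: List[Triplet]) -> int:
    # `matrix` holds the 8 visible panels of the 3x3 Problem Matrix, row-major.
    # Try each candidate as the missing ninth panel and return the index of
    # the first one whose completed third row obeys the toy rule.
    for idx, candidate in enumerate(answers):
        if follows_rule(matrix[6:8] + [candidate]):
            return idx
    return 0  # fall back if no candidate fits the simplified rule set
```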
# A.2 MORE EXPERIMENTAL ANALYSIS

Computational Efficiency of Module Reuse. Our modularized design and module reuse strategy offer higher computational efficiency and generate shorter code than baseline approaches like ViperGPT, which creates solutions from scratch without modular abstraction and reuse. We calculated the average token count of the interactions with LLMs for both our GENOME model and ViperGPT, which lacks a module reuse mechanism. As shown in Table 7, our GENOME model's solutions are demonstrably shorter and more efficient.
287
+
288
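As a rough illustration of how such statistics could be gathered, the sketch below averages tokenized lengths of generated programs. The use of tiktoken with the cl100k_base encoding is our assumption here, not necessarily the exact tokenizer behind Table 7.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed tokenizer

def avg_solution_tokens(solutions):
    """Average token count over the generated programs of one benchmark."""
    return sum(len(enc.encode(s)) for s in solutions) / len(solutions)

programs = [
    "BOX0=LOC(image=IMAGE,object='coat')\n"
    "ANSWER0=CHOOSE_ATTRIBUTE(image=IMAGE,box=BOX0,object='coat',"
    "attribute1='thick',attribute2='thin')\n"
    "FINAL_RESULT=RESULT(var=ANSWER0)",
]
print(avg_solution_tokens(programs))
```
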
<table><tr><td>Methods</td><td>GQA</td><td>RefCOCO</td></tr><tr><td>ViperGPT</td><td>153.7</td><td>109.1</td></tr><tr><td>GENOME</td><td>62.3</td><td>54.4</td></tr></table>

Table 7: Comparison of the average token number of the generated solutions on GQA and RefCOCO.

More Ablation Study on Different Components. To dissect the different components of our proposed method, we present a more comprehensive and detailed ablation study here. We randomly select 800 samples from the GQA test-dev split to further investigate the effectiveness of the different components of Genome. Moreover, we add baseline experiments on RAVEN and MEWL for better comparison. To better present all experimental results, the ablation studies are organized into the following sections.

<table><tr><td>Methods</td><td>GQA</td></tr><tr><td>GENOME</td><td>45.9</td></tr><tr><td>GENOME w/o input and output format</td><td>43.2</td></tr><tr><td>GENOME w/o good initialization</td><td>41.8</td></tr><tr><td>GENOME w/o existing modules in prompt for module making</td><td>45.0</td></tr><tr><td>GENOME w/o creating new modules</td><td>44.7</td></tr><tr><td>GENOME w/ random sampling</td><td>44.3</td></tr><tr><td>GENOME (60)</td><td>44.5</td></tr><tr><td>GENOME (120)</td><td>45.3</td></tr><tr><td>GENOME (300)</td><td>45.9</td></tr><tr><td>GENOME w/ different LLM</td><td>44.3</td></tr><tr><td>GENOME w/o debugging</td><td>44.9</td></tr></table>

Table 8: More ablation on the GQA dataset.

Ablation on Prompt Design. We conducted a series of experiments to observe the impact of prompt design on the overall performance of GENOME. First, we removed the descriptions of input and output formats from the prompt; the performance of GENOME dropped by $2.7\%$. This is because, without clear guidance on input and output formats, the modules might produce output in the wrong format, leading to errors in the subsequent parsing of results. Furthermore, on top of removing the input and output formats, we also removed some of the in-context examples and descriptions of module signatures from the prompt, and the performance declined further. Our method consists of three stages (module initialization, module generation, and module execution), and module initialization is the first step; without adequate module initialization as a foundation, the subsequent stages are largely impacted. Accordingly, without good initialization, our performance drops by $4.1\%$.

Regarding the use of existing modules and the creation of new ones, Table 8 shows that not using the pre-defined modules from VisProg results in a $0.9\%$ decrease in our performance. This demonstrates the robust module-generation capability of GENOME: even without a series of pre-defined modules, our method can still build modules from scratch and solve problems without a significant performance drop. If we do not create new modules, we are merely using the pre-defined modules; the result is then $44.7\%$, which is $1.2\%$ lower than our result of $45.9\%$. This performance gap highlights the effectiveness of the newly generated modules: by generating and using new modules, we achieve better results.

Ablation on Sampling. In this section, we first introduce our sampling strategy. Then, we conduct an experiment to showcase how the sampling method impacts GENOME's performance. Subsequently, we investigate how the number of training samples affects our results on different tasks.

Sampling Strategy. The GQA dataset contains five structural types: choose, logical, compare, verify, and query. These structural types inspired the idea of generating our new modules. Taking COMPARE_COLOR as an example, this module is generated to address questions related to color within the compare structural type. From the visualization of GQA, it is apparent that the query type can be addressed using the existing VQA module from VisProg, and problems of the logical type can be decomposed into sub-problems of the choose, compare, and verify types. Therefore, when selecting training samples, we randomly chose 100 samples each from the choose, compare, and verify types. Altogether, these three types comprise 300 samples, all sourced from the GQA train split. Hence, we are not cherry-picking our training samples; rather, we are selecting training samples based on the structural types of GQA.

<table><tr><td rowspan="2"># of Samples</td><td colspan="3">RAVEN</td><td colspan="3">MEWL</td></tr><tr><td>Center</td><td>L-R</td><td>U-D</td><td>shape</td><td>color</td><td>material</td></tr><tr><td>5</td><td>46.5</td><td>37.2</td><td>39.8</td><td>38.9</td><td>39.6</td><td>37.9</td></tr><tr><td>10</td><td>80.1</td><td>67.6</td><td>69.1</td><td>43.7</td><td>45.3</td><td>41.0</td></tr><tr><td>20</td><td>80.1</td><td>67.6</td><td>69.1</td><td>43.7</td><td>45.3</td><td>41.0</td></tr></table>

Table 9: Number of Sampling Examples on RAVEN and MEWL.

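A minimal sketch of this stratified selection is shown below. The `types`/`structural` field names follow the public GQA annotation format, but the exact loading code is an assumption.

```python
import random
from typing import Dict, List

def sample_by_structural_type(gqa_train: List[Dict], per_type: int = 100,
                              seed: int = 0) -> List[Dict]:
    """Pick `per_type` questions from each targeted GQA structural type."""
    rng = random.Random(seed)
    picked = []
    for qtype in ("choose", "compare", "verify"):
        pool = [q for q in gqa_train if q["types"]["structural"] == qtype]
        picked.extend(rng.sample(pool, per_type))
    return picked  # 3 x 100 = 300 training samples from the GQA train split
```
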
To explore the impact of sampling strategies, we conducted an additional experiment with a random sample of 300 examples, beyond our initial sampling strategy. In this setting, we randomly sampled 300 examples from the GQA train split. The performance was $44.3\%$, a decrease of $1.6\%$ compared to $45.9\%$. This result suggests that a strategic sampling method can more effectively guide the LLM in generating more effective modules for a given task; at the same time, our method is relatively robust to the choice of sampling strategy.

Number of Sampling Examples. We conduct a series of experiments to illustrate how the number of training samples influences performance.

On the GQA and RefCOCO datasets, if a small number of training samples is used, the generated modules may overfit certain samples, reducing their generalization capability. Such overfitting in new modules can negatively impact the final results. Therefore, we observe that when the number of samples is small, the performance of Genome is poorer. As the number of samples increases, the effectiveness of Genome improves; with a further increase in the number of samples, the performance gains of Genome tend to saturate.

Regarding RAVEN and MEWL, since their patterns of change are limited, the number of selected few-shot samples is sufficient once it covers all the variation patterns in RAVEN and MEWL. In other words, if the number of samples exceeds this threshold, there is no further improvement in the results; if it is below this threshold, the performance declines. We selected 10 few-shot samples each for RAVEN and MEWL. As can be seen from the results in Table 9 above, with 5 samples there is a noticeable decrease in performance, because 5 few-shot samples are not enough to cover all the variation patterns of RAVEN or MEWL. With 10 or 20 samples, the few-shot samples are sufficient to encompass all possible variations, and the same results are obtained.

Ablation on LLM's Capability. With a better LLM, our prompts are better understood and the LLM generates higher-quality modules. In this experiment, we compared the results of using gpt-3.5-turbo-instruct (i.e., GENOME) and gpt-3.5-turbo (i.e., GENOME w/ different LLM in Table 8). Our experimental results show that better outcomes are achieved with the more capable gpt-3.5-turbo-instruct. It is evident that the capability of the LLM influences the performance of GENOME; as the abilities of LLMs continue to improve, so will the performance of GENOME. Thanks to the flexibility of GENOME, once a better LLM is available, we can easily switch to it to achieve better results.

Ablation on Debug Mechanism. The error-correction prompt contains the error message from the Python interpreter and the wrong code snippet. We prompt the LLM to correct the wrong code based on the error message from the Python interpreter. We heuristically set the maximal number of debug iterations to 5: if the wrong code can be corrected within 5 iterations, we keep it; otherwise, it is abandoned (details can be found in the Module Generation section of Figure 2). The errors mainly stem from two sources. One is basic syntax errors in Python code, such as indentation and variable-name errors. The other is fundamental logical errors, such as mistakes made when setting variable types, like treating a variable that should be of the bool type as the string type. From the debugging ablation in Table 8, we can conclude that the debug process assists Genome in generating more useful modules to elevate performance and prevents elementary programming mistakes.

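A minimal sketch of this debug loop follows, assuming a generic `llm` completion function and callable test cases built from the few-shot examples; both names are stand-ins, not the paper's actual interfaces.

```python
import traceback
from typing import Callable, List, Optional

MAX_DEBUG_ITERS = 5  # heuristically chosen maximum number of debug rounds

def debug_module(code: str, test_cases: List[Callable],
                 llm: Callable[[str], str]) -> Optional[str]:
    """Iteratively repair a generated module with the LLM."""
    for _ in range(MAX_DEBUG_ITERS):
        try:
            namespace: dict = {}
            exec(code, namespace)        # syntax errors surface here
            for case in test_cases:      # few-shot examples act as tests
                case(namespace)          # logical errors surface here
            return code                  # module passes: keep it
        except Exception:
            err = traceback.format_exc()
            code = llm(
                f"The module below fails with this Python error:\n{err}\n"
                f"Code:\n{code}\nReturn a corrected version of the code."
            )
    return None                          # still failing after 5 rounds: abandon
```
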
Additional Baseline Experiments on RAVEN and MEWL. For RAVEN and MEWL, we implemented the ViperGPT and VisProg baseline experiments in the following way. VisProg requires a manual implementation of all modules on top of the provided APIs; thus, to enable VisProg to handle RAVEN and MEWL, we manually implement and debug new handcrafted modules that recognize and discover patterns for these tasks. We call this baseline the VisProg variant. We also put the training examples from GENOME's stage 1 into the prompt of the VisProg variant for better performance. ViperGPT has no manual modules and asks the LLM to make use of the APIs to handle each instance; thus, we manually write solutions for the training examples into the prompt of ViperGPT to teach it to handle the task. We call this approach the ViperGPT variant. VisProg by itself needs a handcrafted solver module to find the target solution, and it would be extremely difficult for ViperGPT to generate a solver from scratch; thus, we add the solver module learned by our GENOME model to the pre-defined API pool of VisProg and ViperGPT. As shown in Table 10, our GENOME model achieves better performance than these two baselines, showing the great value of module learning for handling new tasks from only a few examples.

<table><tr><td rowspan="2">Methods</td><td colspan="3">RAVEN</td><td colspan="3">MEWL</td></tr><tr><td>Center</td><td>L-R</td><td>U-D</td><td>shape</td><td>color</td><td>material</td></tr><tr><td>VisProg variant</td><td>36.8</td><td>26.1</td><td>27.8</td><td>35.2</td><td>35.9</td><td>34.9</td></tr><tr><td>ViperGPT variant</td><td>40.6</td><td>30.7</td><td>32.4</td><td>37.8</td><td>38.2</td><td>36.7</td></tr><tr><td>Ours</td><td>80.1</td><td>67.6</td><td>69.1</td><td>43.7</td><td>45.3</td><td>41.0</td></tr></table>

Table 10: Comparison of our Genome model with the baselines VisProg and ViperGPT on RAVEN and MEWL.

<table><tr><td>New Modules</td><td>GQA</td></tr><tr><td>VERIFY_ATTRIBUTE</td><td>14.1</td></tr><tr><td>CHOOSE_ATTRIBUTE</td><td>10.8</td></tr><tr><td>VERIFY_COLOR</td><td>6.9</td></tr><tr><td>COMPARE_ATTRIBUTE</td><td>5.9</td></tr><tr><td>VERIFY_MATERIAL</td><td>3.6</td></tr></table>

Table 11: Percentage of top-5 most-used new modules in GQA.

More Details about New Modules. We further take GQA as an example and report the percentage of the top-5 most-used new modules in Table 11. The data in Table 11 show the proportion of the five most common new modules appearing in the generated high-level programs. Overall, $38.7\%$ of all generated high-level programs use the newly generated modules (this $38.7\%$ includes other less common modules and excludes duplicate counts, such as a single high-level program containing multiple new modules). From these results, it can be seen that the newly learned modules are widely applicable to GQA, thereby helping Genome achieve good results on GQA.

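For clarity, the following sketch shows one way such usage percentages could be counted. The counting protocol here (a program counts once per module, and once overall if it calls any new module) is our reading of the description above, not code from the paper.

```python
import re
from typing import Dict, List, Tuple

def module_usage(programs: List[str],
                 new_modules: List[str]) -> Tuple[float, Dict[str, float]]:
    """Fraction of programs calling any new module, plus per-module fractions."""
    used_any = 0
    per_module = {m: 0 for m in new_modules}
    for prog in programs:
        called = {m for m in new_modules if re.search(rf"\b{m}\b", prog)}
        used_any += bool(called)         # count each program at most once
        for m in called:
            per_module[m] += 1
    n = len(programs)
    return used_any / n, {m: c / n for m, c in per_module.items()}
```
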
Additional Experiment on I-RAVEN. As independently demonstrated in (Hu et al., 2021) and (Spratley et al., 2020), the RAVEN dataset (Zhang et al., 2019a) exhibits flaws in its choice design, enabling models to learn shortcuts for solving the RPM reasoning task. Thus, we additionally conduct experiments on the balanced I-RAVEN dataset (Hu et al., 2021) for further analysis. Following our setting on RAVEN (Zhang et al., 2019a), we use 10 training samples for learning new modules to handle the task and test the model on the testing set. As shown in Table 12, our Genome is still able to handle the abstract reasoning task with high accuracy and data efficiency.

<table><tr><td>Methods</td><td>Center</td><td>L-R</td><td>U-D</td></tr><tr><td>LEN (Zheng et al., 2019)</td><td>56.4</td><td>44.2</td><td>44.2</td></tr><tr><td>CoPINet (Zhang et al., 2019b)</td><td>54.4</td><td>51.9</td><td>52.5</td></tr><tr><td>SRAN (Hu et al., 2021)</td><td>78.2</td><td>70.1</td><td>70.3</td></tr><tr><td>GENOME</td><td>85.2</td><td>74.6</td><td>75.4</td></tr></table>

Table 12: Experiments for Raven's Progressive Matrices on the I-RAVEN dataset (Hu et al., 2021).

Effectiveness of Module Learning. To better investigate the effectiveness of our Genome, we further develop two variants of our Genome to show the effectiveness of module learning. Genome w/o ML represents a configuration without any new module learning that relies solely on the modules pre-defined by ViperGPT and VisProg, directing the LLM to answer the question related to the image content and pinpoint a region matching the referring expression; it strictly follows the function-call style of VisProg and our Genome. We also develop a variant, Genome w/o ML v2, that allows the LLM to call functions like ViperGPT, with standard control flow and arbitrary Python logic. Due to resource constraints, we limited our experimentation to 800 randomly selected examples from GQA and RefCOCO with the gpt-3.5-turbo-instruct API. As shown in Table 13, our Genome performs better than both baselines across tasks, showing the effectiveness of module learning.

<table><tr><td>Methods</td><td>GQA</td><td>RefCOCO</td></tr><tr><td>GENOME w/o ML</td><td>43.3</td><td>62.3</td></tr><tr><td>GENOME w/o ML v2</td><td>40.9</td><td>65.5</td></tr><tr><td>GENOME</td><td>45.9</td><td>67.1</td></tr></table>

Table 13: Ablation of module learning on GQA and RefCOCO.

# A.3 PROMPTS FOR EACH STAGE.

The ability of our Genome comes from the in-context learning of LLMs (Brown et al., 2020), where the prompts are the key to telling the LLM what to generate. We show the exemplar prompts our model uses to learn the VQA task in Figures 15-17.

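The three prompts are chained roughly as follows. This sketch is our own abstraction of the stages; `llm`, `prompts`, and `register_module` are hypothetical stand-ins for the completion API and the module library, and only the placeholder strings come from the figures.

```python
def run_genome(question: str, llm, prompts: dict, register_module) -> str:
    # Stage 1 (Figure 15): check whether existing modules suffice; if not,
    # the LLM proposes a new module signature (name, inputs, outputs, example).
    proposal = llm(prompts["init"].replace("INSERT NEW QUESTION", question))
    # Stage 2 (Figure 16): implement the proposed module from its header.
    if proposal.startswith("No.") and "class" in proposal:
        head = "class" + proposal.split("class", 1)[1]
        register_module(llm(prompts["generate"].replace("_MODULE_HEAD_", head)))
    # Stage 3 (Figure 17): parse the question into an executable program
    # over the (possibly extended) module library.
    return llm(prompts["execute"].replace("INSERT NEW QUESTION", question))
```
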
# A.4 DETAILS AND EXAMPLES OF THE NEW DATASETS.

To evaluate knowledge tagging, 50 tagging instructions are annotated on 50 internet images covering personalities and a variety of objects such as logos, flowers, buildings, fruits, and sports, among others. For each instruction, we manually annotated the ground-truth bounding box and the associated tag. For the image editing assessment, we collected 50 editing instructions on 50 images covering personalities and various objects like foods, furniture, animals, and utensils; 25 images are from the COCO dataset and the other 25 are from the internet. For the image editing task, we ask three annotators to estimate whether the editing is correct. For the knowledge tagging task, we consider the localization correct if the detected region has an IoU higher than 0.5 with the ground-truth annotation. For text tagging, we compare the prediction with the annotated text using BERT matching (BEM) (Bulian et al., 2022); if the matching score is higher than 0.5, we consider it a successful match. More examples of the two datasets can be found in Figure 9 and Figure 10.

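A minimal sketch of the IoU criterion used for the tagging evaluation is shown below, with boxes given as (x1, y1, x2, y2) pixel coordinates; the helper names are ours.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def localization_correct(pred_box, gt_box, thresh=0.5):
    return iou(pred_box, gt_box) > thresh
```
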
# A.5 QUALITATIVE EXAMPLES.

In this subsection, we show qualitative examples of the learned modules and qualitative cases of how they handle different tasks. We show an example where Genome performs better than VisProg in Figure 7. At the top of Figure 7, our model effectively utilizes the COMPARE_COLOR module acquired from GQA to pinpoint the correct region, whereas VisProg fails to generate the correct program due to its rigid module library. Figure 8 highlights emerging forms of compositionality and module re-usage. Notably, although the SOLVER module was originally trained on center-type problems within the Raven dataset, it demonstrates inherent adaptability to other problem types, including left-right and up-down orientations.

# Knowledge Tagging

Question: Tag common fruits and vegetables of the same color as the grape.

IMAGE

![](images/38621cffbb6e175ceaa0c4c3b092952c4a9b811f12abe37f6da4fe26be09b983.jpg)
Figure 7: A typical example of how our Genome outperforms VisProg on knowledge tagging. At the top, our model is able to make use of the COMPARE_COLOR module learned from GQA to localize the correct region, while VisProg fails to generate the correct program with its fixed module library.

Ours

Generated Program:

OBJ0=LOC(image=IMAGE,object='fruit or vegetable')
LIST0=LIST(question='fruits and vegetables',max=20)
OBJ1=CLASSIFY(image=IMAGE,object=OBJ0,categories=LIST0)
OBJ2=CLASSIFY(image=IMAGE,object=OBJ0,categories='grape')
OBJ3=REDUCE_MASK(box=OBJ1,mask_list=OBJ2)
OBJ4=META_COMPARE(function_name=COMPARE_COLOR,image=IMAGE,obj_list=OBJ3,obj_cmp=OBJ1,name1='fruit or vegetable',name2='grape',attribute='same')
IMAGE0=TAG(image=IMAGE,object=OBJ4)
FINAL_RESULT=RESULT(var=IMAGE0)

![](images/ca4faa694678eb00d5bc87c7b58e94aaf3aba7a2375fa4615e74b6595713ed4a.jpg)

VisProg

Generated Program:

OBJ0=LOC(image=IMAGE,object='grape')
LIST0=LIST(query='common fruits and vegetables of the same color as the grape',max=20)
OBJ1=CLASSIFY(image=IMAGE,object=OBJ0,categories=LIST0)
IMAGE0=TAG(image=IMAGE,object=OBJ1)
FINAL_RESULT=RESULT(var='IMAGE0')

![](images/4f6d581637da62de60921b01f3949b1662106397bf868f02d3ad745b247907b8.jpg)

New Learned Modules. We show exemplar newly learned modules from GQA and RefCOCO in Figures 11-14. As shown in Figure 11, the newly learned module (CHOOSE_ATTRIBUTE) is able to use the LLM to retrieve relevant knowledge first and then adopt the image-text classifier to match the attributes. In Figures 13-14, we see that the new module SORT_SPATIAL is able to localize objects by their spatial index.

![](images/93164093ace4f9dfd53c9d0cbf331ec578c5be55f08384a90dde9b8989100cde.jpg)
Generated Program:

COLOR=DETECT_COLOR(image=IMAGE)
SHAPE=DETECT_SHAPE(image=IMAGE)
SIZE=DETECT_SIZE(image=IMAGE)
ANSWER=SOLVER(image=IMAGE,color=COLOR,shape=SHAPE,size=SIZE)
FINAL_RESULT=RESULT(var=ANSWER)

![](images/677c50832549fcfef0b21ac59bf950998ddefd5921c77753363e07f620ecf7b6.jpg)
Generated Program:

BOX0=LOC(image=IMAGE,object='LEFT')
IMAGE0=CROP(image=IMAGE,box=BOX0)
BOX1=LOC(image=IMAGE,object='RIGHT')
IMAGE1=CROP(image=IMAGE,box=BOX1)
COLOR=DETECT_COLOR(image0=IMAGE0,image1=IMAGE1)
SHAPE=DETECT_SHAPE(image0=IMAGE0,image1=IMAGE1)
SIZE=DETECT_SIZE(image0=IMAGE0,image1=IMAGE1)
ANSWER=SOLVER(image0=IMAGE0,image1=IMAGE1,color=COLOR,shape=SHAPE,size=SIZE)
FINAL_RESULT=RESULT(var=ANSWER)

![](images/8a4608ba48b04bac3bed5cbc3703c914fe0ac0715ea395ce1ea0df8593a19917.jpg)
Generated Program:

BOX0=LOC(image=IMAGE,object='TOP')
IMAGE0=CROP(image=IMAGE,box=BOX0)
BOX1=LOC(image=IMAGE,object='BOTTOM')
IMAGE1=CROP(image=IMAGE,box=BOX1)
COLOR=DETECT_COLOR(image0=IMAGE0,image1=IMAGE1)
SHAPE=DETECT_SHAPE(image0=IMAGE0,image1=IMAGE1)
SIZE=DETECT_SIZE(image0=IMAGE0,image1=IMAGE1)
ANSWER=SOLVER(image0=IMAGE0,image1=IMAGE1,color=COLOR,shape=SHAPE,size=SIZE)
FINAL_RESULT=RESULT(var=ANSWER)

![](images/741bfd2e016f6b7860246fbb7eb363a7c187183f0c683e65e0792cec77707256.jpg)
Figure 8: New compositionality and module re-usage in the Raven dataset. While the SOLVER module was initially trained on center-type problems in the Raven dataset, it exhibits natural transferability to other types, such as left-right and up-down problems.

Create a color pop of the first boat from the front

![](images/839d900426d7a94e4a2b8a11b512d06f15d8dd6e35eea84a5c7ac1c2a78c286c.jpg)
Hide Tim Robbins with ;) and Morgan Freeman with 8)

![](images/fabdd6d6b1a410ab171822a462de085216ae7e164569e3c2d7ecd37cd5bfa75b.jpg)
Create a color pop of the first child from the right

![](images/03f2e4bab4c76e1c3cb1f30b84659f9762c0768cc1bc89373e759ccabf525ece.jpg)
Replace the second pizza from the top with a hamburger

![](images/94e59dd2256ee4d5dc7d6e04d2f6209da3c5f5d0b9d2be0739fca1cc30ddaccb.jpg)
Replace the spoon of the same material as the spatula with a knife

![](images/7b1674a0cf655a4c15e1f37a36f38d2382c1b83455230e20f3ee159945952811.jpg)
Select the lamp with the same color as the one at the bottom and create a color pop
Figure 9: More examples of the new image edit dataset. The dataset asks models to edit images' fine-grained and regional details according to diverse language instructions.

![](images/c352b170b26381f7ae50dd60554e0c18f7ad95fec84fcd72020a21174d7afaf1.jpg)
Tag the famous landmark of Europe in the bottom right

![](images/38cce575c35834fc76f3419d5815d054f2c4d1c873689794726836f24ff65c25.jpg)

![](images/99115ba25371ea5be37e2e6ef5de40403d0716e9f83ebdac1393a6cf50cea938.jpg)

![](images/948a9fdef3c2ca9dcf9642d55f7f32b7be0349b4d11bbb6985a632d1e4def43a.jpg)

![](images/bbc5ed97e20be6f458feaa67b1469bb905adb307186cead5d768dd057bbd7629.jpg)

![](images/c4261af321ff8c271ade10fa36500c832b1388558114274eaa0aa9443192fc4e.jpg)

![](images/4fe9ca2fba7b8bf8c64a9c9f8e3253c9aa4caee602a4c16ffca9322faefe8954.jpg)
Tag the famous painting of the Louvre in the left

![](images/45c0ef15508096699dce63bdd063d7dcf0df7595bf9e523e90491bf554dca879.jpg)

![](images/7076f239a39db2abd705e668c56b4d9e99747e665ab905bcd4bf361a70b77413.jpg)
Tag the common bird of the same color as the ibis

![](images/205adcf8f37653fd3c48cff4a5df6415009b36ac36a0fcd8e4b34f52672a73d8.jpg)
Tag the second dog from the right

![](images/74912d15dc34ea7f2983f3ec8f80363bbf63c041a65db2b6847ed478ffce9f0e.jpg)
Tag the second famous film director from the left

![](images/66d7d09c18f6625a194cf187c36b897ae0c7a60d84915ae8f5ecb83f1dd31c28.jpg)
Tag the second Nobel Laureate in Physics from the left
Famous Scientists

![](images/4c6668706399330a50e0b9d7cd02fbbe03a1a89f8e0d98449b1aad5e031fed3a.jpg)
Figure 10: More examples of the new knowledge tagging dataset. The dataset requires models to localize the target region and tag the region with the desired information.

```python
class CHOOSE_ATTRIBUTE():
    """
    Input:
        image: an image object
        box: a list of bounding boxes
        object: a string
        attribute1: a string
        attribute2: a string
    Output:
        result: a string
    Examples:
        Question: Is the coat thick or thin?
        BOX0=LOC(image=IMAGE,object='coat')
        ANSWER0=CHOOSE_ATTRIBUTE(image=IMAGE,box=BOX0,object='coat',
            attribute1='thick',attribute2='thin')
        FINAL_RESULT=RESULT(var=ANSWER0)
    """
    step_name = 'CHOOSE_ATTRIBUTE'

    def __init__(self):
        print(f'Registering {self.step_name} step')

    def expand_box(self, box, img_size, factor=1.5):
        # enlarge the box around its center to give the classifier context
        W, H = img_size
        x1, y1, x2, y2 = box
        dw = int(factor * (x2 - x1) / 2)
        dh = int(factor * (y2 - y1) / 2)
        cx = int((x1 + x2) / 2)
        cy = int((y1 + y2) / 2)
        x1 = max(0, cx - dw)
        x2 = min(cx + dw, W)
        y1 = max(0, cy - dh)
        y2 = min(cy + dh, H)
        return [x1, y1, x2, y2]

    def predict(self, img, boxes, obj, attr1, attr2):
        if len(boxes) > 0:
            box = boxes[0]
            box = self.expand_box(box, img.size)
            out_img = img.crop(box)
        else:
            out_img = img
        # ask the LLM to describe each candidate attribute ...
        prompt1 = f'Tell me the attributes when the {obj} is {attr1} in one sentence.'
        prompt2 = f'Tell me the attributes when the {obj} is {attr2} in one sentence.'
        obj_desc1 = API.gpt3(prompt1, 'gpt3_general')
        obj_desc2 = API.gpt3(prompt2, 'gpt3_general')
        # ... then pick the description that matches the region better with CLIP
        result1 = API.clip(out_img, obj_desc1)
        result2 = API.clip(out_img, obj_desc2)
        if result1 > result2:
            result = attr1
        else:
            result = attr2
        return result
```

Figure 11: Exemplar generated module from the GQA dataset. This automatically constructed module can make use of different APIs to compare attributes of an image region.

```python
class COMPARE_COLOR():
    """
    Input:
        image: an image object
        box1: a list of bounding boxes
        box2: a list of bounding boxes
        object1: a string
        object2: a string
        compare_type: a string
    Output:
        result: a string
    """
    def expand_box(self, box, img_size, factor=1.5):
        # enlarge the box around its center to give the VQA model context
        W, H = img_size
        x1, y1, x2, y2 = box
        dw = int(factor * (x2 - x1) / 2)
        dh = int(factor * (y2 - y1) / 2)
        cx = int((x1 + x2) / 2)
        cy = int((y1 + y2) / 2)
        x1 = max(0, cx - dw)
        x2 = min(cx + dw, W)
        y1 = max(0, cy - dh)
        y2 = min(cy + dh, H)
        return [x1, y1, x2, y2]

    def predict(self, img, boxes1, boxes2, obj1, obj2, compare_type):
        if len(boxes1) > 0:
            box1 = boxes1[0]
            box1 = self.expand_box(box1, img.size)
            out_img1 = img.crop(box1)
        else:
            out_img1 = img
        if len(boxes2) > 0:
            box2 = boxes2[0]
            box2 = self.expand_box(box2, img.size)
            out_img2 = img.crop(box2)
        else:
            out_img2 = img
        # query the color of each object, then let the LLM judge equivalence
        color1 = API.vqa(out_img1, f'What color is the {obj1}?')
        color2 = API.vqa(out_img2, f'What color is the {obj2}?')
        prompt = (f'Can the {color1} be regarded as the same color as '
                  f'{color2}? You should just reply yes or no without '
                  f'any other words.')
        temp = API.gpt3(prompt, 'gpt3_general')
        if 'same' == compare_type:
            if 'yes' in temp.lower():
                result = 'yes'
            else:
                result = 'no'
        else:
            # comparison asks whether the colors are different
            if 'yes' in temp.lower():
                result = 'no'
            else:
                result = 'yes'
        return result

    def execute(self, img, boxes1, boxes2, obj1, obj2, compare_type):
        return self.predict(img, boxes1, boxes2, obj1, obj2, compare_type)
```

Figure 12: Exemplar generated module from the GQA dataset.

```python
class SORT_SPATIAL():
    """
    Select objects from the image that match the spatial location.
    Objects are represented by their bounding boxes.
    Returns the bounding boxes that satisfy the condition.
    Input:
        image: raw PIL image
        box_list: a list of unnormalized bounding boxes
        location: the location can only be left, middle, right, top, bottom, front and behind
        index: a number for the rank of the object
    Output:
        box: a bounding box
    Examples:
        Question: second sandwich from the right on the bottom
        BOXLIST0=LOC(image=IMAGE,object='sandwich')
        BOXLIST1=SORT_SPATIAL(image=IMAGE,box_list=BOXLIST0,location='right',index=2)
        BOXLIST2=SORT_SPATIAL(image=IMAGE,box_list=BOXLIST1,location='bottom',index=1)
        FINAL_RESULT=RESULT(var=BOXLIST2)
    """
    step_name = 'SORT_SPATIAL'

    def predict(self, img, box_list, location, index):
        if index < 0 or index > len(box_list):
            return []
        if index == 0:
            return [box_list[0]]
        if "front" in location or "behind" in location:
            # rank boxes by estimated depth for front/behind queries
            box_depth_list = self.parse_depth(img, box_list)
            box_depth_list = sorted(box_depth_list, key=lambda x: x[1])
            box_list = [box_i[0] for box_i in box_depth_list]
            if "behind" in location:
                box_list.reverse()
        elif "left" in location:
            box_list = sorted(box_list, key=lambda x: x[0])
        elif "right" in location:
            box_list = sorted(box_list, key=lambda x: x[2], reverse=True)
        elif "top" in location:
            box_list = sorted(box_list, key=lambda x: x[1])
        elif "bottom" in location:
            box_list = sorted(box_list, key=lambda x: x[3], reverse=True)
        else:
            return []
        if index > len(box_list):
            return []
        return [box_list[index - 1]]

    def check_location(self, img, box, location):
        w, h = img.size
        x1, y1, x2, y2 = box
        cx = (x1 + x2) / 2
        cy = (y1 + y2) / 2
        if 'left' in location:
            if cx > w / 2:
                return False
```

Figure 13: Exemplar generated module from the RefCOCO dataset. The rest of the code is in Figure 14.

+ ```python
648
+ 1 elif 'right' in location: if cx < w / 2: return False
649
+ 3
650
+ 4 if 'top' in location: if cy > h / 2: return False
651
+ 6
652
+ 7 else 'bottom' in location: if cy < h / 2: return False
653
+ 9
654
+ 10 return True
655
+ 11
656
+ 12 def parse_depth(self, img, box_list): box_depth_list = [] # compute depths for front or background
657
+ 13 depth_map $\equiv$ API.depth(img)
658
+ 16 for box in box_list: x1, y1, x2, y2 = box depth_map $\equiv$ np.array(depth_map) avg_depth $=$ np.array(depthmap[x1:x2,y1:y2]) box_depth_list.append((box, avg_depth))
659
+ 18
660
+ 19
661
+ 20 return box_depth_list
662
+ 21
663
+ 22
664
+ 23 def execute(self, img, box_list, location,index): return self.predict(img,box_list,location,index)
665
+ ```
666
+
667
+ Figure 14: Exemplar generated module from the RefCOCO dataset. The former part of the code is in Figure 13. This generated module is able to localize objects based on their location in images and the depth of images.
668
+
669
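As a quick sanity check, the left/right branch of the module above (Figures 13-14 concatenated) touches no vision API, so it can be exercised with plain boxes; this hypothetical usage is ours, not from the paper.

```python
boxes = [[10, 40, 60, 90], [200, 35, 260, 95], [110, 30, 170, 88]]
sorter = SORT_SPATIAL()
# "second object from the right": sort by x2 descending, take rank 2.
print(sorter.execute(img=None, box_list=boxes, location='right', index=2))
# -> [[110, 30, 170, 88]]
```
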
```txt
Pre-defined Modules:
class LOC():
    """
    Generate boxes of the object on the image.
    Input:
        image: an image object
        object: an object string
    Output:
        box: a list of bounding boxes
    Examples:
        BOX0=LOC(image=IMAGE,object='camel')
    """
class COUNT():
    """
    Count the number of boxes in the list.
    Input:
        box: a list of bounding boxes
    Output:
        number: number of boxes
    Examples:
        ANSWER0=COUNT(box=BOX1)
    """
Suppose you are a program expert. Given a set of pre-defined modules, could you identify whether it is possible to write a program to get the answer to the question? If not, what new modules do we need? Note that you can only use the below pre-defined modules: LOC, COUNT, CROP, ...
Question: Is the purse to the left or to the right of the person?
Yes. The program is:
BOX0=LOC(image=IMAGE,object='person')
IMAGE0=CROP_LEFTOF(image=IMAGE,box=BOX0)
BOX1=LOC(image=IMAGE0,object='purse')
ANSWER0=COUNT(box=BOX1)
ANSWER1=EVAL(expr=f"'left' if {ANSWER0} > 0 else 'right'")
FINAL_RESULT=RESULT(var=ANSWER1)
Question: Which object is larger, the sphere or the blue cube?
No. We need to make a new module "COMPARE_SIZE" first. Here is the header of the class:
class COMPARE_SIZE():
    """
    Compare the size of two objects in the image.
    One object is identified by the first bounding box of box0.
    Another object is identified by the first bounding box of box1.
    Input:
        image: an image object
        box0: a list of bounding boxes
        box1: a list of bounding boxes
    Output:
        flag: return True if the first object is larger else False
    Examples:
        Question: Which object is larger, the sphere or the blue cube?
        BOX0=LOC(image=IMAGE,object='sphere')
        BOX1=LOC(image=IMAGE,object='blue cube')
        FLAG0=COMPARE_SIZE(image=IMAGE,box0=BOX0,box1=BOX1)
        ANSWER2=EVAL(expr=f"'sphere' if {FLAG0} else 'blue cube'")
        FINAL_RESULT=RESULT(var=ANSWER2)
    """
...
Question: INSERT NEW QUESTION
```

Figure 15: Prompt of module initialization (Stage 1) to propose the module candidates.

```txt
Pre-defined APIs:
class API():
    def loc(cls, image: Image.Image, obj_name: str) -> List[List[int]]:
        """
        Returns boxes of the specific object on the image.
        Input:
            image: an image object
            obj_name: a text string
        Output:
            boxes: a list of bounding boxes
        Example: [[261, 160, 525, 299]] = API.loc(image, "camel")
        """
        return boxes

    def gpt3(cls, prompt: str) -> str:
        """
        Returns response from gpt3
        Input:
            prompt: prompt to gpt3
        Output:
            response: gpt3 response
        Example: "joe biden" = API.gpt3("Who is the US President?")
        """
        return response

Given the above APIs, could you write a new module, LOC()?
class LOC():
    """
    Generate boxes of the object on the image.
    Input:
        img: an image object
        obj_name: an object string
    Output:
        selected_boxes: a list of bounding boxes
    Examples:
        BOX1=LOC(image=IMAGE0,object='food')
    """
Response:
class LOC():
    step_name = 'LOC'
    def execute(self, img, obj_name):
        if obj_name == 'TOP':
            bboxes = [self.top_box(img)]
        elif obj_name == 'BOTTOM':
            bboxes = [self.bottom_box(img)]
        elif obj_name == 'LEFT':
            bboxes = [self.left_box(img)]
        elif obj_name == 'RIGHT':
            bboxes = [self.right_box(img)]
        else:
            bboxes = self.predict(img, obj_name)
        return bboxes

    def predict(self, img, obj_name):
        return API.loc(img, obj_name)

Given the above APIs, could you write a new module, _MODULE_NAME_?
_MODULE_HEAD_
```

Figure 16: Prompt of module generation (Stage 2) to make a module based on the module's input and output.

```txt
Think step by step to answer the question.
You can only use modules below:
LOC
COUNT
EVAL
RESULT
VERIFY_ATTRIBUTE
VERIFY_COLOR
VERIFY_MATERIAL
Question: Is the vehicle in the top of the image?
Program:
BOX0=LOC(image=IMAGE,object='TOP')
IMAGE0=CROP(image=IMAGE,box=BOX0)
BOX1=LOC(image=IMAGE0,object='vehicle')
ANSWER0=COUNT(box=BOX1)
ANSWER1=EVAL(expr=f"'yes' if {ANSWER0} > 0 else 'no'")
FINAL_RESULT=RESULT(var=ANSWER1)
Question: Who is carrying the umbrella?
Program:
BOX0=LOC(image=IMAGE,object='umbrella')
IMAGE0=CROP(image=IMAGE,box=BOX0)
ANSWER0=VQA(image=IMAGE0,question='Who is carrying the umbrella?')
FINAL_RESULT=RESULT(var=ANSWER0)
Question: Do the towel and the box have different colors?
Program:
BOX0=LOC(image=IMAGE,object='towel')
BOX1=LOC(image=IMAGE,object='box')
ANSWER0=COMPARE_ATTRIBUTE(image=IMAGE,box1=BOX0,box2=BOX1,object1='towel',object2='box',attribute='color',question=QUESTION)
FINAL_RESULT=RESULT(var=ANSWER0)
Question: Is the knife made of ceramic?
Program:
BOX0=LOC(image=IMAGE,object='knife')
ANSWER0=VERIFY_MATERIAL(image=IMAGE,box=BOX0,material='ceramic',object='knife',question=QUESTION)
ANSWER1=EVAL(expr=f"'yes' if {ANSWER0} else 'no'")
FINAL_RESULT=RESULT(var=ANSWER1)
Question: Is the coat thick or thin?
Program:
BOX0=LOC(image=IMAGE,object='coat')
ANSWER0=CHOOSE_ATTRIBUTE(image=IMAGE,box=BOX0,object='coat',attribute1='thick',attribute2='thin')
FINAL_RESULT=RESULT(var=ANSWER0)
...
Question: INSERT NEW QUESTION
Program:
```

Figure 17: Prompt of module execution (Stage 3) to parse programs for a new test case.